Wikipedia:Principal subalgebra#0
|
In mathematics, a principal subalgebra of a complex simple Lie algebra is a 3-dimensional simple subalgebra whose non-zero elements are regular. A finite-dimensional complex simple Lie algebra has a unique conjugacy class of principal subalgebras, each of which is the span of an sl2-triple. == References == Bourbaki, Nicolas (2005) [1975], Lie groups and Lie algebras. Chapters 7–9, Elements of Mathematics (Berlin), Berlin, New York: Springer-Verlag, ISBN 978-3-540-68851-8, MR 2109105
|
Wikipedia:Principle of distributivity#0
|
The principle of distributivity states that the algebraic distributive law is valid for propositions: logical conjunction and logical disjunction are distributive over each other, so that for any propositions A, B and C the equivalences A ∧ (B ∨ C) ⟺ (A ∧ B) ∨ (A ∧ C) and A ∨ (B ∧ C) ⟺ (A ∨ B) ∧ (A ∨ C) hold. The principle of distributivity is valid in classical logic, but invalid in quantum logic. The article "Is Logic Empirical?" discusses the case that quantum logic is the correct, empirical logic, on the grounds that the principle of distributivity is inconsistent with a reasonable interpretation of quantum phenomena. == References ==
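In the classical case, both equivalences can be verified by brute force over all eight truth assignments. A minimal sketch in Python (the helper `equivalent` is ours, purely for illustration):

```python
from itertools import product

def equivalent(f, g):
    """True iff the two propositional formulas agree on every
    truth assignment of their three variables."""
    return all(f(a, b, c) == g(a, b, c)
               for a, b, c in product([False, True], repeat=3))

# A ∧ (B ∨ C) ⟺ (A ∧ B) ∨ (A ∧ C)
conj_distributes = equivalent(lambda a, b, c: a and (b or c),
                              lambda a, b, c: (a and b) or (a and c))
# A ∨ (B ∧ C) ⟺ (A ∨ B) ∧ (A ∨ C)
disj_distributes = equivalent(lambda a, b, c: a or (b and c),
                              lambda a, b, c: (a or b) and (a or c))
print(conj_distributes, disj_distributes)  # both hold classically
```

No such exhaustive check is available in quantum logic, where the lattice of propositions is not distributive.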
|
Wikipedia:Principles of Hindu Reckoning#0
|
Principles of Hindu Reckoning (Kitab fi usul hisab al-hind) is a mathematics book written by the 10th- and 11th-century Persian mathematician Kushyar ibn Labban. It is the second-oldest book extant in Arabic about Hindu arithmetic using Hindu-Arabic numerals ( ० ۱ ۲ ۳ ۴ ۵ ۶ ۷ ۸ ۹), preceded by Kitab al-Fusul fi al-Hisab al-Hindi by Abu'l-Hasan Ahmad ibn Ibrahim al-Uqlidisi, written in 952. Although Al-Khwarizmi also wrote a book about Hindu arithmetic in 825, his Arabic original was lost, and only a 12th-century Latin translation is extant. In his opening sentence, ibn Labban describes his book as one on the principles of Hindu arithmetic. Principles of Hindu Reckoning was one of the foreign sources for Hindu reckoning in 10th- and 11th-century India. It was translated into English by Martin Levey and Marvin Petruck in 1963 from the only Arabic manuscript extant at that time (Istanbul, Aya Sophya Library, MS 4857) and from a Hebrew translation and commentary by Shālôm ben Joseph 'Anābī. == Indian dust board == Hindu arithmetic was conducted on a dust board similar to the Chinese counting board. A dust board is a flat surface covered with a layer of sand and lined with grids. Much as with Chinese counting-rod numerals, a blank cell on the sand-board grid stood for zero, and no zero sign was necessary. Unlike on the counting board, shifting digits involved erasing and rewriting them. == Content == Only one Arabic copy is extant, now kept in the Hagia Sophia Library in Istanbul. There is also a Hebrew translation with commentary, kept in the Bodleian Library of Oxford University. In 1965 the University of Wisconsin Press published an English edition of the book, translated by Martin Levey and Marvin Petruck from both the Arabic and Hebrew versions. This English translation included 31 facsimile plates of the original Arabic text. Principles of Hindu Reckoning consists of two parts, dealing with arithmetic in the two numeral systems in use in India in ibn Labban's time.
Part I mainly deals with the decimal algorithms for subtraction, multiplication, division, and the extraction of square and cube roots in the place-value Hindu numeral system. A section on "halving", however, is treated differently, with a hybrid of decimal and sexagesimal numerals. The similarities between the decimal Hindu algorithms and the Chinese algorithms in the Sunzi Suanjing are striking, except for the halving operation, as there was no hybrid decimal/sexagesimal calculation in China. Part II deals with subtraction, multiplication, division, and the extraction of square and cube roots in the sexagesimal number system. In China there was only positional decimal arithmetic, never any sexagesimal arithmetic. Unlike Abu'l-Hasan al-Uqlidisi's Kitab al-Fusul fi al-Hisab al-Hindi (The Arithmetic of Al-Uqlidisi), where the basic mathematical operations of addition, subtraction, multiplication and division were described in words, ibn Labban's book provides actual calculation procedures expressed in Hindu-Arabic numerals. == Decimal arithmetic == === Addition === Kushyar ibn Labban described in detail the addition of two numbers. The Hindu addition is identical to rod-numeral addition in the Sunzi Suanjing. There is a minor difference in the treatment of the second row: in Hindu reckoning, the second-row digits drawn on the sand board remain in place from beginning to end, while in rod calculus, rods from the lower row are physically removed and added to the upper row, digit by digit. === Subtraction === In the third section of his book, Kushyar ibn Labban provides a step-by-step algorithm for subtracting 839 from 5625. The second-row digits remain in place at all times. In rod calculus, the second-row digits are removed one by one during the calculation, leaving only the result in a single row. === Multiplication === Kushyar ibn Labban's multiplication is a variation of Sunzi multiplication.
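Both the Hindu and the Sunzi multiplication procedures work through the digits from the highest place, accumulating shifted partial products as digits are erased and rewritten on the board. A minimal modern sketch of that left-to-right scheme (the function name is ours; it abstracts away the board layout):

```python
def dust_board_multiply(a, b):
    """Left-to-right multiplication in the general style of the
    dust-board and counting-board methods: process the digits of `a`
    from the highest place, shifting the accumulated result one cell
    and adding the next partial product of `b` at each step."""
    total = 0
    for d in (int(ch) for ch in str(a)):
        total = total * 10 + d * b   # shift the board, add the partial product
    return total

print(dust_board_multiply(234, 57))  # same result as 234 * 57
```

This is arithmetically equivalent to modern long multiplication; the historical methods differ in where the digits sit on the board, not in the underlying place-value logic.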
=== Division === Professor Lam Lay Yong discovered that the Hindu division method described by Kushyar ibn Labban is identical to the rod-calculus division in the 5th-century Sunzi Suanjing. Besides the identical format, procedure and remainder fraction, one telltale sign disclosing the origin of this division algorithm is the missing 0 after 243, which in true Hindu numerals should be written as 2430, not 243 followed by a blank; a blank space is a feature of rod numerals (and the abacus). === Divide by 2 === Division by 2, or "halving", in Hindu reckoning was treated with a hybrid of decimal and sexagesimal numerals: it was calculated not from left to right, as in the other decimal algorithms, but from right to left. To halve 5625, the units digit 5 is halved to 2 1/2; the 5 is replaced with 2 and 30 (the sexagesimal fraction 30/60 = 1/2) is written under it, giving 5622 over 30. Halving the remaining digits 562 in turn gives the final result, 2812 over 30, that is, 2812 1/2. === Extraction of square root === Kushyar ibn Labban described the algorithm for the extraction of a square root with the example √63342 = 255 371/511. His square-root extraction algorithm is basically the same as the Sunzi algorithm. The approximation of a non-perfect square root by the Sunzi algorithm yields a result slightly higher than the true value in the fractional part, while ibn Labban's approximation gives a slightly lower value; the integer parts are the same. == Sexagesimal arithmetic == === Multiplication === The Hindu sexagesimal multiplication format was completely different from that of Hindu decimal arithmetic. Kushyar ibn Labban's example of 25 degrees 42 minutes multiplied by 18 degrees 36 minutes was written vertically, with the digits of one factor in a column on the left (18, 36) and those of the other in a column on the right (25, 42), separated by a blank space. == Influence == Kushyar ibn Labban's Principles of Hindu Reckoning exerted a strong influence on later Arabic algorists. His student al-Nasawi followed his teacher's method, and the work of the 13th-century algorist Jordanus de Nemore was in turn influenced by al-Nasawi. As late as the 16th century, ibn Labban's name was still being mentioned. == References == == External links == Media related to Principles of Hindu Reckoning at Wikimedia Commons The Development of Hindu-Arabic and Traditional Chinese Arithmetic, Chinese Science 13 (1996), 35–54
|
Wikipedia:Principles of Mathematical Analysis#0
|
Principles of Mathematical Analysis, colloquially known as "PMA" or "Baby Rudin," is an undergraduate real analysis textbook written by Walter Rudin. Initially published by McGraw Hill in 1953, it is one of the most famous mathematics textbooks ever written. == History == As a C. L. E. Moore instructor, Rudin taught the real analysis course at MIT in the 1951–1952 academic year. After he commented to W. T. Martin, who served as a consulting editor for McGraw Hill, that there were no textbooks covering the course material in a satisfactory manner, Martin suggested Rudin write one himself. After completing an outline and a sample chapter, he received a contract from McGraw Hill. He completed the manuscript in the spring of 1952, and it was published the year after. Rudin noted that in writing his textbook, his purpose was "to present a beautiful area of mathematics in a well-organized readable way, concisely, efficiently, with complete and correct proofs. It was an aesthetic pleasure to work on it." The text was revised twice: first in 1964 (second edition) and then in 1976 (third edition). It has been translated into several languages, including Russian, Chinese, Spanish, French, German, Italian, Greek, Persian, Portuguese, and Polish. == Contents == Rudin's text was the first modern English text on classical real analysis, and its organization of topics has been frequently imitated. In Chapter 1, he constructs the real and complex numbers and outlines their properties. (In the third edition, the Dedekind cut construction is sent to an appendix for pedagogical reasons.) Chapter 2 discusses the topological properties of the real numbers as a metric space. 
The rest of the text covers topics such as continuous functions, differentiation, the Riemann–Stieltjes integral, sequences and series of functions (in particular uniform convergence), and outlines examples such as power series, the exponential and logarithmic functions, the fundamental theorem of algebra, and Fourier series. After this single-variable treatment, Rudin goes in detail about real analysis in more than one dimension, with discussion of the implicit and inverse function theorems, differential forms, the generalized Stokes theorem, and the Lebesgue integral. == References == == External links == Principles of Mathematical Analysis at McGraw-Hill Education Supplemental comments and exercises to Chapters 1-7 of Rudin, written by George Bergman
|
Wikipedia:Probability bounds analysis#0
|
Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions (rather than densities or mass functions). This bounding approach permits analysts to make calculations without requiring overly precise assumptions about parameter values, dependence among variables, or even distribution shape. Probability bounds analysis is essentially a combination of the methods of standard interval analysis and classical probability theory. Probability bounds analysis gives the same answer as interval analysis does when only range information is available. It also gives the same answers as Monte Carlo simulation does when information is abundant enough to precisely specify input distributions and their dependencies. Thus, it is a generalization of both interval analysis and probability theory. The diverse methods comprising probability bounds analysis provide algorithms to evaluate mathematical expressions when there is uncertainty about the input values, their dependencies, or even the form of mathematical expression itself. The calculations yield results that are guaranteed to enclose all possible distributions of the output variable if the input p-boxes were also sure to enclose their respective distributions. In some cases, a calculated p-box will also be best-possible in the sense that the bounds could be no tighter without excluding some of the possible distributions. P-boxes are usually merely bounds on possible distributions. 
The bounds often also enclose distributions that are not themselves possible. For instance, the set of probability distributions that could result from adding random values from two (precise) distributions without the independence assumption is generally a proper subset of all the distributions enclosed by the p-box computed for the sum. That is, there are distributions within the output p-box that could not arise under any dependence between the two input distributions. The output p-box will, however, always contain all distributions that are possible, so long as the input p-boxes were sure to enclose their respective underlying distributions. This property often suffices for use in risk analysis and other fields requiring calculations under uncertainty. == History of bounding probability == The idea of bounding probability has a very long tradition throughout the history of probability theory. Indeed, in 1854 George Boole used the notion of interval bounds on probability in his The Laws of Thought. Also dating from the latter half of the 19th century, the inequality attributed to Chebyshev described bounds on a distribution when only the mean and variance of the variable are known, and the related inequality attributed to Markov found bounds on a positive variable when only the mean is known. Kyburg reviewed the history of interval probabilities and traced the development of the critical ideas through the 20th century, including the important notion of incomparable probabilities favored by Keynes. Of particular note is Fréchet's derivation in the 1930s of bounds on calculations involving total probabilities without dependence assumptions. Bounding probabilities has continued to the present day (e.g., Walley's theory of imprecise probability). The methods of probability bounds analysis that could be routinely used in risk assessments were developed in the 1980s.
Hailperin described a computational scheme for bounding logical calculations, extending the ideas of Boole. Yager described the elementary procedures by which bounds on convolutions can be computed under an assumption of independence. At about the same time, Makarov, and independently Rüschendorf, solved the problem, originally posed by Kolmogorov, of how to find the upper and lower bounds for the probability distribution of a sum of random variables whose marginal distributions, but not their joint distribution, are known. Frank et al. generalized the result of Makarov and expressed it in terms of copulas. Since that time, formulas and algorithms for sums have been generalized and extended to differences, products, quotients and other binary and unary functions under various dependence assumptions. == Arithmetic expressions == Arithmetic expressions involving operations such as additions, subtractions, multiplications, divisions, minima, maxima, powers, exponentials, logarithms, square roots, absolute values, etc., are commonly used in risk analyses and uncertainty modeling. Convolution is the operation of finding the probability distribution of a sum of independent random variables specified by probability distributions. We can extend the term to finding distributions of other mathematical functions (products, differences, quotients, and more complex functions) and other assumptions about the intervariable dependencies. There are convenient algorithms for computing these generalized convolutions under a variety of assumptions about the dependencies among the inputs. === Mathematical details === Let 𝔻 denote the space of distribution functions on the real numbers ℝ, i.e., 𝔻 = { D | D : ℝ → [0, 1], D(x) ≤ D(y) for all x < y }. A p-box is a quintuple {F̄, F̲, m, v, 𝐅}, where F̄ and F̲ ∈ 𝔻, m and v are real intervals, and 𝐅 ⊂ 𝔻. This quintuple denotes the set of distribution functions F ∈ 𝐅 ⊂ 𝔻 such that, for all x ∈ ℝ, F̄(x) ≤ F(x) ≤ F̲(x), together with the expectation condition ∫ℝ x dF(x) ∈ m and the variance condition ∫ℝ x² dF(x) − (∫ℝ x dF(x))² ∈ v. If a function satisfies all the conditions above it is said to be inside the p-box. In some cases, there may be no information about the moments or distribution family other than what is encoded in the two distribution functions that constitute the edges of the p-box. Then the quintuple representing the p-box {B1, B2, [−∞, ∞], [0, ∞], 𝔻} can be denoted more compactly as [B1, B2]. This notation harkens to that of intervals on the real line, except that the endpoints are distributions rather than points.
The notation X ~ F denotes that X ∈ ℝ is a random variable governed by the distribution function F, that is, F(x) = Pr(X ≤ x). Let us generalize the tilde notation for use with p-boxes. We will write X ~ B to mean that X is a random variable whose distribution function is unknown except that it is inside B. Thus, X ~ F ∈ B can be contracted to X ~ B without mentioning the distribution function explicitly. If X and Y are independent random variables with distributions F and G respectively, then Z = X + Y has the distribution H given by H(z) = ∫ℝ F(z − y) dG(y) = (F ∗ G)(z). This operation is called a convolution on F and G. The analogous operation on p-boxes is straightforward for sums. Suppose X ~ A = [A1, A2] and Y ~ B = [B1, B2]. If X and Y are stochastically independent, then the distribution of Z = X + Y is inside the p-box [A1 ∗ B1, A2 ∗ B2]. Finding bounds on the distribution of the sum Z = X + Y without making any assumption about the dependence between X and Y is actually easier than the problem assuming independence. Makarov showed that Z ~ [ sup_{z=x+y} max(F(x) + G(y) − 1, 0), inf_{z=x+y} min(F(x) + G(y), 1) ]. These bounds are implied by the Fréchet–Hoeffding copula bounds. The problem can also be solved using the methods of mathematical programming.
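Makarov's dependence-free bounds can be illustrated with a toy discretization. The example distribution, grid search, and function names below are ours, purely a sketch of the idea rather than a production algorithm:

```python
# X and Y are each uniform on the two atoms {0, 1}; F serves as both marginals.
atoms, probs = [0.0, 1.0], [0.5, 0.5]

def F(x):
    """CDF of the two-point distribution: Pr(X <= x)."""
    return sum(p for a, p in zip(atoms, probs) if a <= x)

def makarov_bounds(F, G, z, grid):
    """Pointwise bounds on H(z) = Pr(X + Y <= z), valid under ANY
    dependence between X and Y:
      lower(z) = sup_{x+y=z} max(F(x) + G(y) - 1, 0)
      upper(z) = inf_{x+y=z} min(F(x) + G(y), 1)
    approximated by searching the split point x (with y = z - x) over a grid."""
    lo = max(max(F(x) + G(z - x) - 1.0, 0.0) for x in grid)
    hi = min(min(F(x) + G(z - x), 1.0) for x in grid)
    return lo, hi

# A half-integer grid keeps the atoms 0 and 1 exactly representable.
grid = [k * 0.5 for k in range(-4, 9)]   # -2.0, -1.5, ..., 4.0
lo, hi = makarov_bounds(F, F, 1.0, grid)
# Under independence, Pr(X + Y <= 1) = 3/4, which must lie inside [lo, hi].
print(lo, hi)
```

The bounds enclose the independent answer, as they must; the gap between them is the price of making no dependence assumption.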
The convolution under the intermediate assumption that X and Y have positive dependence is likewise easy to compute, as is the convolution under the extreme assumptions of perfect positive or perfect negative dependency between X and Y. Generalized convolutions for other operations such as subtraction, multiplication, division, etc., can be derived using transformations. For instance, p-box subtraction A − B can be defined as A + (−B), where the negative of a p-box B = [B1, B2] is [B2(−x), B1(−x)]. == Logical expressions == Logical or Boolean expressions involving conjunctions (AND operations), disjunctions (OR operations), exclusive disjunctions, equivalences, conditionals, etc. arise in the analysis of fault trees and event trees common in risk assessments. If the probabilities of events are characterized by intervals, as suggested by Boole and Keynes among others, these binary operations are straightforward to evaluate. For example, if the probability of an event A is in the interval P(A) = a = [0.2, 0.25], and the probability of the event B is in P(B) = b = [0.1, 0.3], then the probability of the conjunction is surely in the interval P(A & B) = a × b = [0.2, 0.25] × [0.1, 0.3] = [0.2 × 0.1, 0.25 × 0.3] = [0.02, 0.075] so long as A and B can be assumed to be independent events. If they are not independent, we can still bound the conjunction using the classical Fréchet inequality. In this case, we can infer at least that the probability of the joint event A & B is surely within the interval P(A & B) = env(max(0, a+b−1), min(a, b)) = env(max(0, [0.2, 0.25]+[0.1, 0.3]−1), min([0.2, 0.25], [0.1, 0.3])) = env([max(0, 0.2+0.1–1), max(0, 0.25+0.3–1)], [min(0.2,0.1), min(0.25, 0.3)]) = env([0,0], [0.1, 0.25]) = [0, 0.25] where env([x1,x2], [y1,y2]) is [min(x1,y1), max(x2,y2)]. 
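The interval computations in the conjunction example above can be reproduced in a few lines of code. A sketch, with function names of our choosing:

```python
def interval_mul(a, b):
    """Interval product; adequate for probability intervals in [0, 1]."""
    products = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(products), max(products))

def env(x, y):
    """Envelope of two intervals: [min of left endpoints, max of right]."""
    return (min(x[0], y[0]), max(x[1], y[1]))

def frechet_and(a, b):
    """Frechet bounds on P(A & B) with no dependence assumption:
    env(max(0, a + b - 1), min(a, b)), computed endpoint-wise."""
    lower = (max(0.0, a[0] + b[0] - 1.0), max(0.0, a[1] + b[1] - 1.0))
    upper = (min(a[0], b[0]), min(a[1], b[1]))
    return env(lower, upper)

a = (0.2, 0.25)   # P(A)
b = (0.1, 0.3)    # P(B)
print(interval_mul(a, b))   # conjunction assuming independence
print(frechet_and(a, b))    # conjunction with no dependence assumption
```

The Fréchet interval is wider than the independent one, reflecting the extra ignorance about dependence.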
Likewise, the probability of the disjunction is surely in the interval P(A v B) = a + b − a × b = 1 − (1 − a) × (1 − b) = 1 − (1 − [0.2, 0.25]) × (1 − [0.1, 0.3]) = 1 − [0.75, 0.8] × [0.7, 0.9] = 1 − [0.525, 0.72] = [0.28, 0.475] if A and B are independent events. If they are not independent, the Fréchet inequality bounds the disjunction P(A v B) = env(max(a, b), min(1, a + b)) = env(max([0.2, 0.25], [0.1, 0.3]), min(1, [0.2, 0.25] + [0.1, 0.3])) = env([0.2, 0.3], [0.3, 0.55]) = [0.2, 0.55]. It is also possible to compute interval bounds on the conjunction or disjunction under other assumptions about the dependence between A and B. For instance, one might assume they are positively dependent, in which case the resulting interval is not as tight as the answer assuming independence but tighter than the answer given by the Fréchet inequality. Comparable calculations are used for other logical functions such as negation, exclusive disjunction, etc. When the Boolean expression to be evaluated becomes complex, it may be necessary to evaluate it using the methods of mathematical programming to get best-possible bounds on the expression. A similar problem arises in the case of probabilistic logic (see, for example, Gerla 1994). If the probabilities of the events are characterized by probability distributions or p-boxes rather than intervals, then analogous calculations can be done to obtain distributional or p-box results characterizing the probability of the top event. == Magnitude comparisons == The probability that an uncertain number represented by a p-box D is less than zero is the interval Pr(D < 0) = [F(0), F̅(0)], where F̅(0) is the left bound of the probability box D and F(0) is its right bound, both evaluated at zero. Two uncertain numbers represented by probability boxes may then be compared for numerical magnitude with the following encodings: A < B = Pr(A − B < 0), A > B = Pr(B − A < 0), A ≤ B = Pr(A − B ≤ 0), and A ≥ B = Pr(B − A ≤ 0).
Thus the probability that A is less than B is the same as the probability that their difference is less than zero, and this probability can be said to be the value of the expression A < B. Like arithmetic and logical operations, these magnitude comparisons generally depend on the stochastic dependence between A and B, and the subtraction in the encoding should reflect that dependence. If their dependence is unknown, the difference can be computed without making any assumption using the Fréchet operation. == Sampling-based computation == Some analysts use sampling-based approaches to computing probability bounds, including Monte Carlo simulation, Latin hypercube methods or importance sampling. These approaches cannot assure mathematical rigor in the result because such simulation methods are approximations, although their performance can generally be improved simply by increasing the number of replications in the simulation. Thus, unlike the analytical theorems or methods based on mathematical programming, sampling-based calculations usually cannot produce verified computations. However, sampling-based methods can be very useful in addressing a variety of problems which are computationally difficult to solve analytically or even to rigorously bound. One important example is the use of Cauchy-deviate sampling to avoid the curse of dimensionality in propagating interval uncertainty through high-dimensional problems. == Relationship to other uncertainty propagation approaches == PBA belongs to a class of methods that use imprecise probabilities to simultaneously represent aleatoric and epistemic uncertainties. PBA is a generalization of both interval analysis and probabilistic convolution such as is commonly implemented with Monte Carlo simulation. PBA is also closely related to robust Bayes analysis, which is sometimes called Bayesian sensitivity analysis. PBA is an alternative to second-order Monte Carlo simulation. 
== Applications == P-boxes and probability bounds analysis have been used in many applications spanning many disciplines in engineering and environmental science, including:

- Engineering design
- Expert elicitation
- Analysis of species sensitivity distributions
- Sensitivity analysis in aerospace engineering of the buckling load of the frontskirt of the Ariane 5 launcher
- ODE models of chemical reactor dynamics
- Pharmacokinetic variability of inhaled VOCs
- Groundwater modeling
- Bounding failure probability for series systems
- Heavy metal contamination in soil at an ironworks brownfield
- Uncertainty propagation for salinity risk models
- Power supply system safety assessment
- Contaminated land risk assessment
- Engineered systems for drinking water treatment
- Computing soil screening levels
- Human health and ecological risk analysis by the U.S. EPA of PCB contamination at the Housatonic River Superfund site
- Environmental assessment for the Calcasieu Estuary Superfund site
- Aerospace engineering for supersonic nozzle thrust
- Verification and validation in scientific computation for engineering problems
- Toxicity to small mammals of environmental mercury contamination
- Modeling travel time of pollution in groundwater
- Reliability analysis
- Endangered species assessment for reintroduction of Leadbeater's possum
- Exposure of insectivorous birds to an agricultural pesticide
- Climate change projections
- Waiting time in queuing systems
- Extinction risk analysis for spotted owl on the Olympic Peninsula
- Biosecurity against introduction of invasive species or agricultural pests
- Finite-element structural analysis
- Cost estimates
- Nuclear stockpile certification
- Fracking risks to water pollution

== See also == Probability box, Robust Bayes analysis, Imprecise probability, Second-order Monte Carlo simulation, Monte Carlo simulation, Interval analysis, Probability theory, Risk analysis == References == == Further references == Bernardini, Alberto; Tonon, Fulvio (2010).
Bounding Uncertainty in Civil Engineering: Theoretical Background. Berlin: Springer. ISBN 978-3-642-11189-1. Ferson, Scott (2002). RAMAS Risk Calc 4.0 Software : Risk Assessment with Uncertain Numbers. Boca Raton, Florida: Lewis Publishers. ISBN 978-1-56670-576-9. Gerla, G. (1994). "Inferences in Probability Logic". Artificial Intelligence. 70 (1–2): 33–52. doi:10.1016/0004-3702(94)90102-3. Oberkampf, William L.; Roy, Christopher J. (2010). Verification and Validation in Scientific Computing. New York: Cambridge University Press. ISBN 978-0-521-11360-1. == External links == Probability bounds analysis in environmental risk assessments Intervals and probability distributions Epistemic uncertainty project The Society for Imprecise Probability: Theories and Applications
|
Wikipedia:Problem of Apollonius#0
|
In Euclidean plane geometry, Apollonius's problem is to construct circles that are tangent to three given circles in a plane (Figure 1). Apollonius of Perga (c. 262 BC – c. 190 BC) posed and solved this famous problem in his work Ἐπαφαί (Epaphaí, "Tangencies"); this work has been lost, but a 4th-century AD report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them (Figure 2), a pair of solutions for each way to divide the three given circles into two subsets (there are 4 ways to divide a set of cardinality 3 into 2 parts). In the 16th century, Adriaan van Roomen solved the problem using intersecting hyperbolas, but this solution does not use only straightedge and compass constructions. François Viète found such a solution by exploiting limiting cases: any of the three given circles can be shrunk to zero radius (a point) or expanded to infinite radius (a line). Viète's approach, which uses simpler limiting cases to solve more complicated ones, is considered a plausible reconstruction of Apollonius' method. The method of van Roomen was simplified by Isaac Newton, who showed that Apollonius' problem is equivalent to finding a position from the differences of its distances to three known points. This has applications in navigation and positioning systems such as LORAN. Later mathematicians introduced algebraic methods, which transform a geometric problem into algebraic equations. These methods were simplified by exploiting symmetries inherent in the problem of Apollonius: for instance solution circles generically occur in pairs, with one solution enclosing the given circles that the other excludes (Figure 2). Joseph Diaz Gergonne used this symmetry to provide an elegant straightedge and compass solution, while other mathematicians used geometrical transformations such as reflection in a circle to simplify the configuration of the given circles.
These developments provide a geometrical setting for algebraic methods (using Lie sphere geometry) and a classification of solutions according to 33 essentially different configurations of the given circles. Apollonius' problem has stimulated much further work. Generalizations to three dimensions—constructing a sphere tangent to four given spheres—and beyond have been studied. The configuration of three mutually tangent circles has received particular attention. René Descartes gave a formula relating the radii of the solution circles and the given circles, now known as Descartes' theorem. Solving Apollonius' problem iteratively in this case leads to the Apollonian gasket, which is one of the earliest fractals to be described in print, and is important in number theory via Ford circles and the Hardy–Littlewood circle method. == Statement of the problem == The general statement of Apollonius' problem is to construct one or more circles that are tangent to three given objects in a plane, where an object may be a line, a point or a circle of any size. These objects may be arranged in any way and may cross one another; however, they are usually taken to be distinct, meaning that they do not coincide. Solutions to Apollonius' problem are sometimes called Apollonius circles, although the term is also used for other types of circles associated with Apollonius. The property of tangency is defined as follows. First, a point, line or circle is assumed to be tangent to itself; hence, if a given circle is already tangent to the other two given objects, it is counted as a solution to Apollonius' problem. Two distinct geometrical objects are said to intersect if they have a point in common. By definition, a point is tangent to a circle or a line if it intersects them, that is, if it lies on them; thus, two distinct points cannot be tangent. 
If the angle between lines or circles at an intersection point is zero, they are said to be tangent; the intersection point is called a tangent point or a point of tangency. (The word "tangent" derives from the Latin present participle, tangens, meaning "touching".) In practice, two distinct circles are tangent if they intersect at only one point; if they intersect at zero or two points, they are not tangent. The same holds true for a line and a circle. Two distinct lines cannot be tangent in the plane, although two parallel lines can be considered as tangent at a point at infinity in inversive geometry (see below). The solution circle may be either internally or externally tangent to each of the given circles. An external tangency is one where the two circles bend away from each other at their point of contact; they lie on opposite sides of the tangent line at that point, and they exclude one another. The distance between their centers equals the sum of their radii. By contrast, an internal tangency is one in which the two circles curve in the same way at their point of contact; the two circles lie on the same side of the tangent line, and one circle encloses the other. In this case, the distance between their centers equals the difference of their radii. As an illustration, in Figure 1, the pink solution circle is internally tangent to the medium-sized given black circle on the right, whereas it is externally tangent to the smallest and largest given circles on the left. Apollonius' problem can also be formulated as the problem of locating one or more points such that the differences of their distances to three given points equal three known values. Consider a solution circle of radius rs and three given circles of radii r1, r2 and r3. If the solution circle is externally tangent to all three given circles, the distances between the center of the solution circle and the centers of the given circles equal d1 = r1 + rs, d2 = r2 + rs and d3 = r3 + rs, respectively.
Therefore, differences in these distances are constants, such as d1 − d2 = r1 − r2; they depend only on the known radii of the given circles and not on the radius rs of the solution circle, which cancels out. This second formulation of Apollonius' problem can be generalized to internally tangent solution circles (for which the center-center distance equals the difference of radii), by changing the corresponding differences of distances to sums of distances, so that the solution-circle radius rs again cancels out. The re-formulation in terms of center-center distances is useful in the solutions below of Adriaan van Roomen and Isaac Newton, and also in hyperbolic positioning or trilateration, which is the task of locating a position from differences in distances to three known points. For example, navigation systems such as LORAN identify a receiver's position from the differences in arrival times of signals from three fixed positions, which correspond to the differences in distances to those transmitters. == History == A rich repertoire of geometrical and algebraic methods has been developed to solve Apollonius' problem, which has been called "the most famous of all" geometry problems. The original approach of Apollonius of Perga has been lost, but reconstructions have been offered by François Viète and others, based on the clues in the description by Pappus of Alexandria. The first new solution method was published in 1596 by Adriaan van Roomen, who identified the centers of the solution circles as the intersection points of two hyperbolas. Van Roomen's method was refined in 1687 by Isaac Newton in his Principia, and by John Casey in 1881. Although successful in solving Apollonius' problem, van Roomen's method has a drawback. A prized property in classical Euclidean geometry is the ability to solve problems using only a compass and a straightedge. Many constructions are impossible using only these tools, such as dividing an angle in three equal parts.
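The trilateration idea behind LORAN can be sketched numerically: given only the differences of distances from an unknown point to three fixed stations, the point can be recovered. The following minimal Python sketch uses a coarse-to-fine grid search; the station coordinates, the hidden point, and the search parameters are all invented for this illustration and are not part of any real navigation system.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Three fixed "stations" and a hidden point Z (values chosen for the example).
A, B, C = (0.0, 0.0), (6.0, 0.0), (0.0, 5.0)
Z_true = (2.0, 1.0)

# Only the *differences* of distances are assumed known (as in LORAN).
d12 = dist(Z_true, A) - dist(Z_true, B)
d13 = dist(Z_true, A) - dist(Z_true, C)

def residual(p):
    """Squared mismatch between measured and hypothesized distance differences."""
    r1 = dist(p, A) - dist(p, B) - d12
    r2 = dist(p, A) - dist(p, C) - d13
    return r1 * r1 + r2 * r2

# Coarse-to-fine grid search: repeatedly refine around the best grid point.
cx, cy, span = 0.0, 0.0, 10.0
for _ in range(8):
    best = min((residual((cx + i * span / 20, cy + j * span / 20)),
                cx + i * span / 20, cy + j * span / 20)
               for i in range(-20, 21) for j in range(-20, 21))
    _, cx, cy = best
    span *= 0.15

print(round(cx, 3), round(cy, 3))  # recovers a point matching the measured differences
```

A real system would intersect the two hyperbolas analytically (or by Newton iteration); the grid search is used here only because it is short and deterministic.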
However, many such "impossible" problems can be solved by intersecting curves such as hyperbolas, ellipses and parabolas (conic sections). For example, doubling the cube (the problem of constructing a cube of twice the volume of a given cube) cannot be done using only a straightedge and compass, but Menaechmus showed that the problem can be solved by using the intersections of two parabolas. Therefore, van Roomen's solution—which uses the intersection of two hyperbolas—did not determine if the problem satisfied the straightedge-and-compass property. Van Roomen's friend François Viète, who had urged van Roomen to work on Apollonius' problem in the first place, developed a method that used only compass and straightedge. Prior to Viète's solution, Regiomontanus doubted whether Apollonius' problem could be solved by straightedge and compass. Viète first solved some simple special cases of Apollonius' problem, such as finding a circle that passes through three given points which has only one solution if the points are distinct; he then built up to solving more complicated special cases, in some cases by shrinking or swelling the given circles. According to the 4th-century report of Pappus, Apollonius' own book on this problem—entitled Ἐπαφαί (Epaphaí, "Tangencies"; Latin: De tactionibus, De contactibus)—followed a similar progressive approach. Hence, Viète's solution is considered to be a plausible reconstruction of Apollonius' solution, although other reconstructions have been published independently by three different authors. Several other geometrical solutions to Apollonius' problem were developed in the 19th century. The most notable solutions are those of Jean-Victor Poncelet (1811) and of Joseph Diaz Gergonne (1814). Whereas Poncelet's proof relies on homothetic centers of circles and the power of a point theorem, Gergonne's method exploits the conjugate relation between lines and their poles in a circle. 
Methods using circle inversion were pioneered by Julius Petersen in 1879; one example is the annular solution method of H. S. M. Coxeter. Another approach uses Lie sphere geometry, which was developed by Sophus Lie. Algebraic solutions to Apollonius' problem were pioneered in the 17th century by René Descartes and Princess Elisabeth of Bohemia, although their solutions were rather complex. Practical algebraic methods were developed in the late 18th and 19th centuries by several mathematicians, including Leonhard Euler, Nicolas Fuss, Carl Friedrich Gauss, Lazare Carnot, and Augustin Louis Cauchy. == Solution methods == === Intersecting hyperbolas === The solution of Adriaan van Roomen (1596) is based on the intersection of two hyperbolas. Let the given circles be denoted as C1, C2 and C3. Van Roomen solved the general problem by solving a simpler problem, that of finding the circles that are tangent to two given circles, such as C1 and C2. He noted that the center of a circle tangent to both given circles must lie on a hyperbola whose foci are the centers of the given circles. To understand this, let the radii of the solution circle and the two given circles be denoted as rs, r1 and r2, respectively (Figure 3). The distance d1 between the centers of the solution circle and C1 is either rs + r1 or rs − r1, depending on whether these circles are chosen to be externally or internally tangent, respectively. Similarly, the distance d2 between the centers of the solution circle and C2 is either rs + r2 or rs − r2, again depending on their chosen tangency. Thus, the difference d1 − d2 between these distances is always a constant that is independent of rs. This property, of having a fixed difference between the distances to the foci, characterizes hyperbolas, so the possible centers of the solution circle lie on a hyperbola.
A second hyperbola can be drawn for the pair of given circles C2 and C3, where the internal or external tangency of the solution and C2 should be chosen consistently with that of the first hyperbola. An intersection of these two hyperbolas (if any) gives the center of a solution circle that has the chosen internal and external tangencies to the three given circles. The full set of solutions to Apollonius' problem can be found by considering all possible combinations of internal and external tangency of the solution circle to the three given circles. Isaac Newton (1687) refined van Roomen's solution, so that the solution-circle centers were located at the intersections of a line with a circle. Newton formulates Apollonius' problem as a problem in trilateration: to locate a point Z from three given points A, B and C, such that the differences in distances from Z to the three given points have known values. These four points correspond to the center of the solution circle (Z) and the centers of the three given circles (A, B and C). Instead of solving for the two hyperbolas, Newton constructs their directrix lines. For any hyperbola, the ratio of distances from a point Z to a focus A and to the directrix is a fixed constant called the eccentricity. The two directrices intersect at a point T, and from their two known distance ratios, Newton constructs a line passing through T on which Z must lie. However, the ratio of distances TZ/TA is also known; hence, Z also lies on a known circle, since Apollonius had shown that a circle can be defined as the set of points that have a given ratio of distances to two fixed points. (As an aside, this definition is the basis of bipolar coordinates.) Thus, the solutions to Apollonius' problem are the intersections of a line with a circle.
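The circle of Apollonius invoked in Newton's construction — the locus of points with a fixed ratio of distances to two fixed points — can be checked numerically. For foci A = (0, 0) and B = (d, 0) and ratio k ≠ 1, the locus is the circle with center (k²d/(k² − 1), 0) and radius kd/|k² − 1| (a standard closed form; the sample values below are chosen only for illustration):

```python
import math

A, B = (0.0, 0.0), (3.0, 0.0)
k = 2.0  # required ratio |PA| / |PB|

# Center and radius of the Apollonius circle for these foci and ratio.
cx = k * k * B[0] / (k * k - 1.0)   # 4.0
r = k * B[0] / abs(k * k - 1.0)     # 2.0

# Sample points on that circle and confirm the distance ratio is k everywhere.
for t in range(12):
    ang = 2.0 * math.pi * t / 12
    p = (cx + r * math.cos(ang), r * math.sin(ang))
    ratio = math.hypot(p[0] - A[0], p[1] - A[1]) / math.hypot(p[0] - B[0], p[1] - B[1])
    assert abs(ratio - k) < 1e-9

print(cx, r)  # 4.0 2.0
```

The two diametrically opposite points on the x-axis make the check easy by hand: x = 2 gives 2/1 = 2 and x = 6 gives 6/3 = 2.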
=== Viète's reconstruction === As described below, Apollonius' problem has ten special cases, depending on the nature of the three given objects, which may be a circle (C), line (L) or point (P). By custom, these ten cases are distinguished by three letter codes such as CCP. Viète solved all ten of these cases using only compass and straightedge constructions, and used the solutions of simpler cases to solve the more complex cases. Viète began by solving the PPP case (three points) following the method of Euclid in his Elements. From this, he derived a lemma corresponding to the power of a point theorem, which he used to solve the LPP case (a line and two points). Following Euclid a second time, Viète solved the LLL case (three lines) using the angle bisectors. He then derived a lemma for constructing the line perpendicular to an angle bisector that passes through a point, which he used to solve the LLP problem (two lines and a point). This accounts for the first four cases of Apollonius' problem, those that do not involve circles. To solve the remaining problems, Viète exploited the fact that the given circles and the solution circle may be re-sized in tandem while preserving their tangencies (Figure 4). If the solution-circle radius is changed by an amount Δr, the radius of its internally tangent given circles must be likewise changed by Δr, whereas the radius of its externally tangent given circles must be changed by −Δr. Thus, as the solution circle swells, the internally tangent given circles must swell in tandem, whereas the externally tangent given circles must shrink, to maintain their tangencies. Viète used this approach to shrink one of the given circles to a point, thus reducing the problem to a simpler, already solved case. He first solved the CLL case (a circle and two lines) by shrinking the circle into a point, rendering it an LLP case. He then solved the CLP case (a circle, a line and a point) using three lemmas. 
Again shrinking one circle to a point, Viète transformed the CCL case into a CLP case. He then solved the CPP case (a circle and two points) and the CCP case (two circles and a point), the latter case by two lemmas. Finally, Viète solved the general CCC case (three circles) by shrinking one circle to a point, rendering it a CCP case. === Algebraic solutions === Apollonius' problem can be framed as a system of three equations for the center and radius of the solution circle. Since the three given circles and any solution circle must lie in the same plane, their positions can be specified in terms of the (x, y) coordinates of their centers. For example, the center positions of the three given circles may be written as (x1, y1), (x2, y2) and (x3, y3), whereas that of a solution circle can be written as (xs, ys). Similarly, the radii of the given circles and a solution circle can be written as r1, r2, r3 and rs, respectively. The requirement that a solution circle must exactly touch each of the three given circles can be expressed as three coupled quadratic equations for xs, ys and rs: ( x s − x 1 ) 2 + ( y s − y 1 ) 2 = ( r s − s 1 r 1 ) 2 {\displaystyle \left(x_{s}-x_{1}\right)^{2}+\left(y_{s}-y_{1}\right)^{2}=\left(r_{s}-s_{1}r_{1}\right)^{2}} ( x s − x 2 ) 2 + ( y s − y 2 ) 2 = ( r s − s 2 r 2 ) 2 {\displaystyle \left(x_{s}-x_{2}\right)^{2}+\left(y_{s}-y_{2}\right)^{2}=\left(r_{s}-s_{2}r_{2}\right)^{2}} ( x s − x 3 ) 2 + ( y s − y 3 ) 2 = ( r s − s 3 r 3 ) 2 . {\displaystyle \left(x_{s}-x_{3}\right)^{2}+\left(y_{s}-y_{3}\right)^{2}=\left(r_{s}-s_{3}r_{3}\right)^{2}.} The three numbers s1, s2 and s3 on the right-hand side, called signs, may equal ±1, and specify whether the desired solution circle should touch the corresponding given circle internally (s = 1) or externally (s = −1). 
For example, in Figures 1 and 4, the pink solution is internally tangent to the medium-sized given circle on the right and externally tangent to the smallest and largest given circles on the left; if the given circles are ordered by radius, the signs for this solution are "− + −". Since the three signs may be chosen independently, there are eight possible sets of equations (2 × 2 × 2 = 8), each set corresponding to one of the eight types of solution circles. The general system of three equations may be solved by the method of resultants. When multiplied out, all three equations have xs2 + ys2 on the left-hand side, and rs2 on the right-hand side. Subtracting one equation from another eliminates these quadratic terms; the remaining linear terms may be re-arranged to yield formulae for the coordinates xs and ys x s = M + N r s {\displaystyle x_{s}=M+Nr_{s}} y s = P + Q r s {\displaystyle y_{s}=P+Qr_{s}} where M, N, P and Q are known functions of the given circles and the choice of signs. Substitution of these formulae into one of the initial three equations gives a quadratic equation for rs, which can be solved by the quadratic formula. Substitution of the numerical value of rs into the linear formulae yields the corresponding values of xs and ys. The signs s1, s2 and s3 on the right-hand sides of the equations may be chosen in eight possible ways, and each choice of signs gives up to two solutions, since the equation for rs is quadratic. This might suggest (incorrectly) that there are up to sixteen solutions of Apollonius' problem. However, due to a symmetry of the equations, if (rs, xs, ys) is a solution, with signs si, then so is (−rs, xs, ys), with opposite signs −si, which represents the same solution circle. Therefore, Apollonius' problem has at most eight independent solutions (Figure 2). One way to avoid this double-counting is to consider only solution circles with non-negative radius. 
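The elimination just described translates directly into code. The sketch below (the function name and sample circles are my own; it handles only the generic case where the three centers are not collinear) solves one choice of signs by reducing to the linear formulas xs = M + N·rs, ys = P + Q·rs and a quadratic in rs:

```python
import math

def apollonius(circles, signs):
    """Solve Apollonius' problem algebraically for one choice of signs.

    circles: three (x, y, r) triples; signs: three values +/-1
    (s = +1 for internal tangency, s = -1 for external, as in the text).
    Returns solution circles (xs, ys, rs) with rs >= 0.
    """
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = circles
    s1, s2, s3 = signs

    # Subtracting pairs of the quadratic tangency equations leaves two
    # linear equations of the form  a*xs + b*ys = c + d*rs.
    def linear(xi, yi, ri, si, xj, yj, rj, sj):
        a = 2.0 * (xj - xi)
        b = 2.0 * (yj - yi)
        d = 2.0 * (sj * rj - si * ri)
        c = (ri * ri - rj * rj) - (xi * xi - xj * xj) - (yi * yi - yj * yj)
        return a, b, c, d

    a1, b1, c1, d1 = linear(x1, y1, r1, s1, x2, y2, r2, s2)
    a2, b2, c2, d2 = linear(x1, y1, r1, s1, x3, y3, r3, s3)

    det = a1 * b2 - a2 * b1          # vanishes iff the three centers are collinear
    M = (c1 * b2 - c2 * b1) / det    # xs = M + N*rs
    N = (d1 * b2 - d2 * b1) / det
    P = (a1 * c2 - a2 * c1) / det    # ys = P + Q*rs
    Q = (a1 * d2 - a2 * d1) / det

    # Substitute back into the first tangency equation: a quadratic in rs.
    e, f = M - x1, P - y1
    A = N * N + Q * Q - 1.0
    B = 2.0 * (e * N + f * Q + s1 * r1)
    C = e * e + f * f - r1 * r1
    disc = B * B - 4.0 * A * C
    sols = []
    if disc >= 0.0:
        for rs in ((-B + math.sqrt(disc)) / (2.0 * A),
                   (-B - math.sqrt(disc)) / (2.0 * A)):
            if rs >= 0.0:   # negative roots repeat solutions with opposite signs
                sols.append((M + N * rs, P + Q * rs, rs))
    return sols

# Three mutually tangent unit circles; all tangencies external (s = -1).
given = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, math.sqrt(3.0), 1.0)]
for xs, ys, rs in apollonius(given, (-1, -1, -1)):
    print(round(xs, 4), round(ys, 4), round(rs, 4))  # 1.0 0.5774 0.1547
```

For this symmetric configuration the external-tangency choice yields the small circle nested between the three given circles (radius 2/√3 − 1 ≈ 0.1547 at the centroid), in agreement with Descartes' theorem; choosing all signs +1 yields the large enclosing circle instead.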
The two roots of any quadratic equation may be of three possible types: two different real numbers, two identical real numbers (i.e., a degenerate double root), or a pair of complex conjugate roots. The first case corresponds to the usual situation; each pair of roots corresponds to a pair of solutions that are related by circle inversion, as described below (Figure 6). In the second case, both roots are identical, corresponding to a solution circle that transforms into itself under inversion. In this case, one of the given circles is itself a solution to the Apollonius problem, and the number of distinct solutions is reduced by one. The third case of complex conjugate radii does not correspond to a geometrically possible solution for Apollonius' problem, since a solution circle cannot have an imaginary radius; therefore, the number of solutions is reduced by two. Apollonius' problem cannot have seven solutions, although it may have any other number of solutions from zero to eight. === Lie sphere geometry === The same algebraic equations can be derived in the context of Lie sphere geometry. That geometry represents circles, lines and points in a unified way, as a five-dimensional vector X = (v, cx, cy, w, sr), where c = (cx, cy) is the center of the circle, and r is its (non-negative) radius. If r is not zero, the sign s may be positive or negative; for visualization, s represents the orientation of the circle, with counterclockwise circles having a positive s and clockwise circles having a negative s. The parameter w is zero for a straight line, and one otherwise. In this five-dimensional world, there is a bilinear product similar to the dot product: ( X 1 ∣ X 2 ) := v 1 w 2 + v 2 w 1 + c 1 ⋅ c 2 − s 1 s 2 r 1 r 2 . {\displaystyle \left(X_{1}\mid X_{2}\right):=v_{1}w_{2}+v_{2}w_{1}+\mathbf {c} _{1}\cdot \mathbf {c} _{2}-s_{1}s_{2}r_{1}r_{2}.} The Lie quadric is defined as those vectors whose product with themselves (their square norm) is zero, (X|X) = 0. 
Let X1 and X2 be two vectors belonging to this quadric; the norm of their difference equals ( X 1 − X 2 ∣ X 1 − X 2 ) = 2 ( v 1 − v 2 ) ( w 1 − w 2 ) + ( c 1 − c 2 ) ⋅ ( c 1 − c 2 ) − ( s 1 r 1 − s 2 r 2 ) 2 . {\displaystyle \left(X_{1}-X_{2}\mid X_{1}-X_{2}\right)=2\left(v_{1}-v_{2}\right)\left(w_{1}-w_{2}\right)+\left(\mathbf {c} _{1}-\mathbf {c} _{2}\right)\cdot \left(\mathbf {c} _{1}-\mathbf {c} _{2}\right)-\left(s_{1}r_{1}-s_{2}r_{2}\right)^{2}.} The product distributes over addition and subtraction (more precisely, it is bilinear): ( X 1 − X 2 ∣ X 1 − X 2 ) = ( X 1 ∣ X 1 ) − 2 ( X 1 ∣ X 2 ) + ( X 2 ∣ X 2 ) . {\displaystyle \left(X_{1}-X_{2}\mid X_{1}-X_{2}\right)=\left(X_{1}\mid X_{1}\right)-2\left(X_{1}\mid X_{2}\right)+\left(X_{2}\mid X_{2}\right).} Since (X1|X1) = (X2|X2) = 0 (both belong to the Lie quadric) and since w1 = w2 = 1 for circles, the product of any two such vectors on the quadric equals − 2 ( X 1 ∣ X 2 ) = | c 1 − c 2 | 2 − ( s 1 r 1 − s 2 r 2 ) 2 . {\displaystyle -2\left(X_{1}\mid X_{2}\right)=\left|\mathbf {c} _{1}-\mathbf {c} _{2}\right|^{2}-\left(s_{1}r_{1}-s_{2}r_{2}\right)^{2}.} where the vertical bars sandwiching c1 − c2 represent the length of that difference vector, i.e., the Euclidean norm. This formula shows that if two quadric vectors X1 and X2 are orthogonal (perpendicular) to one another—that is, if (X1|X2) = 0—then their corresponding circles are tangent. For if the two signs s1 and s2 are the same (i.e. the circles have the same "orientation"), the circles are internally tangent; the distance between their centers equals the difference in the radii | c 1 − c 2 | 2 = ( r 1 − r 2 ) 2 . {\displaystyle \left|\mathbf {c} _{1}-\mathbf {c} _{2}\right|^{2}=\left(r_{1}-r_{2}\right)^{2}.} Conversely, if the two signs s1 and s2 are different (i.e. the circles have opposite "orientations"), the circles are externally tangent; the distance between their centers equals the sum of the radii | c 1 − c 2 | 2 = ( r 1 + r 2 ) 2 . 
{\displaystyle \left|\mathbf {c} _{1}-\mathbf {c} _{2}\right|^{2}=\left(r_{1}+r_{2}\right)^{2}.} Therefore, Apollonius' problem can be re-stated in Lie geometry as a problem of finding perpendicular vectors on the Lie quadric; specifically, the goal is to identify solution vectors Xsol that belong to the Lie quadric and are also orthogonal (perpendicular) to the vectors X1, X2 and X3 corresponding to the given circles. ( X s o l ∣ X s o l ) = ( X s o l ∣ X 1 ) = ( X s o l ∣ X 2 ) = ( X s o l ∣ X 3 ) = 0 {\displaystyle \left(X_{\mathrm {sol} }\mid X_{\mathrm {sol} }\right)=\left(X_{\mathrm {sol} }\mid X_{1}\right)=\left(X_{\mathrm {sol} }\mid X_{2}\right)=\left(X_{\mathrm {sol} }\mid X_{3}\right)=0} The advantage of this re-statement is that one can exploit theorems from linear algebra on the maximum number of linearly independent, simultaneously perpendicular vectors. This gives another way to calculate the maximum number of solutions and extend the theorem to higher-dimensional spaces. === Inversive methods === A natural setting for the problem of Apollonius is inversive geometry. The basic strategy of inversive methods is to transform a given Apollonius problem into another Apollonius problem that is simpler to solve; the solutions to the original problem are found from the solutions of the transformed problem by undoing the transformation. Candidate transformations must change one Apollonius problem into another; therefore, they must transform the given points, circles and lines to other points, circles and lines, and no other shapes. Circle inversion has this property and allows the center and radius of the inversion circle to be chosen judiciously. Other candidates include the Euclidean plane isometries; however, they do not simplify the problem, since they merely shift, rotate, and mirror the original problem.
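The Lie-vector encoding and the tangency criterion can be checked directly. In the sketch below (helper names are my own), a circle with center c, radius r, orientation s and w = 1 lies on the Lie quadric when v = (r² − |c|²)/2, since then (X|X) = 2vw + |c|² − (sr)² = 0; tangency of two circles is exactly (X1|X2) = 0:

```python
def lie_vector(cx, cy, r, s):
    """Lie vector X = (v, cx, cy, w, s*r) of an oriented circle, with w = 1
    and v chosen so that X lies on the Lie quadric, (X|X) = 0."""
    v = (r * r - (cx * cx + cy * cy)) / 2.0
    return (v, cx, cy, 1.0, s * r)

def lie_product(X1, X2):
    """The bilinear product (X1|X2) = v1*w2 + v2*w1 + c1.c2 - (s1 r1)(s2 r2)."""
    v1, cx1, cy1, w1, sr1 = X1
    v2, cx2, cy2, w2, sr2 = X2
    return v1 * w2 + v2 * w1 + cx1 * cx2 + cy1 * cy2 - sr1 * sr2

a = lie_vector(0.0, 0.0, 1.0, +1)
b = lie_vector(3.0, 0.0, 2.0, -1)    # externally tangent to a, opposite orientation
big = lie_vector(0.0, 0.0, 3.0, +1)
c = lie_vector(1.0, 0.0, 2.0, +1)    # internally tangent to big, same orientation
far = lie_vector(5.0, 0.0, 1.0, +1)  # not tangent to a

print(lie_product(a, a))    # 0.0   (a lies on the Lie quadric)
print(lie_product(a, b))    # 0.0   (tangent circles are orthogonal vectors)
print(lie_product(big, c))  # 0.0
print(lie_product(a, far))  # -12.5 (non-tangent)
```

Note that external tangency pairs opposite orientations and internal tangency pairs equal ones, matching the two sign cases in the text.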
Inversion in a circle with center O and radius R consists of the following operation (Figure 5): every point P is mapped into a new point P' such that O, P, and P' are collinear, and the product of the distances of P and P' to the center O equal the radius R squared O P ¯ ⋅ O P ′ ¯ = R 2 . {\displaystyle {\overline {\mathbf {OP} }}\cdot {\overline {\mathbf {OP^{\prime }} }}=R^{2}.} Thus, if P lies outside the circle, then P' lies within, and vice versa. When P is the same as O, the inversion is said to send P to infinity. (In complex analysis, "infinity" is defined in terms of the Riemann sphere.) Inversion has the useful property that lines and circles are always transformed into lines and circles, and points are always transformed into points. Circles are generally transformed into other circles under inversion; however, if a circle passes through the center of the inversion circle, it is transformed into a straight line, and vice versa. Importantly, if a circle crosses the circle of inversion at right angles (intersects perpendicularly), it is left unchanged by the inversion; it is transformed into itself. Circle inversions correspond to a subset of Möbius transformations on the Riemann sphere. The planar Apollonius problem can be transferred to the sphere by an inverse stereographic projection; hence, solutions of the planar Apollonius problem also pertain to its counterpart on the sphere. Other inversive solutions to the planar problem are possible besides the common ones described below. === Pairs of solutions by inversion === Solutions to Apollonius's problem generally occur in pairs; for each solution circle, there is a conjugate solution circle (Figure 6). One solution circle excludes the given circles that are enclosed by its conjugate solution, and vice versa. 
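The defining relation OP · OP′ = R² gives a one-line formula for inverting a point, and a circle not passing through the center O maps to another circle by a standard closed form. A small sketch (function names are my own; the degenerate case of a circle through O, which maps to a line, is deliberately excluded):

```python
def invert_point(p, O, R):
    """Invert point p in the circle with center O and radius R (p != O)."""
    dx, dy = p[0] - O[0], p[1] - O[1]
    k = R * R / (dx * dx + dy * dy)   # so that |OP| * |OP'| = R^2
    return (O[0] + k * dx, O[1] + k * dy)

def invert_circle(c, r, O, R):
    """Image of the circle (center c, radius r) not passing through O."""
    dx, dy = c[0] - O[0], c[1] - O[1]
    t = R * R / (dx * dx + dy * dy - r * r)   # undefined if the circle passes through O
    return ((O[0] + t * dx, O[1] + t * dy), abs(t) * r)

# Unit inversion circle at the origin; invert the circle of radius 1 at (3, 0).
c_img, r_img = invert_circle((3.0, 0.0), 1.0, (0.0, 0.0), 1.0)
print(c_img, r_img)  # (0.375, 0.0) 0.125 -- the x-interval [2, 4] maps to [1/4, 1/2]
```

The printed example can be verified by inverting the two axis points: x = 2 maps to 1/2 and x = 4 to 1/4, so the image circle spans [1/4, 1/2] with center 3/8 and radius 1/8.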
For example, in Figure 6, one solution circle (pink, upper left) encloses two given circles (black), but excludes a third; conversely, its conjugate solution (also pink, lower right) encloses that third given circle, but excludes the other two. The two conjugate solution circles are related by inversion, by the following argument. In general, any three distinct circles have a unique circle—the radical circle—that intersects all of them perpendicularly; the center of that circle is the radical center of the three circles. For illustration, the orange circle in Figure 6 crosses the black given circles at right angles. Inversion in the radical circle leaves the given circles unchanged, but transforms the two conjugate pink solution circles into one another. Under the same inversion, the corresponding points of tangency of the two solution circles are transformed into one another; for illustration, in Figure 6, the two blue points lying on each green line are transformed into one another. Hence, the lines connecting these conjugate tangent points are invariant under the inversion; therefore, they must pass through the center of inversion, which is the radical center (green lines intersecting at the orange dot in Figure 6). ==== Inversion to an annulus ==== If two of the three given circles do not intersect, a center of inversion can be chosen so that those two given circles become concentric. Under this inversion, the solution circles must fall within the annulus between the two concentric circles. Therefore, they belong to two one-parameter families. In the first family (Figure 7), the solutions do not enclose the inner concentric circle, but rather revolve like ball bearings in the annulus. In the second family (Figure 8), the solution circles enclose the inner concentric circle. There are generally four solutions for each family, yielding eight possible solutions, consistent with the algebraic solution. 
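The radical center used in this inversion argument is straightforward to compute: the power of a point with respect to each circle is equal there, and each pairwise equality is linear. A minimal sketch (function names and the sample circles are my own):

```python
def radical_center(c1, c2, c3):
    """Point with equal power with respect to all three circles,
    each given as (x, y, r); centers must not be collinear."""
    def axis(a, b):
        # Power equality |p-ca|^2 - ra^2 = |p-cb|^2 - rb^2 is linear in p.
        A = 2.0 * (b[0] - a[0])
        B = 2.0 * (b[1] - a[1])
        C = (b[0]**2 + b[1]**2 - b[2]**2) - (a[0]**2 + a[1]**2 - a[2]**2)
        return A, B, C               # the radical axis  A*x + B*y = C
    A1, B1, C1 = axis(c1, c2)
    A2, B2, C2 = axis(c1, c3)
    det = A1 * B2 - A2 * B1          # zero iff the centers are collinear
    return ((C1 * B2 - C2 * B1) / det, (A1 * C2 - A2 * C1) / det)

def power(p, c):
    """Power of point p with respect to circle c = (x, y, r)."""
    return (p[0] - c[0])**2 + (p[1] - c[1])**2 - c[2]**2

circles = [(0.0, 0.0, 1.0), (4.0, 0.0, 1.0), (2.0, 3.0, 1.0)]
G = radical_center(*circles)
print(G, [round(power(G, c), 6) for c in circles])  # equal powers at G
```

When the common power is positive, its square root is the radius of the radical circle that cuts all three given circles at right angles.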
When two of the given circles are concentric, Apollonius's problem can be solved easily using a method of Gauss. The radii of the three given circles are known, as is the distance dnon from the common concentric center to the non-concentric circle (Figure 7). The solution circle can be determined from its radius rs, the angle θ, and the distances ds and dT from its center to the common concentric center and the center of the non-concentric circle, respectively. The radius and distance ds are known (Figure 7), and the distance dT = rs ± rnon, depending on whether the solution circle is internally or externally tangent to the non-concentric circle. Therefore, by the law of cosines, cos θ = d s 2 + d n o n 2 − d T 2 2 d s d n o n ≡ C ± . {\displaystyle \cos \theta ={\frac {d_{\mathrm {s} }^{2}+d_{\mathrm {non} }^{2}-d_{\mathrm {T} }^{2}}{2d_{\mathrm {s} }d_{\mathrm {non} }}}\equiv C_{\pm }.} Here, a new constant C has been defined for brevity, with the subscript indicating whether the solution is externally or internally tangent. A simple trigonometric rearrangement yields the four solutions θ = ± 2 arctan ( 1 − C 1 + C ) . {\displaystyle \theta =\pm 2\arctan \left({\sqrt {\frac {1-C}{1+C}}}\right).} This formula represents four solutions, corresponding to the two choices of the sign of θ, and the two choices for C. The remaining four solutions can be obtained by the same method, using the substitutions for rs and ds indicated in Figure 8. Thus, all eight solutions of the general Apollonius problem can be found by this method. Any initial two disjoint given circles can be rendered concentric as follows. The radical axis of the two given circles is constructed; choosing two arbitrary points P and Q on this radical axis, two circles can be constructed that are centered on P and Q and that intersect the two given circles orthogonally. These two constructed circles intersect each other in two points. 
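Gauss's law-of-cosines step can be sketched for the first family (Figure 7). The concrete numbers below are invented for the example: concentric given circles of radii 1 and 3 about the origin, so that rs = (3 − 1)/2 = 1 and ds = (3 + 1)/2 = 2, and a non-concentric circle of radius 0.5 centered a unit distance from the common center:

```python
import math

rs, ds = 1.0, 2.0        # solution radius and center distance in the annulus
d_non, r_non = 1.0, 0.5  # non-concentric given circle (example values)

for tangency in (+1, -1):                 # +1: external, -1: internal tangency
    dT = rs + tangency * r_non            # center distance to the non-concentric circle
    C = (ds**2 + d_non**2 - dT**2) / (2 * ds * d_non)   # law of cosines
    if abs(C) <= 1.0:                     # otherwise no solution of this type
        theta = 2.0 * math.atan(math.sqrt((1.0 - C) / (1.0 + C)))
        for th in (theta, -theta):
            center = (ds * math.cos(th), ds * math.sin(th))
            # Gap between center distance and tangency distance; 0 confirms tangency.
            gap = math.hypot(center[0] - d_non, center[1]) - dT
            print(round(th, 4), round(gap, 12))
```

The half-angle identity makes 2·arctan(√((1 − C)/(1 + C))) equal to arccos(C), so the arctan form in the text is just an explicit construction of the angle; in this example only the externally tangent pair exists (the internal case gives C > 1), and the remaining family uses the substituted rs and ds of Figure 8.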
Inversion in one such intersection point F renders the constructed circles into straight lines emanating from F and the two given circles into concentric circles, with the third given circle becoming another circle (in general). This follows because the system of circles is equivalent to a set of Apollonian circles, forming a bipolar coordinate system. ==== Resizing and inversion ==== The usefulness of inversion can be increased significantly by resizing. As noted in Viète's reconstruction, the three given circles and the solution circle can be resized in tandem while preserving their tangencies. Thus, the initial Apollonius problem is transformed into another problem that may be easier to solve. For example, the four circles can be resized so that one given circle is shrunk to a point; alternatively, two given circles can often be resized so that they are tangent to one another. Thirdly, given circles that intersect can be resized so that they become non-intersecting, after which the method for inverting to an annulus can be applied. In all such cases, the solution of the original Apollonius problem is obtained from the solution of the transformed problem by undoing the resizing and inversion. ===== Shrinking one given circle to a point ===== In the first approach, the given circles are shrunk or swelled (appropriately to their tangency) until one given circle is shrunk to a point P. In that case, Apollonius' problem degenerates to the CCP limiting case, which is the problem of finding a solution circle tangent to the two remaining given circles that passes through the point P. Inversion in a circle centered on P transforms the two given circles into new circles, and the solution circle into a line. Therefore, the transformed solution is a line that is tangent to the two transformed given circles. There are four such solution lines, which may be constructed from the external and internal homothetic centers of the two circles. 
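The homothetic (similitude) centers used in this construction have simple closed forms: the external center divides the line of centers externally in the ratio of the radii, the internal one internally. A brief sketch (function name and sample circles are my own; the external center is at infinity when the radii are equal):

```python
def homothetic_centers(c1, r1, c2, r2):
    """External and internal centers of similitude of two circles
    (r1 != r2 assumed, so the external center is finite)."""
    ext = ((r2 * c1[0] - r1 * c2[0]) / (r2 - r1),
           (r2 * c1[1] - r1 * c2[1]) / (r2 - r1))
    inn = ((r2 * c1[0] + r1 * c2[0]) / (r1 + r2),
           (r2 * c1[1] + r1 * c2[1]) / (r1 + r2))
    return ext, inn

ext, inn = homothetic_centers((0.0, 0.0), 1.0, (4.0, 0.0), 2.0)
print(ext, inn)  # external center (-4, 0), internal center (4/3, 0)

# Sanity check: the homothety centered at ext with ratio r2/r1 maps c1 to c2.
k = 2.0 / 1.0
mapped = (ext[0] + k * (0.0 - ext[0]), ext[1] + k * (0.0 - ext[1]))
print(mapped)  # (4.0, 0.0) -- the center of the second circle
```

The four tangent lines in the CCP limiting case pass through these two points (two lines through each, when they exist).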
Re-inversion in P and undoing the resizing transforms such a solution line into the desired solution circle of the original Apollonius problem. All eight general solutions can be obtained by shrinking and swelling the circles according to the differing internal and external tangencies of each solution; however, different given circles may be shrunk to a point for different solutions. ===== Resizing two given circles to tangency ===== In the second approach, the radii of the given circles are modified appropriately by an amount Δr so that two of them are tangential (touching). Their point of tangency is chosen as the center of inversion in a circle that intersects each of the two touching circles in two places. Upon inversion, the touching circles become two parallel lines: Their only point of intersection is sent to infinity under inversion, so they cannot meet. The same inversion transforms the third circle into another circle. The solution of the inverted problem must either be (1) a straight line parallel to the two given parallel lines and tangent to the transformed third given circle; or (2) a circle of constant radius that is tangent to the two given parallel lines and the transformed given circle. Re-inversion and adjusting the radii of all circles by Δr produces a solution circle tangent to the original three circles. === Gergonne's solution === Gergonne's approach is to consider the solution circles in pairs. Let a pair of solution circles be denoted as CA and CB (the pink circles in Figure 6), and let their tangent points with the three given circles be denoted as A1, A2, A3, and B1, B2, B3, respectively. Gergonne's solution aims to locate these six points, and thus solve for the two solution circles. Gergonne's insight was that if a line L1 could be constructed such that A1 and B1 were guaranteed to fall on it, those two points could be identified as the intersection points of L1 with the given circle C1 (Figure 6). 
The remaining four tangent points would be located similarly, by finding lines L2 and L3 that contained A2 and B2, and A3 and B3, respectively. To construct a line such as L1, two points must be identified that lie on it; but these points need not be the tangent points. Gergonne was able to identify two other points for each of the three lines. One of the two points has already been identified: the radical center G lies on all three lines (Figure 6). To locate a second point on the lines L1, L2 and L3, Gergonne noted a reciprocal relationship between those lines and the radical axis R of the solution circles, CA and CB. To understand this reciprocal relationship, consider the two tangent lines to the circle C1 drawn at its tangent points A1 and B1 with the solution circles; the intersection of these tangent lines is the pole point of L1 in C1. Since the distances from that pole point to the tangent points A1 and B1 are equal, this pole point must also lie on the radical axis R of the solution circles, by definition (Figure 9). The relationship between pole points and their polar lines is reciprocal; if the pole of L1 in C1 lies on R, the pole of R in C1 must conversely lie on L1. Thus, if we can construct R, we can find its pole P1 in C1, giving the needed second point on L1 (Figure 10). Gergonne found the radical axis R of the unknown solution circles as follows. Any pair of circles has two centers of similarity; these two points are the two possible intersections of two tangent lines to the two circles. Therefore, the three given circles have six centers of similarity, two for each distinct pair of given circles. Remarkably, these six points lie on four lines, three points on each line; moreover, each line corresponds to the radical axis of a potential pair of solution circles. To show this, Gergonne considered lines through corresponding points of tangency on two of the given circles, e.g., the line defined by A1/A2 and the line defined by B1/B2. 
Let X3 be a center of similitude for the two circles C1 and C2; then, A1/A2 and B1/B2 are pairs of antihomologous points, and their lines intersect at X3. It follows, therefore, that the products of distances are equal X 3 A 1 ¯ ⋅ X 3 A 2 ¯ = X 3 B 1 ¯ ⋅ X 3 B 2 ¯ {\displaystyle {\overline {X_{3}A_{1}}}\cdot {\overline {X_{3}A_{2}}}={\overline {X_{3}B_{1}}}\cdot {\overline {X_{3}B_{2}}}} which implies that X3 lies on the radical axis of the two solution circles. The same argument can be applied to the other pairs of circles, so that three centers of similitude for the given three circles must lie on the radical axes of pairs of solution circles. In summary, the desired line L1 is defined by two points: the radical center G of the three given circles and the pole in C1 of one of the four lines connecting the homothetic centers. Finding the same pole in C2 and C3 gives L2 and L3, respectively; thus, all six points can be located, from which one pair of solution circles can be found. Repeating this procedure for the remaining three homothetic-center lines yields six more solutions, giving eight solutions in all. However, if a line Lk does not intersect its circle Ck for some k, there is no pair of solutions for that homothetic-center line. === Intersection theory === The techniques of modern algebraic geometry, and in particular intersection theory, can be used to solve Apollonius's problem. In this approach, the problem is reinterpreted as a statement about circles in the complex projective plane. Solutions involving complex numbers are allowed and degenerate situations are counted with multiplicity. When this is done, there are always eight solutions to the problem. Every quadratic equation in X, Y, and Z determines a unique conic, its vanishing locus. Conversely, every conic in the complex projective plane has an equation, and that equation is unique up to an overall scaling factor (because rescaling an equation does not change its vanishing locus). 
Therefore, the set of all conics may be parametrized by five-dimensional projective space P5, where the correspondence is { [ X : Y : Z ] ∈ P 2 : A X 2 + B X Y + C Y 2 + D X Z + E Y Z + F Z 2 = 0 } ↔ [ A : B : C : D : E : F ] ∈ P 5 . {\displaystyle \{[X:Y:Z]\in \mathbf {P} ^{2}\colon AX^{2}+BXY+CY^{2}+DXZ+EYZ+FZ^{2}=0\}\leftrightarrow [A:B:C:D:E:F]\in \mathbf {P} ^{5}.} A circle in the complex projective plane is defined to be a conic that passes through the two points O+ = [1 : i : 0] and O− = [1 : −i : 0], where i denotes a square root of −1. The points O+ and O− are called the circular points. The projective variety of all circles is the subvariety of P5 consisting of those points which correspond to conics passing through the circular points. Substituting the circular points into the equation for a generic conic yields the two equations A + B i − C = 0 , {\displaystyle A+Bi-C=0,} A − B i − C = 0. {\displaystyle A-Bi-C=0.} Taking the sum and difference of these equations shows that it is equivalent to impose the conditions A = C {\displaystyle A=C} and B = 0 {\displaystyle B=0} . Therefore, the variety of all circles is a three-dimensional linear subspace of P5. After rescaling and completing the square, these equations also demonstrate that every conic passing through the circular points has an equation of the form ( X − a Z ) 2 + ( Y − b Z ) 2 = r 2 Z 2 , {\displaystyle (X-aZ)^{2}+(Y-bZ)^{2}=r^{2}Z^{2},} which is the homogenization of the usual equation of a circle in the affine plane. Therefore, studying circles in the above sense is nearly equivalent to studying circles in the conventional sense. The only difference is that the above sense permits degenerate circles which are the union of two lines. The non-degenerate circles are called smooth circles, while the degenerate ones are called singular circles. There are two types of singular circles. 
One is the union of the line at infinity Z = 0 with another line in the projective plane (possibly the line at infinity again), and the other is the union of two lines in the projective plane, one through each of the two circular points. These are the limits of smooth circles as the radius r tends to +∞ and 0, respectively. In the latter case, no point on either of the two lines has real coordinates except for the origin [0 : 0 : 1]. Let D be a fixed smooth circle. If C is any other circle, then, by the definition of a circle, C and D intersect at the circular points O+ and O−. Because C and D are conics, Bézout's theorem implies C and D intersect in four points total, when those points are counted with the proper intersection multiplicity. That is, there are four points of intersection O+, O−, P, and Q, but some of these points might collide. Apollonius' problem is concerned with the situation where P = Q, meaning that the intersection multiplicity at that point is 2; if P is also equal to a circular point, this should be interpreted as the intersection multiplicity being 3. Let ZD be the variety of circles tangent to D. This variety is a quadric cone in the P3 of all circles. To see this, consider the incidence correspondence Φ = { ( r , C ) ∈ D × P 3 : C is tangent to D at r } . {\displaystyle \Phi =\{(r,C)\in D\times \mathbf {P} ^{3}\colon C{\text{ is tangent to }}D{\text{ at }}r\}.} For a curve that is the vanishing locus of a single equation f = 0, the condition that the curve meets D at r with multiplicity m means that the Taylor series expansion of f|D vanishes to order m at r; it is therefore m linear conditions on the coefficients of f. This shows that, for each r, the fiber of Φ over r is a P1 cut out by two linear equations in the space of circles. Consequently, Φ is irreducible of dimension 2. Since it is possible to exhibit a circle that is tangent to D at only a single point, a generic element of ZD must be tangent at only a single point. 
Therefore, the projection Φ → P3 sending (r, C) to C is a birational morphism onto its image. It follows that the image of Φ, which is ZD, is also irreducible and two-dimensional. To determine the shape of ZD, fix two distinct circles C0 and C∞, not necessarily tangent to D. These two circles determine a pencil, meaning a line L in the P3 of circles. If the equations of C0 and C∞ are f and g, respectively, then the points on L correspond to the circles whose equations are Sf + Tg, where [S : T] is a point of P1. The points where L meets ZD are precisely the circles in the pencil that are tangent to D. There are two possibilities for the number of points of intersection. One is that either f or g, say f, is the equation for D. In this case, L is a line through D. If C∞ is tangent to D, then so is every circle in the pencil, and therefore L is contained in ZD. The other possibility is that neither f nor g is the equation for D. In this case, the function (f / g)|D is a quotient of quadratics, neither of which vanishes identically. Therefore, it vanishes at two points and has poles at two points. These are the points in C0 ∩ D and C∞ ∩ D, respectively, counted with multiplicity and with the circular points deducted. The rational function determines a morphism D → P1 of degree two. The fiber over [S : T] ∈ P1 is the set of points P for which f(P)T = g(P)S. These are precisely the points at which the circle whose equation is Tf − Sg meets D. The branch points of this morphism are the circles tangent to D. By the Riemann–Hurwitz formula, there are precisely two branch points, and therefore L meets ZD in two points. Together, these two possibilities for the intersection of L and ZD demonstrate that ZD is a quadric cone. All such cones in P3 are the same up to a change of coordinates, so this completely determines the shape of ZD. To conclude the argument, let D1, D2, and D3 be three circles. 
If the intersection ZD1 ∩ ZD2 ∩ ZD3 is finite, then it has degree 2³ = 8, and therefore there are eight solutions to the problem of Apollonius, counted with multiplicity. To prove that the intersection is generically finite, consider the incidence correspondence Ψ = { ( D 1 , D 2 , D 3 , C ) ∈ ( P 3 ) 4 : C is tangent to all D i } . {\displaystyle \Psi =\{(D_{1},D_{2},D_{3},C)\in (\mathbf {P} ^{3})^{4}\colon C{\text{ is tangent to all }}D_{i}\}.} There is a morphism which projects Ψ onto its final factor of P3. The fiber over C is ZC × ZC × ZC. This has dimension 6, so Ψ has dimension 9. Because (P3)3 also has dimension 9, the generic fiber of the projection from Ψ to the first three factors cannot have positive dimension. This proves that generically, there are eight solutions counted with multiplicity. Since it is possible to exhibit a configuration where the eight solutions are distinct, the generic configuration must have all eight solutions distinct. == Radii == In the generic problem with eight solution circles, the reciprocals of the radii of four of the solution circles sum to the same value as do the reciprocals of the radii of the other four solution circles. == Special cases == === Ten combinations of points, circles, and lines === Apollonius' problem is to construct one or more circles tangent to three given objects in a plane, which may be circles, points, or lines. This gives rise to ten types of Apollonius' problem, one corresponding to each combination of circles, lines and points, which may be labeled with three letters, either C, L, or P, to denote whether the given elements are a circle, line or point, respectively (Table 1). As an example, the type of Apollonius problem with a given circle, line, and point is denoted as CLP. Some of these special cases are much easier to solve than the general case of three given circles. 
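The eight-solution count can also be checked numerically. The sketch below (invented names and coordinates, not from the original text) uses the elementary algebraic route rather than intersection theory: tangency to circle i is written as (x − xi)² + (y − yi)² = (r + si·ri)² with a sign si = ±1 per circle, subtracting pairs of these equations leaves equations linear in x, y and r, and each of the eight sign patterns then reduces to a single quadratic in r.

```python
import math
from itertools import product

def apollonius(c1, c2, c3, eps=1e-9):
    """All circles (x, y, r) tangent to the three given circles, obtained
    by trying the eight internal/external sign choices."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = c1, c2, c3
    found = []
    for s1, s2, s3 in product((1, -1), repeat=3):
        # eq_i - eq_1 gives  A_i x + B_i y + C_i r = D_i  for i = 2, 3
        A2, B2 = 2*(x1 - x2), 2*(y1 - y2)
        A3, B3 = 2*(x1 - x3), 2*(y1 - y3)
        C2 = 2*(s1*r1 - s2*r2)
        C3 = 2*(s1*r1 - s3*r3)
        D2 = (x1*x1 + y1*y1 - r1*r1) - (x2*x2 + y2*y2 - r2*r2)
        D3 = (x1*x1 + y1*y1 - r1*r1) - (x3*x3 + y3*y3 - r3*r3)
        det = A2*B3 - A3*B2
        if abs(det) < eps:          # collinear centers: not handled here
            continue
        # Solve for x, y as affine functions of r: x = px + qx*r, y = py + qy*r
        px = (D2*B3 - D3*B2) / det
        qx = (C3*B2 - C2*B3) / det
        py = (A2*D3 - A3*D2) / det
        qy = (A3*C2 - A2*C3) / det
        # Substitute into circle 1's tangency equation: a quadratic in r
        u0, v0 = px - x1, py - y1
        a = qx*qx + qy*qy - 1
        b = 2*(u0*qx + v0*qy - s1*r1)
        c = u0*u0 + v0*v0 - r1*r1
        disc = b*b - 4*a*c
        if abs(a) < eps or disc < 0:
            continue
        for r in ((-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a)):
            if r > eps:
                sol = (px + qx*r, py + qy*r, r)
                if all(max(abs(s[i] - sol[i]) for i in range(3)) > 1e-6
                       for s in found):
                    found.append(sol)
    return found

# Three pairwise disjoint circles, none separating the others:
sols = apollonius((0, 0, 1), (4, 0, 1), (2, 4, 2))
# For this configuration the enclosing solution has radius exactly 3.9.
```

For a generic configuration like this one the routine returns all eight circles; degenerate arrangements (tangent, separating or collinear givens) reduce the count, matching the enumeration discussed below.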
The two simplest cases are the problems of drawing a circle through three given points (PPP) or tangent to three lines (LLL), which were solved first by Euclid in his Elements. For example, the PPP problem can be solved as follows. The center of the solution circle is equally distant from all three points, and therefore must lie on the perpendicular bisector line of any two. Hence, the center is the point of intersection of any two perpendicular bisectors. Similarly, in the LLL case, the center must lie on an angle bisector line at each of the three intersection points between the three given lines; hence, the center lies at the intersection point of two such angle bisectors. Since there are two such bisectors at every intersection point of the three given lines, there are four solutions to the general LLL problem (the incircle and excircles of the triangle formed by the three lines). Points and lines may be viewed as special cases of circles; a point can be considered as a circle of infinitely small radius, and a line may be thought of as an infinitely large circle whose center is also at infinity. From this perspective, the general Apollonius problem is that of constructing circles tangent to three given circles. The nine other cases involving points and lines may be viewed as limiting cases of the general problem. These limiting cases often have fewer solutions than the general problem; for example, the replacement of a given circle by a given point halves the number of solutions, since a point can be construed as an infinitesimal circle that is either internally or externally tangent. 
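The PPP recipe just described is mechanical enough to code directly. This short sketch (function name and sample points are invented here) intersects two perpendicular bisectors:

```python
import math

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points,
    found as the intersection of two perpendicular bisectors."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    # Perpendicular bisector of p, q:  2(qx-px) x + 2(qy-py) y = |q|^2 - |p|^2
    a1, b1, d1 = 2*(bx - ax), 2*(by - ay), bx*bx + by*by - ax*ax - ay*ay
    a2, b2, d2 = 2*(cx - ax), 2*(cy - ay), cx*cx + cy*cy - ax*ax - ay*ay
    det = a1*b2 - a2*b1          # zero iff the three points are collinear
    x = (d1*b2 - d2*b1) / det
    y = (a1*d2 - a2*d1) / det
    return x, y, math.hypot(x - ax, y - ay)

# The circle through (0, 0), (4, 0) and (0, 3) has center (2, 1.5), radius 2.5.
```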
For illustration, Apollonius' problem has no solution if one circle separates the other two (Figure 11); to touch both the solid given circles, the solution circle would have to cross the dashed given circle; but it cannot do that if it is to touch the dashed circle tangentially. Conversely, if three given circles are all tangent at the same point, then any circle tangent at the same point is a solution; such Apollonius problems have an infinite number of solutions. If any of the given circles are identical, there is likewise an infinity of solutions. If only two given circles are identical, there are only two distinct given circles; the centers of the solution circles form a hyperbola, as used in one solution to Apollonius' problem. An exhaustive enumeration of the number of solutions for all possible configurations of three given circles, points or lines was first undertaken by Muirhead in 1896, although earlier work had been done by Stoll and Study. However, Muirhead's work was incomplete; it was extended in 1974 and a definitive enumeration, with 33 distinct cases, was published in 1983. Although solutions to Apollonius' problem generally occur in pairs related by inversion, an odd number of solutions is possible in some cases, e.g., the single solution for PPP, or when one or three of the given circles are themselves solutions. (An example of the latter is given in the section on Descartes' theorem.) However, there are no Apollonius problems with seven solutions. Alternative solutions based on the geometry of circles and spheres have been developed and used in higher dimensions. === Mutually tangent given circles: Soddy's circles and Descartes' theorem === If the three given circles are mutually tangent, Apollonius' problem has five solutions. Three solutions are the given circles themselves, since each is tangent to itself and to the other two given circles. 
The remaining two solutions (shown in red in Figure 12) correspond to the inscribed and circumscribed circles, and are called Soddy's circles. This special case of Apollonius' problem is also known as the four coins problem. The three given circles of this Apollonius problem form a Steiner chain tangent to the two Soddy's circles. Either Soddy circle, when taken together with the three given circles, produces a set of four circles that are mutually tangent at six points. The radii of these four circles are related by an equation known as Descartes' theorem. In a 1643 letter to Princess Elizabeth of Bohemia, René Descartes showed that ( k 1 + k 2 + k 3 + k s ) 2 = 2 ( k 1 2 + k 2 2 + k 3 2 + k s 2 ) {\displaystyle (k_{1}+k_{2}+k_{3}+k_{s})^{2}=2(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{s}^{2})} where ks = 1/rs and rs are the curvature and radius of the solution circle, respectively, and similarly for the curvatures k1, k2 and k3 and radii r1, r2 and r3 of the three given circles. For every set of four mutually tangent circles, there is a second set of four mutually tangent circles that are tangent at the same six points. Descartes' theorem was rediscovered independently in 1826 by Jakob Steiner, in 1842 by Philip Beecroft, and again in 1936 by Frederick Soddy. Soddy published his findings in the scientific journal Nature as a poem, The Kiss Precise, of which the first two stanzas are reproduced below. The first stanza describes Soddy's circles, whereas the second stanza gives Descartes' theorem. In Soddy's poem, two circles are said to "kiss" if they are tangent, whereas the term "bend" refers to the curvature k of the circle. Sundry extensions of Descartes' theorem have been derived by Daniel Pedoe. 
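Descartes' relation above can be solved for ks: treated as a quadratic in ks it gives ks = k1 + k2 + k3 ± 2·√(k1k2 + k2k3 + k3k1), the curvatures of the two Soddy circles. A minimal sketch (function names invented here):

```python
import math

def soddy_curvatures(k1, k2, k3):
    """Curvatures of the two Soddy circles tangent to three mutually
    tangent circles, from the quadratic form of Descartes' theorem."""
    s = k1 + k2 + k3
    root = 2 * math.sqrt(k1*k2 + k2*k3 + k3*k1)
    return s + root, s - root    # inner circle first, then outer

def descartes_holds(k1, k2, k3, ks, tol=1e-9):
    """Check (k1+k2+k3+ks)^2 = 2(k1^2+k2^2+k3^2+ks^2)."""
    return abs((k1 + k2 + k3 + ks)**2
               - 2*(k1**2 + k2**2 + k3**2 + ks**2)) < tol

# Three mutually tangent unit circles (k = 1 each): the inner Soddy circle
# has curvature 3 + 2*sqrt(3); the outer has 3 - 2*sqrt(3), which is
# negative because that circle encloses the other three.
inner, outer = soddy_curvatures(1, 1, 1)
```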
== Generalizations == Apollonius' problem can be extended to construct all the circles that intersect three given circles at a precise angle θ, or at three specified crossing angles θ1, θ2 and θ3; the ordinary Apollonius' problem corresponds to a special case in which the crossing angle is zero for all three given circles. Another generalization is the dual of the first extension, namely, to construct circles with three specified tangential distances from the three given circles. Apollonius' problem can be extended from the plane to the sphere and other quadratic surfaces. For the sphere, the problem is to construct all the circles (the boundaries of spherical caps) that are tangent to three given circles on the sphere. This spherical problem can be rendered into a corresponding planar problem using stereographic projection. Once the solutions to the planar problem have been constructed, the corresponding solutions to the spherical problem can be determined by inverting the stereographic projection. Even more generally, one can consider the problem of four tangent curves that result from the intersections of an arbitrary quadratic surface and four planes, a problem first considered by Charles Dupin. By solving Apollonius' problem repeatedly to find the inscribed circle, the interstices between mutually tangential circles can be filled arbitrarily finely, forming an Apollonian gasket, also known as a Leibniz packing or an Apollonian packing. This gasket is a fractal, being self-similar and having a dimension d that is not known exactly but is roughly 1.3, which is higher than that of a regular (or rectifiable) curve (d = 1) but less than that of a plane (d = 2). The Apollonian gasket was first described by Gottfried Leibniz in the 17th century, and is a curved precursor of the 20th-century Sierpiński triangle. The Apollonian gasket also has deep connections to other fields of mathematics; for example, it is the limit set of Kleinian groups. 
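Because the Descartes quadratic in ks has root sum 2(k1 + k2 + k3), successive circles of an Apollonian gasket can be generated purely arithmetically: replacing one curvature k4 of a tangent quadruple by k4′ = 2(k1 + k2 + k3) − k4 yields the other circle tangent to the same triple. A small sketch of this step (names invented here; the starting quadruple (−1, 2, 2, 3) is the classic integral gasket, with curvature −1 for the enclosing circle):

```python
def vieta_flip(quad, i):
    """Replace curvature i of a Descartes quadruple by the other root of
    the Descartes quadratic, giving a neighbouring tangent quadruple."""
    k = list(quad)
    k[i] = 2 * (sum(quad) - quad[i]) - quad[i]
    return tuple(k)

start = (-1, 2, 2, 3)
children = [vieta_flip(start, i) for i in range(4)]
# Flipping the -1 gives 2*(2 + 2 + 3) - (-1) = 15, a known gasket curvature.
```

Each child again satisfies (Σk)² = 2Σk², so the flip can be iterated to enumerate the gasket's curvatures.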
The configuration of a circle tangent to four circles in the plane has special properties, which have been elucidated by Larmor (1891) and Lachlan (1893). Such a configuration is also the basis for Casey's theorem, itself a generalization of Ptolemy's theorem. The extension of Apollonius' problem to three dimensions, namely, the problem of finding a fifth sphere that is tangent to four given spheres, can be solved by analogous methods. For example, the given and solution spheres can be resized so that one given sphere is shrunk to a point while maintaining tangency. Inversion in this point reduces Apollonius' problem to finding a plane that is tangent to three given spheres. There are in general eight such planes, which become the solutions to the original problem by reversing the inversion and the resizing. This problem was first considered by Pierre de Fermat, and many alternative solution methods have been developed over the centuries. Apollonius' problem can even be extended to d dimensions, to construct the hyperspheres tangent to a given set of d + 1 hyperspheres. Following the publication of Frederick Soddy's re-derivation of Descartes' theorem in 1936, several people solved (independently) the mutually tangent case corresponding to Soddy's circles in d dimensions. == Applications == The principal application of Apollonius' problem, as formulated by Isaac Newton, is hyperbolic trilateration, which seeks to determine a position from the differences in distances to at least three points. For example, a ship may seek to determine its position from the differences in arrival times of signals from three synchronized transmitters. Solutions to Apollonius' problem were used in World War I to determine the location of an artillery piece from the time a gunshot was heard at three different positions, and hyperbolic trilateration is the principle used by the Decca Navigator System and LORAN. 
Similarly, the location of an aircraft may be determined from the difference in arrival times of its transponder signal at four receiving stations. This multilateration problem is equivalent to the three-dimensional generalization of Apollonius' problem and applies to global navigation satellite systems (see GPS#Geometric interpretation). It is also used to determine the position of calling animals (such as birds and whales), although Apollonius' problem does not pertain if the speed of sound varies with direction (i.e., the transmission medium is not isotropic). Apollonius' problem has other applications. In Book 1, Proposition 21 in his Principia, Isaac Newton used his solution of Apollonius' problem to construct an orbit in celestial mechanics from the center of attraction and observations of tangent lines to the orbit corresponding to instantaneous velocity. The special case of the problem of Apollonius when all three circles are tangent is used in the Hardy–Littlewood circle method of analytic number theory to construct Hans Rademacher's contour for complex integration, given by the boundaries of an infinite set of Ford circles each of which touches several others. Finally, Apollonius' problem has been applied to some types of packing problems, which arise in disparate fields such as the error-correcting codes used on DVDs and the design of pharmaceuticals that bind in a particular enzyme of a pathogenic bacterium. == See also == Apollonius point Apollonius' theorem Isodynamic point of a triangle == References == == Further reading == Boyd, DW (1973). "The osculatory packing of a three-dimensional sphere". Canadian Journal of Mathematics. 25 (2): 303–322. doi:10.4153/CJM-1973-030-5. S2CID 120042053. Callandreau, Édouard (1949). Célèbres problèmes mathématiques (in French). Paris: Albin Michel. pp. 219–226. OCLC 61042170. Camerer, JG (1795). 
Apollonii de Tactionibus, quae supersunt, ac maxime lemmata Pappi, in hos libros Graece nunc primum edita, e codicibus manuscriptis, cum Vietae librorum Apollonii restitutione, adjectis observationibus, computationibus, ac problematis Apolloniani historia (in Latin). Gothae: Ettinger. Gisch D, Ribando JM (2004). "Apollonius' Problem: A Study of Solutions and Their Connections" (PDF). American Journal of Undergraduate Research. 3: 15–25. doi:10.33697/ajur.2004.010. Archived from the original (PDF) on 2008-04-15. Retrieved 2009-04-16. Pappus of Alexandria (1933). Pappus d'Alexandrie: La collection mathématique (in French). Paris. OCLC 67245614. Trans., introd., and notes by Paul Ver Eecke. Simon, M (1906). Über die Entwicklung der Elementargeometrie im XIX. Jahrhundert (in German). Berlin: Teubner. pp. 97–105. Wells, D (1991). The Penguin Dictionary of Curious and Interesting Geometry. New York: Penguin Books. pp. 3–5. ISBN 0-14-011813-6. == External links == "Ask Dr. Math solution". Mathforum. Retrieved 2008-05-05. Weisstein, Eric W. "Apollonius' problem". MathWorld. "Apollonius' Problem". Cut The Knot. Retrieved 2008-05-05. Kunkel, Paul. "Tangent Circles". Whistler Alley. Retrieved 2008-05-05. Austin, David (March 2006). "When kissing involves trigonometry". Feature Column at the American Mathematical Society website. Retrieved 2008-05-05.
Wikipedia:Product rule#0
In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as ( u ⋅ v ) ′ = u ′ ⋅ v + u ⋅ v ′ {\displaystyle (u\cdot v)'=u'\cdot v+u\cdot v'} or in Leibniz's notation as d d x ( u ⋅ v ) = d u d x ⋅ v + u ⋅ d v d x . {\displaystyle {\frac {d}{dx}}(u\cdot v)={\frac {du}{dx}}\cdot v+u\cdot {\frac {dv}{dx}}.} The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts. == Discovery == Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using "infinitesimals" (a precursor to the modern differential). (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u and v be functions. Then d(uv) is the same thing as the difference between two successive uv's; let one of these be uv, and the other u+du times v+dv; then: d ( u ⋅ v ) = ( u + d u ) ⋅ ( v + d v ) − u ⋅ v = u ⋅ d v + v ⋅ d u + d u ⋅ d v . {\displaystyle {\begin{aligned}d(u\cdot v)&{}=(u+du)\cdot (v+dv)-u\cdot v\\&{}=u\cdot dv+v\cdot du+du\cdot dv.\end{aligned}}} Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that d ( u ⋅ v ) = v ⋅ d u + u ⋅ d v {\displaystyle d(u\cdot v)=v\cdot du+u\cdot dv} and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain d d x ( u ⋅ v ) = v ⋅ d u d x + u ⋅ d v d x {\displaystyle {\frac {d}{dx}}(u\cdot v)=v\cdot {\frac {du}{dx}}+u\cdot {\frac {dv}{dx}}} which can also be written in Lagrange's notation as ( u ⋅ v ) ′ = v ⋅ u ′ + u ⋅ v ′ . {\displaystyle (u\cdot v)'=v\cdot u'+u\cdot v'.} == Examples == Suppose we want to differentiate f ( x ) = x 2 sin ( x ) . 
{\displaystyle f(x)=x^{2}{\text{sin}}(x).} By using the product rule, one gets the derivative f ′ ( x ) = 2 x ⋅ sin ( x ) + x 2 cos ( x ) {\displaystyle f'(x)=2x\cdot {\text{sin}}(x)+x^{2}{\text{cos}}(x)} (since the derivative of x 2 {\displaystyle x^{2}} is 2 x , {\displaystyle 2x,} and the derivative of the sine function is the cosine function). One special case of the product rule is the constant multiple rule, which states: if c is a number, and f ( x ) {\displaystyle f(x)} is a differentiable function, then c ⋅ f ( x ) {\displaystyle c\cdot f(x)} is also differentiable, and its derivative is ( c f ) ′ ( x ) = c ⋅ f ′ ( x ) . {\displaystyle (cf)'(x)=c\cdot f'(x).} This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear. The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable but only says what its derivative is if it is differentiable.) == Proofs == === Limit definition of derivative === Let h(x) = f(x)g(x) and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative, h′(x), is given by f′(x)g(x) + f(x)g′(x). To do this, f ( x ) g ( x + Δ x ) − f ( x ) g ( x + Δ x ) {\displaystyle f(x)g(x+\Delta x)-f(x)g(x+\Delta x)} (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. 
h ′ ( x ) = lim Δ x → 0 h ( x + Δ x ) − h ( x ) Δ x = lim Δ x → 0 f ( x + Δ x ) g ( x + Δ x ) − f ( x ) g ( x ) Δ x = lim Δ x → 0 f ( x + Δ x ) g ( x + Δ x ) − f ( x ) g ( x + Δ x ) + f ( x ) g ( x + Δ x ) − f ( x ) g ( x ) Δ x = lim Δ x → 0 [ f ( x + Δ x ) − f ( x ) ] ⋅ g ( x + Δ x ) + f ( x ) ⋅ [ g ( x + Δ x ) − g ( x ) ] Δ x = lim Δ x → 0 f ( x + Δ x ) − f ( x ) Δ x ⋅ lim Δ x → 0 g ( x + Δ x ) + lim Δ x → 0 f ( x ) ⋅ lim Δ x → 0 g ( x + Δ x ) − g ( x ) Δ x = f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) . {\displaystyle {\begin{aligned}h'(x)&=\lim _{\Delta x\to 0}{\frac {h(x+\Delta x)-h(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)g(x+\Delta x)-f(x)g(x+\Delta x)+f(x)g(x+\Delta x)-f(x)g(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {{\big [}f(x+\Delta x)-f(x){\big ]}\cdot g(x+\Delta x)+f(x)\cdot {\big [}g(x+\Delta x)-g(x){\big ]}}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}\cdot \lim _{\Delta x\to 0}g(x+\Delta x)+\lim _{\Delta x\to 0}f(x)\cdot \lim _{\Delta x\to 0}{\frac {g(x+\Delta x)-g(x)}{\Delta x}}\\[5pt]&=f'(x)g(x)+f(x)g'(x).\end{aligned}}} The fact that lim Δ x → 0 g ( x + Δ x ) = g ( x ) {\displaystyle \lim _{\Delta x\to 0}g(x+\Delta x)=g(x)} follows from the fact that differentiable functions are continuous. 
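The identity just proved is easy to sanity-check numerically with a symmetric difference quotient; the functions, evaluation point and step size below are arbitrary choices for illustration, not part of the proof:

```python
import math

def num_deriv(h, x, dx=1e-6):
    """Symmetric difference quotient, accurate to O(dx^2)."""
    return (h(x + dx) - h(x - dx)) / (2 * dx)

f, fp = math.sin, math.cos                 # a function and its derivative
g, gp = (lambda t: t**2), (lambda t: 2*t)  # another pair

x = 1.3
lhs = num_deriv(lambda t: f(t) * g(t), x)  # numerical derivative of f*g
rhs = fp(x) * g(x) + f(x) * gp(x)          # product rule: f'g + fg'
```

The two values agree to within the truncation error of the difference quotient.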
=== Linear approximations === By definition, if f , g : R → R {\displaystyle f,g:\mathbb {R} \to \mathbb {R} } are differentiable at x {\displaystyle x} , then we can write linear approximations: f ( x + h ) = f ( x ) + f ′ ( x ) h + ε 1 ( h ) {\displaystyle f(x+h)=f(x)+f'(x)h+\varepsilon _{1}(h)} and g ( x + h ) = g ( x ) + g ′ ( x ) h + ε 2 ( h ) , {\displaystyle g(x+h)=g(x)+g'(x)h+\varepsilon _{2}(h),} where the error terms are small with respect to h: that is, lim h → 0 ε 1 ( h ) h = lim h → 0 ε 2 ( h ) h = 0 , {\textstyle \lim _{h\to 0}{\frac {\varepsilon _{1}(h)}{h}}=\lim _{h\to 0}{\frac {\varepsilon _{2}(h)}{h}}=0,} also written ε 1 , ε 2 ∼ o ( h ) {\displaystyle \varepsilon _{1},\varepsilon _{2}\sim o(h)} . Then: f ( x + h ) g ( x + h ) − f ( x ) g ( x ) = ( f ( x ) + f ′ ( x ) h + ε 1 ( h ) ) ( g ( x ) + g ′ ( x ) h + ε 2 ( h ) ) − f ( x ) g ( x ) = f ( x ) g ( x ) + f ′ ( x ) g ( x ) h + f ( x ) g ′ ( x ) h − f ( x ) g ( x ) + error terms = f ′ ( x ) g ( x ) h + f ( x ) g ′ ( x ) h + o ( h ) . {\displaystyle {\begin{aligned}f(x+h)g(x+h)-f(x)g(x)&=(f(x)+f'(x)h+\varepsilon _{1}(h))(g(x)+g'(x)h+\varepsilon _{2}(h))-f(x)g(x)\\[.5em]&=f(x)g(x)+f'(x)g(x)h+f(x)g'(x)h-f(x)g(x)+{\text{error terms}}\\[.5em]&=f'(x)g(x)h+f(x)g'(x)h+o(h).\end{aligned}}} The "error terms" consist of items such as f ( x ) ε 2 ( h ) , f ′ ( x ) g ′ ( x ) h 2 {\displaystyle f(x)\varepsilon _{2}(h),f'(x)g'(x)h^{2}} and h f ′ ( x ) ε 1 ( h ) {\displaystyle hf'(x)\varepsilon _{1}(h)} which are easily seen to have magnitude o ( h ) . {\displaystyle o(h).} Dividing by h {\displaystyle h} and taking the limit h → 0 {\displaystyle h\to 0} gives the result. === Quarter squares === This proof uses the chain rule and the quarter square function q ( x ) = 1 4 x 2 {\displaystyle q(x)={\tfrac {1}{4}}x^{2}} with derivative q ′ ( x ) = 1 2 x {\displaystyle q'(x)={\tfrac {1}{2}}x} . 
We have: u v = q ( u + v ) − q ( u − v ) , {\displaystyle uv=q(u+v)-q(u-v),} and, writing f = uv, differentiating both sides gives: f ′ = q ′ ( u + v ) ( u ′ + v ′ ) − q ′ ( u − v ) ( u ′ − v ′ ) = ( 1 2 ( u + v ) ( u ′ + v ′ ) ) − ( 1 2 ( u − v ) ( u ′ − v ′ ) ) = 1 2 ( u u ′ + v u ′ + u v ′ + v v ′ ) − 1 2 ( u u ′ − v u ′ − u v ′ + v v ′ ) = v u ′ + u v ′ . {\displaystyle {\begin{aligned}f'&=q'(u+v)(u'+v')-q'(u-v)(u'-v')\\[4pt]&=\left({\tfrac {1}{2}}(u+v)(u'+v')\right)-\left({\tfrac {1}{2}}(u-v)(u'-v')\right)\\[4pt]&={\tfrac {1}{2}}(uu'+vu'+uv'+vv')-{\tfrac {1}{2}}(uu'-vu'-uv'+vv')\\[4pt]&=vu'+uv'.\end{aligned}}} === Multivariable chain rule === The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function m ( u , v ) = u v {\displaystyle m(u,v)=uv} : d ( u v ) d x = ∂ ( u v ) ∂ u d u d x + ∂ ( u v ) ∂ v d v d x = v d u d x + u d v d x . {\displaystyle {d(uv) \over dx}={\frac {\partial (uv)}{\partial u}}{\frac {du}{dx}}+{\frac {\partial (uv)}{\partial v}}{\frac {dv}{dx}}=v{\frac {du}{dx}}+u{\frac {dv}{dx}}.} === Non-standard analysis === Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives d ( u v ) d x = st ( ( u + d u ) ( v + d v ) − u v d x ) = st ( u v + u ⋅ d v + v ⋅ d u + d u ⋅ d v − u v d x ) = st ( u ⋅ d v + v ⋅ d u + d u ⋅ d v d x ) = st ( u d v d x + ( v + d v ) d u d x ) = u d v d x + v d u d x . 
{\displaystyle {\begin{aligned}{\frac {d(uv)}{dx}}&=\operatorname {st} \left({\frac {(u+du)(v+dv)-uv}{dx}}\right)\\&=\operatorname {st} \left({\frac {uv+u\cdot dv+v\cdot du+du\cdot dv-uv}{dx}}\right)\\&=\operatorname {st} \left({\frac {u\cdot dv+v\cdot du+du\cdot dv}{dx}}\right)\\&=\operatorname {st} \left(u{\frac {dv}{dx}}+(v+dv){\frac {du}{dx}}\right)\\&=u{\frac {dv}{dx}}+v{\frac {du}{dx}}.\end{aligned}}} This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above). === Smooth infinitesimal analysis === In the context of Lawvere's approach to infinitesimals, let d x {\displaystyle dx} be a nilsquare infinitesimal. Then d u = u ′ d x {\displaystyle du=u'\ dx} and d v = v ′ d x {\displaystyle dv=v'\ dx} , so that d ( u v ) = ( u + d u ) ( v + d v ) − u v = u v + u ⋅ d v + v ⋅ d u + d u ⋅ d v − u v = u ⋅ d v + v ⋅ d u + d u ⋅ d v = u ⋅ d v + v ⋅ d u {\displaystyle {\begin{aligned}d(uv)&=(u+du)(v+dv)-uv\\&=uv+u\cdot dv+v\cdot du+du\cdot dv-uv\\&=u\cdot dv+v\cdot du+du\cdot dv\\&=u\cdot dv+v\cdot du\end{aligned}}} since d u d v = u ′ v ′ ( d x ) 2 = 0. {\displaystyle du\,dv=u'v'(dx)^{2}=0.} Dividing by d x {\displaystyle dx} then gives d ( u v ) d x = u d v d x + v d u d x {\displaystyle {\frac {d(uv)}{dx}}=u{\frac {dv}{dx}}+v{\frac {du}{dx}}} or ( u v ) ′ = u ⋅ v ′ + v ⋅ u ′ {\displaystyle (uv)'=u\cdot v'+v\cdot u'} . === Logarithmic differentiation === Let h ( x ) = f ( x ) g ( x ) {\displaystyle h(x)=f(x)g(x)} . 
Taking the absolute value of each function and the natural log of both sides of the equation, ln | h ( x ) | = ln | f ( x ) g ( x ) | {\displaystyle \ln |h(x)|=\ln |f(x)g(x)|} Applying properties of the absolute value and logarithms, ln | h ( x ) | = ln | f ( x ) | + ln | g ( x ) | {\displaystyle \ln |h(x)|=\ln |f(x)|+\ln |g(x)|} Taking the logarithmic derivative of both sides and then solving for h ′ ( x ) {\displaystyle h'(x)} : h ′ ( x ) h ( x ) = f ′ ( x ) f ( x ) + g ′ ( x ) g ( x ) {\displaystyle {\frac {h'(x)}{h(x)}}={\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}} Solving for h ′ ( x ) {\displaystyle h'(x)} and substituting back f ( x ) g ( x ) {\displaystyle f(x)g(x)} for h ( x ) {\displaystyle h(x)} gives: h ′ ( x ) = h ( x ) ( f ′ ( x ) f ( x ) + g ′ ( x ) g ( x ) ) = f ( x ) g ( x ) ( f ′ ( x ) f ( x ) + g ′ ( x ) g ( x ) ) = f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) . {\displaystyle {\begin{aligned}h'(x)&=h(x)\left({\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}\right)\\&=f(x)g(x)\left({\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}\right)\\&=f'(x)g(x)+f(x)g'(x).\end{aligned}}} Note: Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because d d x ( ln | u | ) = u ′ u {\displaystyle {\tfrac {d}{dx}}(\ln |u|)={\tfrac {u'}{u}}} , which justifies taking the absolute value of the functions for logarithmic differentiation. == Generalizations == === Product of more than two factors === The product rule can be generalized to products of more than two factors. For example, for three factors we have d ( u v w ) d x = d u d x v w + u d v d x w + u v d w d x . 
{\displaystyle {\frac {d(uvw)}{dx}}={\frac {du}{dx}}vw+u{\frac {dv}{dx}}w+uv{\frac {dw}{dx}}.} For a collection of functions f 1 , … , f k {\displaystyle f_{1},\dots ,f_{k}} , we have d d x [ ∏ i = 1 k f i ( x ) ] = ∑ i = 1 k ( ( d d x f i ( x ) ) ∏ j = 1 , j ≠ i k f j ( x ) ) = ( ∏ i = 1 k f i ( x ) ) ( ∑ i = 1 k f i ′ ( x ) f i ( x ) ) . {\displaystyle {\frac {d}{dx}}\left[\prod _{i=1}^{k}f_{i}(x)\right]=\sum _{i=1}^{k}\left(\left({\frac {d}{dx}}f_{i}(x)\right)\prod _{j=1,j\neq i}^{k}f_{j}(x)\right)=\left(\prod _{i=1}^{k}f_{i}(x)\right)\left(\sum _{i=1}^{k}{\frac {f'_{i}(x)}{f_{i}(x)}}\right).} The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The logarithmic derivative of a function f, denoted here Logder(f), is the derivative of the logarithm of the function. It follows that Logder ( f ) = f ′ f . {\displaystyle \operatorname {Logder} (f)={\frac {f'}{f}}.} Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately Logder ( f 1 ⋯ f k ) = ∑ i = 1 k Logder ( f i ) . {\displaystyle \operatorname {Logder} (f_{1}\cdots f_{k})=\sum _{i=1}^{k}\operatorname {Logder} (f_{i}).} The last above expression of the derivative of a product is obtained by multiplying both members of this equation by the product of the f i . {\displaystyle f_{i}.} === Higher derivatives === It can also be generalized to the general Leibniz rule for the nth derivative of a product of two factors, by symbolically expanding according to the binomial theorem: d n ( u v ) = ∑ k = 0 n ( n k ) ⋅ d ( n − k ) ( u ) ⋅ d ( k ) ( v ) . {\displaystyle d^{n}(uv)=\sum _{k=0}^{n}{n \choose k}\cdot d^{(n-k)}(u)\cdot d^{(k)}(v).} Applied at a specific point x, the above formula gives: ( u v ) ( n ) ( x ) = ∑ k = 0 n ( n k ) ⋅ u ( n − k ) ( x ) ⋅ v ( k ) ( x ) . 
{\displaystyle (uv)^{(n)}(x)=\sum _{k=0}^{n}{n \choose k}\cdot u^{(n-k)}(x)\cdot v^{(k)}(x).} Furthermore, for the nth derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients: ( ∏ i = 1 k f i ) ( n ) = ∑ j 1 + j 2 + ⋯ + j k = n ( n j 1 , j 2 , … , j k ) ∏ i = 1 k f i ( j i ) . {\displaystyle \left(\prod _{i=1}^{k}f_{i}\right)^{\!\!(n)}=\sum _{j_{1}+j_{2}+\cdots +j_{k}=n}{n \choose j_{1},j_{2},\ldots ,j_{k}}\prod _{i=1}^{k}f_{i}^{(j_{i})}.} === Higher partial derivatives === For partial derivatives, we have ∂ n ∂ x 1 ⋯ ∂ x n ( u v ) = ∑ S ∂ | S | u ∏ i ∈ S ∂ x i ⋅ ∂ n − | S | v ∏ i ∉ S ∂ x i {\displaystyle {\partial ^{n} \over \partial x_{1}\,\cdots \,\partial x_{n}}(uv)=\sum _{S}{\partial ^{|S|}u \over \prod _{i\in S}\partial x_{i}}\cdot {\partial ^{n-|S|}v \over \prod _{i\not \in S}\partial x_{i}}} where the index S runs through all 2n subsets of {1, ..., n}, and |S| is the cardinality of S. For example, when n = 3, ∂ 3 ∂ x 1 ∂ x 2 ∂ x 3 ( u v ) = u ⋅ ∂ 3 v ∂ x 1 ∂ x 2 ∂ x 3 + ∂ u ∂ x 1 ⋅ ∂ 2 v ∂ x 2 ∂ x 3 + ∂ u ∂ x 2 ⋅ ∂ 2 v ∂ x 1 ∂ x 3 + ∂ u ∂ x 3 ⋅ ∂ 2 v ∂ x 1 ∂ x 2 + ∂ 2 u ∂ x 1 ∂ x 2 ⋅ ∂ v ∂ x 3 + ∂ 2 u ∂ x 1 ∂ x 3 ⋅ ∂ v ∂ x 2 + ∂ 2 u ∂ x 2 ∂ x 3 ⋅ ∂ v ∂ x 1 + ∂ 3 u ∂ x 1 ∂ x 2 ∂ x 3 ⋅ v . 
{\displaystyle {\begin{aligned}&{\partial ^{3} \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}(uv)\\[1ex]={}&u\cdot {\partial ^{3}v \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}+{\partial u \over \partial x_{1}}\cdot {\partial ^{2}v \over \partial x_{2}\,\partial x_{3}}+{\partial u \over \partial x_{2}}\cdot {\partial ^{2}v \over \partial x_{1}\,\partial x_{3}}+{\partial u \over \partial x_{3}}\cdot {\partial ^{2}v \over \partial x_{1}\,\partial x_{2}}\\[1ex]&+{\partial ^{2}u \over \partial x_{1}\,\partial x_{2}}\cdot {\partial v \over \partial x_{3}}+{\partial ^{2}u \over \partial x_{1}\,\partial x_{3}}\cdot {\partial v \over \partial x_{2}}+{\partial ^{2}u \over \partial x_{2}\,\partial x_{3}}\cdot {\partial v \over \partial x_{1}}+{\partial ^{3}u \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}\cdot v.\\[-3ex]&\end{aligned}}} === Banach space === Suppose X, Y, and Z are Banach spaces (which includes Euclidean space) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x,y) in X × Y is the linear map D(x,y)B : X × Y → Z given by ( D ( x , y ) B ) ( u , v ) = B ( u , y ) + B ( x , v ) ∀ ( u , v ) ∈ X × Y . {\displaystyle (D_{\left(x,y\right)}\,B)\left(u,v\right)=B\left(u,y\right)+B\left(x,v\right)\qquad \forall (u,v)\in X\times Y.} This result can be extended to more general topological vector spaces. 
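The derivative formula for a bilinear map can be checked numerically with finite differences. The sketch below (an illustration, not part of the original article) takes B to be the dot product on R^3; the vectors are arbitrary.

```python
import numpy as np

# Finite-difference check of (D_(x,y) B)(u,v) = B(u,y) + B(x,v),
# using the dot product on R^3 as the continuous bilinear map B.
def B(x, y):
    return np.dot(x, y)

rng = np.random.default_rng(0)
x, y, u, v = (rng.standard_normal(3) for _ in range(4))

eps = 1e-6
# Directional derivative of B at (x, y) in the direction (u, v):
numeric = (B(x + eps * u, y + eps * v) - B(x, y)) / eps
exact = B(u, y) + B(x, v)
print(abs(numeric - exact) < 1e-4)  # True: they agree up to O(eps)
```

The leftover term in the expansion is eps * B(u, v), which is why the agreement improves linearly as eps shrinks.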
=== In vector calculus === The product rule extends to various product operations of vector functions on R n {\displaystyle \mathbb {R} ^{n}} : For scalar multiplication: ( f ⋅ g ) ′ = f ′ ⋅ g + f ⋅ g ′ {\displaystyle (f\cdot \mathbf {g} )'=f'\cdot \mathbf {g} +f\cdot \mathbf {g} '} For dot product: ( f ⋅ g ) ′ = f ′ ⋅ g + f ⋅ g ′ {\displaystyle (\mathbf {f} \cdot \mathbf {g} )'=\mathbf {f} '\cdot \mathbf {g} +\mathbf {f} \cdot \mathbf {g} '} For cross product of vector functions on R 3 {\displaystyle \mathbb {R} ^{3}} : ( f × g ) ′ = f ′ × g + f × g ′ {\displaystyle (\mathbf {f} \times \mathbf {g} )'=\mathbf {f} '\times \mathbf {g} +\mathbf {f} \times \mathbf {g} '} There are also analogues for other derivative operators: if f and g are scalar fields then there is a product rule with the gradient: ∇ ( f ⋅ g ) = ∇ f ⋅ g + f ⋅ ∇ g {\displaystyle \nabla (f\cdot g)=\nabla f\cdot g+f\cdot \nabla g} Such a rule will hold for any continuous bilinear product operation. Let B : X × Y → Z be a continuous bilinear map between vector spaces, and let f and g be differentiable functions into X and Y, respectively. The only properties of multiplication used in the proof via the limit definition of the derivative are that multiplication is continuous and bilinear. So for any continuous bilinear operation, H ( f , g ) ′ = H ( f ′ , g ) + H ( f , g ′ ) . {\displaystyle H(f,g)'=H(f',g)+H(f,g').} This is also a special case of the product rule for bilinear maps in Banach space. === Derivations in abstract algebra and differential geometry === In abstract algebra, the product rule is the defining property of a derivation. In this terminology, the product rule states that the derivative operator is a derivation on functions. 
In differential geometry, a tangent vector to a manifold M at a point p may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at p: that is, a linear functional v which is a derivation, v ( f g ) = v ( f ) g ( p ) + f ( p ) v ( g ) . {\displaystyle v(fg)=v(f)\,g(p)+f(p)\,v(g).} Generalizing (and dualizing) the formulas of vector calculus to an n-dimensional manifold M, one may take differential forms of degrees k and l, denoted α ∈ Ω k ( M ) , β ∈ Ω ℓ ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\beta \in \Omega ^{\ell }(M)} , with the wedge or exterior product operation α ∧ β ∈ Ω k + ℓ ( M ) {\displaystyle \alpha \wedge \beta \in \Omega ^{k+\ell }(M)} , as well as the exterior derivative d : Ω m ( M ) → Ω m + 1 ( M ) {\displaystyle d:\Omega ^{m}(M)\to \Omega ^{m+1}(M)} . Then one has the graded Leibniz rule: d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β . {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .} == Applications == Among the applications of the product rule is a proof that d d x x n = n x n − 1 {\displaystyle {d \over dx}x^{n}=nx^{n-1}} when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n. If n = 0 then xn is constant and nxn − 1 = 0. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n, then for the next value, n + 1, we have d x n + 1 d x = d d x ( x n ⋅ x ) = x d d x x n + x n d d x x (the product rule is used here) = x ( n x n − 1 ) + x n ⋅ 1 (the induction hypothesis is used here) = ( n + 1 ) x n . 
{\displaystyle {\begin{aligned}{\frac {dx^{n+1}}{dx}}&{}={\frac {d}{dx}}\left(x^{n}\cdot x\right)\\[1ex]&{}=x{\frac {d}{dx}}x^{n}+x^{n}{\frac {d}{dx}}x&{\text{(the product rule is used here)}}\\[1ex]&{}=x\left(nx^{n-1}\right)+x^{n}\cdot 1&{\text{(the induction hypothesis is used here)}}\\[1ex]&{}=\left(n+1\right)x^{n}.\end{aligned}}} Therefore, if the proposition is true for n, it is true also for n + 1, and therefore for all natural n. == See also == Differentiation of integrals – Problem in mathematics Differentiation of trigonometric functions – Mathematical process of finding the derivative of a trigonometric function Differentiation rules – Rules for computing derivatives of functions Distribution (mathematics) – Mathematical term generalizing the concept of function General Leibniz rule – Generalization of the product rule in calculus Integration by parts – Mathematical method in calculus Inverse functions and differentiation – Formula for the derivative of an inverse functionPages displaying short descriptions of redirect targets Linearity of differentiation – Calculus property Power rule – Method of differentiating single term polynomials Quotient rule – Formula for the derivative of a ratio of functions Table of derivatives – Rules for computing derivatives of functionsPages displaying short descriptions of redirect targets Vector calculus identities – Mathematical identities == References ==
Wikipedia:Productive matrix#0
In linear algebra, a square nonnegative matrix A {\displaystyle A} of order n {\displaystyle n} is said to be productive, or to be a Leontief matrix, if there exists an n × 1 {\displaystyle n\times 1} nonnegative column matrix P {\displaystyle P} such that P − A P {\displaystyle P-AP} is a positive matrix. == History == The concept of productive matrix was developed by the economist Wassily Leontief (Nobel Prize in Economics in 1973) in order to model and analyze the relations between the different sectors of an economy. The interdependency linkages between the latter can be examined by the input-output model with empirical data. == Explicit definition == The matrix A ∈ M n , n ( R ) {\displaystyle A\in \mathrm {M} _{n,n}(\mathbb {R} )} is productive if and only if A ⩾ 0 {\displaystyle A\geqslant 0} and ∃ P ∈ M n , 1 ( R ) , P > 0 {\displaystyle \exists P\in \mathrm {M} _{n,1}(\mathbb {R} ),P>0} such that P − A P > 0 {\displaystyle P-AP>0} . Here M r , c ( R ) {\displaystyle \mathrm {M} _{r,c}(\mathbb {R} )} denotes the set of r×c matrices of real numbers, whereas > 0 {\displaystyle >0} and ⩾ 0 {\displaystyle \geqslant 0} indicate a positive and a nonnegative matrix, respectively. == Properties == The following properties are proven e.g. in the textbook (Michel 1984). === Characterization === Theorem A nonnegative matrix A ∈ M n , n ( R ) {\displaystyle A\in \mathrm {M} _{n,n}(\mathbb {R} )} is productive if and only if I n − A {\displaystyle I_{n}-A} is invertible with a nonnegative inverse, where I n {\displaystyle I_{n}} denotes the n × n {\displaystyle n\times n} identity matrix. Proof "If" : Let I n − A {\displaystyle I_{n}-A} be invertible with a nonnegative inverse, and let U ∈ M n , 1 ( R ) {\displaystyle U\in \mathrm {M} _{n,1}(\mathbb {R} )} be an arbitrary column matrix with U > 0 {\displaystyle U>0} . Then the matrix P = ( I n − A ) − 1 U {\displaystyle P=(I_{n}-A)^{-1}U} is nonnegative since it is the product of two nonnegative matrices. 
Moreover, P − A P = ( I n − A ) P = ( I n − A ) ( I n − A ) − 1 U = U > 0 {\displaystyle P-AP=(I_{n}-A)P=(I_{n}-A)(I_{n}-A)^{-1}U=U>0} . Therefore A {\displaystyle A} is productive. "Only if" : Let A {\displaystyle A} be productive, let P > 0 {\displaystyle P>0} such that V = P − A P > 0 {\displaystyle V=P-AP>0} . The proof proceeds by reductio ad absurdum. First, assume for contradiction I n − A {\displaystyle I_{n}-A} is singular. The endomorphism canonically associated with I n − A {\displaystyle I_{n}-A} can not be injective by singularity of the matrix. Thus some non-zero column matrix Z ∈ M n , 1 ( R ) {\displaystyle Z\in \mathrm {M} _{n,1}(\mathbb {R} )} exists such that ( I n − A ) Z = 0 {\displaystyle (I_{n}-A)Z=0} . The matrix − Z {\displaystyle -Z} has the same properties as Z {\displaystyle Z} , therefore we can choose Z {\displaystyle Z} as an element of the kernel with at least one positive entry. Hence c = sup i ∈ [ | 1 , n | ] z i p i {\displaystyle c=\sup _{i\in [|1,n|]}{\frac {z_{i}}{p_{i}}}} is nonnegative and reached with at least one value k ∈ [ | 1 , n | ] {\displaystyle k\in [|1,n|]} . By definition of V {\displaystyle V} and of Z {\displaystyle Z} , we can infer that: c v k = c ( p k − ∑ i = 1 n a k i p i ) = c p k − ∑ i = 1 n a k i c p i {\displaystyle cv_{k}=c(p_{k}-\sum _{i=1}^{n}a_{ki}p_{i})=cp_{k}-\sum _{i=1}^{n}a_{ki}cp_{i}} c p k = z k = ∑ i = 1 n a k i z i {\displaystyle cp_{k}=z_{k}=\sum _{i=1}^{n}a_{ki}z_{i}} , using that Z = A Z {\displaystyle Z=AZ} by construction. Thus c v k = ∑ i = 1 n a k i ( z i − c p i ) ≤ 0 {\displaystyle cv_{k}=\sum _{i=1}^{n}a_{ki}(z_{i}-cp_{i})\leq \ 0} , using that z i ≤ c p i {\displaystyle z_{i}\leq cp_{i}} by definition of c {\displaystyle c} . This contradicts c > 0 {\displaystyle c>0} and v k > 0 {\displaystyle v_{k}>0} , hence I n − A {\displaystyle I_{n}-A} is necessarily invertible. 
Second, assume for contradiction I n − A {\displaystyle I_{n}-A} is invertible but with at least one negative entry in its inverse. Hence ∃ X ∈ M n , 1 ( R ) , X ⩾ 0 {\displaystyle \exists X\in \mathrm {M} _{n,1}(\mathbb {R} ),X\geqslant 0} such that there is at least one negative entry in Y = ( I n − A ) − 1 X {\displaystyle Y=(I_{n}-A)^{-1}X} . Then c = sup i ∈ [ | 1 , n | ] − y i p i {\displaystyle c=\sup _{i\in [|1,n|]}-{\frac {y_{i}}{p_{i}}}} is positive and reached with at least one value k ∈ [ | 1 , n | ] {\displaystyle k\in [|1,n|]} . By definition of V {\displaystyle V} and of X {\displaystyle X} , we can infer that: c v k = c ( p k − ∑ i = 1 n a k i p i ) = − y k − ∑ i = 1 n a k i c p i {\displaystyle cv_{k}=c(p_{k}-\sum _{i=1}^{n}a_{ki}p_{i})=-y_{k}-\sum _{i=1}^{n}a_{ki}cp_{i}} x k = y k − ∑ i = 1 n a k i y i {\displaystyle x_{k}=y_{k}-\sum _{i=1}^{n}a_{ki}y_{i}} , using that X = ( I n − A ) Y {\displaystyle X=(I_{n}-A)Y} by construction c v k + x k = − ∑ i = 1 n a k i ( c p i + y i ) ⩾ 0 {\displaystyle cv_{k}+x_{k}=-\sum _{i=1}^{n}a_{ki}(cp_{i}+y_{i})\geqslant 0} using that − y i ⩽ c p i {\displaystyle -y_{i}\leqslant cp_{i}} by definition of c {\displaystyle c} . Thus x k ≤ − c v k < 0 {\displaystyle x_{k}\leq -cv_{k}<0} , contradicting X ⩾ 0 {\displaystyle X\geqslant 0} . Therefore ( I n − A ) − 1 {\displaystyle (I_{n}-A)^{-1}} is necessarily nonnegative. === Transposition === Proposition The transpose of a productive matrix is productive. Proof Let A ∈ M n , n ( R ) {\displaystyle A\in \mathrm {M} _{n,n}(\mathbb {R} )} a productive matrix. Then ( I n − A ) − 1 {\displaystyle (I_{n}-A)^{-1}} exists and is nonnegative. Yet ( I n − A T ) − 1 = ( ( I n − A ) T ) − 1 = ( ( I n − A ) − 1 ) T {\displaystyle (I_{n}-A^{T})^{-1}=((I_{n}-A)^{T})^{-1}=((I_{n}-A)^{-1})^{T}} Hence ( I n − A T ) {\displaystyle (I_{n}-A^{T})} is invertible with a nonnegative inverse. Therefore A T {\displaystyle A^{T}} is productive. 
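The characterization above lends itself to a direct numerical check. In the sketch below, the 2×2 matrix is an arbitrary illustrative example, not one taken from the article.

```python
import numpy as np

# Check productivity via the characterization: a nonnegative A is
# productive iff I - A is invertible with a nonnegative inverse.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

M = np.linalg.inv(np.eye(2) - A)   # (I - A)^(-1)
print(np.all(M >= 0))              # True: A is productive

# A positive column P with P - AP > 0 then exists, e.g. P = M @ U for U > 0:
P = M @ np.ones(2)
print(np.all(P - A @ P > 0))       # True, since (I - A) P = U > 0
```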
== Application == With a matrix approach to the input-output model, the consumption matrix is productive if the economy it describes is viable and if both this matrix and the demand vector are nonnegative. == References ==
Wikipedia:Professor of Mathematical Statistics (Cambridge)#0
The Professorship of Mathematical Statistics at the University of Cambridge was established in 1961 with the support of the Royal Statistical Society and the aid of donations from various companies and banks. It was the first professorship in the Statistical Laboratory, and the first in Cambridge University explicitly intended for the study of statistics. Until 1973 the professor was ex officio Director of the Statistical Laboratory. == List of professors of mathematical statistics == 1962–1985 David Kendall 1985–1992 David Williams 1992– Geoffrey Grimmett == References ==
Wikipedia:Professor of Mathematics (Glasgow)#0
The Chair of Mathematics in the University of Glasgow in Scotland was established in 1691. Previously, under James VI's Nova Erectio, the teaching of Mathematics had been the responsibility of the Regents. == List of Mathematics Professors == George Sinclair MA (1691–1696) Robert Sinclair MA MD (1699) Robert Simson MA MD (1711) Rev Prof James Williamson FRSE MA DD (1761) James Millar MA (1796) James Thomson MA LLD (1832) Hugh Blackburn MA (1849) William Jack MA LLD (1879) George Alexander Gibson MA LLD (1909) Thomas Murray MacRobert MA DSc LLD (1927) Robert Alexander Rankin MA PhD DSc FRSE (1954–1982) Robert Winston Keith Odoni BSc PhD FRSE (1989–2001) Peter Kropholler (2003–2013) Michael Wemyss (2016–) == References == Who, What and Where: The History and Constitution of the University of Glasgow (compiled by Michael Moss, Moira Rankin and Lesley Richmond) https://www.universitystory.gla.ac.uk/biography/?id=WH1773&type=P https://www.maths.gla.ac.uk/~mwemyss/ == See also == List of Professorships at the University of Glasgow
Wikipedia:Professor of Statistical Science (Cambridge)#0
The Professorship of Statistical Science is a professorship at the University of Cambridge. It was established in 1994 as the third professorship within the Cambridge Statistical Laboratory. == List of Professors of Statistical Science == 1994–1996, Richard L. Smith 2002–2015, L. C. G. Rogers 2017–present, Richard Samworth == References ==
Wikipedia:Professorship of Mathematical Finance#0
The position of Professor of Mathematical Finance in the Mathematical Institute of the University of Oxford was established in 2002. It is one of the six Statutory professorships in Mathematics at Oxford. From 2005 to 2015, the position was designated as 'Nomura Chair of Mathematical Finance' and endowed by Nomura. The post is associated with a professorial fellowship at St. Hugh's College, Oxford. == List of Professors of Mathematical Finance == The holders of the Chair have been: XunYu Zhou, 2008-2016. Rama Cont, 2018- == References == == See also == List of professorships at the University of Oxford
Wikipedia:Project Euler#0
Project Euler (named after Leonhard Euler) is a website dedicated to a series of computational problems intended to be solved with computer programs. The project attracts graduates and students interested in mathematics and computer programming. Since its creation in 2001 by Colin Hughes, Project Euler has gained notability and popularity worldwide. It includes 929 problems as of 31 March 2025, with a new one added approximately every week. Problems are of varying difficulty, but each is solvable in less than a minute of CPU time using an efficient algorithm on a modestly powered computer. == Features of the site == A forum specific to each question may be viewed after the user has correctly answered the given question. Problems can be sorted on ID, number solved and difficulty. Participants can track their progress through achievement levels based on the number of problems solved. A new level is reached for every 25 problems solved. Special awards exist for solving special combinations of problems. For instance, there is an award for solving fifty prime-numbered problems. A special "Eulerians" level exists to track achievement based on the fastest fifty solvers of recent problems so that newer members can compete without solving older problems. == Example problem and solutions == The first Project Euler problem is Multiples of 3 and 5 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. It is a 5%-rated problem, indicating it is one of the easiest on the site. The initial approach a beginner can come up with is a brute-force attempt. Given the upper bound of 1000 in this case, a brute-force search is easily achievable for most current home computers. A Python program that solves it is presented below. This solution has a time complexity of O ( n ) {\displaystyle O(n)} . A user can keep refining their solution to any given problem further. 
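A minimal brute-force version in Python (an illustrative sketch, not necessarily the article's original listing) is:

```python
# Brute-force O(n) solution: test every natural number below 1000.
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # 233168
```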
In this case, there exists a constant-time solution for the problem. The inclusion-exclusion principle states that if there are two finite sets A , B {\displaystyle A,B} , the number of elements in their union can be expressed as | A ∪ B | = | A | + | B | − | A ∩ B | {\displaystyle |A\cup B|=|A|+|B|-|A\cap B|} . This is a well-known result in combinatorics. One can extend this result and express a relation for the sum of their elements, namely ∑ x ∈ A ∪ B x = ∑ x ∈ A x + ∑ x ∈ B x − ∑ x ∈ A ∩ B x {\displaystyle \sum _{x\in A\cup B}x=\sum _{x\in A}x+\sum _{x\in B}x-\sum _{x\in A\cap B}x} Applying this to the problem, let A {\displaystyle A} denote the multiples of 3 up to n {\displaystyle n} and B {\displaystyle B} the multiples of 5 up to n {\displaystyle n} ; the problem then reduces to summing the multiples of 3, adding the sum of the multiples of 5, and subtracting the sum of the multiples of 15. For an arbitrarily selected k {\displaystyle k} , one can compute the sum of the multiples of k {\displaystyle k} up to n {\displaystyle n} via k + 2 k + 3 k + … + ⌊ n / k ⌋ k = k ( 1 + 2 + 3 + … + ⌊ n / k ⌋ ) = k ⌊ n / k ⌋ ( ⌊ n / k ⌋ + 1 ) 2 {\displaystyle k+2k+3k+\ldots +\lfloor n/k\rfloor k=k(1+2+3+\ldots +\lfloor n/k\rfloor )=k{\frac {\lfloor n/k\rfloor (\lfloor n/k\rfloor +1)}{2}}} Later problems progress (non-linearly) in difficulty, requiring more creative methodology and higher understanding of the mathematical principles behind the problems. == Project Euler Community == Project Euler fosters a community on its official website where members engage in discussion after entering the correct solution. Many also share their detailed approaches on external platforms such as GitHub and personal websites, promoting collaborative learning and the development of even more creative solutions. 
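The constant-time approach can be sketched as follows; the helper name `sum_multiples` is chosen here for illustration and is not from the article.

```python
# Constant-time solution via inclusion-exclusion and the closed form
# for the sum of the multiples of k strictly below the bound n.
def sum_multiples(k, n):
    m = (n - 1) // k               # number of multiples of k below n
    return k * m * (m + 1) // 2    # k * (1 + 2 + ... + m)

answer = sum_multiples(3, 1000) + sum_multiples(5, 1000) - sum_multiples(15, 1000)
print(answer)  # 233168
```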
== See also == List of computer science awards List of things named after Leonhard Euler == References == == External links == Official website Project Euler forum Project Euler translations into several other languages
Wikipedia:Project NExT#0
MAA Project NExT (New Experiences in Teaching) is a program sponsored by the Mathematical Association of America (MAA) to aid in the professional development of mathematicians, statisticians, and mathematics educators after they receive their PhDs. It involves workshops and lectures on teaching, academic research, academic scholarship, and professional activities. The participants in the program are called Project NExT Fellows or sometimes Dots, and the program also provides ample networking opportunities for them. Each fellow is also provided with a consultant, who serves as a mentor for them. == History == Project NExT was founded by James (Jim) Leitzel (Ohio State University) and Chris Stevens (Saint Louis University). The first fellows were selected in 1994. Jim Leitzel died in 1998, and Aparna Higgins (University of Dayton) and Joe Gallian (University of Minnesota Duluth) became co-directors of Project NExT. Chris Stevens stepped down as director in 2010, and was succeeded by Aparna Higgins and Joe Gallian. Judith Covington (Louisiana State University, Shreveport) and Gavin LaRose (University of Michigan) first served as Associate Co-Directors and later became Co-Directors. In 2007, the total number of fellows surpassed 1000. By 2017 the total number of fellows reached 1700. In 2023 Christine Kelley became director. == Selection of fellows == The program is aimed at faculty who are in the early stages of their higher ed teaching career, in a mathematics (or closely related) department, after receiving their doctorate. Fellows are selected based on an application, including a short curriculum vitae, a research statement, and a teaching statement expressing interest in the program. The application also requires a letter from the applicant's department chair guaranteeing funding to attend several conferences. The number of selected fellows depends on funding. Currently, just under 100 are selected each year. 
== The program == Project NExT is a professional development program for college-level faculty interested in teaching. The program provides workshops and an electronic mailing list for its members. Fellows participate in MathFest during the year of their selection and the year after, and in the Joint Mathematics Meeting in the January after their selection as fellows. Each fellow is also assigned a consultant outside of their own institution. NExT fellows organize several sessions at the Joint Meeting and MathFest, on topics of their choosing. Since 2016, all MAA Project NExT events at the Joint Mathematics Meetings have been open to all conference attendees. == Affiliation with other professional development organizations == The national Project NExT program is strongly affiliated with Section NExT programs, which are run by local sections of the MAA, and involve many of the same activities. Section NExT fellows can also participate in the national workshops. Project NExT is also strongly associated with the Young Mathematicians Network. == See also == Mathematical Association of America == References == == External links == Homepage of Project NExT
Wikipedia:Projection (linear algebra)#0
In linear algebra and functional analysis, a projection is a linear transformation P {\displaystyle P} from a vector space to itself (an endomorphism) such that P ∘ P = P {\displaystyle P\circ P=P} . That is, whenever P {\displaystyle P} is applied twice to any vector, it gives the same result as if it were applied once (i.e. P {\displaystyle P} is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object. == Definitions == A projection on a vector space V {\displaystyle V} is a linear operator P : V → V {\displaystyle P\colon V\to V} such that P 2 = P {\displaystyle P^{2}=P} . When V {\displaystyle V} has an inner product and is complete, i.e. when V {\displaystyle V} is a Hilbert space, the concept of orthogonality can be used. A projection P {\displaystyle P} on a Hilbert space V {\displaystyle V} is called an orthogonal projection if it satisfies ⟨ P x , y ⟩ = ⟨ x , P y ⟩ {\displaystyle \langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P\mathbf {y} \rangle } for all x , y ∈ V {\displaystyle \mathbf {x} ,\mathbf {y} \in V} . A projection on a Hilbert space that is not orthogonal is called an oblique projection. === Projection matrix === A square matrix P {\displaystyle P} is called a projection matrix if it is equal to its square, i.e. if P 2 = P {\displaystyle P^{2}=P} .: p. 38 A square matrix P {\displaystyle P} is called an orthogonal projection matrix if P 2 = P = P T {\displaystyle P^{2}=P=P^{\mathrm {T} }} for a real matrix, and respectively P 2 = P = P ∗ {\displaystyle P^{2}=P=P^{*}} for a complex matrix, where P T {\displaystyle P^{\mathrm {T} }} denotes the transpose of P {\displaystyle P} and P ∗ {\displaystyle P^{*}} denotes the adjoint or Hermitian transpose of P {\displaystyle P} .: p. 
223 A projection matrix that is not an orthogonal projection matrix is called an oblique projection matrix. The eigenvalues of a projection matrix must be 0 or 1. == Examples == === Orthogonal projection === For example, the function which maps the point ( x , y , z ) {\displaystyle (x,y,z)} in three-dimensional space R 3 {\displaystyle \mathbb {R} ^{3}} to the point ( x , y , 0 ) {\displaystyle (x,y,0)} is an orthogonal projection onto the xy-plane. This function is represented by the matrix P = [ 1 0 0 0 1 0 0 0 0 ] . {\displaystyle P={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}}.} The action of this matrix on an arbitrary vector is P [ x y z ] = [ x y 0 ] . {\displaystyle P{\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}.} To see that P {\displaystyle P} is indeed a projection, i.e., P = P 2 {\displaystyle P=P^{2}} , we compute P 2 [ x y z ] = P [ x y 0 ] = [ x y 0 ] = P [ x y z ] . {\displaystyle P^{2}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=P{\begin{bmatrix}x\\y\\0\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}=P{\begin{bmatrix}x\\y\\z\end{bmatrix}}.} Observing that P T = P {\displaystyle P^{\mathrm {T} }=P} shows that the projection is an orthogonal projection. === Oblique projection === A simple example of a non-orthogonal (oblique) projection is P = [ 0 0 α 1 ] . {\displaystyle P={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}.} Via matrix multiplication, one sees that P 2 = [ 0 0 α 1 ] [ 0 0 α 1 ] = [ 0 0 α 1 ] = P . {\displaystyle P^{2}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}{\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}=P.} showing that P {\displaystyle P} is indeed a projection. The projection P {\displaystyle P} is orthogonal if and only if α = 0 {\displaystyle \alpha =0} because only then P T = P . {\displaystyle P^{\mathrm {T} }=P.} == Properties and classification == === Idempotence === By definition, a projection P {\displaystyle P} is idempotent (i.e. 
P 2 = P {\displaystyle P^{2}=P} ). === Open map === Every projection is an open map onto its image, meaning that it maps each open set in the domain to an open set in the subspace topology of the image. That is, for any vector x {\displaystyle \mathbf {x} } and any ball B x {\displaystyle B_{\mathbf {x} }} (with positive radius) centered on x {\displaystyle \mathbf {x} } , there exists a ball B P x {\displaystyle B_{P\mathbf {x} }} (with positive radius) centered on P x {\displaystyle P\mathbf {x} } that is wholly contained in the image P ( B x ) {\displaystyle P(B_{\mathbf {x} })} . === Complementarity of image and kernel === Let W {\displaystyle W} be a finite-dimensional vector space and P {\displaystyle P} be a projection on W {\displaystyle W} . Suppose the subspaces U {\displaystyle U} and V {\displaystyle V} are the image and kernel of P {\displaystyle P} respectively. Then P {\displaystyle P} has the following properties: P {\displaystyle P} is the identity operator I {\displaystyle I} on U {\displaystyle U} : ∀ x ∈ U : P x = x . {\displaystyle \forall \mathbf {x} \in U:P\mathbf {x} =\mathbf {x} .} We have a direct sum W = U ⊕ V {\displaystyle W=U\oplus V} . Every vector x ∈ W {\displaystyle \mathbf {x} \in W} may be decomposed uniquely as x = u + v {\displaystyle \mathbf {x} =\mathbf {u} +\mathbf {v} } with u = P x {\displaystyle \mathbf {u} =P\mathbf {x} } and v = x − P x = ( I − P ) x {\displaystyle \mathbf {v} =\mathbf {x} -P\mathbf {x} =\left(I-P\right)\mathbf {x} } , and where u ∈ U , v ∈ V . {\displaystyle \mathbf {u} \in U,\mathbf {v} \in V.} The image and kernel of a projection are complementary, as are P {\displaystyle P} and Q = I − P {\displaystyle Q=I-P} . The operator Q {\displaystyle Q} is also a projection as the image and kernel of P {\displaystyle P} become the kernel and image of Q {\displaystyle Q} and vice versa. 
We say P {\displaystyle P} is a projection along V {\displaystyle V} onto U {\displaystyle U} (kernel/image) and Q {\displaystyle Q} is a projection along U {\displaystyle U} onto V {\displaystyle V} . === Spectrum === In infinite-dimensional vector spaces, the spectrum of a projection is contained in { 0 , 1 } {\displaystyle \{0,1\}} as ( λ I − P ) − 1 = 1 λ I + 1 λ ( λ − 1 ) P . {\displaystyle (\lambda I-P)^{-1}={\frac {1}{\lambda }}I+{\frac {1}{\lambda (\lambda -1)}}P.} Only 0 or 1 can be an eigenvalue of a projection. This implies that an orthogonal projection P {\displaystyle P} is always a positive semi-definite matrix. In general, the corresponding eigenspaces are (respectively) the kernel and range of the projection. Decomposition of a vector space into direct sums is not unique. Therefore, given a subspace V {\displaystyle V} , there may be many projections whose range (or kernel) is V {\displaystyle V} . If a projection is nontrivial it has minimal polynomial x 2 − x = x ( x − 1 ) {\displaystyle x^{2}-x=x(x-1)} , which factors into distinct linear factors, and thus P {\displaystyle P} is diagonalizable. === Product of projections === The product of projections is not in general a projection, even if they are orthogonal. If two projections commute then their product is a projection, but the converse is false: the product of two non-commuting projections may be a projection. If two orthogonal projections commute then their product is an orthogonal projection. If the product of two orthogonal projections is an orthogonal projection, then the two orthogonal projections commute (more generally: two self-adjoint endomorphisms commute if and only if their product is self-adjoint). === Orthogonal projections === When the vector space W {\displaystyle W} has an inner product and is complete (is a Hilbert space) the concept of orthogonality can be used. 
An orthogonal projection is a projection for which the range U {\displaystyle U} and the kernel V {\displaystyle V} are orthogonal subspaces. Thus, for every x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } in W {\displaystyle W} , ⟨ P x , ( y − P y ) ⟩ = ⟨ ( x − P x ) , P y ⟩ = 0 {\displaystyle \langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =\langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =0} . Equivalently: ⟨ x , P y ⟩ = ⟨ P x , P y ⟩ = ⟨ P x , y ⟩ . {\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle .} A projection is orthogonal if and only if it is self-adjoint. Using the self-adjoint and idempotent properties of P {\displaystyle P} , for any x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } in W {\displaystyle W} we have P x ∈ U {\displaystyle P\mathbf {x} \in U} , y − P y ∈ V {\displaystyle \mathbf {y} -P\mathbf {y} \in V} , and ⟨ P x , y − P y ⟩ = ⟨ x , ( P − P 2 ) y ⟩ = 0 {\displaystyle \langle P\mathbf {x} ,\mathbf {y} -P\mathbf {y} \rangle =\langle \mathbf {x} ,\left(P-P^{2}\right)\mathbf {y} \rangle =0} where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product associated with W {\displaystyle W} . Therefore, P {\displaystyle P} and I − P {\displaystyle I-P} are orthogonal projections. 
The other direction, namely that if P {\displaystyle P} is orthogonal then it is self-adjoint, follows from the implication from ⟨ ( x − P x ) , P y ⟩ = ⟨ P x , ( y − P y ) ⟩ = 0 {\displaystyle \langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =\langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =0} to ⟨ x , P y ⟩ = ⟨ P x , P y ⟩ = ⟨ P x , y ⟩ = ⟨ x , P ∗ y ⟩ {\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P^{*}\mathbf {y} \rangle } for every x {\displaystyle x} and y {\displaystyle y} in W {\displaystyle W} ; thus P = P ∗ {\displaystyle P=P^{*}} . The existence of an orthogonal projection onto a closed subspace follows from the Hilbert projection theorem. ==== Properties and special cases ==== An orthogonal projection is a bounded operator. This is because for every v {\displaystyle \mathbf {v} } in the vector space we have, by the Cauchy–Schwarz inequality: ‖ P v ‖ 2 = ⟨ P v , P v ⟩ = ⟨ P v , v ⟩ ≤ ‖ P v ‖ ⋅ ‖ v ‖ {\displaystyle \left\|P\mathbf {v} \right\|^{2}=\langle P\mathbf {v} ,P\mathbf {v} \rangle =\langle P\mathbf {v} ,\mathbf {v} \rangle \leq \left\|P\mathbf {v} \right\|\cdot \left\|\mathbf {v} \right\|} Thus ‖ P v ‖ ≤ ‖ v ‖ {\displaystyle \left\|P\mathbf {v} \right\|\leq \left\|\mathbf {v} \right\|} . For finite-dimensional complex or real vector spaces, the standard inner product can be substituted for ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } . ===== Formulas ===== A simple case occurs when the orthogonal projection is onto a line. If u {\displaystyle \mathbf {u} } is a unit vector on the line, then the projection is given by the outer product P u = u u T . {\displaystyle P_{\mathbf {u} }=\mathbf {u} \mathbf {u} ^{\mathsf {T}}.} (If u {\displaystyle \mathbf {u} } is complex-valued, the transpose in the above equation is replaced by a Hermitian transpose). 
This operator leaves u invariant, and it annihilates all vectors orthogonal to u {\displaystyle \mathbf {u} } , proving that it is indeed the orthogonal projection onto the line containing u. A simple way to see this is to consider an arbitrary vector x {\displaystyle \mathbf {x} } as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it, x = x ∥ + x ⊥ {\displaystyle \mathbf {x} =\mathbf {x} _{\parallel }+\mathbf {x} _{\perp }} . Applying projection, we get P u x = u u T x ∥ + u u T x ⊥ = u ( sgn ( u T x ∥ ) ‖ x ∥ ‖ ) + u ⋅ 0 = x ∥ {\displaystyle P_{\mathbf {u} }\mathbf {x} =\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }+\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\perp }=\mathbf {u} \left(\operatorname {sgn} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }\right)\left\|\mathbf {x} _{\parallel }\right\|\right)+\mathbf {u} \cdot \mathbf {0} =\mathbf {x} _{\parallel }} by the properties of the dot product of parallel and perpendicular vectors. This formula can be generalized to orthogonal projections on a subspace of arbitrary dimension. Let u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} be an orthonormal basis of the subspace U {\displaystyle U} , with the assumption that the integer k ≥ 1 {\displaystyle k\geq 1} , and let A {\displaystyle A} denote the n × k {\displaystyle n\times k} matrix whose columns are u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} , i.e., A = [ u 1 ⋯ u k ] {\displaystyle A={\begin{bmatrix}\mathbf {u} _{1}&\cdots &\mathbf {u} _{k}\end{bmatrix}}} . Then the projection is given by: P A = A A T {\displaystyle P_{A}=AA^{\mathsf {T}}} which can be rewritten as P A = ∑ i ⟨ u i , ⋅ ⟩ u i . 
{\displaystyle P_{A}=\sum _{i}\langle \mathbf {u} _{i},\cdot \rangle \mathbf {u} _{i}.} The matrix A T {\displaystyle A^{\mathsf {T}}} is the partial isometry that vanishes on the orthogonal complement of U {\displaystyle U} , and A {\displaystyle A} is the isometry that embeds U {\displaystyle U} into the underlying vector space. The range of P A {\displaystyle P_{A}} is therefore the final space of A {\displaystyle A} . It is also clear that A A T {\displaystyle AA^{\mathsf {T}}} is the identity operator on U {\displaystyle U} . The orthonormality condition can also be dropped. If u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} is a (not necessarily orthonormal) basis with k ≥ 1 {\displaystyle k\geq 1} , and A {\displaystyle A} is the matrix with these vectors as columns, then the projection is: P A = A ( A T A ) − 1 A T . {\displaystyle P_{A}=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}.} The matrix A {\displaystyle A} still embeds U {\displaystyle U} into the underlying vector space but is no longer an isometry in general. The matrix ( A T A ) − 1 {\displaystyle \left(A^{\mathsf {T}}A\right)^{-1}} is a "normalizing factor" that recovers the norm. For example, the rank-1 operator u u T {\displaystyle \mathbf {u} \mathbf {u} ^{\mathsf {T}}} is not a projection if ‖ u ‖ ≠ 1. {\displaystyle \left\|\mathbf {u} \right\|\neq 1.} After dividing by u T u = ‖ u ‖ 2 , {\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {u} =\left\|\mathbf {u} \right\|^{2},} we obtain the projection u ( u T u ) − 1 u T {\displaystyle \mathbf {u} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {u} \right)^{-1}\mathbf {u} ^{\mathsf {T}}} onto the subspace spanned by u {\displaystyle u} . 
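The formulas above — the rank-1 projection u uᵀ, the orthonormal-basis form A Aᵀ, and the general form A(AᵀA)⁻¹Aᵀ — can be checked on a concrete subspace. This is an illustrative sketch (assuming NumPy), not part of the article:

```python
import numpy as np

# Rank-1 orthogonal projection onto the line spanned by a unit vector u.
u = np.array([3.0, 4.0]) / 5.0
P_u = np.outer(u, u)                              # P_u = u u^T
assert np.allclose(P_u @ P_u, P_u)
assert np.allclose(P_u @ u, u)                    # u is left invariant

# Projection onto a plane in R^3 from a *non-orthonormal* basis:
# P_A = A (A^T A)^{-1} A^T.
A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 1.0]])
P_A = A @ np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(P_A @ P_A, P_A)                # idempotent
assert np.allclose(P_A, P_A.T)                    # self-adjoint => orthogonal
assert np.allclose(P_A @ A, A)                    # identity on the range

# Orthonormalizing the columns (QR) recovers the simpler form P = Q Q^T.
Q, _ = np.linalg.qr(A)
assert np.allclose(Q @ Q.T, P_A)
```

The QR check makes the "normalizing factor" interpretation of (AᵀA)⁻¹ concrete: after orthonormalization it degenerates to the identity.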
In the general case, we can have an arbitrary positive definite matrix D {\displaystyle D} defining an inner product ⟨ x , y ⟩ D = y † D x {\displaystyle \langle x,y\rangle _{D}=y^{\dagger }Dx} , and the projection P A {\displaystyle P_{A}} is given by P A x = argmin y ∈ range ( A ) ‖ x − y ‖ D 2 {\textstyle P_{A}x=\operatorname {argmin} _{y\in \operatorname {range} (A)}\left\|x-y\right\|_{D}^{2}} . Then P A = A ( A T D A ) − 1 A T D . {\displaystyle P_{A}=A\left(A^{\mathsf {T}}DA\right)^{-1}A^{\mathsf {T}}D.} When the range space of the projection is generated by a frame (i.e. the number of generators is greater than its dimension), the formula for the projection takes the form: P A = A A + {\displaystyle P_{A}=AA^{+}} . Here A + {\displaystyle A^{+}} stands for the Moore–Penrose pseudoinverse. This is just one of many ways to construct the projection operator. If [ A B ] {\displaystyle {\begin{bmatrix}A&B\end{bmatrix}}} is a non-singular matrix and A T B = 0 {\displaystyle A^{\mathsf {T}}B=0} (i.e., B {\displaystyle B} is the null space matrix of A {\displaystyle A} ), the following holds: I = [ A B ] [ A B ] − 1 [ A T B T ] − 1 [ A T B T ] = [ A B ] ( [ A T B T ] [ A B ] ) − 1 [ A T B T ] = [ A B ] [ A T A O O B T B ] − 1 [ A T B T ] = A ( A T A ) − 1 A T + B ( B T B ) − 1 B T {\displaystyle {\begin{aligned}I&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}\left({\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}\right)^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A^{\mathsf {T}}A&O\\O&B^{\mathsf {T}}B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\[4pt]&=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}+B\left(B^{\mathsf 
{T}}B\right)^{-1}B^{\mathsf {T}}\end{aligned}}} If the orthogonal condition is enhanced to A T W B = A T W T B = 0 {\displaystyle A^{\mathsf {T}}WB=A^{\mathsf {T}}W^{\mathsf {T}}B=0} with W {\displaystyle W} non-singular, the following holds: I = [ A B ] [ ( A T W A ) − 1 A T ( B T W B ) − 1 B T ] W . {\displaystyle I={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}\left(A^{\mathsf {T}}WA\right)^{-1}A^{\mathsf {T}}\\\left(B^{\mathsf {T}}WB\right)^{-1}B^{\mathsf {T}}\end{bmatrix}}W.} All these formulas also hold for complex inner product spaces, provided that the conjugate transpose is used instead of the transpose. Further details on sums of projectors can be found in Banerjee and Roy (2014). Also see Banerjee (2004) for application of sums of projectors in basic spherical trigonometry. === Oblique projections === The term oblique projections is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection), though not as frequently as orthogonal projections. Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection. A projection is defined by its kernel and the basis vectors used to characterize its range (which is a complement of the kernel). When these basis vectors are orthogonal to the kernel, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the kernel, the projection is an oblique projection, or just a projection. ==== A matrix representation formula for a nonzero projection operator ==== Let P : V → V {\displaystyle P\colon V\to V} be a linear operator such that P 2 = P {\displaystyle P^{2}=P} and assume that P {\displaystyle P} is not the zero operator. 
Let the vectors u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} form a basis for the range of P {\displaystyle P} , and assemble these vectors in the n × k {\displaystyle n\times k} matrix A {\displaystyle A} . Then k ≥ 1 {\displaystyle k\geq 1} , otherwise k = 0 {\displaystyle k=0} and P {\displaystyle P} is the zero operator. The range and the kernel are complementary spaces, so the kernel has dimension n − k {\displaystyle n-k} . It follows that the orthogonal complement of the kernel has dimension k {\displaystyle k} . Let v 1 , … , v k {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{k}} form a basis for the orthogonal complement of the kernel of the projection, and assemble these vectors in the matrix B {\displaystyle B} . Then the projection P {\displaystyle P} (with the condition k ≥ 1 {\displaystyle k\geq 1} ) is given by P = A ( B T A ) − 1 B T . {\displaystyle P=A\left(B^{\mathsf {T}}A\right)^{-1}B^{\mathsf {T}}.} This expression generalizes the formula for orthogonal projections given above. A standard proof of this expression is the following. For any vector x {\displaystyle \mathbf {x} } in the vector space V {\displaystyle V} , we can decompose x = x 1 + x 2 {\displaystyle \mathbf {x} =\mathbf {x} _{1}+\mathbf {x} _{2}} , where vector x 1 = P ( x ) {\displaystyle \mathbf {x} _{1}=P(\mathbf {x} )} is in the image of P {\displaystyle P} , and vector x 2 = x − P ( x ) . {\displaystyle \mathbf {x} _{2}=\mathbf {x} -P(\mathbf {x} ).} So P ( x 2 ) = P ( x ) − P 2 ( x ) = 0 {\displaystyle P(\mathbf {x} _{2})=P(\mathbf {x} )-P^{2}(\mathbf {x} )=\mathbf {0} } , and then x 2 {\displaystyle \mathbf {x} _{2}} is in the kernel of P {\displaystyle P} , which is the null space of A . 
{\displaystyle A.} In other words, the vector x 1 {\displaystyle \mathbf {x} _{1}} is in the column space of A , {\displaystyle A,} so x 1 = A w {\displaystyle \mathbf {x} _{1}=A\mathbf {w} } for some k {\displaystyle k} dimension vector w {\displaystyle \mathbf {w} } and the vector x 2 {\displaystyle \mathbf {x} _{2}} satisfies B T x 2 = 0 {\displaystyle B^{\mathsf {T}}\mathbf {x} _{2}=\mathbf {0} } by the construction of B {\displaystyle B} . Put these conditions together, and we find a vector w {\displaystyle \mathbf {w} } so that B T ( x − A w ) = 0 {\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} } . Since matrices A {\displaystyle A} and B {\displaystyle B} are of full rank k {\displaystyle k} by their construction, the k × k {\displaystyle k\times k} -matrix B T A {\displaystyle B^{\mathsf {T}}A} is invertible. So the equation B T ( x − A w ) = 0 {\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} } gives the vector w = ( B T A ) − 1 B T x . {\displaystyle \mathbf {w} =(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} .} In this way, P x = x 1 = A w = A ( B T A ) − 1 B T x {\displaystyle P\mathbf {x} =\mathbf {x} _{1}=A\mathbf {w} =A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} } for any vector x ∈ V {\displaystyle \mathbf {x} \in V} and hence P = A ( B T A ) − 1 B T {\displaystyle P=A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}} . In the case that P {\displaystyle P} is an orthogonal projection, we can take A = B {\displaystyle A=B} , and it follows that P = A ( A T A ) − 1 A T {\displaystyle P=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}} . By using this formula, one can easily check that P = P T {\displaystyle P=P^{\mathsf {T}}} . In general, if the vector space is over complex number field, one then uses the Hermitian transpose A ∗ {\displaystyle A^{*}} and has the formula P = A ( A ∗ A ) − 1 A ∗ {\displaystyle P=A\left(A^{*}A\right)^{-1}A^{*}} . 
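A small numeric instance of the formula P = A(BᵀA)⁻¹Bᵀ may make the construction concrete (illustrative sketch, assuming NumPy; the matrices are hypothetical examples): the range is the x-axis and the kernel is chosen oblique to it.

```python
import numpy as np

# Oblique projection: range = column space of A, kernel = (span B)^perp.
A = np.array([[1.0],
              [0.0]])                    # range: the x-axis
B = np.array([[1.0],
              [1.0]])                    # kernel will be span{(1, -1)}
P = A @ np.linalg.inv(B.T @ A) @ B.T

assert np.allclose(P @ P, P)             # idempotent
assert not np.allclose(P, P.T)           # not self-adjoint: genuinely oblique
assert np.allclose(P @ A, A)             # identity on its range

k = np.array([1.0, -1.0])                # orthogonal to B, hence in ker P
assert np.allclose(P @ k, 0.0)
```

Taking A = B in the same code reproduces the orthogonal case P = A(AᵀA)⁻¹Aᵀ.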
Recall that one can express the Moore–Penrose inverse of the matrix A {\displaystyle A} by A + = ( A ∗ A ) − 1 A ∗ {\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}} since A {\displaystyle A} has full column rank, so P = A A + {\displaystyle P=AA^{+}} . ==== Singular values ==== I − P {\displaystyle I-P} is also an oblique projection. The singular values of P {\displaystyle P} and I − P {\displaystyle I-P} can be computed from an orthonormal basis of the range of A {\displaystyle A} . Let Q A {\displaystyle Q_{A}} be a matrix whose columns form an orthonormal basis of the range of A {\displaystyle A} , and let Q A ⊥ {\displaystyle Q_{A}^{\perp }} be the corresponding matrix for the orthogonal complement of that range. Denote the singular values of the matrix Q A T A ( B T A ) − 1 B T Q A ⊥ {\displaystyle Q_{A}^{T}A(B^{T}A)^{-1}B^{T}Q_{A}^{\perp }} by the positive values γ 1 ≥ γ 2 ≥ … ≥ γ k {\displaystyle \gamma _{1}\geq \gamma _{2}\geq \ldots \geq \gamma _{k}} . With this, the singular values for P {\displaystyle P} are: σ i = { 1 + γ i 2 1 ≤ i ≤ k 0 otherwise {\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\0&{\text{otherwise}}\end{cases}}} and the singular values for I − P {\displaystyle I-P} are σ i = { 1 + γ i 2 1 ≤ i ≤ k 1 k + 1 ≤ i ≤ n − k 0 otherwise {\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\1&k+1\leq i\leq n-k\\0&{\text{otherwise}}\end{cases}}} This implies that the largest singular values of P {\displaystyle P} and I − P {\displaystyle I-P} are equal, and thus that the matrix norms of the two oblique projections are the same. However, the condition numbers satisfy the relation κ ( I − P ) = σ 1 1 ≥ σ 1 σ k = κ ( P ) {\displaystyle \kappa (I-P)={\frac {\sigma _{1}}{1}}\geq {\frac {\sigma _{1}}{\sigma _{k}}}=\kappa (P)} , and are therefore not necessarily equal. 
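The equality of the largest singular values of P and I − P, and hence of their operator 2-norms, can be observed numerically. An illustrative sketch (assuming NumPy; the matrices are hypothetical examples):

```python
import numpy as np

# An oblique projection and its complement have the same largest singular value.
A = np.array([[1.0], [0.0]])
B = np.array([[2.0], [1.0]])
P = A @ np.linalg.inv(B.T @ A) @ B.T          # P = [[1, 0.5], [0, 0]]
assert np.allclose(P @ P, P)

s_P = np.linalg.svd(P, compute_uv=False)
s_Q = np.linalg.svd(np.eye(2) - P, compute_uv=False)
assert np.isclose(s_P[0], s_Q[0])             # equal top singular values
assert np.isclose(np.linalg.norm(P, 2),
                  np.linalg.norm(np.eye(2) - P, 2))
```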
=== Finding projection with an inner product === Let V {\displaystyle V} be a vector space (in this case a plane) spanned by orthogonal vectors u 1 , u 2 , … , u p {\displaystyle \mathbf {u} _{1},\mathbf {u} _{2},\dots ,\mathbf {u} _{p}} . Let y {\displaystyle \mathbf {y} } be a vector. One can define a projection of y {\displaystyle \mathbf {y} } onto V {\displaystyle V} as proj V y = y ⋅ u i u i ⋅ u i u i {\displaystyle \operatorname {proj} _{V}\mathbf {y} ={\frac {\mathbf {y} \cdot \mathbf {u} ^{i}}{\mathbf {u} ^{i}\cdot \mathbf {u} ^{i}}}\mathbf {u} ^{i}} where repeated indices are summed over (Einstein sum notation). The vector y {\displaystyle \mathbf {y} } can be written as an orthogonal sum such that y = proj V y + z {\displaystyle \mathbf {y} =\operatorname {proj} _{V}\mathbf {y} +\mathbf {z} } . proj V y {\displaystyle \operatorname {proj} _{V}\mathbf {y} } is sometimes denoted as y ^ {\displaystyle {\hat {\mathbf {y} }}} . A standard theorem of linear algebra states that the norm of this z {\displaystyle \mathbf {z} } is the smallest distance (the orthogonal distance) from y {\displaystyle \mathbf {y} } to V {\displaystyle V} ; this fact is commonly used in areas such as machine learning. == Canonical forms == Any projection P = P 2 {\displaystyle P=P^{2}} on a vector space of dimension d {\displaystyle d} over a field is a diagonalizable matrix, since its minimal polynomial divides x 2 − x {\displaystyle x^{2}-x} , which splits into distinct linear factors. Thus there exists a basis in which P {\displaystyle P} has the form P = I r ⊕ 0 d − r {\displaystyle P=I_{r}\oplus 0_{d-r}} where r {\displaystyle r} is the rank of P {\displaystyle P} . Here I r {\displaystyle I_{r}} is the identity matrix of size r {\displaystyle r} , 0 d − r {\displaystyle 0_{d-r}} is the zero matrix of size d − r {\displaystyle d-r} , and ⊕ {\displaystyle \oplus } is the direct sum operator. 
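The canonical form I_r ⊕ 0_{d−r} can be exhibited by diagonalizing a concrete projection. An illustrative sketch (assuming NumPy; the matrix is a hypothetical example):

```python
import numpy as np

# An oblique projection on R^2 of rank r = 1.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)

vals, vecs = np.linalg.eig(P)                 # eigenvalues are only 0 and 1
assert np.allclose(sorted(vals.real), [0.0, 1.0])

order = np.argsort(-vals.real)                # list eigenvalue 1 first
S = vecs[:, order]
D = np.linalg.inv(S) @ P @ S                  # S^{-1} P S = I_1 (+) 0_1
assert np.allclose(D, np.diag([1.0, 0.0]))
```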
If the vector space is complex and equipped with an inner product, then there is an orthonormal basis in which the matrix of P is P = [ 1 σ 1 0 0 ] ⊕ ⋯ ⊕ [ 1 σ k 0 0 ] ⊕ I m ⊕ 0 s . {\displaystyle P={\begin{bmatrix}1&\sigma _{1}\\0&0\end{bmatrix}}\oplus \cdots \oplus {\begin{bmatrix}1&\sigma _{k}\\0&0\end{bmatrix}}\oplus I_{m}\oplus 0_{s}.} where σ 1 ≥ σ 2 ≥ ⋯ ≥ σ k > 0 {\displaystyle \sigma _{1}\geq \sigma _{2}\geq \dots \geq \sigma _{k}>0} . The integers k , s , m {\displaystyle k,s,m} and the real numbers σ i {\displaystyle \sigma _{i}} are uniquely determined, and they satisfy 2 k + s + m = d {\displaystyle 2k+s+m=d} . The factor I m ⊕ 0 s {\displaystyle I_{m}\oplus 0_{s}} corresponds to the maximal invariant subspace on which P {\displaystyle P} acts as an orthogonal projection (so that P itself is orthogonal if and only if k = 0 {\displaystyle k=0} ) and the σ i {\displaystyle \sigma _{i}} -blocks correspond to the oblique components. == Projections on normed vector spaces == When the underlying vector space X {\displaystyle X} is a (not necessarily finite-dimensional) normed vector space, analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now X {\displaystyle X} is a Banach space. Many of the algebraic results discussed above survive the passage to this context. A given direct sum decomposition of X {\displaystyle X} into complementary subspaces still specifies a projection, and vice versa. If X {\displaystyle X} is the direct sum X = U ⊕ V {\displaystyle X=U\oplus V} , then the operator defined by P ( u + v ) = u {\displaystyle P(u+v)=u} is still a projection with range U {\displaystyle U} and kernel V {\displaystyle V} . It is also clear that P 2 = P {\displaystyle P^{2}=P} . Conversely, if P {\displaystyle P} is a projection on X {\displaystyle X} , i.e. P 2 = P {\displaystyle P^{2}=P} , then it is easily verified that ( 1 − P ) 2 = ( 1 − P ) {\displaystyle (1-P)^{2}=(1-P)} . 
In other words, 1 − P {\displaystyle 1-P} is also a projection. The relation P 2 = P {\displaystyle P^{2}=P} implies 1 = P + ( 1 − P ) {\displaystyle 1=P+(1-P)} and X {\displaystyle X} is the direct sum rg ( P ) ⊕ rg ( 1 − P ) {\displaystyle \operatorname {rg} (P)\oplus \operatorname {rg} (1-P)} . However, in contrast to the finite-dimensional case, projections need not be continuous in general. If a subspace U {\displaystyle U} of X {\displaystyle X} is not closed in the norm topology, then the projection onto U {\displaystyle U} is not continuous. In other words, the range of a continuous projection P {\displaystyle P} must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, a continuous linear operator in general) is closed. Thus a continuous projection P {\displaystyle P} gives a decomposition of X {\displaystyle X} into two complementary closed subspaces: X = rg ( P ) ⊕ ker ( P ) = ker ( 1 − P ) ⊕ ker ( P ) {\displaystyle X=\operatorname {rg} (P)\oplus \ker(P)=\ker(1-P)\oplus \ker(P)} . The converse holds also, with an additional assumption. Suppose U {\displaystyle U} is a closed subspace of X {\displaystyle X} . If there exists a closed subspace V {\displaystyle V} such that X = U ⊕ V, then the projection P {\displaystyle P} with range U {\displaystyle U} and kernel V {\displaystyle V} is continuous. This follows from the closed graph theorem. Suppose xn → x and Pxn → y. One needs to show that P x = y {\displaystyle Px=y} . Since U {\displaystyle U} is closed and {Pxn} ⊂ U, y lies in U {\displaystyle U} , i.e. Py = y. Also, xn − Pxn = (I − P)xn → x − y. Because V {\displaystyle V} is closed and {(I − P)xn} ⊂ V, we have x − y ∈ V {\displaystyle x-y\in V} , i.e. P ( x − y ) = P x − P y = P x − y = 0 {\displaystyle P(x-y)=Px-Py=Px-y=0} , which proves the claim. The above argument makes use of the assumption that both U {\displaystyle U} and V {\displaystyle V} are closed. 
In general, given a closed subspace U {\displaystyle U} , there need not exist a complementary closed subspace V {\displaystyle V} , although for Hilbert spaces this can always be done by taking the orthogonal complement. For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence of Hahn–Banach theorem. Let U {\displaystyle U} be the linear span of u {\displaystyle u} . By Hahn–Banach, there exists a bounded linear functional φ {\displaystyle \varphi } such that φ(u) = 1. The operator P ( x ) = φ ( x ) u {\displaystyle P(x)=\varphi (x)u} satisfies P 2 = P {\displaystyle P^{2}=P} , i.e. it is a projection. Boundedness of φ {\displaystyle \varphi } implies continuity of P {\displaystyle P} and therefore ker ( P ) = rg ( I − P ) {\displaystyle \ker(P)=\operatorname {rg} (I-P)} is a closed complementary subspace of U {\displaystyle U} . == Applications and further considerations == Projections (orthogonal and otherwise) play a major role in algorithms for certain linear algebra problems: QR decomposition (see Householder transformation and Gram–Schmidt decomposition); Singular value decomposition Reduction to Hessenberg form (the first step in many eigenvalue algorithms) Linear regression Projective elements of matrix algebras are used in the construction of certain K-groups in Operator K-theory As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations of characteristic functions. Idempotents are used in classifying, for instance, semisimple algebras, while measure theory begins with considering characteristic functions of measurable sets. Therefore, as one can imagine, projections are very often encountered in the context of operator algebras. In particular, a von Neumann algebra is generated by its complete lattice of projections. 
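As a concrete instance of the linear-regression application listed above: the fitted values of ordinary least squares are the orthogonal projection of the response onto the column space of the design matrix, via the "hat" matrix H = X(XᵀX)⁻¹Xᵀ. An illustrative sketch (assuming NumPy; the data are hypothetical):

```python
import numpy as np

# OLS fit as an orthogonal projection.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                   # intercept + one regressor
y = np.array([0.0, 1.0, 3.0])

H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat (projection) matrix
assert np.allclose(H @ H, H) and np.allclose(H, H.T)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(H @ y, X @ beta)          # fitted values = H y
assert np.isclose((y - H @ y) @ (H @ y), 0.0)  # residual is orthogonal to the fit
```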
== Generalizations == More generally, given a map between normed vector spaces T : V → W , {\displaystyle T\colon V\to W,} one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that ( ker T ) ⊥ → W {\displaystyle (\ker T)^{\perp }\to W} be an isometry (compare Partial isometry); in particular it must be onto. The case of an orthogonal projection is when W is a subspace of V. In Riemannian geometry, this is used in the definition of a Riemannian submersion. == See also == Centering matrix, which is an example of a projection matrix. Dykstra's projection algorithm to compute the projection onto an intersection of sets Invariant subspace Least-squares spectral analysis Orthogonalization Properties of trace == Notes == == References == Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388 Dunford, N.; Schwartz, J. T. (1958). Linear Operators, Part I: General Theory. Interscience. Meyer, Carl D. (2000). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-454-8. Brezinski, Claude: Projection Methods for Systems of Equations, North-Holland, ISBN 0-444-82777-3 (1997). == External links == MIT Linear Algebra Lecture on Projection Matrices on YouTube, from MIT OpenCourseWare Linear Algebra 15d: The Projection Transformation on YouTube, by Pavel Grinfeld. Planar Geometric Projections Tutorial – a simple-to-follow tutorial explaining the different types of planar geometric projections.
|
Wikipedia:Projection-valued measure#0
|
In mathematics, particularly in functional analysis, a projection-valued measure, or spectral measure, is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. A projection-valued measure (PVM) is formally similar to a real-valued measure, except that its values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space. Projection-valued measures are used to express results in spectral theory, such as the important spectral theorem for self-adjoint operators, in which case the PVM is sometimes referred to as the spectral measure. The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics, PVMs are the mathematical description of projective measurements. They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state. == Definition == Let H {\displaystyle H} denote a separable complex Hilbert space and ( X , M ) {\displaystyle (X,M)} a measurable space consisting of a set X {\displaystyle X} and a Borel σ-algebra M {\displaystyle M} on X {\displaystyle X} . A projection-valued measure π {\displaystyle \pi } is a map from M {\displaystyle M} to the set of bounded self-adjoint operators on H {\displaystyle H} satisfying the following properties: π ( E ) {\displaystyle \pi (E)} is an orthogonal projection for all E ∈ M . {\displaystyle E\in M.} π ( ∅ ) = 0 {\displaystyle \pi (\emptyset )=0} and π ( X ) = I {\displaystyle \pi (X)=I} , where ∅ {\displaystyle \emptyset } is the empty set and I {\displaystyle I} the identity operator. 
If E 1 , E 2 , E 3 , … {\displaystyle E_{1},E_{2},E_{3},\dotsc } in M {\displaystyle M} are disjoint, then for all v ∈ H {\displaystyle v\in H} , π ( ⋃ j = 1 ∞ E j ) v = ∑ j = 1 ∞ π ( E j ) v . {\displaystyle \pi \left(\bigcup _{j=1}^{\infty }E_{j}\right)v=\sum _{j=1}^{\infty }\pi (E_{j})v.} π ( E 1 ∩ E 2 ) = π ( E 1 ) π ( E 2 ) {\displaystyle \pi (E_{1}\cap E_{2})=\pi (E_{1})\pi (E_{2})} for all E 1 , E 2 ∈ M . {\displaystyle E_{1},E_{2}\in M.} The second and fourth properties show that if E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} are disjoint, i.e., E 1 ∩ E 2 = ∅ {\displaystyle E_{1}\cap E_{2}=\emptyset } , the images π ( E 1 ) {\displaystyle \pi (E_{1})} and π ( E 2 ) {\displaystyle \pi (E_{2})} are orthogonal to each other. Let V E = im ( π ( E ) ) {\displaystyle V_{E}=\operatorname {im} (\pi (E))} and its orthogonal complement V E ⊥ = ker ( π ( E ) ) {\displaystyle V_{E}^{\perp }=\ker(\pi (E))} denote the image and kernel, respectively, of π ( E ) {\displaystyle \pi (E)} . If V E {\displaystyle V_{E}} is a closed subspace of H {\displaystyle H} then H {\displaystyle H} can be written as the orthogonal decomposition H = V E ⊕ V E ⊥ {\displaystyle H=V_{E}\oplus V_{E}^{\perp }} and π ( E ) = I E {\displaystyle \pi (E)=I_{E}} is the unique identity operator on V E {\displaystyle V_{E}} satisfying all four properties. For every ξ , η ∈ H {\displaystyle \xi ,\eta \in H} and E ∈ M {\displaystyle E\in M} the projection-valued measure forms a complex-valued measure on ( X , M ) {\displaystyle (X,M)} defined as μ ξ , η ( E ) := ⟨ π ( E ) ξ ∣ η ⟩ {\displaystyle \mu _{\xi ,\eta }(E):=\langle \pi (E)\xi \mid \eta \rangle } with total variation at most ‖ ξ ‖ ‖ η ‖ {\displaystyle \|\xi \|\|\eta \|} . It reduces to a real-valued measure when μ ξ ( E ) := ⟨ π ( E ) ξ ∣ ξ ⟩ {\displaystyle \mu _{\xi }(E):=\langle \pi (E)\xi \mid \xi \rangle } and a probability measure when ξ {\displaystyle \xi } is a unit vector. 
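In finite dimensions the four defining axioms can be verified directly. A minimal sketch (illustrative, assuming NumPy): a projection-valued measure on the two-point set X = {1, 2} for H = R², assigning the coordinate projections to the singletons.

```python
import numpy as np

# A discrete projection-valued measure on X = {1, 2}.
pi = {
    frozenset():       np.zeros((2, 2)),     # pi(empty set) = 0
    frozenset({1}):    np.diag([1.0, 0.0]),
    frozenset({2}):    np.diag([0.0, 1.0]),
    frozenset({1, 2}): np.eye(2),            # pi(X) = I
}

# Each value is an orthogonal (self-adjoint, idempotent) projection.
for p in pi.values():
    assert np.allclose(p @ p, p) and np.allclose(p, p.T)

E1, E2 = frozenset({1}), frozenset({2})
assert np.allclose(pi[E1] + pi[E2], pi[E1 | E2])   # additivity on disjoint sets
assert np.allclose(pi[E1] @ pi[E2], pi[E1 & E2])   # multiplicativity; here = 0,
                                                   # so the images are orthogonal
```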
=== Example === Let ( X , M , μ ) {\displaystyle (X,M,\mu )} be a σ-finite measure space and, for all E ∈ M {\displaystyle E\in M} , let π ( E ) : L 2 ( X ) → L 2 ( X ) {\displaystyle \pi (E):L^{2}(X)\to L^{2}(X)} be defined as ψ ↦ π ( E ) ψ = 1 E ψ , {\displaystyle \psi \mapsto \pi (E)\psi =1_{E}\psi ,} i.e., as multiplication by the indicator function 1 E {\displaystyle 1_{E}} on L2(X). Then π ( E ) = 1 E {\displaystyle \pi (E)=1_{E}} defines a projection-valued measure. For example, if X = R {\displaystyle X=\mathbb {R} } , E = ( 0 , 1 ) {\displaystyle E=(0,1)} , and φ , ψ ∈ L 2 ( R ) {\displaystyle \varphi ,\psi \in L^{2}(\mathbb {R} )} there is then the associated complex measure μ φ , ψ {\displaystyle \mu _{\varphi ,\psi }} which takes a measurable function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } and gives the integral ∫ E f d μ φ , ψ = ∫ 0 1 f ( x ) ψ ( x ) φ ¯ ( x ) d x {\displaystyle \int _{E}f\,d\mu _{\varphi ,\psi }=\int _{0}^{1}f(x)\psi (x){\overline {\varphi }}(x)\,dx} == Extensions of projection-valued measures == If π is a projection-valued measure on a measurable space (X, M), then the map χ E ↦ π ( E ) {\displaystyle \chi _{E}\mapsto \pi (E)} extends to a linear map on the vector space of step functions on X. In fact, it is easy to check that this map is a ring homomorphism. This map extends in a canonical way to all bounded complex-valued measurable functions f {\displaystyle f} on X, yielding a bounded operator T = ∫ X f d π {\displaystyle T=\int _{X}f\,d\pi } on H {\displaystyle H} . The construction is also correct for unbounded measurable functions f {\displaystyle f} , but then T {\displaystyle T} will be an unbounded linear operator on the Hilbert space H {\displaystyle H} . This allows one to define the Borel functional calculus for such operators and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. That is, if g : R → C {\displaystyle g:\mathbb {R} \to \mathbb {C} } is a measurable function, then a unique operator g ( T ) {\displaystyle g(T)} exists such that g ( T ) := ∫ R g ( x ) d π ( x ) . 
{\displaystyle g(T):=\int _{\mathbb {R} }g(x)\,d\pi (x).} === Spectral theorem === Let H {\displaystyle H} be a separable complex Hilbert space, A : H → H {\displaystyle A:H\to H} be a bounded self-adjoint operator and σ ( A ) {\displaystyle \sigma (A)} the spectrum of A {\displaystyle A} . Then the spectral theorem says that there exists a unique projection-valued measure π A {\displaystyle \pi ^{A}} , defined on a Borel subset E ⊂ σ ( A ) {\displaystyle E\subset \sigma (A)} , such that A = ∫ σ ( A ) λ d π A ( λ ) , {\displaystyle A=\int _{\sigma (A)}\lambda \,d\pi ^{A}(\lambda ),} where the integral extends to an unbounded function λ {\displaystyle \lambda } when the spectrum of A {\displaystyle A} is unbounded. === Direct integrals === First we provide a general example of projection-valued measure based on direct integrals. Suppose (X, M, μ) is a measure space and let {Hx}x ∈ X be a μ-measurable family of separable Hilbert spaces. For every E ∈ M, let π(E) be the operator of multiplication by 1E on the Hilbert space ∫ X ⊕ H x d μ ( x ) . {\displaystyle \int _{X}^{\oplus }H_{x}\ d\mu (x).} Then π is a projection-valued measure on (X, M). Suppose π, ρ are projection-valued measures on (X, M) with values in the projections of H, K. π, ρ are unitarily equivalent if and only if there is a unitary operator U:H → K such that π ( E ) = U ∗ ρ ( E ) U {\displaystyle \pi (E)=U^{*}\rho (E)U\quad } for every E ∈ M. Theorem. If (X, M) is a standard Borel space, then for every projection-valued measure π on (X, M) taking values in the projections of a separable Hilbert space, there is a Borel measure μ and a μ-measurable family of Hilbert spaces {Hx}x ∈ X , such that π is unitarily equivalent to multiplication by 1E on the Hilbert space ∫ X ⊕ H x d μ ( x ) . 
{\displaystyle \int _{X}^{\oplus }H_{x}\ d\mu (x).} The measure class of μ and the measure equivalence class of the multiplicity function x → dim Hx completely characterize the projection-valued measure up to unitary equivalence. A projection-valued measure π is homogeneous of multiplicity n if and only if the multiplicity function has constant value n. Theorem. Any projection-valued measure π taking values in the projections of a separable Hilbert space is an orthogonal direct sum of homogeneous projection-valued measures: π = ⨁ 1 ≤ n ≤ ω ( π ∣ H n ) {\displaystyle \pi =\bigoplus _{1\leq n\leq \omega }(\pi \mid H_{n})} where H n = ∫ X n ⊕ H x d ( μ ∣ X n ) ( x ) {\displaystyle H_{n}=\int _{X_{n}}^{\oplus }H_{x}\ d(\mu \mid X_{n})(x)} and X n = { x ∈ X : dim H x = n } . {\displaystyle X_{n}=\{x\in X:\dim H_{x}=n\}.} == Application in quantum mechanics == In quantum mechanics, given a projection-valued measure of a measurable space X {\displaystyle X} to the space of continuous endomorphisms upon a Hilbert space H {\displaystyle H} , the projective space P ( H ) {\displaystyle \mathbf {P} (H)} of the Hilbert space H {\displaystyle H} is interpreted as the set of possible (normalizable) states φ {\displaystyle \varphi } of a quantum system, the measurable space X {\displaystyle X} is the value space for some quantum property of the system (an "observable"), the projection-valued measure π {\displaystyle \pi } expresses the probability that the observable takes on various values. A common choice for X {\displaystyle X} is the real line, but it may also be R 3 {\displaystyle \mathbb {R} ^{3}} (for position or momentum in three dimensions), a discrete set (for angular momentum, energy of a bound state, etc.), the 2-point set "true" and "false" for the truth-value of an arbitrary proposition about φ {\displaystyle \varphi } . 
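In finite dimensions, the results above reduce to the spectral theorem for Hermitian matrices. The following sketch (the example matrix is our choice) checks the projection-valued-measure axioms on a discrete spectrum and rebuilds the matrix from its spectral projections, the discrete analogue of A = ∫ λ dπ(λ):

```python
import numpy as np

# Sketch: the spectral theorem in finite dimensions. For a Hermitian matrix,
# pi({lambda_i}) = projection onto the i-th eigenspace defines a PVM on the
# spectrum, and A = sum_i lambda_i pi({lambda_i}).  (Example matrix is ours.)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eigh(A)
proj = [np.outer(V[:, i], V[:, i]) for i in range(len(lam))]

# PVM axioms on the two-point spectrum: projections, orthogonality, pi(X) = I.
assert all(np.allclose(P @ P, P) and np.allclose(P, P.T) for P in proj)
assert np.allclose(proj[0] @ proj[1], np.zeros((2, 2)))
assert np.allclose(sum(proj), np.eye(2))

# Spectral decomposition: A = sum_i lambda_i pi({lambda_i}).
print(np.allclose(A, sum(l * P for l, P in zip(lam, proj))))  # True
```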
Let E {\displaystyle E} be a measurable subset of X {\displaystyle X} and φ {\displaystyle \varphi } a normalized vector quantum state in H {\displaystyle H} , so that its Hilbert norm is unity, ‖ φ ‖ = 1 {\displaystyle \|\varphi \|=1} . The probability that the observable takes its value in E {\displaystyle E} , given the system in state φ {\displaystyle \varphi } , is P π ( φ ) ( E ) = ⟨ φ ∣ π ( E ) ( φ ) ⟩ = ⟨ φ ∣ π ( E ) ∣ φ ⟩ . {\displaystyle P_{\pi }(\varphi )(E)=\langle \varphi \mid \pi (E)(\varphi )\rangle =\langle \varphi \mid \pi (E)\mid \varphi \rangle .} We can parse this in two ways. First, for each fixed E {\displaystyle E} , the projection π ( E ) {\displaystyle \pi (E)} is a self-adjoint operator on H {\displaystyle H} whose 1-eigenspace consists of the states φ {\displaystyle \varphi } for which the value of the observable always lies in E {\displaystyle E} , and whose 0-eigenspace consists of the states φ {\displaystyle \varphi } for which the value of the observable never lies in E {\displaystyle E} . Second, for each fixed normalized vector state φ {\displaystyle \varphi } , the association P π ( φ ) : E ↦ ⟨ φ ∣ π ( E ) φ ⟩ {\displaystyle P_{\pi }(\varphi ):E\mapsto \langle \varphi \mid \pi (E)\varphi \rangle } is a probability measure on X {\displaystyle X} making the values of the observable into a random variable. A measurement that can be performed by a projection-valued measure π {\displaystyle \pi } is called a projective measurement. If X {\displaystyle X} is the real number line, there exists, associated to π {\displaystyle \pi } , a self-adjoint operator A {\displaystyle A} defined on H {\displaystyle H} by A ( φ ) = ∫ R λ d π ( λ ) ( φ ) , {\displaystyle A(\varphi )=\int _{\mathbb {R} }\lambda \,d\pi (\lambda )(\varphi ),} which reduces to A ( φ ) = ∑ i λ i π ( λ i ) ( φ ) {\displaystyle A(\varphi )=\sum _{i}\lambda _{i}\pi ({\lambda _{i}})(\varphi )} if the support of π {\displaystyle \pi } is a discrete subset of X {\displaystyle X} . 
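The probability formula can be made concrete for a two-level system. In the following sketch the projections and the state φ are our choices; it checks that the induced set function is a probability measure and that the expectation of the associated observable equals Σi λi P(λi):

```python
import numpy as np

# Sketch: Born probabilities <phi|pi(E)|phi> for a discrete observable on a
# two-level system (projections and the state phi are our choices).
lam = np.array([1.0, -1.0])                        # observable values
projections = [np.diag([1.0, 0.0]),                # pi({+1})
               np.diag([0.0, 1.0])]                # pi({-1})
phi = np.array([0.6, 0.8])                         # normalized: ||phi|| = 1

probs = np.array([phi @ P @ phi for P in projections])
A = sum(l * P for l, P in zip(lam, projections))   # A = sum_i lam_i pi({lam_i})
expectation = phi @ A @ phi

print(np.isclose(probs.sum(), 1.0))                 # True: a probability measure
print(np.isclose(expectation, (lam * probs).sum())) # True: <A> = sum lam_i p_i
```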
The above operator A {\displaystyle A} is called the observable associated with the spectral measure. == Generalizations == The idea of a projection-valued measure is generalized by the positive operator-valued measure (POVM), where the need for the orthogonality implied by projection operators is replaced by the idea of a set of operators that are a non-orthogonal "partition of unity", i.e. a set of positive semi-definite Hermitian operators that sum to the identity. This generalization is motivated by applications to quantum information theory. == See also == Spectral theorem Spectral theory of compact operators Spectral theory of normal C*-algebras == Notes == == References == Ashtekar, Abhay; Schilling, Troy A. (1999). "Geometrical Formulation of Quantum Mechanics". On Einstein's Path. New York, NY: Springer New York. arXiv:gr-qc/9706069. doi:10.1007/978-1-4612-1422-9_3. ISBN 978-1-4612-7137-6. Conway, John B. (2000). A course in operator theory. Providence (R.I.): American Mathematical Society. ISBN 978-0-8218-2065-0. Hall, Brian C. (2013). Quantum Theory for Mathematicians. New York: Springer Science & Business Media. ISBN 978-1-4614-7116-5. Mackey, G. W., The Theory of Unitary Group Representations, The University of Chicago Press, 1976 Moretti, Valter (2017), Spectral Theory and Quantum Mechanics: Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation, vol. 110, Springer, Bibcode:2017stqm.book.....M, ISBN 978-3-319-70705-1 Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: Vol 1: Functional analysis. Academic Press. ISBN 978-0-12-585050-6. Rudin, Walter (1991). Functional Analysis. Boston, Mass.: McGraw-Hill Science, Engineering & Mathematics. ISBN 978-0-07-054236-5. Schaefer, Helmut H.; Wolff, Manfred P. (1999). 
Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. G. Teschl, Mathematical Methods in Quantum Mechanics with Applications to Schrödinger Operators, https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/, American Mathematical Society, 2009. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Varadarajan, V. S., Geometry of Quantum Theory V2, Springer Verlag, 1970.
|
Wikipedia:Projectivization#0
|
In mathematics, projectivization is a procedure which associates with a non-zero vector space V a projective space P(V), whose elements are one-dimensional subspaces of V. More generally, any subset S of V closed under scalar multiplication defines a subset of P(V) formed by the lines contained in S and is called the projectivization of S. == Properties == Projectivization is a special case of the factorization by a group action: the projective space P(V) is the quotient of the open set V \ {0} of nonzero vectors by the action of the multiplicative group of the base field by scalar transformations. The dimension of P(V) in the sense of algebraic geometry is one less than the dimension of the vector space V. Projectivization is functorial with respect to injective linear maps: if f : V → W {\displaystyle f:V\to W} is a linear map with trivial kernel then f defines an algebraic map of the corresponding projective spaces, P ( f ) : P ( V ) → P ( W ) . {\displaystyle \mathbf {P} (f):\mathbf {P} (V)\to \mathbf {P} (W).} In particular, the general linear group GL(V) acts on the projective space P(V) by automorphisms. == Projective completion == A related procedure embeds a vector space V over a field K into the projective space P(V ⊕ K) of the same dimension. To every vector v of V, it associates the line spanned by the vector (v, 1) of V ⊕ K. == Generalization == In algebraic geometry, there is a procedure that associates a projective variety Proj S with a graded commutative algebra S (under some technical restrictions on S). If S is the algebra of polynomials on a vector space V then Proj S is P(V). This Proj construction gives rise to a contravariant functor from the category of graded commutative rings and surjective graded maps to the category of projective schemes. == References ==
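The functoriality statement can be illustrated concretely. In the following sketch (the matrix and vector are our choices), a line in R³ is represented by a canonical unit vector, and scaling the input vector does not change the image line under an injective linear map, so P(f) is well defined:

```python
import numpy as np

# Sketch: points of P(R^3) as canonical representatives of lines, and the
# induced map P(f) for an injective linear map f (matrix/vector are ours).
def proj(v):
    """Canonical representative of the line R*v: the unit vector on the line
    whose first nonzero coordinate is positive."""
    u = np.asarray(v, dtype=float)
    u = u / np.linalg.norm(u)
    k = np.flatnonzero(np.abs(u) > 1e-12)[0]
    return u if u[k] > 0 else -u

f = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])   # invertible, hence trivial kernel

v = np.array([1.0, 1.0, 2.0])
# Scaling v by any nonzero factor gives the same image line under f:
print(np.allclose(proj(f @ (-7.0 * v)), proj(f @ v)))  # True
```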
|
Wikipedia:Proof School#0
|
Proof School is a secondary school in San Francisco that offers a mathematics-focused liberal arts education. As of the 2024–2025 academic year, 125 students in grades 6–12 are enrolled at Proof School. The school was co-founded by Dennis Leary, Ian Brown, and Paul Zeitz, the chair of mathematics at the University of San Francisco. The school opened in the fall of 2015 with 45 students in grades 6–10. The curriculum is inspired by math circles, which emphasize communication and working together to solve math problems. == Academics == Proof School is a full-curriculum day school that emphasizes communication, collaboration, and problem-solving. The school is accredited by the Western Association of Schools and Colleges. The school year is divided into 5 blocks, each of which consists of 6 normal academic weeks and a build week. Each student has 5 courses: 4 morning courses that vary across grades, and a math class. The morning courses meet twice a week for 80 minutes per class. The math courses meet for two hours every day in the afternoon. The (non-post-calculus) math classes focus on a different subject each block: Block 1 varies depending on grade, Block 2 is Algebra, Block 3 is Geometry, Block 4 is Algebra and Pre-Calculus, and Block 5 is Number Theory. == Extracurricular activities == Proof School currently has a number of internal clubs, and used to have a Zero Robotics team called Proof Robotics. The team qualified for the competition finals and was the leading member of the alliance Hit or Miss, together with Crab Nebula from Liceo Cecioni in Livorno, Italy, and Rock Rovers from Council Rock High School South in Holland, PA, USA. Hit or Miss placed 2nd internationally and performed one of the first satellite hookings aboard the ISS. 
Students from Proof School have placed highly in a number of math competitions; one student won the European Girls' Mathematical Olympiad while narrowly missing qualification for the International Mathematical Olympiad. Nearly every year, a student qualifies for the United States of America Mathematical Olympiad, placing among the top few hundred high school mathematics students in the country. One student attended the Research Science Institute and had one of the top 5 research papers completed by participants. Students have also placed highly in other academic competitions. Multiple students have won the Caroline D. Bradley Scholarship. In the Regeneron Science Talent Search, two Proof School students have been named finalists and three have been named scholars. == References ==
|
Wikipedia:Proof of the Euler product formula for the Riemann zeta function#0
|
Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737. == The Euler product formula == The Euler product formula for the Riemann zeta function reads ζ ( s ) = ∑ n = 1 ∞ 1 n s = ∏ p prime 1 1 − p − s {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}} where the left hand side equals the Riemann zeta function: ζ ( s ) = ∑ n = 1 ∞ 1 n s = 1 + 1 2 s + 1 3 s + 1 4 s + 1 5 s + … {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=1+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{5^{s}}}+\ldots } and the product on the right hand side extends over all prime numbers p: ∏ p prime 1 1 − p − s = 1 1 − 2 − s ⋅ 1 1 − 3 − s ⋅ 1 1 − 5 − s ⋅ 1 1 − 7 − s ⋯ 1 1 − p − s ⋯ {\displaystyle \prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}={\frac {1}{1-2^{-s}}}\cdot {\frac {1}{1-3^{-s}}}\cdot {\frac {1}{1-5^{-s}}}\cdot {\frac {1}{1-7^{-s}}}\cdots {\frac {1}{1-p^{-s}}}\cdots } == Proof of the Euler product formula == This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. 
There is a certain sieving property that we can use to our advantage: ζ ( s ) = 1 + 1 2 s + 1 3 s + 1 4 s + 1 5 s + … {\displaystyle \zeta (s)=1+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{5^{s}}}+\ldots } 1 2 s ζ ( s ) = 1 2 s + 1 4 s + 1 6 s + 1 8 s + 1 10 s + … {\displaystyle {\frac {1}{2^{s}}}\zeta (s)={\frac {1}{2^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{6^{s}}}+{\frac {1}{8^{s}}}+{\frac {1}{10^{s}}}+\ldots } Subtracting the second equation from the first we remove all elements that have a factor of 2: ( 1 − 1 2 s ) ζ ( s ) = 1 + 1 3 s + 1 5 s + 1 7 s + 1 9 s + 1 11 s + 1 13 s + … {\displaystyle \left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1+{\frac {1}{3^{s}}}+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{9^{s}}}+{\frac {1}{11^{s}}}+{\frac {1}{13^{s}}}+\ldots } Repeating for the next term: 1 3 s ( 1 − 1 2 s ) ζ ( s ) = 1 3 s + 1 9 s + 1 15 s + 1 21 s + 1 27 s + 1 33 s + … {\displaystyle {\frac {1}{3^{s}}}\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)={\frac {1}{3^{s}}}+{\frac {1}{9^{s}}}+{\frac {1}{15^{s}}}+{\frac {1}{21^{s}}}+{\frac {1}{27^{s}}}+{\frac {1}{33^{s}}}+\ldots } Subtracting again we get: ( 1 − 1 3 s ) ( 1 − 1 2 s ) ζ ( s ) = 1 + 1 5 s + 1 7 s + 1 11 s + 1 13 s + 1 17 s + … {\displaystyle \left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{11^{s}}}+{\frac {1}{13^{s}}}+{\frac {1}{17^{s}}}+\ldots } where all elements having a factor of 3 or 2 (or both) are removed. It can be seen that the right side is being sieved. 
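The first sieving step can be checked numerically. In the following sketch (the truncation limit is our choice), multiplying the truncated series by (1 − 2^(−s)) leaves, up to a small tail, exactly the sum over odd integers:

```python
# Sketch: numerical check of the first sieving step -- (1 - 2^-s) zeta(s)
# equals the sum over odd n only (truncation at N is our choice).
s = 3.0
N = 200000
full_sum = sum(n ** -s for n in range(1, N))
odd_sum = sum(n ** -s for n in range(1, N, 2))
print(abs((1 - 2 ** -s) * full_sum - odd_sum) < 1e-9)  # True
```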
Repeating infinitely for 1 p s {\displaystyle {\frac {1}{p^{s}}}} where p {\displaystyle p} is prime, we get: … ( 1 − 1 11 s ) ( 1 − 1 7 s ) ( 1 − 1 5 s ) ( 1 − 1 3 s ) ( 1 − 1 2 s ) ζ ( s ) = 1 {\displaystyle \ldots \left(1-{\frac {1}{11^{s}}}\right)\left(1-{\frac {1}{7^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1} Dividing both sides by everything but the ζ(s) we obtain: ζ ( s ) = 1 ( 1 − 1 2 s ) ( 1 − 1 3 s ) ( 1 − 1 5 s ) ( 1 − 1 7 s ) ( 1 − 1 11 s ) … {\displaystyle \zeta (s)={\frac {1}{\left(1-{\frac {1}{2^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\left(1-{\frac {1}{7^{s}}}\right)\left(1-{\frac {1}{11^{s}}}\right)\ldots }}} This can be written more concisely as an infinite product over all primes p: ζ ( s ) = ∏ p prime 1 1 − p − s {\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}} To make this proof rigorous, we need only to observe that when ℜ ( s ) > 1 {\displaystyle \Re (s)>1} , the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for ζ ( s ) {\displaystyle \zeta (s)} . 
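The resulting identity can also be compared numerically. The following sketch (truncation limits are our choice) evaluates the truncated Dirichlet series and the truncated Euler product at s = 2, where both approach π²/6:

```python
# Sketch: comparing the Dirichlet series and the Euler product for zeta(2)
# numerically (truncation limits are our choice).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2.0
series = sum(n ** -s for n in range(1, 100000))   # truncated zeta(2)
product = 1.0
for p in primes_up_to(1000):
    product *= 1.0 / (1.0 - p ** -s)              # truncated Euler product

print(abs(series - product) < 1e-3)  # True: both are near pi^2/6 ~ 1.6449
```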
== The case s = 1 == An interesting result can be found for ζ(1), the harmonic series: … ( 1 − 1 11 ) ( 1 − 1 7 ) ( 1 − 1 5 ) ( 1 − 1 3 ) ( 1 − 1 2 ) ζ ( 1 ) = 1 {\displaystyle \ldots \left(1-{\frac {1}{11}}\right)\left(1-{\frac {1}{7}}\right)\left(1-{\frac {1}{5}}\right)\left(1-{\frac {1}{3}}\right)\left(1-{\frac {1}{2}}\right)\zeta (1)=1} which can also be written as, … ( 10 11 ) ( 6 7 ) ( 4 5 ) ( 2 3 ) ( 1 2 ) ζ ( 1 ) = 1 {\displaystyle \ldots \left({\frac {10}{11}}\right)\left({\frac {6}{7}}\right)\left({\frac {4}{5}}\right)\left({\frac {2}{3}}\right)\left({\frac {1}{2}}\right)\zeta (1)=1} which is, ( … ⋅ 10 ⋅ 6 ⋅ 4 ⋅ 2 ⋅ 1 … ⋅ 11 ⋅ 7 ⋅ 5 ⋅ 3 ⋅ 2 ) ζ ( 1 ) = 1 {\displaystyle \left({\frac {\ldots \cdot 10\cdot 6\cdot 4\cdot 2\cdot 1}{\ldots \cdot 11\cdot 7\cdot 5\cdot 3\cdot 2}}\right)\zeta (1)=1} as, ζ ( 1 ) = 1 + 1 2 + 1 3 + 1 4 + 1 5 + … {\displaystyle \zeta (1)=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\ldots } thus, 1 + 1 2 + 1 3 + 1 4 + 1 5 + … = 2 ⋅ 3 ⋅ 5 ⋅ 7 ⋅ 11 ⋅ … 1 ⋅ 2 ⋅ 4 ⋅ 6 ⋅ 10 ⋅ … {\displaystyle 1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\ldots ={\frac {2\cdot 3\cdot 5\cdot 7\cdot 11\cdot \ldots }{1\cdot 2\cdot 4\cdot 6\cdot 10\cdot \ldots }}} While the ratio test is inconclusive for the left-hand side, it may be shown divergent by bounding logarithms. Similarly, for the right-hand side, an infinite product of reals greater than one does not guarantee divergence, e.g., lim n → ∞ ( 1 + 1 n ) n = e {\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}=e} . Instead, the partial products (whose numerators are primorials) may be bounded, using ln(1+x)≤x, as ∏ k = 1 n p k p k − 1 = e − ∑ k = 1 n ln ( 1 − 1 p k ) ≥ e ∑ k = 1 n 1 p k , {\displaystyle \prod _{k=1}^{n}{\frac {p_{k}}{p_{k}-1}}=e^{-\sum _{k=1}^{n}\ln \left(1-{\frac {1}{p_{k}}}\right)}\geq e^{\sum _{k=1}^{n}{\frac {1}{p_{k}}}},} so that divergence is clear given the double-logarithmic divergence of the inverse prime series. 
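The partial-product bound can be spot-checked stage by stage. In this sketch the prime list is hard-coded for brevity:

```python
import math

# Sketch: checking prod_{k<=n} p_k/(p_k - 1) >= exp(sum_{k<=n} 1/p_k) at every
# stage for the first few primes (the prime list is hard-coded for brevity).
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
product, inv_sum = 1.0, 0.0
for p in primes:
    product *= p / (p - 1)
    inv_sum += 1.0 / p
    assert product >= math.exp(inv_sum)   # the primorial ratio stays ahead
print(product >= math.exp(inv_sum))  # True
```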
(Note that Euler's original proof for the inverse prime series ran in the converse direction: he deduced the divergence of the inverse prime series from that of the Euler product and the harmonic series.) == Another proof == Each factor (for a given prime p) in the product above can be expanded to a geometric series consisting of the reciprocal of p raised to multiples of s, as follows 1 1 − p − s = 1 + 1 p s + 1 p 2 s + 1 p 3 s + … + 1 p k s + … {\displaystyle {\frac {1}{1-p^{-s}}}=1+{\frac {1}{p^{s}}}+{\frac {1}{p^{2s}}}+{\frac {1}{p^{3s}}}+\ldots +{\frac {1}{p^{ks}}}+\ldots } When ℜ ( s ) > 1 {\displaystyle \Re (s)>1} , this series converges absolutely. Hence we may take a finite number of factors, multiply them together, and rearrange terms. Taking all the primes p up to some prime number limit q, we have | ζ ( s ) − ∏ p ≤ q ( 1 1 − p − s ) | < ∑ n = q + 1 ∞ 1 n σ {\displaystyle \left|\zeta (s)-\prod _{p\leq q}\left({\frac {1}{1-p^{-s}}}\right)\right|<\sum _{n=q+1}^{\infty }{\frac {1}{n^{\sigma }}}} where σ is the real part of s. By the fundamental theorem of arithmetic, the partial product when expanded out gives a sum consisting of those terms n−s where n is a product of primes less than or equal to q. The inequality results from the fact that only integers larger than q can fail to appear in this expanded-out partial product. Since the difference between the partial product and ζ(s) goes to zero when σ > 1, we have convergence in this region. == See also == Euler product Riemann zeta function == References == John Derbyshire, Prime Obsession: Bernhard Riemann and The Greatest Unsolved Problem in Mathematics, Joseph Henry Press, 2003, ISBN 978-0-309-08549-6 == Notes ==
|
Wikipedia:Proofs involving the addition of natural numbers#0
|
This article contains mathematical proofs for some properties of addition of the natural numbers: the additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers. == Definitions == This article will use the Peano axioms for the definition of natural numbers. With these axioms, addition is defined from the constant 0 and the successor function S(a) by the two rules a + 0 = a, [A1] a + S(b) = S(a + b). [A2] For the proof of commutativity, it is useful to give the name "1" to the successor of 0; that is, 1 = S(0). For every natural number a, one has a + 1 = a + S(0) = S(a + 0) = S(a). == Proof of associativity == We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c. For the base case c = 0, (a + b) + 0 = a + b = a + (b + 0) Each equation follows by definition [A1]; the first with a + b, the second with b. Now, for the induction. We assume the induction hypothesis, namely we assume that for some natural number c, (a + b) + c = a + (b + c) Then it follows, (a + b) + S(c) = S((a + b) + c) = S(a + (b + c)) = a + S(b + c) = a + (b + S(c)), using definition [A2] for the first, third, and fourth equalities, and the induction hypothesis for the second. In other words, the induction hypothesis holds for S(c). Therefore, the induction on c is complete. == Proof of identity element == Definition [A1] states directly that 0 is a right identity. We prove that 0 is a left identity by induction on the natural number a. For the base case a = 0, 0 + 0 = 0 by definition [A1]. Now we assume the induction hypothesis, that 0 + a = a. Then 0 + S(a) = S(0 + a) = S(a), using definition [A2] and the induction hypothesis. This completes the induction on a. == Proof of commutativity == We prove commutativity (a + b = b + a) by applying induction on the natural number b. First we prove the base cases b = 0 and b = S(0) = 1 (i.e. we prove that 0 and 1 commute with everything). The base case b = 0 follows immediately from the identity element property (0 is an additive identity), which has been proved above: a + 0 = a = 0 + a. Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a, we have a + 1 = 1 + a. 
We will prove this by induction on a (an induction proof within an induction proof). We have proved that 0 commutes with everything, so in particular, 0 commutes with 1: for a = 0, we have 0 + 1 = 1 + 0. Now, suppose a + 1 = 1 + a. Then S(a) + 1 = S(a) + S(0) = S(S(a) + 0) = S(S(a)) = S(a + 1) = S(1 + a) = 1 + S(a), using the definition 1 = S(0), definition [A2], definition [A1], the identity a + 1 = S(a), the induction hypothesis, and definition [A2] again. This completes the induction on a, and so we have proved the base case b = 1. Now, suppose that for all natural numbers a, we have a + b = b + a. We must show that for all natural numbers a, we have a + S(b) = S(b) + a. We have a + S(b) = S(a + b) = S(b + a) = b + S(a) = b + (a + 1) = b + (1 + a) = (b + 1) + a = S(b) + a, using definition [A2], the induction hypothesis, definition [A2] again, the identity S(a) = a + 1, the base case b = 1, associativity, and the identity b + 1 = S(b). This completes the induction on b. == See also == Binary operation Proof Ring == References == Edmund Landau, Foundations of Analysis, Chelsea Pub Co. ISBN 0-8218-2693-X.
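The definitions and inductions above can be formalized. The following is a sketch in Lean 4 (the inductive type, function, and theorem names are ours, not taken from any library), mirroring the two defining rules and the structure of the proofs in this article:

```lean
-- A sketch in Lean 4 mirroring the article's definitions and inductions
-- (names are ours, not mathlib's).
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | a, N.zero   => a                 -- [A1]
  | a, N.succ b => N.succ (add a b)  -- [A2]

-- 0 is a left identity (it is a right identity by [A1]).
theorem zero_add (a : N) : add N.zero a = a := by
  induction a with
  | zero => rfl
  | succ a ih => simp [add, ih]

-- A helper playing the role of the base case b = 1 in the article.
theorem succ_add (a b : N) : add (N.succ a) b = N.succ (add a b) := by
  induction b with
  | zero => rfl
  | succ b ih => simp [add, ih]

theorem add_comm (a b : N) : add a b = add b a := by
  induction b with
  | zero => simp [add, zero_add]
  | succ b ih => simp [add, succ_add, ih]
```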
|
Wikipedia:Property (mathematics)#0
|
In mathematics, a property is any characteristic that applies to a given set. Rigorously, a property p defined for all elements of a set X is usually defined as a function p: X → {true, false}, that is true whenever the property holds; or, equivalently, as the subset of X for which p holds; i.e. the set {x | p(x) = true}; p is its indicator function. However, it may be objected that the rigorous definition defines merely the extension of a property, and says nothing about what causes the property to hold for exactly those values. == Examples == Of objects: Parity is the property of an integer of whether it is even or odd For more examples, see Category:Algebraic properties of elements. Of operations: associative property commutative property of binary operations between real and complex numbers distributive property For more examples, see Category:Properties of binary operations. == See also == Unary relation == References ==
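The two equivalent views of a property, as a Boolean-valued function and as the subset on which it holds, can be illustrated with the parity example (the finite set X below is our choice):

```python
# Sketch: a property on a finite set X as an indicator function p and,
# equivalently, as the subset {x | p(x) = true} (X is our choice).
X = range(10)

def p(n):                     # p : X -> {True, False}  (parity: "is even")
    return n % 2 == 0

subset = {x for x in X if p(x)}               # the extension of p
print(subset == {0, 2, 4, 6, 8})              # True
print(all(p(x) == (x in subset) for x in X))  # True: p is the indicator of subset
```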
|
Wikipedia:Przemysław Prusinkiewicz#0
|
Przemysław (Przemek) Prusinkiewicz [ˈpʂɛmɛk pruɕiŋˈkjevit͡ʂ] is a Polish computer scientist who advanced the idea that Fibonacci numbers in nature can be in part understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars. Prusinkiewicz's main work is on the modeling of plant growth through such grammars. == Early life and education == In 1978, Prusinkiewicz received his PhD from the Warsaw University of Technology. == Career == As of 2008, he was a professor of Computer Science at the University of Calgary. == Awards == Prusinkiewicz received the 1997 SIGGRAPH Computer Graphics Achievement Award for his work. == Influences == In 2006, Michael Hensel examined the work of Prusinkiewicz and his collaborators, the Calgary team, in an article published in Architectural Design. Hensel argued that the Calgary team's computational plant models, or "virtual plants", which culminated in software capable of modeling various plant characteristics, could provide important lessons for architectural design. Architects would learn from "the self-organisation processes underlying the growth of living organisms" and the Calgary team's work uncovered some of that potential. Their computational models allowed for a "quantitative understanding of developmental mechanisms" and had the potential to "lead to a synthetic understanding of the interplay between various aspects of development." Prusinkiewicz's work was informed by that of the Hungarian biologist Aristid Lindenmayer, who developed the theory of L-systems in 1968. Lindenmayer used L-systems to describe the behaviour of plant cells and to model the growth processes and branching architecture of plant development. == Publications == Prusinkiewicz, Przemysław; James Hanan (1989). Lindenmayer Systems, Fractals, and Plants (Lecture Notes in Biomathematics). Springer-Verlag. ISBN 978-0-387-97092-9. 
Meinhardt, Hans; Przemysław Prusinkiewicz; Deborah R. Fowler (2003-02-12). The Algorithmic Beauty of Sea Shells (3rd ed.). Springer-Verlag. ISBN 978-3-540-44010-9. == References == == External links == Biography of Przemysław Prusinkiewicz from the University of Calgary Laboratory website at the University of Calgary
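As an illustration of the L-systems discussed above, Lindenmayer's two-symbol "algae" system produces words whose lengths are Fibonacci numbers, the connection noted in this article's lead. The following sketch is ours, not code by Prusinkiewicz:

```python
# Sketch: Lindenmayer's "algae" L-system (A -> AB, B -> A). The lengths of the
# successive words grow as Fibonacci numbers.
def lsystem(axiom, rules, steps):
    word = axiom
    for _ in range(steps):
        # Rewrite every symbol in parallel, leaving unmapped symbols unchanged.
        word = "".join(rules.get(ch, ch) for ch in word)
    return word

rules = {"A": "AB", "B": "A"}
lengths = [len(lsystem("A", rules, n)) for n in range(8)]
print(lengths)  # [1, 2, 3, 5, 8, 13, 21, 34]
```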
|
Wikipedia:Prékopa–Leindler inequality#0
|
In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler. == Statement of the inequality == Let 0 < λ < 1 and let f, g, h : Rn → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space Rn. Suppose that these functions satisfy h ( ( 1 − λ ) x + λ y ) ≥ f ( x ) 1 − λ g ( y ) λ {\displaystyle h\left((1-\lambda )x+\lambda y\right)\geq f(x)^{1-\lambda }g(y)^{\lambda }} (1) for all x and y in Rn. Then ‖ h ‖ 1 := ∫ R n h ( x ) d x ≥ ( ∫ R n f ( x ) d x ) 1 − λ ( ∫ R n g ( x ) d x ) λ =: ‖ f ‖ 1 1 − λ ‖ g ‖ 1 λ . {\displaystyle \|h\|_{1}:=\int _{\mathbb {R} ^{n}}h(x)\,\mathrm {d} x\geq \left(\int _{\mathbb {R} ^{n}}f(x)\,\mathrm {d} x\right)^{1-\lambda }\left(\int _{\mathbb {R} ^{n}}g(x)\,\mathrm {d} x\right)^{\lambda }=:\|f\|_{1}^{1-\lambda }\|g\|_{1}^{\lambda }.} == Essential form of the inequality == Recall that the essential supremum of a measurable function f : Rn → R is defined by e s s s u p x ∈ R n f ( x ) = inf { t ∈ [ − ∞ , + ∞ ] ∣ f ( x ) ≤ t for almost all x ∈ R n } . {\displaystyle \mathop {\mathrm {ess\,sup} } _{x\in \mathbb {R} ^{n}}f(x)=\inf \left\{t\in [-\infty ,+\infty ]\mid f(x)\leq t{\text{ for almost all }}x\in \mathbb {R} ^{n}\right\}.} This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let s ( x ) = e s s s u p y ∈ R n f ( x − y 1 − λ ) 1 − λ g ( y λ ) λ . {\displaystyle s(x)=\mathop {\mathrm {ess\,sup} } _{y\in \mathbb {R} ^{n}}f\left({\frac {x-y}{1-\lambda }}\right)^{1-\lambda }g\left({\frac {y}{\lambda }}\right)^{\lambda }.} Then s is measurable and ‖ s ‖ 1 ≥ ‖ f ‖ 1 1 − λ ‖ g ‖ 1 λ . {\displaystyle \|s\|_{1}\geq \|f\|_{1}^{1-\lambda }\|g\|_{1}^{\lambda }.} The essential supremum form was given by Herm Brascamp and Elliott Lieb. 
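The inequality can be sanity-checked numerically in one dimension. In the following sketch the Gaussians, the grid, and λ are our choices, and h is built on the grid as (an approximation of) the smallest function satisfying the hypothesis, so the conclusion on the integrals can be compared directly:

```python
import numpy as np

# Sketch: a numerical check of the Prekopa-Leindler inequality in 1D
# (Gaussians f, g, the grid, and lambda are our choices).
lam = 0.3
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-(x ** 2))
g = np.exp(-4.0 * (x - 1.0) ** 2)

# h(z) = sup over v of f((z - lam*v)/(1 - lam))^(1-lam) * g(v)^lam, so that
# h((1-lam)u + lam*v) >= f(u)^(1-lam) g(v)^lam holds by construction.
h = np.array([
    np.max(np.interp((z - lam * x) / (1 - lam), x, f, left=0.0, right=0.0) ** (1 - lam)
           * g ** lam)
    for z in x
])

lhs = h.sum() * dx                                    # ||h||_1 (Riemann sum)
rhs = (f.sum() * dx) ** (1 - lam) * (g.sum() * dx) ** lam
print(lhs >= rhs)  # True
```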
Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form. == Relationship to the Brunn–Minkowski inequality == It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 − λ)A + λB is also measurable, then μ ( ( 1 − λ ) A + λ B ) ≥ μ ( A ) 1 − λ μ ( B ) λ , {\displaystyle \mu \left((1-\lambda )A+\lambda B\right)\geq \mu (A)^{1-\lambda }\mu (B)^{\lambda },} where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 − λ)A + λB is also measurable, then μ ( ( 1 − λ ) A + λ B ) 1 / n ≥ ( 1 − λ ) μ ( A ) 1 / n + λ μ ( B ) 1 / n . {\displaystyle \mu \left((1-\lambda )A+\lambda B\right)^{1/n}\geq (1-\lambda )\mu (A)^{1/n}+\lambda \mu (B)^{1/n}.} == Applications in probability and statistics == === Log-concave distributions === The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and independent summation of log-concave distributed random variables. Since, if X , Y {\displaystyle X,Y} have pdf f , g {\displaystyle f,g} , and X , Y {\displaystyle X,Y} are independent, then f ⋆ g {\displaystyle f\star g} is the pdf of X + Y {\displaystyle X+Y} , we also have that the convolution of two log-concave functions is log-concave. 
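The closure of log-concavity under convolution can be spot-checked discretely: a positive sequence is log-concave when h[i]² ≥ h[i−1]·h[i+1] for all interior i. The grid and density choices below are ours, and a small relative tolerance absorbs floating-point rounding:

```python
import numpy as np

# Sketch: discrete check that the convolution of two log-concave densities is
# log-concave (h[i]^2 >= h[i-1]*h[i+1]); grid and densities are our choices.
x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
f = np.exp(-(x ** 2))       # Gaussian: log-concave
g = np.exp(-np.abs(x))      # Laplace kernel: log-concave

h = np.convolve(f, g) * dx  # full discrete convolution
ok = np.all(h[1:-1] ** 2 >= h[:-2] * h[2:] * (1 - 1e-9))
print(ok)  # True
```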
Suppose that H(x,y) is a log-concave distribution for (x,y) ∈ Rm × Rn, so that by definition we have H ( ( 1 − λ ) ( x 1 , y 1 ) + λ ( x 2 , y 2 ) ) ≥ H ( x 1 , y 1 ) 1 − λ H ( x 2 , y 2 ) λ {\displaystyle H\left((1-\lambda )(x_{1},y_{1})+\lambda (x_{2},y_{2})\right)\geq H(x_{1},y_{1})^{1-\lambda }H(x_{2},y_{2})^{\lambda }} (2) and let M(y) denote the marginal distribution obtained by integrating over x: M ( y ) = ∫ R m H ( x , y ) d x . {\displaystyle M(y)=\int _{\mathbb {R} ^{m}}H(x,y)\,dx.} Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then equation (2) satisfies condition (1) with h(x) = H(x,(1 − λ)y1 + λy2), f(x) = H(x,y1) and g(x) = H(x,y2), so the Prékopa–Leindler inequality applies. It can be written in terms of M as M ( ( 1 − λ ) y 1 + λ y 2 ) ≥ M ( y 1 ) 1 − λ M ( y 2 ) λ , {\displaystyle M((1-\lambda )y_{1}+\lambda y_{2})\geq M(y_{1})^{1-\lambda }M(y_{2})^{\lambda },} which is the definition of log-concavity for M. To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distribution. Since the product of two log-concave functions is log-concave, the joint distribution of (X,Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X+Y is a marginal over the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution. === Applications to concentration of measure === The Prékopa–Leindler inequality can be used to prove results about concentration of measure. Theorem Let A ⊆ R n {\textstyle A\subseteq \mathbb {R} ^{n}} , and set A ϵ = { x : d ( x , A ) < ϵ } {\textstyle A_{\epsilon }=\{x:d(x,A)<\epsilon \}} . Let γ ( x ) {\textstyle \gamma (x)} denote the standard Gaussian pdf, and μ {\textstyle \mu } its associated measure. Then μ ( A ϵ ) ≥ 1 − e − ϵ 2 / 4 μ ( A ) {\textstyle \mu (A_{\epsilon })\geq 1-{\frac {e^{-\epsilon ^{2}/4}}{\mu (A)}}} . == References == == Further reading == Eaton, Morris L. (1987). "Log concavity and related topics". Lectures on Topics in Probability Inequalities. Amsterdam. pp. 77–109. 
ISBN 90-6196-316-8.{{cite book}}: CS1 maint: location missing publisher (link) Wainwright, Martin J. (2019). "Concentration of Measure". High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press. pp. 72–76. ISBN 978-1-108-49802-9.
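The Gaussian concentration bound stated above can be sanity-checked in one dimension. In this sketch the set A and the values of ε are our choices, and both sides are evaluated with the standard normal CDF:

```python
from math import erf, exp, sqrt

# Sketch: checking mu(A_eps) >= 1 - exp(-eps^2/4)/mu(A) for A = [0, infinity)
# in one dimension (A and the eps values are our choices; mu is the standard
# Gaussian measure).
def Phi(t):  # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

mu_A = 1.0 - Phi(0.0)                  # mu(A) = 1/2
for eps in (0.5, 1.0, 2.0, 3.0):
    mu_A_eps = 1.0 - Phi(-eps)         # A_eps = (-eps, infinity)
    assert mu_A_eps >= 1.0 - exp(-eps ** 2 / 4.0) / mu_A
print("bound holds for all tested eps")
```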
|
Wikipedia:Pseudo-differential operator#0
|
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space. == History == The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza. They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators. == Motivation == === Linear differential operators with constant coefficients === Consider a linear differential operator with constant coefficients, P ( D ) := ∑ α a α D α {\displaystyle P(D):=\sum _{\alpha }a_{\alpha }\,D^{\alpha }} which acts on smooth functions u {\displaystyle u} with compact support in Rn. This operator can be written as a composition of a Fourier transform, a simple multiplication by the polynomial function (called the symbol) P ( ξ ) = ∑ α a α ξ α , {\displaystyle P(\xi )=\sum _{\alpha }a_{\alpha }\,\xi ^{\alpha },} and an inverse Fourier transform, in the form: Here, α = ( α 1 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} is a multi-index, a α {\displaystyle a_{\alpha }} are complex numbers, and D α = ( − i ∂ 1 ) α 1 ⋯ ( − i ∂ n ) α n {\displaystyle D^{\alpha }=(-i\partial _{1})^{\alpha _{1}}\cdots (-i\partial _{n})^{\alpha _{n}}} is an iterated partial derivative, where ∂j means differentiation with respect to the j-th variable. We introduce the constants − i {\displaystyle -i} to facilitate the calculation of Fourier transforms. 
Derivation of formula (1) The Fourier transform of a smooth function u, compactly supported in Rn, is u ^ ( ξ ) := ∫ e − i y ξ u ( y ) d y {\displaystyle {\hat {u}}(\xi ):=\int e^{-iy\xi }u(y)\,dy} and Fourier's inversion formula gives u ( x ) = 1 ( 2 π ) n ∫ e i x ξ u ^ ( ξ ) d ξ = 1 ( 2 π ) n ∬ e i ( x − y ) ξ u ( y ) d y d ξ {\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\hat {u}}(\xi )d\xi ={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }u(y)\,dy\,d\xi } By applying P(D) to this representation of u and using P ( D x ) e i ( x − y ) ξ = e i ( x − y ) ξ P ( ξ ) {\displaystyle P(D_{x})\,e^{i(x-y)\xi }=e^{i(x-y)\xi }\,P(\xi )} one obtains formula (1). === Representation of solutions to partial differential equations === To solve the partial differential equation P ( D ) u = f {\displaystyle P(D)\,u=f} we (formally) apply the Fourier transform on both sides and obtain the algebraic equation P ( ξ ) u ^ ( ξ ) = f ^ ( ξ ) . {\displaystyle P(\xi )\,{\hat {u}}(\xi )={\hat {f}}(\xi ).} If the symbol P(ξ) is never zero when ξ ∈ Rn, then it is possible to divide by P(ξ): u ^ ( ξ ) = 1 P ( ξ ) f ^ ( ξ ) {\displaystyle {\hat {u}}(\xi )={\frac {1}{P(\xi )}}{\hat {f}}(\xi )} By Fourier's inversion formula, a solution is u ( x ) = 1 ( 2 π ) n ∫ e i x ξ 1 P ( ξ ) f ^ ( ξ ) d ξ . {\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\frac {1}{P(\xi )}}{\hat {f}}(\xi )\,d\xi .} Here it is assumed that: P(D) is a linear differential operator with constant coefficients, its symbol P(ξ) is never zero, both u and ƒ have a well defined Fourier transform. The last assumption can be weakened by using the theory of distributions. The first two assumptions can be weakened as follows. In the last formula, write out the Fourier transform of ƒ to obtain u ( x ) = 1 ( 2 π ) n ∬ e i ( x − y ) ξ 1 P ( ξ ) f ( y ) d y d ξ . 
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }{\frac {1}{P(\xi )}}f(y)\,dy\,d\xi .} This is similar to formula (1), except that 1/P(ξ) is not a polynomial function, but a function of a more general kind. == Definition of pseudo-differential operators == Here we view pseudo-differential operators as a generalization of differential operators. We extend formula (1) as follows. A pseudo-differential operator P(x,D) on Rn is an operator whose value on the function u(x) is the function of x: where u ^ ( ξ ) {\displaystyle {\hat {u}}(\xi )} is the Fourier transform of u and the symbol P(x,ξ) in the integrand belongs to a certain symbol class. For instance, if P(x,ξ) is an infinitely differentiable function on Rn × Rn with the property | ∂ ξ α ∂ x β P ( x , ξ ) | ≤ C α , β ( 1 + | ξ | ) m − | α | {\displaystyle |\partial _{\xi }^{\alpha }\partial _{x}^{\beta }P(x,\xi )|\leq C_{\alpha ,\beta }\,(1+|\xi |)^{m-|\alpha |}} for all x,ξ ∈Rn, all multiindices α,β, some constants Cα, β and some real number m, then P belongs to the symbol class S 1 , 0 m {\displaystyle \scriptstyle {S_{1,0}^{m}}} of Hörmander. The corresponding operator P(x,D) is called a pseudo-differential operator of order m and belongs to the class Ψ 1 , 0 m . {\displaystyle \Psi _{1,0}^{m}.} == Properties == Linear differential operators of order m with smooth bounded coefficients are pseudo-differential operators of order m. The composition PQ of two pseudo-differential operators P, Q is again a pseudo-differential operator and the symbol of PQ can be calculated by using the symbols of P and Q. The adjoint and transpose of a pseudo-differential operator is a pseudo-differential operator. If a differential operator of order m is (uniformly) elliptic (of order m) and invertible, then its inverse is a pseudo-differential operator of order −m, and its symbol can be calculated. 
This means that one can solve linear elliptic differential equations more or less explicitly by using the theory of pseudo-differential operators. Differential operators are local in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are pseudo-local, which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth. Just as a differential operator can be expressed in terms of D = −id/dx in the form p ( x , D ) {\displaystyle p(x,D)\,} for a polynomial p in D (which is called the symbol), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis. == Kernel of pseudo-differential operator == Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel. == See also == Differential algebra for a definition of pseudo-differential operators in the context of differential algebras and differential rings. Fourier transform Fourier integral operator Oscillatory integral operator Sato's fundamental theorem Operational calculus == Footnotes == == References == Stein, Elias (1993), Harmonic Analysis: Real-Variable Methods, Orthogonality and Oscillatory Integrals, Princeton University Press. Atiyah, Michael F.; Singer, Isadore M. 
(1968), "The Index of Elliptic Operators I", Annals of Mathematics, 87 (3): 484–530, doi:10.2307/1970715, JSTOR 1970715 == Further reading == Nicolas Lerner, Metrics on the phase space and non-selfadjoint pseudo-differential operators. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser Verlag, Basel, 2010. Michael E. Taylor, Pseudodifferential Operators, Princeton Univ. Press 1981. ISBN 0-691-08282-0 M. A. Shubin, Pseudodifferential Operators and Spectral Theory, Springer-Verlag 2001. ISBN 3-540-41195-X Francois Treves, Introduction to Pseudo Differential and Fourier Integral Operators, (University Series in Mathematics), Plenum Publ. Co. 1981. ISBN 0-306-40404-4 F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press 1999. ISBN 0-521-64971-4 Hörmander, Lars (1987). The Analysis of Linear Partial Differential Operators III: Pseudo-Differential Operators. Springer. ISBN 3-540-49937-7. André Unterberger, Pseudo-differential operators and applications: an introduction. Lecture Notes Series, 46. Aarhus Universitet, Matematisk Institut, Aarhus, 1976. == External links == Lectures on Pseudo-differential Operators by Mark S. Joshi on arxiv.org. "Pseudo-differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
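The defining formula for P(x, D) discretizes directly on a periodic grid. As a sketch (the coefficient a(x), the symbol a(x)·iξ, and the test function are illustrative assumptions, not from the article), the order-1 symbol P(x, ξ) = a(x)·iξ reproduces the variable-coefficient derivative a(x)u′(x):

```python
import cmath
import math

N = 64
x = [2 * math.pi * n / N for n in range(N)]

def a(t):
    return 2.0 + math.sin(t)      # smooth, bounded coefficient (illustrative choice)

def symbol(t, xi):
    return a(t) * 1j * xi         # P(x, xi) = a(x) * (i xi): an order-1 symbol

u = [math.cos(2 * t) for t in x]

# Fourier coefficients u_hat of u on the periodic grid.
uhat = [sum(u[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        for k in range(N)]

# Discretization of P(x, D)u(x) = (2 pi)^(-1) * integral e^{i x xi} P(x, xi) u_hat(xi) d xi:
Pu = []
for n in range(N):
    acc = 0j
    for j in range(N):
        xi = j if j <= N // 2 else j - N   # signed integer frequency
        acc += symbol(x[n], xi) * uhat[j] * cmath.exp(2j * math.pi * j * n / N)
    Pu.append((acc / N).real)

# For this symbol, P(x, D) acts as u -> a(x) u'(x); here u'(x) = -2 sin(2x).
err = max(abs(Pu[n] + a(x[n]) * 2 * math.sin(2 * x[n])) for n in range(N))
```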
|
Wikipedia:Pseudo-ring#0
|
In mathematics, and more specifically in abstract algebra, a pseudo-ring is one of the following variants of a ring: A rng, i.e., a structure satisfying all the axioms of a ring except for the existence of a multiplicative identity. A set R with two binary operations + and ⋅ such that (R, +) is an abelian group with identity 0, and a(b + c) + a0 = ab + ac and (b + c)a + 0a = ba + ca for all a, b, c in R. An abelian group (A, +) equipped with a subgroup B and a multiplication B × A → A making B a ring and A a B-module. No two of these definitions are equivalent, so it is best to avoid the term "pseudo-ring" or to clarify which meaning is intended. == See also == Semiring – an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse == References ==
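A standard concrete instance of the first variant is the rng of even integers. The quick sanity check below, which samples a finite range (an illustrative sketch rather than a proof), confirms closure and the distributive law while showing no multiplicative identity exists:

```python
# The even integers 2Z form a rng under the usual + and *: an abelian group
# under addition, closed under multiplication, distributive, but with no
# multiplicative identity (1 is odd, so 1 is not in 2Z).
evens = list(range(-20, 21, 2))

closed = all((a + b) % 2 == 0 and (a * b) % 2 == 0 for a in evens for b in evens)
distributive = all(a * (b + c) == a * b + a * c
                   for a in evens for b in evens for c in evens)
# No e in 2Z satisfies e * a == a for all nonzero a (e * 2 == 2 would force e = 1).
has_identity = any(all(e * a == a for a in evens if a != 0) for e in evens)
```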
|
Wikipedia:Pseudoalgebra#0
|
In algebra, given a 2-monad T in a 2-category, a pseudoalgebra for T is a 2-categorical version of an algebra for T, one that satisfies the algebra laws only up to coherent isomorphism. == See also == Operad == Notes == == References == Lack, Stephen (2000). "A Coherent Approach to Pseudomonads". Advances in Mathematics. 152 (2): 179–202. doi:10.1006/aima.1999.1881. == Further reading == Baez, John C.; May, J. Peter, eds. (2010). Towards higher categories. The IMA Volumes in Mathematics and its Applications. Vol. 152. Springer, New York. doi:10.1007/978-1-4419-1524-5. ISBN 978-1-4419-1523-8. == External links == https://ncatlab.org/nlab/show/pseudoalgebra+for+a+2-monad https://golem.ph.utexas.edu/category/2014/06/codescent_objects_and_coherenc.html
|
Wikipedia:Pseudogamma function#0
|
In mathematics, a pseudogamma function is a function that interpolates the factorial. The gamma function is the most famous solution to the problem of extending the notion of the factorial beyond the positive integers only. However, it is clearly not the only solution, as, for any set of points, an infinite number of curves can be drawn through those points. Such a curve, namely one which interpolates the factorial but is not equal to the gamma function, is known as a pseudogamma function. The two most famous pseudogamma functions are Hadamard's gamma function, H ( x ) = ψ ( 1 − x 2 ) − ψ ( 1 2 − x 2 ) 2 Γ ( 1 − x ) = Φ ( − 1 , 1 , − x ) Γ ( − x ) {\displaystyle H(x)={\frac {\psi \left(1-{\frac {x}{2}}\right)-\psi \left({\frac {1}{2}}-{\frac {x}{2}}\right)}{2\Gamma (1-x)}}={\frac {\Phi \left(-1,1,-x\right)}{\Gamma (-x)}}} where Φ {\displaystyle \Phi } is the Lerch zeta function, and the Luschny factorial: Γ ( x + 1 ) ( 1 − sin ( π x ) π x ( x 2 ( ψ ( x + 1 2 ) − ψ ( x 2 ) ) − 1 2 ) ) {\displaystyle \Gamma (x+1)\left(1-{\frac {\sin \left(\pi x\right)}{\pi x}}\left({\frac {x}{2}}\left(\psi \left({\frac {x+1}{2}}\right)-\psi \left({\frac {x}{2}}\right)\right)-{\frac {1}{2}}\right)\right)} where Γ(x) denotes the classical gamma function and ψ(x) denotes the digamma function. Other related pseudogamma functions are also known. However, by adding conditions to the function interpolating the factorial, we obtain uniqueness of this function, most often given by the Gamma function. The most common condition is the logarithmic convexity: this is the Bohr-Mollerup theorem. See also the Wielandt theorem for other conditions. == References ==
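Hadamard's formula can be checked numerically with only the standard library. The digamma routine below uses the textbook recurrence ψ(x) = ψ(x + 1) − 1/x plus an asymptotic series and is an assumption of this sketch, not part of the article; since the displayed formula has removable singularities at the positive integers, we evaluate just off the integers and check H(n) ≈ (n − 1)!:

```python
import math

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x and an asymptotic series.
    The recurrence also handles negative non-integer x near the poles."""
    s = 0.0
    while x < 6:
        s -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    s += math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))
    return s

def hadamard_gamma(x):
    """H(x) = [psi(1 - x/2) - psi(1/2 - x/2)] / (2 * Gamma(1 - x))."""
    return (digamma(1 - x / 2) - digamma(0.5 - x / 2)) / (2 * math.gamma(1 - x))

# H interpolates the factorial: H(n) = (n - 1)! for positive integers n.
# The singularities at the integers are removable, so evaluate just off them.
eps = 1e-6
vals = [hadamard_gamma(n + eps) for n in (2, 3, 4, 5)]
# vals should be close to 1!, 2!, 3!, 4! = 1, 2, 6, 24
```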
|
Wikipedia:Pseudoreflection#0
|
In mathematics, a pseudoreflection is an invertible linear transformation of a finite-dimensional vector space such that it is not the identity transformation, has a finite (multiplicative) order, and fixes a hyperplane. The concept of pseudoreflection generalizes the concepts of reflection and complex reflection and is simply called reflection by some mathematicians. It plays an important role in the invariant theory of finite groups, including the Chevalley-Shephard-Todd theorem. == Formal definition == Suppose that V is a vector space over a field K, whose dimension is a finite number n. A pseudoreflection is an invertible linear transformation g : V → V {\displaystyle g:V\to V} such that the order of g is finite and the fixed subspace V g = { v ∈ V : g v = v } {\displaystyle V^{g}=\{v\in V:\ gv=v\}} of all vectors in V fixed by g has dimension n-1. == Eigenvalues == A pseudoreflection g has an eigenvalue 1 of multiplicity n-1 and another eigenvalue r of multiplicity 1. Since g has finite order, the eigenvalue r must be a root of unity in the field K. It is possible that r = 1 (see Transvections). == Diagonalizable pseudoreflections == Let p be the characteristic of the field K. If the order of g is coprime to p then g is diagonalizable and represented by a diagonal matrix diag(1, ... , 1, r ) = [ 1 0 0 ⋯ 0 0 1 0 ⋯ 0 ⋮ ⋮ ⋱ ⋮ 0 0 ⋯ 1 0 0 0 0 ⋯ r ] {\displaystyle {\begin{bmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &1&0\\0&0&0&\cdots &r\\\end{bmatrix}}} where r is a root of unity not equal to 1. This includes the case when K is a field of characteristic zero, such as the field of real numbers and the field of complex numbers. A diagonalizable pseudoreflection is sometimes called a semisimple reflection. == Real reflections == When K is the field of real numbers, a pseudoreflection has matrix form diag(1, ... , 1, -1). A pseudoreflection with such matrix form is called a real reflection.
If the space on which this transformation acts admits a symmetric bilinear form so that orthogonality of vectors can be defined, then the transformation is a true reflection. == Complex reflections == When K is the field of complex numbers, a pseudoreflection is called a complex reflection, which can be represented by a diagonal matrix diag(1, ... , 1, r) where r is a complex root of unity unequal to 1. == Transvections == If the pseudoreflection g is not diagonalizable then r = 1 and g has Jordan normal form [ 1 0 0 ⋯ 0 0 1 0 ⋯ 0 ⋮ ⋮ ⋱ ⋮ ⋮ 0 0 ⋯ 1 1 0 0 0 ⋯ 1 ] {\displaystyle {\begin{bmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &1&1\\0&0&0&\cdots &1\\\end{bmatrix}}} In such case g is called a transvection. A pseudoreflection g is a transvection if and only if the characteristic p of the field K is positive and the order of g is p. Transvections are useful in the study of finite geometries and the classification of their groups of motions. == References ==
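Two concrete instances of the definitions above can be checked in plain Python (the specific matrices are illustrative choices): a diagonalizable complex reflection diag(1, 1, ω) with ω a primitive cube root of unity, and a transvection over GF(5) whose order equals the characteristic:

```python
import cmath
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matpow(A, e):
    n = len(A)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = matmul(R, A)
    return R

# Complex reflection: diag(1, 1, omega), omega a primitive cube root of unity.
omega = cmath.exp(2j * math.pi / 3)
g = [[1, 0, 0], [0, 1, 0], [0, 0, omega]]
g3 = matpow(g, 3)
order_three = all(abs(g3[i][j] - (1 if i == j else 0)) < 1e-12
                  for i in range(3) for j in range(3))
# Eigenvalue 1 has multiplicity n - 1 = 2: the fixed hyperplane is 2-dimensional.
fixed_dim = sum(1 for i in range(3) if abs(g[i][i] - 1) < 1e-12)

# Transvection over GF(5): t = [[1, 1], [0, 1]] is not diagonalizable, r = 1,
# and its order equals the characteristic p = 5.
p = 5
t = [[1, 1], [0, 1]]
def matpow_mod(A, e, m):
    R = [[1, 0], [0, 1]]
    for _ in range(e):
        R = [[sum(R[i][k] * A[k][j] for k in range(2)) % m for j in range(2)]
             for i in range(2)]
    return R
orders = [k for k in range(1, p + 1) if matpow_mod(t, k, p) == [[1, 0], [0, 1]]]
```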
|
Wikipedia:Pseudoscalar#0
|
In linear algebra, a pseudoscalar is a quantity that behaves like a scalar, except that it changes sign under a parity inversion while a true scalar does not. A pseudoscalar, when multiplied by an ordinary vector, becomes a pseudovector (or axial vector); a similar construction creates the pseudotensor. A pseudoscalar also results from any scalar product between a pseudovector and an ordinary vector. The prototypical example of a pseudoscalar is the scalar triple product, which can be written as the scalar product between one of the vectors in the triple product and the cross product between the two other vectors, where the latter is a pseudovector. == In physics == In physics, a pseudoscalar denotes a physical quantity analogous to a scalar. Both are physical quantities which assume a single value which is invariant under proper rotations. However, under the parity transformation, pseudoscalars flip their signs while scalars do not. As reflections through a plane are the combination of a rotation with the parity transformation, pseudoscalars also change signs under reflections. === Motivation === One of the most powerful ideas in physics is that physical laws do not change when one changes the coordinate system used to describe these laws. That a pseudoscalar reverses its sign when the coordinate axes are inverted suggests that it is not the best object to describe a physical quantity. In 3D-space, quantities described by a pseudovector are antisymmetric tensors of order 2, which are invariant under inversion. The pseudovector may be a simpler representation of that quantity, but suffers from the change of sign under inversion. Similarly, in 3D-space, the Hodge dual of a scalar is equal to a constant times the 3-dimensional Levi-Civita pseudotensor (or "permutation" pseudotensor); whereas the Hodge dual of a pseudoscalar is an antisymmetric (pure) tensor of order three. The Levi-Civita pseudotensor is a completely antisymmetric pseudotensor of order 3. 
Since the dual of the pseudoscalar is the product of two "pseudo-quantities", the resulting tensor is a true tensor, and does not change sign upon an inversion of axes. The situation is similar to the situation for pseudovectors and antisymmetric tensors of order 2. The dual of a pseudovector is an antisymmetric tensor of order 2 (and vice versa). The tensor is an invariant physical quantity under a coordinate inversion, while the pseudovector is not invariant. The situation can be extended to any dimension. Generally in an n-dimensional space the Hodge dual of an order r tensor will be an antisymmetric pseudotensor of order (n − r) and vice versa. In particular, in the four-dimensional spacetime of special relativity, a pseudoscalar is the dual of a fourth-order tensor and is proportional to the four-dimensional Levi-Civita pseudotensor. === Examples === The stream function ψ ( x , y ) {\displaystyle \psi (x,y)} for a two-dimensional, incompressible fluid flow v ( x , y ) = ⟨ ∂ y ψ , − ∂ x ψ ⟩ {\displaystyle \mathbf {v} (x,y)=\langle \partial _{y}\psi ,-\partial _{x}\psi \rangle } . Magnetic charge is a pseudoscalar as it is mathematically defined, regardless of whether it exists physically. Magnetic flux is the result of a dot product between a vector (the surface normal) and pseudovector (the magnetic field). Helicity is the projection (dot product) of a spin pseudovector onto the direction of momentum (a true vector). Pseudoscalar particles, i.e. particles with spin 0 and odd parity, that is, a particle with no intrinsic spin with wave function that changes sign under parity inversion. Examples are pseudoscalar mesons. == In geometric algebra == A pseudoscalar in a geometric algebra is a highest-grade element of the algebra. For example, in two dimensions there are two orthogonal basis vectors, e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} and the associated highest-grade basis element is e 1 e 2 = e 12 . 
{\displaystyle e_{1}e_{2}=e_{12}.} So a pseudoscalar is a multiple of e 12 {\displaystyle e_{12}} . The element e 12 {\displaystyle e_{12}} squares to −1 and commutes with all even elements – behaving therefore like the imaginary scalar i {\displaystyle i} in the complex numbers. It is these scalar-like properties which give rise to its name. In this setting, a pseudoscalar changes sign under a parity inversion, since if ( e 1 , e 2 ) ↦ ( u 1 , u 2 ) {\displaystyle (e_{1},e_{2})\mapsto (u_{1},u_{2})} is a change of basis representing an orthogonal transformation, then e 1 e 2 ↦ u 1 u 2 = ± e 1 e 2 , {\displaystyle e_{1}e_{2}\mapsto u_{1}u_{2}=\pm e_{1}e_{2},} where the sign depends on the determinant of the transformation. Pseudoscalars in geometric algebra thus correspond to the pseudoscalars in physics. == References ==
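The parity behaviour of the scalar triple product described above is easy to illustrate numerically (the particular vectors are arbitrary choices): under the parity map v ↦ −v, the dot product of two polar vectors is unchanged, while the scalar triple product flips sign:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def neg(v):                 # parity inversion v -> -v
    return tuple(-x for x in v)

a, b, c = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.0, 1.0, -1.0)

s_true = dot(a, b)                               # true scalar
s_pseudo = dot(a, cross(b, c))                   # scalar triple product: pseudoscalar

s_true_p = dot(neg(a), neg(b))                   # unchanged under parity
s_pseudo_p = dot(neg(a), cross(neg(b), neg(c)))  # flips sign under parity
```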
|
Wikipedia:Pseudovector#0
|
In physics and mathematics, a pseudovector (or axial vector) is a quantity that transforms like a vector under continuous rigid transformations such as rotations or translations, but which does not transform like a vector under certain discontinuous rigid transformations such as reflections. For example, the angular velocity of a rotating object is a pseudovector because, when the object is reflected in a mirror, the reflected image rotates in such a way that its angular velocity "vector" is not the mirror image of the angular velocity "vector" of the original object; for true vectors (also known as polar vectors), the reflection "vector" and the original "vector" must be mirror images. One example of a pseudovector is the normal to an oriented plane. An oriented plane can be defined by two non-parallel vectors, a and b, that span the plane. The vector a × b is a normal to the plane (there are two normals, one on each side – the right-hand rule will determine which), and is a pseudovector. This has consequences in computer graphics, where it has to be considered when transforming surface normals. In three dimensions, the curl of a polar vector field at a point and the cross product of two polar vectors are pseudovectors. A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and torque. In mathematics, in three dimensions, pseudovectors are equivalent to bivectors, from which the transformation rules of pseudovectors can be derived. More generally, in n-dimensional geometric algebra, pseudovectors are the elements of the algebra with dimension n − 1, written ⋀n−1Rn. The label "pseudo-" can be further generalized to pseudoscalars and pseudotensors, both of which gain an extra sign-flip under improper rotations compared to a true scalar or tensor.
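The sign behaviour described above follows from the identity (Rv₁)×(Rv₂) = det(R)·R(v₁×v₂), valid for any orthogonal R, so under a mirror (det R = −1) a cross product picks up the extra sign flip. A numeric sketch with an arbitrary improper rotation (a mirror composed with a rotation; the specific vectors and angle are illustrative choices):

```python
import math

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Improper rotation: mirror z -> -z composed with a rotation about z.
th = 0.7
Rz = [[math.cos(th), -math.sin(th), 0], [math.sin(th), math.cos(th), 0], [0, 0, 1]]
mirror = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]
R = [[sum(mirror[i][k] * Rz[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

v1, v2 = (1.0, 2.0, 3.0), (-0.5, 4.0, 1.0)
lhs = cross(matvec(R, v1), matvec(R, v2))                   # (R v1) x (R v2)
rhs = tuple(det3(R) * y for y in matvec(R, cross(v1, v2)))  # det(R) * R (v1 x v2)
err = max(abs(x - y) for x, y in zip(lhs, rhs))
```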
== Physical examples == Physical examples of pseudovectors include angular velocity, angular acceleration, angular momentum, torque, magnetic field, and magnetic dipole moment. Consider the pseudovector angular momentum L = Σ(r × p). Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left (by the right-hand rule). If the world is reflected in a mirror which switches the left and right side of the car, the "reflection" of this angular momentum "vector" (viewed as an ordinary vector) points to the right, but the actual angular momentum vector of the wheel (which is still turning forward in the reflection) still points to the left (by the right-hand rule), corresponding to the extra sign flip in the reflection of a pseudovector. The distinction between polar vectors and pseudovectors becomes important in understanding the effect of symmetry on the solution to physical systems. Consider an electric current loop in the z = 0 plane, which generates a magnetic field oriented in the z direction inside the loop. This system is symmetric (invariant) under mirror reflections through this plane, with the magnetic field unchanged by the reflection. But reflecting the magnetic field as a vector through that plane would be expected to reverse it; this expectation is corrected by realizing that the magnetic field is a pseudovector, with the extra sign flip leaving it unchanged. In physics, pseudovectors are generally the result of taking the cross product of two polar vectors or the curl of a polar vector field. The cross product and curl are defined, by convention, according to the right-hand rule, but could have been just as easily defined in terms of a left-hand rule. The entire body of physics that deals with (right-handed) pseudovectors and the right-hand rule could be replaced by using (left-handed) pseudovectors and the left-hand rule without issue.
The (left) pseudovectors so defined would be opposite in direction to those defined by the right-hand rule. While vector relationships in physics can be expressed in a coordinate-free manner, a coordinate system is required in order to express vectors and pseudovectors as numerical quantities. Vectors are represented as ordered triplets of numbers: e.g. a = ( a x , a y , a z ) {\displaystyle \mathbf {a} =(a_{x},a_{y},a_{z})} , and pseudovectors are represented in this form too. When transforming between left and right-handed coordinate systems, representations of pseudovectors do not transform as vectors, and treating them as vector representations will cause an incorrect sign change, so that care must be taken to keep track of which ordered triplets represent vectors, and which represent pseudovectors. This problem does not exist if the cross product of two vectors is replaced by the exterior product of the two vectors, which yields a bivector which is a 2nd rank tensor and is represented by a 3×3 matrix. This representation of the 2-tensor transforms correctly between any two coordinate systems, independently of their handedness. == Details == The definition of a "vector" in physics (including both polar vectors and pseudovectors) is more specific than the mathematical definition of "vector" (namely, any element of an abstract vector space). Under the physics definition, a "vector" is required to have components that "transform" in a certain way under a proper rotation: In particular, if everything in the universe were rotated, the vector would rotate in exactly the same way. (The coordinate system is fixed in this discussion; in other words this is the perspective of active transformations.) Mathematically, if everything in the universe undergoes a rotation described by a rotation matrix R, so that a displacement vector x is transformed to x′ = Rx, then any "vector" v must be similarly transformed to v′ = Rv. 
This important requirement is what distinguishes a vector (which might be composed of, for example, the x-, y-, and z-components of velocity) from any other triplet of physical quantities. (For example, the length, width, and height of a rectangular box cannot be considered the three components of a vector, since rotating the box does not appropriately transform these three components.) (In the language of differential geometry, this requirement is equivalent to defining a vector to be a tensor of contravariant rank one. In this more general framework, higher rank tensors can also have arbitrarily many and mixed covariant and contravariant ranks at the same time, denoted by raised and lowered indices within the Einstein summation convention.) A basic and rather concrete example is that of row and column vectors under the usual matrix multiplication operator: in one order they yield the dot product, which is just a scalar and as such a rank zero tensor, while in the other they yield the dyadic product, which is a matrix representing a rank two mixed tensor, with one contravariant and one covariant index. As such, the noncommutativity of standard matrix algebra can be used to keep track of the distinction between covariant and contravariant vectors. This is in fact how the bookkeeping was done before the more formal and generalised tensor notation came to be. It still manifests itself in how the basis vectors of general tensor spaces are exhibited for practical manipulation. The discussion so far only relates to proper rotations, i.e. rotations about an axis. However, one can also consider improper rotations, i.e. a mirror-reflection possibly followed by a proper rotation. (One example of an improper rotation is inversion through a point in 3-dimensional space.) Suppose everything in the universe undergoes an improper rotation described by the improper rotation matrix R, so that a position vector x is transformed to x′ = Rx.
If the vector v is a polar vector, it will be transformed to v′ = Rv. If it is a pseudovector, it will be transformed to v′ = −Rv. The transformation rules for polar vectors and pseudovectors can be compactly stated as v ′ = R v (polar vector) v ′ = ( det R ) ( R v ) (pseudovector) {\displaystyle {\begin{aligned}\mathbf {v} '&=R\mathbf {v} &&{\text{(polar vector)}}\\\mathbf {v} '&=(\det R)(R\mathbf {v} )&&{\text{(pseudovector)}}\end{aligned}}} where the symbols are as described above, and the rotation matrix R can be either proper or improper. The symbol det denotes determinant; this formula works because the determinant of proper and improper rotation matrices are +1 and −1, respectively. === Behavior under addition, subtraction, scalar multiplication === Suppose v1 and v2 are known pseudovectors, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to v 3 ′ = v 1 ′ + v 2 ′ = ( det R ) ( R v 1 ) + ( det R ) ( R v 2 ) = ( det R ) ( R ( v 1 + v 2 ) ) = ( det R ) ( R v 3 ) . {\displaystyle {\begin{aligned}\mathbf {v_{3}} '=\mathbf {v_{1}} '+\mathbf {v_{2}} '&=(\det R)(R\mathbf {v_{1}} )+(\det R)(R\mathbf {v_{2}} )\\&=(\det R)(R(\mathbf {v_{1}} +\mathbf {v_{2}} ))=(\det R)(R\mathbf {v_{3}} ).\end{aligned}}} So v3 is also a pseudovector. Similarly one can show that the difference between two pseudovectors is a pseudovector, that the sum or difference of two polar vectors is a polar vector, that multiplying a polar vector by any real number yields another polar vector, and that multiplying a pseudovector by any real number yields another pseudovector. On the other hand, suppose v1 is known to be a polar vector, v2 is known to be a pseudovector, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by an improper rotation matrix R, then v3 is transformed to v 3 ′ = v 1 ′ + v 2 ′ = ( R v 1 ) + ( det R ) ( R v 2 ) = R ( v 1 + ( det R ) v 2 ) . 
{\displaystyle \mathbf {v_{3}} '=\mathbf {v_{1}} '+\mathbf {v_{2}} '=(R\mathbf {v_{1}} )+(\det R)(R\mathbf {v_{2}} )=R(\mathbf {v_{1}} +(\det R)\mathbf {v_{2}} ).} Therefore, v3 is neither a polar vector nor a pseudovector (although it is still a vector, by the physics definition). For an improper rotation, v3 does not in general even keep the same magnitude: | v 3 | = | v 1 + v 2 | , but | v 3 ′ | = | v 1 ′ − v 2 ′ | {\displaystyle |\mathbf {v_{3}} |=|\mathbf {v_{1}} +\mathbf {v_{2}} |,{\text{ but }}\left|\mathbf {v_{3}} '\right|=\left|\mathbf {v_{1}} '-\mathbf {v_{2}} '\right|} . If the magnitude of v3 were to describe a measurable physical quantity, that would mean that the laws of physics would not appear the same if the universe was viewed in a mirror. In fact, this is exactly what happens in the weak interaction: Certain radioactive decays treat "left" and "right" differently, a phenomenon which can be traced to the summation of a polar vector with a pseudovector in the underlying theory. (See parity violation.) === Behavior under cross products and curls === For a rotation matrix R, either proper or improper, the following mathematical equation is always true: ( R v 1 ) × ( R v 2 ) = ( det R ) ( R ( v 1 × v 2 ) ) {\displaystyle (R\mathbf {v_{1}} )\times (R\mathbf {v_{2}} )=(\det R)(R(\mathbf {v_{1}} \times \mathbf {v_{2}} ))} , where v1 and v2 are any three-dimensional vectors. (This equation can be proven either through a geometric argument or through an algebraic calculation.) Similarly, if v is any vector field, the following equation is always true: ∇ × ( R v ) = ( det R ) ( R ( ∇ × v ) ) {\displaystyle \nabla \times (R\mathbf {v} )=(\det R)(R(\nabla \times \mathbf {v} ))} where ∇ × denotes the curl operation from vector calculus. Suppose v1 and v2 are known polar vectors, and v3 is defined to be their cross product, v3 = v1 × v2. 
If the universe is transformed by a rotation matrix R, then v3 is transformed to v 3 ′ = v 1 ′ × v 2 ′ = ( R v 1 ) × ( R v 2 ) = ( det R ) ( R ( v 1 × v 2 ) ) = ( det R ) ( R v 3 ) . {\displaystyle \mathbf {v_{3}} '=\mathbf {v_{1}} '\times \mathbf {v_{2}} '=(R\mathbf {v_{1}} )\times (R\mathbf {v_{2}} )=(\det R)(R(\mathbf {v_{1}} \times \mathbf {v_{2}} ))=(\det R)(R\mathbf {v_{3}} ).} So v3 is a pseudovector. Likewise, one can show that the cross product of two pseudovectors is a pseudovector and the cross product of a polar vector with a pseudovector is a polar vector. In conclusion, we have: polar vector × polar vector = pseudovector pseudovector × pseudovector = pseudovector polar vector × pseudovector = polar vector pseudovector × polar vector = polar vector This is isomorphic to addition modulo 2, where "polar" corresponds to 1 and "pseudo" to 0. Similarly, if v1 is any known polar vector field and v2 is defined to be its curl v2 = ∇ × v1, then if the universe is transformed by the rotation matrix R, v2 is transformed to v 2 ′ = ∇ × v 1 ′ = ∇ × ( R v 1 ) = ( det R ) ( R ( ∇ × v 1 ) ) = ( det R ) ( R v 2 ) . {\displaystyle \mathbf {v_{2}} '=\nabla \times \mathbf {v_{1}} '=\nabla \times (R\mathbf {v_{1}} )=(\det R)(R(\nabla \times \mathbf {v_{1}} ))=(\det R)(R\mathbf {v_{2}} ).} So v2 is a pseudovector field. Likewise, one can show that the curl of a pseudovector field is a polar vector field. In conclusion, we have: ∇ × polar vector field = pseudovector field ∇ × pseudovector field = polar vector field This is like the above rule for cross-products if one interprets the del operator ∇ as a polar vector. === Examples === From the definition, it is clear that linear displacement is a polar vector. Linear velocity is linear displacement (a polar vector) divided by time (a scalar), so is also a polar vector. Linear momentum is linear velocity (a polar vector) times mass (a scalar), so is a polar vector. 
Angular momentum (in a point object) is the cross product of linear displacement (a polar vector) and linear momentum (a polar vector), and is therefore a pseudovector. Torque is angular momentum (a pseudovector) divided by time (a scalar), so is also a pseudovector. Angular velocity (in a rotating body or fluid) is one-half times the curl of linear velocity (a polar vector field), and thus is a pseudovector. Continuing this way, it is straightforward to classify any of the common vectors in physics as either a pseudovector or a polar vector. (There are parity-violating vectors in the theory of weak interactions, which are neither polar vectors nor pseudovectors. However, these occur very rarely in physics.) == The right-hand rule == Above, pseudovectors have been discussed using active transformations. An alternate approach, more along the lines of passive transformations, is to keep the universe fixed, but switch "right-hand rule" with "left-hand rule" everywhere in math and physics, including in the definition of the cross product and the curl. Any polar vector (e.g., a translation vector) would be unchanged, but pseudovectors (e.g., the magnetic field at a point) would switch signs. Nevertheless, there would be no physical consequences, apart from in the parity-violating phenomena such as certain radioactive decays. == Formalization == One way to formalize pseudovectors is as follows: if V is an n-dimensional vector space, then a pseudovector of V is an element of the (n − 1)-th exterior power of V: ⋀n−1(V). The pseudovectors of V form a vector space with the same dimension as V. This definition is not equivalent to that requiring a sign flip under improper rotations, but it is general to all vector spaces. In particular, when n is even, such a pseudovector does not experience a sign flip, and when the characteristic of the underlying field of V is 2, a sign flip has no effect.
Otherwise, the definitions are equivalent, though it should be borne in mind that without additional structure (specifically, either a volume form or an orientation), there is no natural identification of ⋀n−1(V) with V. Another way to formalize them is by considering them as elements of a representation space for O ( n ) {\displaystyle {\text{O}}(n)} . Vectors transform in the fundamental representation of O ( n ) {\displaystyle {\text{O}}(n)} with data given by ( R n , ρ fund , O ( n ) ) {\displaystyle (\mathbb {R} ^{n},\rho _{\text{fund}},{\text{O}}(n))} , so that for any matrix R {\displaystyle R} in O ( n ) {\displaystyle {\text{O}}(n)} , one has ρ fund ( R ) = R {\displaystyle \rho _{\text{fund}}(R)=R} . Pseudovectors transform in a pseudofundamental representation ( R n , ρ pseudo , O ( n ) ) {\displaystyle (\mathbb {R} ^{n},\rho _{\text{pseudo}},{\text{O}}(n))} , with ρ pseudo ( R ) = det ( R ) R {\displaystyle \rho _{\text{pseudo}}(R)=\det(R)R} . Another way to view this homomorphism for n {\displaystyle n} odd is that in this case O ( n ) ≅ SO ( n ) × Z 2 {\displaystyle {\text{O}}(n)\cong {\text{SO}}(n)\times \mathbb {Z} _{2}} . Then ρ pseudo {\displaystyle \rho _{\text{pseudo}}} is a direct product of group homomorphisms; it is the direct product of the fundamental homomorphism on SO ( n ) {\displaystyle {\text{SO}}(n)} with the trivial homomorphism on Z 2 {\displaystyle \mathbb {Z} _{2}} . == Geometric algebra == In geometric algebra the basic elements are vectors, and these are used to build a hierarchy of elements using the definitions of products in this algebra. In particular, the algebra builds pseudovectors from vectors. The basic multiplication in the geometric algebra is the geometric product, denoted by simply juxtaposing two vectors as in ab. 
This product is expressed as: a b = a ⋅ b + a ∧ b , {\displaystyle \mathbf {ab} =\mathbf {a\cdot b} +\mathbf {a\wedge b} \ ,} where the leading term is the customary vector dot product and the second term is called the wedge product or exterior product. Using the postulates of the algebra, all combinations of dot and wedge products can be evaluated. A terminology to describe the various combinations is provided. For example, a multivector is a summation of k-fold wedge products of various k-values. A k-fold wedge product also is referred to as a k-blade. In the present context the pseudovector is one of these combinations. This term is attached to a different multivector depending upon the dimensions of the space (that is, the number of linearly independent vectors in the space). In three dimensions, the most general 2-blade or bivector can be expressed as the wedge product of two vectors and is a pseudovector. In four dimensions, however, the pseudovectors are trivectors. In general, it is a (n − 1)-blade, where n is the dimension of the space and algebra. An n-dimensional space has n basis vectors and also n basis pseudovectors. Each basis pseudovector is formed from the outer (wedge) product of all but one of the n basis vectors. For instance, in four dimensions where the basis vectors are taken to be {e1, e2, e3, e4}, the pseudovectors can be written as: {e234, e134, e124, e123}. === Transformations in three dimensions === The transformation properties of the pseudovector in three dimensions has been compared to that of the vector cross product by Baylis. He says: "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector from its dual." To paraphrase Baylis: Given two polar vectors (that is, true vectors) a and b in three dimensions, the cross product composed from a and b is the vector normal to their plane given by c = a × b. 
Given a set of right-handed orthonormal basis vectors { eℓ }, the cross product is expressed in terms of its components as: a × b = ( a 2 b 3 − a 3 b 2 ) e 1 + ( a 3 b 1 − a 1 b 3 ) e 2 + ( a 1 b 2 − a 2 b 1 ) e 3 , {\displaystyle \mathbf {a} \times \mathbf {b} =\left(a^{2}b^{3}-a^{3}b^{2}\right)\mathbf {e} _{1}+\left(a^{3}b^{1}-a^{1}b^{3}\right)\mathbf {e} _{2}+\left(a^{1}b^{2}-a^{2}b^{1}\right)\mathbf {e} _{3},} where superscripts label vector components. On the other hand, the plane of the two vectors is represented by the exterior product or wedge product, denoted by a ∧ b. In this context of geometric algebra, this bivector is called a pseudovector, and is the Hodge dual of the cross product. The dual of e1 is introduced as e23 ≡ e2e3 = e2 ∧ e3, and so forth. That is, the dual of e1 is the subspace perpendicular to e1, namely the subspace spanned by e2 and e3. With this understanding, a ∧ b = ( a 2 b 3 − a 3 b 2 ) e 23 + ( a 3 b 1 − a 1 b 3 ) e 31 + ( a 1 b 2 − a 2 b 1 ) e 12 . {\displaystyle \mathbf {a} \wedge \mathbf {b} =\left(a^{2}b^{3}-a^{3}b^{2}\right)\mathbf {e} _{23}+\left(a^{3}b^{1}-a^{1}b^{3}\right)\mathbf {e} _{31}+\left(a^{1}b^{2}-a^{2}b^{1}\right)\mathbf {e} _{12}\ .} For details, see Hodge star operator § Three dimensions. The cross product and wedge product are related by: a ∧ b = i a × b , {\displaystyle \mathbf {a} \ \wedge \ \mathbf {b} ={\mathit {i}}\ \mathbf {a} \ \times \ \mathbf {b} \ ,} where i = e1 ∧ e2 ∧ e3 is called the unit pseudoscalar. It has the property: i 2 = − 1 . {\displaystyle {\mathit {i}}^{2}=-1\ .} Using the above relations, it is seen that if the vectors a and b are inverted by changing the signs of their components while leaving the basis vectors fixed, both the pseudovector and the cross product are invariant. On the other hand, if the components are fixed and the basis vectors eℓ are inverted, then the pseudovector is invariant, but the cross product changes sign. 
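Both facts can be verified directly: the component-wise agreement between a ∧ b (on the basis e23, e31, e12) and a × b, and the invariance of either product when the components of a and b are negated while the basis is held fixed. A minimal sketch in plain Python:

```python
def cross(a, b):
    """Components of a x b on the vector basis (e1, e2, e3)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def wedge(a, b):
    """Components of the bivector a ^ b on the basis (e23, e31, e12)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a = (2.0, -1.0, 4.0)
b = (0.5, 3.0, -2.0)

# In three dimensions the Hodge dual sends e23 -> e1, e31 -> e2, e12 -> e3,
# so the bivector a ^ b and the vector a x b have identical components.
assert wedge(a, b) == cross(a, b)

# Negating the components (basis fixed) leaves both products unchanged,
# since each component is bilinear: (-a) x (-b) = a x b.
neg = lambda v: tuple(-c for c in v)
assert cross(neg(a), neg(b)) == cross(a, b)
assert wedge(neg(a), neg(b)) == wedge(a, b)
```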
This behavior of cross products is consistent with their definition as vector-like elements that change sign under transformation from a right-handed to a left-handed coordinate system, unlike polar vectors. === Note on usage === As an aside, it may be noted that not all authors in the field of geometric algebra use the term pseudovector, and some authors follow the terminology that does not distinguish between the pseudovector and the cross product. However, because the cross product does not generalize to dimensions other than three, the notion of pseudovector based upon the cross product also cannot be extended to a space of any other number of dimensions. The pseudovector as an (n − 1)-blade in an n-dimensional space is not restricted in this way. Another important note is that pseudovectors, despite their name, are "vectors" in the sense of being elements of a vector space. The idea that "a pseudovector is different from a vector" is only true with a different and more specific definition of the term "vector" as discussed above. == See also == Exterior algebra Clifford algebra Antivector, a generalization of pseudovector in Clifford algebra Orientability — discussion about non-orientable spaces. Tensor density == Notes == == References ==
Wikipedia:Ptolemy's inequality#0
In Euclidean geometry, Ptolemy's inequality relates the six distances determined by four points in the plane or in a higher-dimensional space. It states that, for any four points A, B, C, and D, the following inequality holds: A B ¯ ⋅ C D ¯ + B C ¯ ⋅ D A ¯ ≥ A C ¯ ⋅ B D ¯ . {\displaystyle {\overline {AB}}\cdot {\overline {CD}}+{\overline {BC}}\cdot {\overline {DA}}\geq {\overline {AC}}\cdot {\overline {BD}}.} It is named after the Greek astronomer and mathematician Ptolemy. The four points can be ordered in any of three distinct ways (counting reversals as not distinct) to form three different quadrilaterals, for each of which the sum of the products of opposite sides is at least as large as the product of the diagonals. Thus, the three product terms in the inequality can be additively permuted to put any one of them on the right side of the inequality, so the three products of opposite sides or of diagonals of any one of the quadrilaterals must obey the triangle inequality. As a special case, Ptolemy's theorem states that the inequality becomes an equality when the four points lie in cyclic order on a circle. The other case of equality occurs when the four points are collinear in order. The inequality does not generalize from Euclidean spaces to arbitrary metric spaces. The spaces where it remains valid are called the Ptolemaic spaces; they include the inner product spaces, Hadamard spaces, and shortest path distances on Ptolemaic graphs. == Assumptions and derivation == Ptolemy's inequality is often stated for a special case, in which the four points are the vertices of a convex quadrilateral, given in cyclic order. However, the theorem applies more generally to any four points; it is not required that the quadrilateral they form be convex, simple, or even planar. For points in the plane, Ptolemy's inequality can be derived from the triangle inequality by an inversion centered at one of the four points. 
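Both the inequality for arbitrary points and the equality case for concyclic points in cyclic order are easy to spot-check numerically. A minimal sketch (random planar points, with a small tolerance for floating-point error):

```python
import random
from math import dist, cos, sin  # math.dist requires Python 3.8+

random.seed(1)

# Ptolemy's inequality for arbitrary points in the plane.
for _ in range(1000):
    A, B, C, D = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(4)]
    lhs = dist(A, B) * dist(C, D) + dist(B, C) * dist(D, A)
    rhs = dist(A, C) * dist(B, D)
    assert lhs >= rhs - 1e-9

# Equality (Ptolemy's theorem) when the points lie in cyclic order on a circle.
A, B, C, D = [(cos(t), sin(t)) for t in (0.2, 1.1, 2.9, 4.6)]
lhs = dist(A, B) * dist(C, D) + dist(B, C) * dist(D, A)
rhs = dist(A, C) * dist(B, D)
assert abs(lhs - rhs) < 1e-12
```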
Alternatively, it can be derived by interpreting the four points as complex numbers, using the complex number identity: ( A − B ) ( C − D ) + ( A − D ) ( B − C ) = ( A − C ) ( B − D ) {\displaystyle (A-B)(C-D)+(A-D)(B-C)=(A-C)(B-D)} to construct a triangle whose side lengths are the products of sides of the given quadrilateral, and applying the triangle inequality to this triangle. One can also view the points as belonging to the complex projective line, express the inequality in the form that the absolute values of two cross-ratios of the points sum to at least one, and deduce this from the fact that the cross-ratios themselves add to exactly one. A proof of the inequality for points in three-dimensional space can be reduced to the planar case, by observing that for any non-planar quadrilateral, it is possible to rotate one of the points around the diagonal until the quadrilateral becomes planar, increasing the other diagonal's length and keeping the other five distances constant. In spaces of higher dimension than three, any four points lie in a three-dimensional subspace, and the same three-dimensional proof can be used. == Four concyclic points == For four points in order around a circle, Ptolemy's inequality becomes an equality, known as Ptolemy's theorem: A B ¯ ⋅ C D ¯ + A D ¯ ⋅ B C ¯ = A C ¯ ⋅ B D ¯ . {\displaystyle {\overline {AB}}\cdot {\overline {CD}}+{\overline {AD}}\cdot {\overline {BC}}={\overline {AC}}\cdot {\overline {BD}}.} In the inversion-based proof of Ptolemy's inequality, transforming four co-circular points by an inversion centered at one of them causes the other three to become collinear, so the triangle equality for these three points (from which Ptolemy's inequality may be derived) also becomes an equality. For any other four points, Ptolemy's inequality is strict. == In three dimensions == Four non-coplanar points A, B, C, and D in 3D form a tetrahedron. 
In this case, the strict inequality holds: A B ¯ ⋅ C D ¯ + B C ¯ ⋅ D A ¯ > A C ¯ ⋅ B D ¯ {\displaystyle {\overline {AB}}\cdot {\overline {CD}}+{\overline {BC}}\cdot {\overline {DA}}>{\overline {AC}}\cdot {\overline {BD}}} . == In general metric spaces == Ptolemy's inequality holds more generally in any inner product space, and whenever it is true for a real normed vector space, that space must be an inner product space. For other types of metric space, the inequality may or may not be valid. A space in which it holds is called Ptolemaic. For instance, consider the four-vertex cycle graph, shown in the figure, with all edge lengths equal to 1. The sum of the products of opposite sides is 2. However, diagonally opposite vertices are at distance 2 from each other, so the product of the diagonals is 4, bigger than the sum of products of sides. Therefore, the shortest path distances in this graph are not Ptolemaic. The graphs in which the distances obey Ptolemy's inequality are called the Ptolemaic graphs and have a restricted structure compared to arbitrary graphs; in particular, they disallow induced cycles of length greater than three, such as the one shown. The Ptolemaic spaces include all CAT(0) spaces and in particular all Hadamard spaces. If a complete Riemannian manifold is Ptolemaic, it is necessarily a Hadamard space. == Inner product spaces == Suppose that ‖ ⋅ ‖ {\displaystyle \|\cdot \|} is a norm on a vector space X . {\displaystyle X.} Then this norm satisfies Ptolemy's inequality: ‖ x − y ‖ ‖ z ‖ + ‖ y − z ‖ ‖ x ‖ ≥ ‖ x − z ‖ ‖ y ‖ for all vectors x , y , z . {\displaystyle \|x-y\|\,\|z\|~+~\|y-z\|\,\|x\|~\geq ~\|x-z\|\,\|y\|\qquad {\text{ for all vectors }}x,y,z.} if and only if there exists an inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } on X {\displaystyle X} such that ‖ x ‖ 2 = ⟨ x , x ⟩ {\displaystyle \|x\|^{2}=\langle x,\ x\rangle } for all vectors x ∈ X . 
{\displaystyle x\in X.} Another necessary and sufficient condition for there to exist such an inner product is for the norm to satisfy the parallelogram law: ‖ x + y ‖ 2 + ‖ x − y ‖ 2 = 2 ‖ x ‖ 2 + 2 ‖ y ‖ 2 for all vectors x , y . {\displaystyle \|x+y\|^{2}~+~\|x-y\|^{2}~=~2\|x\|^{2}+2\|y\|^{2}\qquad {\text{ for all vectors }}x,y.} If this is the case then this inner product will be unique and it can be defined in terms of the norm by using the polarization identity. == See also == Greek mathematics – Mathematics of Ancient GreecePages displaying short descriptions of redirect targets Parallelogram law – Sides and diagonals have equal sums of squares Polarization identity – Formula relating the norm and the inner product in a inner product space Ptolemy – Astronomer and geographer (c. 100–170) Ptolemy's table of chords – 2nd century AD trigonometric table Ptolemy's theorem – Relates the 4 sides and 2 diagonals of a quadrilateral with vertices on a common circle == References ==
Wikipedia:Ptolemy's table of chords#0
The table of chords, created by the Greek astronomer, geometer, and geographer Ptolemy in Egypt during the 2nd century AD, is a trigonometric table in Book I, chapter 11 of Ptolemy's Almagest, a treatise on mathematical astronomy. It is essentially equivalent to a table of values of the sine function. It was the earliest trigonometric table extensive enough for many practical purposes, including those of astronomy (an earlier table of chords by Hipparchus gave chords only for arcs that were multiples of 7+1/2° = π/24 radians). Since the 8th and 9th centuries, the sine and other trigonometric functions have been used in Islamic mathematics and astronomy, reforming the production of sine tables. Khwarizmi and Habash al-Hasib later produced a set of trigonometric tables. == The chord function and the table == A chord of a circle is a line segment whose endpoints are on the circle. Ptolemy used a circle whose diameter is 120 parts. He tabulated the length of a chord whose endpoints are separated by an arc of n degrees, for n ranging from 1/2 to 180 by increments of 1/2. In modern notation, the length of the chord corresponding to an arc of θ degrees is chord ( θ ) = 120 sin ( θ ∘ 2 ) = 60 ⋅ ( 2 sin ( π θ 360 radians ) ) . {\displaystyle {\begin{aligned}&\operatorname {chord} (\theta )=120\sin \left({\frac {\theta ^{\circ }}{2}}\right)\\={}&60\cdot \left(2\sin \left({\frac {\pi \theta }{360}}{\text{ radians}}\right)\right).\end{aligned}}} As θ goes from 0 to 180, the chord of a θ° arc goes from 0 to 120. For tiny arcs, the chord is to the arc angle in degrees as π is to 3, or more precisely, the ratio can be made as close as desired to π/3 ≈ 1.04719755 by making θ small enough. Thus, for the arc of 1/2°, the chord length is slightly more than the arc angle in degrees. As the arc increases, the ratio of the chord to the arc decreases. When the arc reaches 60°, the chord length is exactly equal to the number of degrees in the arc, i.e. chord 60° = 60. 
For arcs of more than 60°, the chord is less than the arc, until an arc of 180° is reached, when the chord is only 120. The fractional parts of chord lengths were expressed in sexagesimal (base 60) numerals. For example, where the length of a chord subtended by a 112° arc is reported to be 99,29,5, it has a length of 99 + 29 60 + 5 60 2 = 99.4847 2 ¯ , {\displaystyle 99+{\frac {29}{60}}+{\frac {5}{60^{2}}}=99.4847{\overline {2}},} rounded to the nearest 1/60². After the columns for the arc and the chord, a third column is labeled "sixtieths". For an arc of θ°, the entry in the "sixtieths" column is chord ( θ + 1 2 ∘ ) − chord ( θ ∘ ) 30 . {\displaystyle {\frac {\operatorname {chord} \left(\theta +{\tfrac {1}{2}}^{\circ }\right)-\operatorname {chord} \left(\theta ^{\circ }\right)}{30}}.} This is the average number of sixtieths of a unit that must be added to chord(θ°) each time the angle increases by one minute of arc, between the entry for θ° and that for (θ + 1/2)°. Thus, it is used for linear interpolation. Glowatzki and Göttsche showed that Ptolemy must have calculated chords to five sexagesimal places in order to achieve the degree of accuracy found in the "sixtieths" column.
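A short recomputation with modern trigonometry illustrates both the chord function and the sexagesimal notation. In the sketch below (the function names are my own), the 112° entry comes out to 99;29,4 when rounded exactly, one unit below the table's 99;29,5, consistent with the off-by-one rounding errors discussed in the Accuracy section:

```python
from math import sin, radians

def chord(arc_deg):
    """Ptolemy's chord in a circle of diameter 120: crd(θ) = 120 sin(θ/2)."""
    return 120 * sin(radians(arc_deg) / 2)

def to_sexagesimal(x):
    """Integer part plus two sexagesimal places, rounded in the last place."""
    total = round(x * 3600)            # work in units of 1/60^2
    whole, rest = divmod(total, 3600)
    first, second = divmod(rest, 60)
    return whole, first, second

assert round(chord(60), 9) == 60.0                 # crd 60° = 60 exactly
assert to_sexagesimal(chord(112)) == (99, 29, 4)   # the table reports 99;29,5
assert to_sexagesimal(chord(180)) == (120, 0, 0)   # the full diameter
```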
arc ∘ chord sixtieths 1 2 0 31 25 0 1 2 50 1 1 2 50 0 1 2 50 1 1 2 1 34 15 0 1 2 50 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 109 97 41 38 0 0 36 23 109 1 2 97 59 49 0 0 36 9 110 98 17 54 0 0 35 56 110 1 2 98 35 52 0 0 35 42 111 98 53 43 0 0 35 29 111 1 2 99 11 27 0 0 35 15 112 99 29 5 0 0 35 1 112 1 2 99 46 35 0 0 34 48 113 100 3 59 0 0 34 34 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 179 119 59 44 0 0 0 25 179 1 2 119 59 56 0 0 0 9 180 120 0 0 0 0 0 0 {\displaystyle {\begin{array}{|l|rrr|rrr|}\hline {\text{arc}}^{\circ }&{\text{chord}}&&&{\text{sixtieths}}&&\\\hline {}\,\,\,\,\,\,\,\,\,\,{\tfrac {1}{2}}&0&31&25&0\quad 1&2&50\\{}\,\,\,\,\,\,\,1&1&2&50&0\quad 1&2&50\\{}\,\,\,\,\,\,\,1{\tfrac {1}{2}}&1&34&15&0\quad 1&2&50\\{}\,\,\,\,\,\,\,\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\109&97&41&38&0\quad 0&36&23\\109{\tfrac {1}{2}}&97&59&49&0\quad 0&36&9\\110&98&17&54&0\quad 0&35&56\\110{\tfrac {1}{2}}&98&35&52&0\quad 0&35&42\\111&98&53&43&0\quad 0&35&29\\111{\tfrac {1}{2}}&99&11&27&0\quad 0&35&15\\112&99&29&5&0\quad 0&35&1\\112{\tfrac {1}{2}}&99&46&35&0\quad 0&34&48\\113&100&3&59&0\quad 0&34&34\\{}\,\,\,\,\,\,\,\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\179&119&59&44&0\quad 0&0&25\\179{\frac {1}{2}}&119&59&56&0\quad 0&0&9\\180&120&0&0&0\quad 0&0&0\\\hline \end{array}}} == How Ptolemy computed chords == Chapter 10 of Book I of the Almagest presents geometric theorems used for computing chords. Ptolemy used geometric reasoning based on Proposition 10 of Book XIII of Euclid's Elements to find the chords of 72° and 36°. That Proposition states that if an equilateral pentagon is inscribed in a circle, then the area of the square on the side of the pentagon equals the sum of the areas of the squares on the sides of the hexagon and the decagon inscribed in the same circle. He used Ptolemy's theorem on quadrilaterals inscribed in a circle to derive formulas for the chord of a half-arc, the chord of the sum of two arcs, and the chord of a difference of two arcs. 
The theorem states that for a quadrilateral inscribed in a circle, the product of the lengths of the diagonals equals the sum of the products of the two pairs of lengths of opposite sides. The derivations of trigonometric identities rely on a cyclic quadrilateral in which one side is a diameter of the circle. To find the chords of arcs of 1° and 1/2° he used approximations based on Aristarchus's inequality. The inequality states that for arcs α and β, if 0 < β < α < 90°, then sin α sin β < α β < tan α tan β . {\displaystyle {\frac {\sin \alpha }{\sin \beta }}<{\frac {\alpha }{\beta }}<{\frac {\tan \alpha }{\tan \beta }}.} Ptolemy showed that for arcs of 1° and 1/2°, the approximations correctly give the first two sexagesimal places after the integer part. === Accuracy === Gerald J. Toomer in his translation of the Almagest gives seven entries where some manuscripts have scribal errors, changing one "digit" (one letter, see below). Glenn Elert has made a comparison between Ptolemy's values and the true values (120 times the sine of half the angle) and has found that the root mean square error is 0.000136. But much of this is simply due to rounding off to the nearest 1/3600, since this equals 0.0002777... There are nevertheless many entries where the last "digit" is off by 1 (too high or too low) from the best rounded value. Ptolemy's values are often too high by 1 in the last place, and more so towards the higher angles. The largest errors are about 0.0004, which still corresponds to an error of only 1 in the last sexagesimal digit. == The numeral system and the appearance of the untranslated table == Lengths of arcs of the circle, in degrees, and the integer parts of chord lengths, were expressed in a base 10 numeral system that used 21 of the letters of the Greek alphabet with the meanings given in the following table, and a symbol, "∠′", that means 1/2 and a raised circle "○" that fills a blank space (effectively representing zero). 
Three of the letters, labeled "archaic" in the table below, had not been in use in the Greek language for some centuries before the Almagest was written, but were still in use as numerals and musical notes. α a l p h a 1 ι i o t a 10 ρ r h o 100 β b e t a 2 κ k a p p a 20 σ s i g m a 200 γ g a m m a 3 λ l a m b d a 30 τ t a u 300 δ d e l t a 4 μ m u 40 υ u p s i l o n 400 ε e p s i l o n 5 ν n u 50 φ p h i 500 ϛ s t i g m a ( a r c h a i c ) 6 ξ x i 60 χ c h i 600 ζ z e t a 7 o o m i c r o n 70 ψ p s i 700 η e t a 8 π p i 80 ω o m e g a 800 θ t h e t a 9 ϟ k o p p a ( a r c h a i c ) 90 ϡ s a m p i ( a r c h a i c ) 900 {\displaystyle {\begin{array}{|rlr|rlr|rlr|}\hline \alpha &\mathrm {alpha} &1&\iota &\mathrm {iota} &10&\rho &\mathrm {rho} &100\\\beta &\mathrm {beta} &2&\kappa &\mathrm {kappa} &20&\sigma &\mathrm {sigma} &200\\\gamma &\mathrm {gamma} &3&\lambda &\mathrm {lambda} &30&\tau &\mathrm {tau} &300\\\delta &\mathrm {delta} &4&\mu &\mathrm {mu} &40&\upsilon &\mathrm {upsilon} &400\\\varepsilon &\mathrm {epsilon} &5&\nu &\mathrm {nu} &50&\varphi &\mathrm {phi} &500\\\mathrm {\stigma} &\mathrm {stigma\ (archaic)} &6&\xi &\mathrm {xi} &60&\chi &\mathrm {chi} &600\\\zeta &\mathrm {zeta} &7&o&\mathrm {omicron} &70&\psi &\mathrm {psi} &700\\\eta &\mathrm {eta} &8&\pi &\mathrm {pi} &80&\omega &\mathrm {omega} &800\\\theta &\mathrm {theta} &9&\mathrm {\koppa} &\mathrm {koppa\ (archaic)} &90&\mathrm {\sampi} &\mathrm {sampi\ (archaic)} &900\\\hline \end{array}}} Thus, for example, an arc of 143+1/2° is expressed as ρμγ∠′. (As the table only reaches 180°, the Greek numerals for 200 and above are not used.) The fractional parts of chord lengths required great accuracy, and were given in sexagesimal notation in two columns in the table: The first column gives an integer multiple of 1/60, in the range 0–59, the second an integer multiple of 1/602 = 1/3600, also in the range 0–59. 
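The place-by-place construction of these numerals can be captured in a few lines. A minimal sketch (the function name is my own) that assembles a numeral for 1–999 from the hundreds, tens, and units letters above:

```python
ONES = {1: "α", 2: "β", 3: "γ", 4: "δ", 5: "ε", 6: "ϛ", 7: "ζ", 8: "η", 9: "θ"}
TENS = {1: "ι", 2: "κ", 3: "λ", 4: "μ", 5: "ν", 6: "ξ", 7: "ο", 8: "π", 9: "ϟ"}
HUNDREDS = {1: "ρ", 2: "σ", 3: "τ", 4: "υ", 5: "φ", 6: "χ", 7: "ψ", 8: "ω", 9: "ϡ"}

def greek_numeral(n):
    """Greek alphabetic numeral for 1 <= n <= 999.  No zero digit is needed:
    an absent rank simply contributes no letter."""
    h, rem = divmod(n, 100)
    t, o = divmod(rem, 10)
    return HUNDREDS.get(h, "") + TENS.get(t, "") + ONES.get(o, "")

assert greek_numeral(143) == "ρμγ"   # so the arc 143 + 1/2° appears as ρμγ∠′
assert greek_numeral(85) == "πε"     # 80 + 5, not broken down into 60 + 25
assert greek_numeral(180) == "ρπ"    # the last arc in the table
```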
Thus in Heiberg's edition of the Almagest with the table of chords on pages 48–63, the beginning of the table, corresponding to arcs from 1/2° to 7+1/2°, looks like this: π ε ρ ι φ ε ρ ε ι ω ~ ν ε υ ' θ ε ι ω ~ ν ε ‘ ξ η κ o σ τ ω ~ ν ∠ ′ α α ∠ ′ β β ∠ ′ γ γ ∠ ′ δ δ ∠ ′ ε ε ∠ ′ ϛ ϛ ∠ ′ ζ ζ ∠ ′ ∘ λ α κ ε α β ν α λ δ ι ε β ε μ β λ ζ δ γ η κ η γ λ θ ν β δ ι α ι ϛ δ μ β μ ε ι δ δ ε μ ε κ ζ ϛ ι ϛ μ θ ϛ μ η ι α ζ ι θ λ γ ζ ν ν δ ∘ α β ν ∘ α β ν ∘ α β ν ∘ α β ν ∘ α β μ η ∘ α β μ η ∘ α β μ η ∘ α β μ ζ ∘ α β μ ζ ∘ α β μ ϛ ∘ α β μ ε ∘ α β μ δ ∘ α β μ γ ∘ α β μ β ∘ α β μ α {\displaystyle {\begin{array}{ccc}\pi \varepsilon \rho \iota \varphi \varepsilon \rho \varepsilon \iota {\tilde {\omega }}\nu &\varepsilon {\overset {\text{'}}{\upsilon }}\theta \varepsilon \iota {\tilde {\omega }}\nu &{\overset {\text{‘}}{\varepsilon }}\xi \eta \kappa o\sigma \tau {\tilde {\omega }}\nu \\{\begin{array}{|l|}\hline \quad \angle '\\\alpha \\\alpha \;\angle '\\\hline \beta \\\beta \;\angle '\\\gamma \\\hline \gamma \;\angle '\\\delta \\\delta \;\angle '\\\hline \varepsilon \\\varepsilon \;\angle '\\\mathrm {\stigma} \\\hline \mathrm {\stigma} \;\angle '\\\zeta \\\zeta \;\angle '\\\hline \end{array}}&{\begin{array}{|r|r|r|}\hline \circ &\lambda \alpha &\kappa \varepsilon \\\alpha &\beta &\nu \\\alpha &\lambda \delta &\iota \varepsilon \\\hline \beta &\varepsilon &\mu \\\beta &\lambda \zeta &\delta \\\gamma &\eta &\kappa \eta \\\hline \gamma &\lambda \theta &\nu \beta \\\delta &\iota \alpha &\iota \mathrm {\stigma} \\\delta &\mu \beta &\mu \\\hline \varepsilon &\iota \delta &\delta \\\varepsilon &\mu \varepsilon &\kappa \zeta \\\mathrm {\stigma} &\iota \mathrm {\stigma} &\mu \theta \\\hline \mathrm {\stigma} &\mu \eta &\iota \alpha \\\zeta &\iota \theta &\lambda \gamma \\\zeta &\nu &\nu \delta \\\hline \end{array}}&{\begin{array}{|r|r|r|r|}\hline \circ &\alpha &\beta &\nu \\\circ &\alpha &\beta &\nu \\\circ &\alpha &\beta &\nu \\\hline \circ &\alpha &\beta &\nu \\\circ &\alpha &\beta &\mu \eta 
\\\circ &\alpha &\beta &\mu \eta \\\hline \circ &\alpha &\beta &\mu \eta \\\circ &\alpha &\beta &\mu \zeta \\\circ &\alpha &\beta &\mu \zeta \\\hline \circ &\alpha &\beta &\mu \mathrm {\stigma} \\\circ &\alpha &\beta &\mu \varepsilon \\\circ &\alpha &\beta &\mu \delta \\\hline \circ &\alpha &\beta &\mu \gamma \\\circ &\alpha &\beta &\mu \beta \\\circ &\alpha &\beta &\mu \alpha \\\hline \end{array}}\end{array}}} Later in the table, one can see the base-10 nature of the numerals expressing the integer parts of the arc and the chord length. Thus an arc of 85° is written as πε (π for 80 and ε for 5) and not broken down into 60 + 25. The corresponding chord length is 81 plus a fractional part. The integer part begins with πα, likewise not broken into 60 + 21. But the fractional part, 4 60 + 15 60 2 {\textstyle {\tfrac {4}{60}}+{\tfrac {15}{60^{2}}}} , is written as δ, for 4, in the 1/60 column, followed by ιε, for 15, in the 1/602 column. π ε ρ ι φ ε ρ ε ι ω ~ ν ε υ ' θ ε ι ω ~ ν ε ‘ ξ η κ o σ τ ω ~ ν π δ ∠ ′ π ε π ε ∠ ′ π ϛ π ϛ ∠ ′ π ζ π μ α γ π α δ ι ε π α κ ζ κ β π α ν κ δ π β ι γ ι θ π β λ ϛ θ ∘ ∘ μ ϛ κ ε ∘ ∘ μ ϛ ι δ ∘ ∘ μ ϛ γ ∘ ∘ μ ε ν β ∘ ∘ μ ε μ ∘ ∘ μ ε κ θ {\displaystyle {\begin{array}{ccc}\pi \varepsilon \rho \iota \varphi \varepsilon \rho \varepsilon \iota {\tilde {\omega }}\nu &\varepsilon {\overset {\text{'}}{\upsilon }}\theta \varepsilon \iota {\tilde {\omega }}\nu &{\overset {\text{‘}}{\varepsilon }}\xi \eta \kappa o\sigma \tau {\tilde {\omega }}\nu \\{\begin{array}{|l|}\hline \pi \delta \angle '\\\pi \varepsilon \\\pi \varepsilon \angle '\\\hline \pi \mathrm {\stigma} \\\pi \mathrm {\stigma} \angle '\\\pi \zeta \\\hline \end{array}}&{\begin{array}{|r|r|r|}\hline \pi &\mu \alpha &\gamma \\\pi \alpha &\delta &\iota \varepsilon \\\pi \alpha &\kappa \zeta &\kappa \beta \\\hline \pi \alpha &\nu &\kappa \delta \\\pi \beta &\iota \gamma &\iota \theta \\\pi \beta &\lambda \mathrm {\stigma} &\theta \\\hline \end{array}}&{\begin{array}{|r|r|r|r|}\hline \circ &\circ 
&\mu \mathrm {\stigma} &\kappa \varepsilon \\\circ &\circ &\mu \mathrm {\stigma} &\iota \delta \\\circ &\circ &\mu \mathrm {\stigma} &\gamma \\\hline \circ &\circ &\mu \varepsilon &\nu \beta \\\circ &\circ &\mu \varepsilon &\mu \\\circ &\circ &\mu \varepsilon &\kappa \theta \\\hline \end{array}}\end{array}}} The table has 45 lines on each of eight pages, for a total of 360 lines. == See also == Aryabhata's sine table Exsecant Fundamentum Astronomiae, a book setting forth an algorithm for precise computation of sines, published in the late 1500s Greek mathematics Madhava's sine table Ptolemy Scale of chords Versine == References == Aaboe, Asger (1997), Episodes from the Early History of Mathematics, Mathematical Association of America, ISBN 978-0-88385-613-0 Clagett, Marshall (2002), Greek Science in Antiquity, Courier Dover Publications, ISBN 978-0-8369-2150-2 Neugebauer, Otto (1975), A History of Ancient Mathematical Astronomy, Springer-Verlag, ISBN 978-0-387-06995-1 Olaf Pedersen (1974) A Survey of the Almagest, Odense University Press ISBN 87-7492-087-1 Thurston, Hugh (1996), Early Astronomy, Springer, ISBN 978-0-387-94822-5 == External links == J. L. Heiberg Almagest, Table of chords on pages 48–63. Glenn Elert Ptolemy's Table of Chords: Trigonometry in the Second Century Almageste in Greek and French, at the internet archive.
Wikipedia:Ptolemy's theorem#0
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy. If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that: A C ⋅ B D = A B ⋅ C D + B C ⋅ A D {\displaystyle AC\cdot BD=AB\cdot CD+BC\cdot AD} This relation may be verbally expressed as follows: If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides. Moreover, the converse of Ptolemy's theorem is also true: In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle, i.e., it is a cyclic quadrilateral. The utility and general significance of Ptolemy's theorem are best appreciated through its main corollaries. == Corollaries on inscribed polygons == === Equilateral triangle === Ptolemy's theorem yields as a corollary a theorem regarding an equilateral triangle inscribed in a circle. Given an equilateral triangle inscribed in a circle and a point on the circle, the distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices. Proof: Follows immediately from Ptolemy's theorem: q s = p s + r s ⇒ q = p + r .
{\displaystyle qs=ps+rs\Rightarrow q=p+r.} This corollary has as an application an algorithm for computing minimal Steiner trees whose topology is fixed, by repeatedly replacing pairs of leaves of the tree A, B that should be connected to a Steiner point, by the third point C of their equilateral triangle. The unknown Steiner point must lie on arc AB of the circle, and this replacement ensures that, no matter where it is placed, the length of the tree remains unchanged. === Square === Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to a {\displaystyle a} then the length of the diagonal is equal to a 2 {\displaystyle a{\sqrt {2}}} according to the Pythagorean theorem, and Ptolemy's relation obviously holds. === Rectangle === More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of the diagonals is then d2, the right hand side of Ptolemy's relation is the sum a2 + b2. Copernicus – who used Ptolemy's theorem extensively in his trigonometrical work – refers to this result as a 'Porism' or self-evident corollary: Furthermore it is clear (manifestum est) that when the chord subtending an arc has been given, that chord too can be found which subtends the rest of the semicircle. === Pentagon === A more interesting example is the relation between the length a of the side and the (common) length b of the 5 chords in a regular pentagon. 
By completing the square, the relation yields the golden ratio: b ⋅ b = a ⋅ a + a ⋅ b b 2 − a b = a 2 b 2 a 2 − a b a 2 = a 2 a 2 ( b a ) 2 − b a + ( 1 2 ) 2 = 1 + ( 1 2 ) 2 ( b a − 1 2 ) 2 = 5 4 b a − 1 2 = ± 5 2 b a > 0 ⇒ φ = b a = 1 + 5 2 {\displaystyle {\begin{array}{rl}b\cdot b\,\;\;\qquad \quad \qquad =&\!\!\!\!a\!\cdot \!a+a\!\cdot \!b\\b^{2}\;\;-ab\quad \qquad =&\!\!a^{2}\\{\frac {b^{2}}{a^{2}}}\;\;-{\frac {ab}{a^{2}}}\;\;\;\qquad =&\!\!\!{\frac {a^{2}}{a^{2}}}\\\left({\frac {b}{a}}\right)^{2}-{\frac {b}{a}}+\left({\frac {1}{2}}\right)^{2}=&\!\!1+\left({\frac {1}{2}}\right)^{2}\\\left({\frac {b}{a}}-{\frac {1}{2}}\right)^{2}=&\!\!\quad {\frac {5}{4}}\\{\frac {b}{a}}-{\frac {1}{2}}\;\;\;=&\!\!\!\!\pm {\frac {\sqrt {5}}{2}}\\{\frac {b}{a}}>0\,\Rightarrow \,\varphi ={\frac {b}{a}}=&\!\!\!\!{\frac {1+{\sqrt {5}}}{2}}\end{array}}} === Side of decagon === If now diameter AF is drawn bisecting DC so that DF and CF are sides c of an inscribed decagon, Ptolemy's Theorem can again be applied – this time to cyclic quadrilateral ADFC with diameter d as one of its diagonals: a d = 2 b c {\displaystyle ad=2bc} ⇒ a d = 2 φ a c {\displaystyle \Rightarrow ad=2\varphi ac} where φ {\displaystyle \varphi } is the golden ratio. ⇒ c = d 2 φ . {\displaystyle \Rightarrow c={\frac {d}{2\varphi }}.} whence the side of the inscribed decagon is obtained in terms of the circle diameter. Pythagoras's theorem applied to right triangle AFD then yields "b" in terms of the diameter and "a" the side of the pentagon is thereafter calculated as a = b φ = b ( φ − 1 ) . {\displaystyle a={\frac {b}{\varphi }}=b\left(\varphi -1\right).} As Copernicus (following Ptolemy) wrote, "The diameter of a circle being given, the sides of the triangle, tetragon, pentagon, hexagon and decagon, which the same circle circumscribes, are also given." == Proofs == === Visual proof === The animation here shows a visual demonstration of Ptolemy's theorem, based on Derrick & Herstein (2012). 
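The corollaries above invite a numerical check. The sketch below verifies the equilateral-triangle identity q = p + r for a sample point on the circumcircle, the golden-ratio value of b/a for the pentagon, and the decagon relation c = d/(2φ):

```python
from math import sin, cos, radians, dist, sqrt  # math.dist: Python 3.8+

phi = (1 + sqrt(5)) / 2

# Equilateral triangle on the unit circle, plus a point P on arc AB.
A, B, C = [(cos(radians(t)), sin(radians(t))) for t in (90, 210, 330)]
P = (cos(radians(150)), sin(radians(150)))
p, r, q = dist(P, A), dist(P, B), dist(P, C)   # C is the vertex farthest from P
assert abs(q - (p + r)) < 1e-12

# Regular pentagon in a circle of radius R: side a, diagonal b, and b/a = phi.
R = 1.0
a = 2 * R * sin(radians(36))    # a side subtends a 72° arc
b = 2 * R * sin(radians(72))    # a diagonal subtends a 144° arc
assert abs(b / a - phi) < 1e-12

# Inscribed decagon: side c subtends a 36° arc, and c = d/(2*phi) with d = 2R.
c = 2 * R * sin(radians(18))
assert abs(c - (2 * R) / (2 * phi)) < 1e-12
```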
=== Proof by similarity of triangles === Let ABCD be a cyclic quadrilateral. On the chord BC, the inscribed angles ∠BAC = ∠BDC, and on AB, ∠ADB = ∠ACB. Construct K on AC such that ∠ABK = ∠CBD; since ∠ABK + ∠CBK = ∠ABC = ∠CBD + ∠ABD, ∠CBK = ∠ABD. Now, by common angles △ABK is similar to △DBC, and likewise △ABD is similar to △KBC. Thus AK/AB = CD/BD, and CK/BC = DA/BD; equivalently, AK⋅BD = AB⋅CD, and CK⋅BD = BC⋅DA. By adding two equalities we have AK⋅BD + CK⋅BD = AB⋅CD + BC⋅DA, and factorizing this gives (AK+CK)·BD = AB⋅CD + BC⋅DA. But AK+CK = AC, so AC⋅BD = AB⋅CD + BC⋅DA, Q.E.D. The proof as written is only valid for simple cyclic quadrilaterals. If the quadrilateral is self-crossing then K will be located outside the line segment AC. But in this case, AK−CK = ±AC, giving the expected result. === Proof by trigonometric identities === Let the inscribed angles subtended by A B {\displaystyle AB} , B C {\displaystyle BC} and C D {\displaystyle CD} be, respectively, α {\displaystyle \alpha } , β {\displaystyle \beta } and γ {\displaystyle \gamma } , and the radius of the circle be R {\displaystyle R} , then we have A B = 2 R sin α {\displaystyle AB=2R\sin \alpha } , B C = 2 R sin β {\displaystyle BC=2R\sin \beta } , C D = 2 R sin γ {\displaystyle CD=2R\sin \gamma } , A D = 2 R sin ( 180 ∘ − ( α + β + γ ) ) {\displaystyle AD=2R\sin(180^{\circ }-(\alpha +\beta +\gamma ))} , A C = 2 R sin ( α + β ) {\displaystyle AC=2R\sin(\alpha +\beta )} and B D = 2 R sin ( β + γ ) {\displaystyle BD=2R\sin(\beta +\gamma )} , and the original equality to be proved is transformed to sin ( α + β ) sin ( β + γ ) = sin α sin γ + sin β sin ( α + β + γ ) {\displaystyle \sin(\alpha +\beta )\sin(\beta +\gamma )=\sin \alpha \sin \gamma +\sin \beta \sin(\alpha +\beta +\gamma )} from which the factor 4 R 2 {\displaystyle 4R^{2}} has disappeared by dividing both sides of the equation by it. 
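The identity being proved can also be checked numerically for an arbitrary cyclic quadrilateral; the circumradius and angular positions below are arbitrary illustrative values:

```python
import math

R = 2.5                    # arbitrary circumradius
t = [0.3, 1.1, 2.4, 4.0]   # angular positions, in cyclic order on the circle
A, B, C, D = (complex(R * math.cos(x), R * math.sin(x)) for x in t)

def dist(p, q):
    return abs(p - q)

# Ptolemy: product of diagonals equals the sum of products of opposite sides.
lhs = dist(A, C) * dist(B, D)
rhs = dist(A, B) * dist(C, D) + dist(B, C) * dist(D, A)
```

For any concyclic points listed in cyclic order, `lhs` and `rhs` agree up to rounding.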
Now by using the sum formulae, sin ( x + y ) = sin x cos y + cos x sin y {\displaystyle \sin(x+y)=\sin {x}\cos y+\cos x\sin y} and cos ( x + y ) = cos x cos y − sin x sin y {\displaystyle \cos(x+y)=\cos x\cos y-\sin x\sin y} , it is trivial to show that both sides of the above equation are equal to sin α sin β cos β cos γ + sin α cos 2 β sin γ + cos α sin 2 β cos γ + cos α sin β cos β sin γ . {\displaystyle {\begin{aligned}&\sin \alpha \sin \beta \cos \beta \cos \gamma +\sin \alpha \cos ^{2}\beta \sin \gamma \\+{}&\cos \alpha \sin ^{2}\beta \cos \gamma +\cos \alpha \sin \beta \cos \beta \sin \gamma .\end{aligned}}} Q.E.D. Here is another, perhaps more transparent, proof using rudimentary trigonometry. Define a new quadrilateral A B C D ′ {\displaystyle ABCD'} inscribed in the same circle, where A , B , C {\displaystyle A,B,C} are the same as in A B C D {\displaystyle ABCD} , and D ′ {\displaystyle D'} located at a new point on the same circle, defined by | A D ′ ¯ | = | C D ¯ | {\displaystyle |{\overline {AD'}}|=|{\overline {CD}}|} , | C D ′ ¯ | = | A D ¯ | {\displaystyle |{\overline {CD'}}|=|{\overline {AD}}|} . (Picture triangle A C D {\displaystyle ACD} flipped, so that vertex C {\displaystyle C} moves to vertex A {\displaystyle A} and vertex A {\displaystyle A} moves to vertex C {\displaystyle C} . Vertex D {\displaystyle D} will now be located at a new point D′ on the circle.) Then, A B C D ′ {\displaystyle ABCD'} has the same edge lengths, and consequently the same inscribed angles subtended by the corresponding edges, as A B C D {\displaystyle ABCD} , only in a different order. That is, α {\displaystyle \alpha } , β {\displaystyle \beta } and γ {\displaystyle \gamma } , for, respectively, A B , B C {\displaystyle AB,BC} and A D ′ {\displaystyle AD'} . Also, A B C D {\displaystyle ABCD} and A B C D ′ {\displaystyle ABCD'} have the same area.
Then, A r e a ( A B C D ) = 1 2 A C ⋅ B D ⋅ sin ( α + γ ) ; A r e a ( A B C D ′ ) = 1 2 A B ⋅ A D ′ ⋅ sin ( 180 ∘ − α − γ ) + 1 2 B C ⋅ C D ′ ⋅ sin ( α + γ ) = 1 2 ( A B ⋅ C D + B C ⋅ A D ) ⋅ sin ( α + γ ) . {\displaystyle {\begin{aligned}\mathrm {Area} (ABCD)&={\frac {1}{2}}AC\cdot BD\cdot \sin(\alpha +\gamma );\\\mathrm {Area} (ABCD')&={\frac {1}{2}}AB\cdot AD'\cdot \sin(180^{\circ }-\alpha -\gamma )+{\frac {1}{2}}BC\cdot CD'\cdot \sin(\alpha +\gamma )\\&={\frac {1}{2}}(AB\cdot CD+BC\cdot AD)\cdot \sin(\alpha +\gamma ).\end{aligned}}} Q.E.D. === Proof by inversion === Choose an auxiliary circle Γ {\displaystyle \Gamma } of radius r {\displaystyle r} centered at D with respect to which the circumcircle of ABCD is inverted into a line (see figure). Then A ′ B ′ + B ′ C ′ = A ′ C ′ . {\displaystyle A'B'+B'C'=A'C'.} Then A ′ B ′ , B ′ C ′ {\displaystyle A'B',B'C'} and A ′ C ′ {\displaystyle A'C'} can be expressed as A B ⋅ D B ′ D A {\textstyle {\frac {AB\cdot DB'}{DA}}} , B C ⋅ D B ′ D C {\textstyle {\frac {BC\cdot DB'}{DC}}} and A C ⋅ D C ′ D A {\textstyle {\frac {AC\cdot DC'}{DA}}} respectively. Multiplying each term by D A ⋅ D C D B ′ {\textstyle {\frac {DA\cdot DC}{DB'}}} and using D C ′ D B ′ = D B D C {\textstyle {\frac {DC'}{DB'}}={\frac {DB}{DC}}} yields Ptolemy's equality. Q.E.D. Note that if the quadrilateral is not cyclic then A', B' and C' form a triangle and hence A'B'+B'C' > A'C', giving us a very simple proof of Ptolemy's Inequality which is presented below. === Proof using complex numbers === Embed ABCD in the complex plane C {\displaystyle \mathbb {C} } by identifying A ↦ z A , … , D ↦ z D {\displaystyle A\mapsto z_{A},\ldots ,D\mapsto z_{D}} as four distinct complex numbers z A , … , z D ∈ C {\displaystyle z_{A},\ldots ,z_{D}\in \mathbb {C} } . Define the cross-ratio ζ := ( z A − z B ) ( z C − z D ) ( z A − z D ) ( z B − z C ) ∈ C ≠ 0 {\displaystyle \zeta :={\frac {(z_{A}-z_{B})(z_{C}-z_{D})}{(z_{A}-z_{D})(z_{B}-z_{C})}}\in \mathbb {C} _{\neq 0}} . 
Then A B ¯ ⋅ C D ¯ + A D ¯ ⋅ B C ¯ = | z A − z B | | z C − z D | + | z A − z D | | z B − z C | = | ( z A − z B ) ( z C − z D ) | + | ( z A − z D ) ( z B − z C ) | = ( | ( z A − z B ) ( z C − z D ) ( z A − z D ) ( z B − z C ) | + 1 ) | ( z A − z D ) ( z B − z C ) | = ( | ζ | + 1 ) | ( z A − z D ) ( z B − z C ) | ≥ | ( ζ + 1 ) ( z A − z D ) ( z B − z C ) | = | ( z A − z B ) ( z C − z D ) + ( z A − z D ) ( z B − z C ) | = | ( z A − z C ) ( z B − z D ) | = | z A − z C | | z B − z D | = A C ¯ ⋅ B D ¯ {\displaystyle {\begin{aligned}{\overline {AB}}\cdot {\overline {CD}}+{\overline {AD}}\cdot {\overline {BC}}&=\left|z_{A}-z_{B}\right|\left|z_{C}-z_{D}\right|+\left|z_{A}-z_{D}\right|\left|z_{B}-z_{C}\right|\\&=\left|(z_{A}-z_{B})(z_{C}-z_{D})\right|+\left|(z_{A}-z_{D})(z_{B}-z_{C})\right|\\&=\left(\left|{\frac {(z_{A}-z_{B})(z_{C}-z_{D})}{(z_{A}-z_{D})(z_{B}-z_{C})}}\right|+1\right)\left|(z_{A}-z_{D})(z_{B}-z_{C})\right|\\&=\left(\left|\zeta \right|+1\right)\left|(z_{A}-z_{D})(z_{B}-z_{C})\right|\\&\geq \left|(\zeta +1)(z_{A}-z_{D})(z_{B}-z_{C})\right|\\&=\left|(z_{A}-z_{B})(z_{C}-z_{D})+(z_{A}-z_{D})(z_{B}-z_{C})\right|\\&=\left|(z_{A}-z_{C})(z_{B}-z_{D})\right|\\&=\left|z_{A}-z_{C}\right|\left|z_{B}-z_{D}\right|\\&={\overline {AC}}\cdot {\overline {BD}}\end{aligned}}} with equality if and only if the cross-ratio ζ {\displaystyle \zeta } is a positive real number. This proves Ptolemy's inequality generally, as it remains only to show that z A , … , z D {\displaystyle z_{A},\ldots ,z_{D}} lie consecutively arranged on a circle (possibly of infinite radius, i.e. a line) in C {\displaystyle \mathbb {C} } if and only if ζ ∈ R > 0 {\displaystyle \zeta \in \mathbb {R} _{>0}} . 
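The chain of equalities above rests on the purely algebraic identity (z_A − z_B)(z_C − z_D) + (z_A − z_D)(z_B − z_C) = (z_A − z_C)(z_B − z_D), which holds for any four complex numbers, concyclic or not; a quick check on arbitrary values (the random seed is just for reproducibility):

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
zA, zB, zC, zD = (complex(random.uniform(-5, 5), random.uniform(-5, 5))
                  for _ in range(4))

lhs = (zA - zB) * (zC - zD) + (zA - zD) * (zB - zC)
rhs = (zA - zC) * (zB - zD)

# Applying the triangle inequality |u| + |v| >= |u + v| to the two summands
# of lhs is exactly Ptolemy's inequality for these four points.
identity_gap = abs(lhs - rhs)
```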
From the polar form of a complex number z = | z | e i arg ( z ) {\displaystyle z=\vert z\vert e^{i\arg(z)}} , it follows arg ( ζ ) = arg ( z A − z B ) ( z C − z D ) ( z A − z D ) ( z B − z C ) = arg ( z A − z B ) + arg ( z C − z D ) − arg ( z A − z D ) − arg ( z B − z C ) ( mod 2 π ) = arg ( z A − z B ) + arg ( z C − z D ) − arg ( z A − z D ) − arg ( z C − z B ) − arg ( − 1 ) ( mod 2 π ) = − [ arg ( z C − z B ) − arg ( z A − z B ) ] − [ arg ( z A − z D ) − arg ( z C − z D ) ] − arg ( − 1 ) ( mod 2 π ) = − ∠ A B C − ∠ C D A − π ( mod 2 π ) = 0 {\displaystyle {\begin{aligned}\arg(\zeta )&=\arg {\frac {(z_{A}-z_{B})(z_{C}-z_{D})}{(z_{A}-z_{D})(z_{B}-z_{C})}}\\&=\arg(z_{A}-z_{B})+\arg(z_{C}-z_{D})-\arg(z_{A}-z_{D})-\arg(z_{B}-z_{C}){\pmod {2\pi }}\\&=\arg(z_{A}-z_{B})+\arg(z_{C}-z_{D})-\arg(z_{A}-z_{D})-\arg(z_{C}-z_{B})-\arg(-1){\pmod {2\pi }}\\&=-\left[\arg(z_{C}-z_{B})-\arg(z_{A}-z_{B})\right]-\left[\arg(z_{A}-z_{D})-\arg(z_{C}-z_{D})\right]-\arg(-1){\pmod {2\pi }}\\&=-\angle ABC-\angle CDA-\pi {\pmod {2\pi }}\\&=0\end{aligned}}} with the last equality holding if and only if ABCD is cyclic, since a quadrilateral is cyclic if and only if opposite angles sum to π {\displaystyle \pi } . Q.E.D. Note that this proof is equivalently made by observing that the cyclicity of ABCD, i.e. the supplementarity ∠ A B C {\displaystyle \angle ABC} and ∠ C D A {\displaystyle \angle CDA} , is equivalent to the condition arg [ ( z A − z B ) ( z C − z D ) ] = arg [ ( z A − z D ) ( z B − z C ) ] = arg [ ( z A − z C ) ( z B − z D ) ] ( mod 2 π ) {\displaystyle \arg \left[(z_{A}-z_{B})(z_{C}-z_{D})\right]=\arg \left[(z_{A}-z_{D})(z_{B}-z_{C})\right]=\arg \left[(z_{A}-z_{C})(z_{B}-z_{D})\right]{\pmod {2\pi }}} ; in particular there is a rotation of C {\displaystyle \mathbb {C} } in which this arg {\displaystyle \arg } is 0 (i.e. 
all three products are positive real numbers), and by which Ptolemy's theorem A B ¯ ⋅ C D ¯ + A D ¯ ⋅ B C ¯ = A C ¯ ⋅ B D ¯ {\displaystyle {\overline {AB}}\cdot {\overline {CD}}+{\overline {AD}}\cdot {\overline {BC}}={\overline {AC}}\cdot {\overline {BD}}} is then directly established from the simple algebraic identity ( z A − z B ) ( z C − z D ) + ( z A − z D ) ( z B − z C ) = ( z A − z C ) ( z B − z D ) . {\displaystyle (z_{A}-z_{B})(z_{C}-z_{D})+(z_{A}-z_{D})(z_{B}-z_{C})=(z_{A}-z_{C})(z_{B}-z_{D}).} == Corollaries == In the case of a circle of unit diameter the sides S 1 , S 2 , S 3 , S 4 {\displaystyle S_{1},S_{2},S_{3},S_{4}} of any cyclic quadrilateral ABCD are numerically equal to the sines of the angles θ 1 , θ 2 , θ 3 {\displaystyle \theta _{1},\theta _{2},\theta _{3}} and θ 4 {\displaystyle \theta _{4}} which they subtend (see Law of sines). Similarly the diagonals are equal to the sine of the sum of whichever pair of angles they subtend. We may then write Ptolemy's Theorem in the following trigonometric form: sin θ 1 sin θ 3 + sin θ 2 sin θ 4 = sin ( θ 1 + θ 2 ) sin ( θ 1 + θ 4 ) {\displaystyle \sin \theta _{1}\sin \theta _{3}+\sin \theta _{2}\sin \theta _{4}=\sin(\theta _{1}+\theta _{2})\sin(\theta _{1}+\theta _{4})} Applying certain conditions to the subtended angles θ 1 , θ 2 , θ 3 {\displaystyle \theta _{1},\theta _{2},\theta _{3}} and θ 4 {\displaystyle \theta _{4}} it is possible to derive a number of important corollaries using the above as our starting point. In what follows it is important to bear in mind that the sum of angles θ 1 + θ 2 + θ 3 + θ 4 = 180 ∘ {\displaystyle \theta _{1}+\theta _{2}+\theta _{3}+\theta _{4}=180^{\circ }} . === Corollary 1. Pythagoras's theorem === Let θ 1 = θ 3 {\displaystyle \theta _{1}=\theta _{3}} and θ 2 = θ 4 {\displaystyle \theta _{2}=\theta _{4}} . 
Then θ 1 + θ 2 = θ 3 + θ 4 = 90 ∘ {\displaystyle \theta _{1}+\theta _{2}=\theta _{3}+\theta _{4}=90^{\circ }} (since opposite angles of a cyclic quadrilateral are supplementary). Then: sin θ 1 sin θ 3 + sin θ 2 sin θ 4 = sin ( θ 1 + θ 2 ) sin ( θ 1 + θ 4 ) {\displaystyle \sin \theta _{1}\sin \theta _{3}+\sin \theta _{2}\sin \theta _{4}=\sin(\theta _{1}+\theta _{2})\sin(\theta _{1}+\theta _{4})} sin 2 θ 1 + sin 2 θ 2 = sin 2 ( θ 1 + θ 2 ) {\displaystyle \sin ^{2}\theta _{1}+\sin ^{2}\theta _{2}=\sin ^{2}(\theta _{1}+\theta _{2})} sin 2 θ 1 + cos 2 θ 1 = 1 {\displaystyle \sin ^{2}\theta _{1}+\cos ^{2}\theta _{1}=1} === Corollary 2. The law of cosines === Let θ 2 = θ 4 {\displaystyle \theta _{2}=\theta _{4}} . The rectangle of corollary 1 is now a symmetrical trapezium with equal diagonals and a pair of equal sides. The parallel sides differ in length by 2 x {\displaystyle 2x} units where: x = S 2 cos ( θ 2 + θ 3 ) {\displaystyle x=S_{2}\cos(\theta _{2}+\theta _{3})} It will be easier in this case to revert to the standard statement of Ptolemy's theorem: S 1 S 3 + S 2 S 4 = A C ¯ ⋅ B D ¯ ⇒ S 1 S 3 + S 2 2 = A C ¯ 2 ⇒ S 1 [ S 1 − 2 S 2 cos ( θ 2 + θ 3 ) ] + S 2 2 = A C ¯ 2 ⇒ S 1 2 + S 2 2 − 2 S 1 S 2 cos ( θ 2 + θ 3 ) = A C ¯ 2 {\displaystyle {\begin{array}{lcl}S_{1}S_{3}+S_{2}S_{4}={\overline {AC}}\cdot {\overline {BD}}\\\Rightarrow S_{1}S_{3}+{S_{2}}^{2}={\overline {AC}}^{2}\\\Rightarrow S_{1}[S_{1}-2S_{2}\cos(\theta _{2}+\theta _{3})]+{S_{2}}^{2}={\overline {AC}}^{2}\\\Rightarrow {S_{1}}^{2}+{S_{2}}^{2}-2S_{1}S_{2}\cos(\theta _{2}+\theta _{3})={\overline {AC}}^{2}\\\end{array}}} The cosine rule for triangle ABC. === Corollary 3. Compound angle sine (+) === Let θ 1 + θ 2 = θ 3 + θ 4 = 90 ∘ . 
{\displaystyle \theta _{1}+\theta _{2}=\theta _{3}+\theta _{4}=90^{\circ }.} Then sin θ 1 sin θ 3 + sin θ 2 sin θ 4 = sin ( θ 3 + θ 2 ) sin ( θ 3 + θ 4 ) {\displaystyle \sin \theta _{1}\sin \theta _{3}+\sin \theta _{2}\sin \theta _{4}=\sin(\theta _{3}+\theta _{2})\sin(\theta _{3}+\theta _{4})} Therefore, cos θ 2 sin θ 3 + sin θ 2 cos θ 3 = sin ( θ 3 + θ 2 ) × 1 {\displaystyle \cos \theta _{2}\sin \theta _{3}+\sin \theta _{2}\cos \theta _{3}=\sin(\theta _{3}+\theta _{2})\times 1} Formula for compound angle sine (+). === Corollary 4. Compound angle sine (−) === Let θ 1 = 90 ∘ {\displaystyle \theta _{1}=90^{\circ }} . Then θ 2 + ( θ 3 + θ 4 ) = 90 ∘ {\displaystyle \theta _{2}+(\theta _{3}+\theta _{4})=90^{\circ }} . Hence, sin θ 1 sin θ 3 + sin θ 2 sin θ 4 = sin ( θ 3 + θ 2 ) sin ( θ 3 + θ 4 ) {\displaystyle \sin \theta _{1}\sin \theta _{3}+\sin \theta _{2}\sin \theta _{4}=\sin(\theta _{3}+\theta _{2})\sin(\theta _{3}+\theta _{4})} sin θ 3 + sin θ 2 cos ( θ 2 + θ 3 ) = sin ( θ 3 + θ 2 ) cos θ 2 {\displaystyle \sin \theta _{3}+\sin \theta _{2}\cos(\theta _{2}+\theta _{3})=\sin(\theta _{3}+\theta _{2})\cos \theta _{2}} sin θ 3 = sin ( θ 3 + θ 2 ) cos θ 2 − cos ( θ 2 + θ 3 ) sin θ 2 {\displaystyle \sin \theta _{3}=\sin(\theta _{3}+\theta _{2})\cos \theta _{2}-\cos(\theta _{2}+\theta _{3})\sin \theta _{2}} Formula for compound angle sine (−). This derivation corresponds to the Third Theorem as chronicled by Copernicus following Ptolemy in Almagest. In particular if the sides of a pentagon (subtending 36° at the circumference) and of a hexagon (subtending 30° at the circumference) are given, a chord subtending 6° may be calculated. This was a critical step in the ancient method of calculating tables of chords. === Corollary 5. Compound angle cosine (+) === This corollary is the core of the Fifth Theorem as chronicled by Copernicus following Ptolemy in Almagest. Let θ 3 = 90 ∘ {\displaystyle \theta _{3}=90^{\circ }} . 
Then θ 1 + ( θ 2 + θ 4 ) = 90 ∘ {\displaystyle \theta _{1}+(\theta _{2}+\theta _{4})=90^{\circ }} . Hence sin θ 1 sin θ 3 + sin θ 2 sin θ 4 = sin ( θ 3 + θ 2 ) sin ( θ 3 + θ 4 ) {\displaystyle \sin \theta _{1}\sin \theta _{3}+\sin \theta _{2}\sin \theta _{4}=\sin(\theta _{3}+\theta _{2})\sin(\theta _{3}+\theta _{4})} cos ( θ 2 + θ 4 ) + sin θ 2 sin θ 4 = cos θ 2 cos θ 4 {\displaystyle \cos(\theta _{2}+\theta _{4})+\sin \theta _{2}\sin \theta _{4}=\cos \theta _{2}\cos \theta _{4}} cos ( θ 2 + θ 4 ) = cos θ 2 cos θ 4 − sin θ 2 sin θ 4 {\displaystyle \cos(\theta _{2}+\theta _{4})=\cos \theta _{2}\cos \theta _{4}-\sin \theta _{2}\sin \theta _{4}} Formula for compound angle cosine (+) Despite lacking the dexterity of our modern trigonometric notation, it should be clear from the above corollaries that in Ptolemy's theorem (or more simply the Second Theorem) the ancient world had at its disposal an extremely flexible and powerful trigonometric tool which enabled the cognoscenti of those times to draw up accurate tables of chords (corresponding to tables of sines) and to use these in their attempts to understand and map the cosmos as they saw it. Since tables of chords were drawn up by Hipparchus three centuries before Ptolemy, we must assume he knew of the 'Second Theorem' and its derivatives. Following the trail of ancient astronomers, history records the star catalogue of Timocharis of Alexandria. If, as seems likely, the compilation of such catalogues required an understanding of the 'Second Theorem', then the true origins of the latter disappear thereafter into the mists of antiquity; but it cannot be unreasonable to presume that the astronomers, architects and construction engineers of ancient Egypt may have had some knowledge of it. == Ptolemy's inequality == The equation in Ptolemy's theorem is never true with non-cyclic quadrilaterals. Ptolemy's inequality is an extension of this fact, and it is a more general form of Ptolemy's theorem. 
It states that, given a quadrilateral ABCD, then A B ¯ ⋅ C D ¯ + B C ¯ ⋅ D A ¯ ≥ A C ¯ ⋅ B D ¯ {\displaystyle {\overline {AB}}\cdot {\overline {CD}}+{\overline {BC}}\cdot {\overline {DA}}\geq {\overline {AC}}\cdot {\overline {BD}}} where equality holds if and only if the quadrilateral is cyclic. This special case is equivalent to Ptolemy's theorem. == Related theorem about the ratio of the diagonals == Ptolemy's theorem gives the product of the diagonals (of a cyclic quadrilateral) knowing the sides. The following theorem yields the same for the ratio of the diagonals. A C B D = A B ⋅ D A + B C ⋅ C D A B ⋅ B C + D A ⋅ C D {\displaystyle {\frac {AC}{BD}}={\frac {AB\cdot DA+BC\cdot CD}{AB\cdot BC+DA\cdot CD}}} Proof: It is known that the area of a triangle A B C {\displaystyle ABC} inscribed in a circle of radius R {\displaystyle R} is: A = A B ⋅ B C ⋅ C A 4 R {\displaystyle {\mathcal {A}}={\frac {AB\cdot BC\cdot CA}{4R}}} Writing the area of the quadrilateral as sum of two triangles sharing the same circumscribing circle, we obtain two relations for each decomposition. A tot = A B ⋅ B C ⋅ C A 4 R + C D ⋅ D A ⋅ A C 4 R = A C ⋅ ( A B ⋅ B C + C D ⋅ D A ) 4 R {\displaystyle {\mathcal {A}}_{\text{tot}}={\frac {AB\cdot BC\cdot CA}{4R}}+{\frac {CD\cdot DA\cdot AC}{4R}}={\frac {AC\cdot (AB\cdot BC+CD\cdot DA)}{4R}}} A tot = A B ⋅ B D ⋅ D A 4 R + B C ⋅ C D ⋅ D B 4 R = B D ⋅ ( A B ⋅ D A + B C ⋅ C D ) 4 R {\displaystyle {\mathcal {A}}_{\text{tot}}={\frac {AB\cdot BD\cdot DA}{4R}}+{\frac {BC\cdot CD\cdot DB}{4R}}={\frac {BD\cdot (AB\cdot DA+BC\cdot CD)}{4R}}} Equating, we obtain the announced formula. 
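The ratio formula can be checked alongside the direct computation on a concrete cyclic quadrilateral; the circumradius and angular positions are illustrative:

```python
import math

R = 3.0
t = [0.2, 0.9, 2.0, 3.5]   # points in cyclic order on the circle
A, B, C, D = (complex(R * math.cos(x), R * math.sin(x)) for x in t)

def d(p, q):
    return abs(p - q)

# AC/BD computed directly, and via the ratio-of-diagonals theorem.
ratio_direct = d(A, C) / d(B, D)
ratio_formula = ((d(A, B) * d(D, A) + d(B, C) * d(C, D))
                 / (d(A, B) * d(B, C) + d(D, A) * d(C, D)))
```

Combined with Ptolemy's product formula, this recovers the closed forms for AC² and BD² given below.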
Consequence: Knowing both the product and the ratio of the diagonals, we deduce their immediate expressions: A C 2 = A C ⋅ B D ⋅ A C B D = ( A B ⋅ C D + B C ⋅ D A ) A B ⋅ D A + B C ⋅ C D A B ⋅ B C + D A ⋅ C D B D 2 = A C ⋅ B D A C B D = ( A B ⋅ C D + B C ⋅ D A ) A B ⋅ B C + D A ⋅ C D A B ⋅ D A + B C ⋅ C D {\displaystyle {\begin{aligned}AC^{2}&=AC\cdot BD\cdot {\frac {AC}{BD}}=(AB\cdot CD+BC\cdot DA){\frac {AB\cdot DA+BC\cdot CD}{AB\cdot BC+DA\cdot CD}}\\[8pt]BD^{2}&={\frac {AC\cdot BD}{\frac {AC}{BD}}}=(AB\cdot CD+BC\cdot DA){\frac {AB\cdot BC+DA\cdot CD}{AB\cdot DA+BC\cdot CD}}\end{aligned}}} == See also == Casey's theorem Intersecting chords theorem Greek mathematics == Notes == == References == Coxeter, H. S. M. and S. L. Greitzer (1967) "Ptolemy's Theorem and its Extensions." §2.6 in Geometry Revisited, Mathematical Association of America pp. 42–43. Copernicus (1543) De Revolutionibus Orbium Coelestium, English translation found in On the Shoulders of Giants (2002) edited by Stephen Hawking, Penguin Books ISBN 0-14-101571-3 Amarasinghe, G. W. I. S. (2013) A Concise Elementary Proof for the Ptolemy's Theorem, Global Journal of Advanced Research on Classical and Modern Geometries (GJARCMG) 2(1): 20–25 (pdf). == External links == Proof of Ptolemy's Theorem for Cyclic Quadrilateral MathPages – On Ptolemy's Theorem Elert, Glenn (1994). "Ptolemy's Table of Chords". E-World. Ptolemy's Theorem at cut-the-knot Compound angle proof at cut-the-knot Ptolemy's Theorem Archived 2011-07-24 at the Wayback Machine on PlanetMath Ptolemy Inequality on MathWorld De Revolutionibus Orbium Coelestium at Harvard. Deep Secrets: The Great Pyramid, the Golden Ratio and the Royal Cubit Ptolemy's Theorem by Jay Warendorff, The Wolfram Demonstrations Project. Book XIII of Euclid's Elements A Miraculous Proof (Ptolemy's Theorem) by Zvezdelina Stankova, on Numberphile.
Wikipedia:Pullback#0
In mathematics, a pullback is either of two different, but related processes: precomposition and fiber-product. Its dual is a pushforward. == Precomposition == Precomposition with a function probably provides the most elementary notion of pullback: in simple terms, a function f {\displaystyle f} of a variable y , {\displaystyle y,} where y {\displaystyle y} itself is a function of another variable x , {\displaystyle x,} may be written as a function of x . {\displaystyle x.} This is the pullback of f {\displaystyle f} by the function y . {\displaystyle y.} f ( y ( x ) ) ≡ g ( x ) {\displaystyle f(y(x))\equiv g(x)} It is such a fundamental process that it is often passed over without mention. However, it is not just functions that can be "pulled back" in this sense. Pullbacks can be applied to many other objects such as differential forms and their cohomology classes; see Pullback (differential geometry) Pullback (cohomology) == Fiber-product == The pullback bundle is an example that bridges the notion of a pullback as precomposition, and the notion of a pullback as a Cartesian square. In that example, the base space of a fiber bundle is pulled back, in the sense of precomposition, above. The fibers then travel along with the points in the base space at which they are anchored: the resulting new pullback bundle looks locally like a Cartesian product of the new base space, and the (unchanged) fiber. The pullback bundle then has two projections: one to the base space, the other to the fiber; the product of the two becomes coherent when treated as a fiber product. === Generalizations and category theory === The notion of pullback as a fiber-product ultimately leads to the very general idea of a categorical pullback, but it has important special cases: inverse image (and pullback) sheaves in algebraic geometry, and pullback bundles in algebraic topology and differential geometry. 
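In code, the precomposition notion is just ordinary function composition read in the opposite direction of the map; a minimal sketch, where the names f, y, g mirror the notation above:

```python
def pullback(f, y):
    """Pull back f along y: returns g with g(x) = f(y(x))."""
    return lambda x: f(y(x))

f = lambda t: t * t   # a function of the variable y
y = lambda x: x + 1   # y itself as a function of x
g = pullback(f, y)    # g(x) = (x + 1)**2, i.e. f rewritten as a function of x
```

For example, g(2) == f(y(2)) == 9.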
See also: Pullback (category theory) Fibred category Inverse image sheaf == Functional analysis == When the pullback is studied as an operator acting on function spaces, it becomes a linear operator, and is known as the transpose or composition operator. Its adjoint is the push-forward, or, in the context of functional analysis, the transfer operator. == Relationship == The relation between the two notions of pullback can perhaps best be illustrated by sections of fiber bundles: if s {\displaystyle s} is a section of a fiber bundle E {\displaystyle E} over N , {\displaystyle N,} and f : M → N , {\displaystyle f:M\to N,} then the pullback (precomposition) f ∗ s = s ∘ f {\displaystyle f^{*}s=s\circ f} of s with f {\displaystyle f} is a section of the pullback (fiber-product) bundle f ∗ E {\displaystyle f^{*}E} over M . {\displaystyle M.} == See also == Inverse image functor – functor between categories of Abelian-group-valued sheaves induced by a continuous map between topological spaces; sheafification of the presheaf associating to an open set U the inductive limit of the groups associated to open supersets of U's image == References ==
Wikipedia:Pure mathematics#0
Pure mathematics is the study of mathematical concepts independently of any application outside mathematics. These concepts may originate in real-world concerns, and the results obtained may later turn out to be useful for practical applications, but pure mathematicians are not primarily motivated by such applications. Instead, the appeal is attributed to the intellectual challenge and aesthetic beauty of working out the logical consequences of basic principles. While pure mathematics has existed as an activity since at least ancient Greece, the concept was elaborated upon around the year 1900, after the introduction of theories with counter-intuitive properties (such as non-Euclidean geometries and Cantor's theory of infinite sets), and the discovery of apparent paradoxes (such as continuous functions that are nowhere differentiable, and Russell's paradox). This introduced the need to renew the concept of mathematical rigor and rewrite all mathematics accordingly, with a systematic use of axiomatic methods. This led many mathematicians to focus on mathematics for its own sake, that is, pure mathematics. Nevertheless, almost all mathematical theories remained motivated by problems coming from the real world or from less abstract mathematical theories. Also, many mathematical theories, which had seemed to be totally pure mathematics, were eventually used in applied areas, mainly physics and computer science. A famous early example is Isaac Newton's demonstration that his law of universal gravitation implied that planets move in orbits that are conic sections, geometrical curves that had been studied in antiquity by Apollonius. Another example is the problem of factoring large integers, which is the basis of the RSA cryptosystem, widely used to secure internet communications. 
It follows that, currently, the distinction between pure and applied mathematics is more a philosophical point of view or a mathematician's preference rather than a rigid subdivision of mathematics. == History == === Ancient Greece === Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic. Plato regarded logistic (arithmetic) as appropriate for businessmen and men of war who "must learn the art of numbers or [they] will not know how to array [their] troops" and arithmetic (number theory) as appropriate for philosophers "because [they have] to arise out of the sea of change and lay hold of true being." Euclid of Alexandria, when asked by one of his students of what use was the study of geometry, asked his slave to give the student threepence, "since he must make gain of what he learns." The Greek mathematician Apollonius of Perga was asked about the usefulness of some of his theorems in Book IV of Conics to which he proudly asserted, They are worthy of acceptance for the sake of the demonstrations themselves, in the same way as we accept many other things in mathematics for this and for no other reason. And since many of his results were not applicable to the science or engineering of his day, Apollonius further argued in the preface of the fifth book of Conics that the subject is one of those that "...seem worthy of study for their own sake." === 19th century === The term itself is enshrined in the full title of the Sadleirian Chair, "Sadleirian Professor of Pure Mathematics", founded (as a professorship) in the mid-nineteenth century. The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind between pure and applied. 
In the following years, specialisation and professionalisation (particularly in the Weierstrass approach to mathematical analysis) started to make a rift more apparent. === 20th century === At the start of the twentieth century mathematicians took up the axiomatic method, strongly influenced by David Hilbert's example. The logical formulation of pure mathematics suggested by Bertrand Russell in terms of a quantifier structure of propositions seemed more and more plausible, as large parts of mathematics became axiomatised and thus subject to the simple criteria of rigorous proof. Pure mathematics, according to a view that can be ascribed to the Bourbaki group, is what is proved. "Pure mathematician" became a recognized vocation, achievable through training. The case was made that pure mathematics is useful in engineering education: There is a training in habits of thought, points of view, and intellectual comprehension of ordinary engineering problems, which only the study of higher mathematics can give. == Generality and abstraction == One central concept in pure mathematics is the idea of generality; pure mathematics often exhibits a trend towards increased generality. Uses and advantages of generality include the following: Generalizing theorems or mathematical structures can lead to deeper understanding of the original theorems or structures Generality can simplify the presentation of material, resulting in shorter proofs or arguments that are easier to follow. One can use generality to avoid duplication of effort, proving a general result instead of having to prove separate cases independently, or using results from other areas of mathematics. Generality can facilitate connections between different branches of mathematics. Category theory is one area of mathematics dedicated to exploring this commonality of structure as it plays out in some areas of math. 
Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style. Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it, especially when it provides analogies to material for which one already has good intuition. As a prime example of generality, the Erlangen program involved an expansion of geometry to accommodate non-Euclidean geometries as well as the field of topology, and other forms of geometry, by viewing geometry as the study of a space together with a group of transformations. The study of numbers, called algebra at the beginning undergraduate level, extends to abstract algebra at a more advanced level; and the study of functions, called calculus at the college freshman level, becomes mathematical analysis and functional analysis at a more advanced level. Each of these branches of more abstract mathematics has many sub-specialties, and there are in fact many connections between pure mathematics and applied mathematics disciplines. A steep rise in abstraction was seen in the mid-20th century. In practice, however, these developments led to a sharp divergence from physics, particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold, as too much Hilbert, not enough Poincaré. The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central.
Although it is true that Hardy preferred pure mathematics, which he often compared to painting and poetry, Hardy saw the distinction between pure and applied mathematics to be simply that applied mathematics sought to express physical truth in a mathematical framework, whereas pure mathematics expressed truths that were independent of the physical world. Hardy made a separate distinction in mathematics between what he called "real" mathematics, "which has permanent aesthetic value", and "the dull and elementary parts of mathematics" that have practical use. Hardy considered some physicists, such as Einstein and Dirac, to be among the "real" mathematicians, but at the time that he was writing his Apology, he considered general relativity and quantum mechanics to be "useless", which allowed him to hold the opinion that only "dull" mathematics was useful. Moreover, Hardy briefly admitted that—just as the application of matrix theory and group theory to physics had come unexpectedly—the time may come where some kinds of beautiful, "real" mathematics may be useful as well. Another insightful view is offered by American mathematician Andy Magid: I've always thought that a good model here could be drawn from ring theory. In that subject, one has the subareas of commutative ring theory and non-commutative ring theory. An uninformed observer might think that these represent a dichotomy, but in fact the latter subsumes the former: a non-commutative ring is a not-necessarily-commutative ring. If we use similar conventions, then we could refer to applied mathematics and nonapplied mathematics, where by the latter we mean not-necessarily-applied mathematics... [emphasis added] Friedrich Engels argued in his 1878 book Anti-Dühring that "it is not at all true that in pure mathematics the mind deals only with its own creations and imaginations. 
The concepts of number and figure have not been invented from any source other than the world of reality".: 36 He further argued that "Before one came upon the idea of deducing the form of a cylinder from the rotation of a rectangle about one of its sides, a number of real rectangles and cylinders, however imperfect in form, must have been examined. Like all other sciences, mathematics arose out of the needs of men...But, as in every department of thought, at a certain stage of development the laws, which were abstracted from the real world, become divorced from the real world, and are set up against it as something independent, as laws coming from outside, to which the world has to conform.": 37 == See also == Applied mathematics Logic Metalogic Metamathematics == References == == External links == What is Pure Mathematics? – Department of Pure Mathematics, University of Waterloo The Principles of Mathematics by Bertrand Russell
|
Wikipedia:Puthumana Somayaji#0
|
Puthumana Somayaji (c.1660–1740) was a 17th-century astronomer-mathematician from Kerala, India. He was born into the Puthumana or Puthuvana (in Sanskrit, Nutanagriha or Nuthanvipina) family of Sivapuram (identified as present-day Thrissur). The most famous work attributed to Puthumana Somayaji is Karanapaddhati, which is a comprehensive treatise on astronomy. == Period of Somayaji == The period in which Somayaji lived is uncertain. There are several theories in this regard. C.M. Whish, the first westerner to write about Karanapaddhati, based on his interpretation that certain words appearing in the final verse of Karanapaddhati denote in the katapayadi system the number of days in the Kali Yuga, concluded that the book was completed in 1733 CE. Whish had also claimed that the grandson of the author of the Karanapaddhati was alive and was in his seventieth year at the time of writing his paper. Based on a reference to Puthumana Somayaji in a verse in Ganita Sucika Grantha by Govindabhatta, Raja Raja Varma placed the author of Karanapaddhati between 1375 and 1475 CE. An internal study of Karanapaddhati suggests that the work is contemporaneous with or even antedates the Tantrasangraha of Nilakantha Somayaji (1465–1545 CE). The date of composition of Karanapaddhati is given in the concluding verse by a chronogram which can be translated as 1732 CE. K. V. Sarma has argued for accepting this date as the most probable date of composition of Karanapaddhati. == Other works by Somayaji == Nyaaya Rathnam, an 8-chapter Ganitha Grantham Jaathakaadesa Maargam Smaartha-Praayaschitham Venvaarohaashtakam Pañcabodha Grahanaashtakam Grahana Ganitham == See also == Indian Mathematics Indian Astronomy List of astronomers and mathematicians of the Kerala school == References ==
|
Wikipedia:Pyotr Ulyanov#0
|
Pyotr Lavrentyevich Ulyanov (Russian: Пётр Лавре́нтьевич Улья́нов) (May 3, 1928 – November 13, 2006) was a Russian mathematician working on analysis. After graduating from Saratov State University in 1950, Ulyanov studied at Moscow State University, where he received in 1953 his Russian Candidate of Sciences degree (PhD) under the supervision of Nina Bari. In 1960 at Moscow State University he received his Russian Doctor of Science degree (habilitation) and became a professor. There from 1979 he headed the department of function theory and functional analysis. From 1957 he also worked at the Steklov Institute of Mathematics. In 1970 Ulyanov was an invited speaker in the section Ensembles exceptionelles en analyse with talk Allgemeine Entwicklungen und gemischte Fragen (General developments and special questions) delivered in German at the International Congress of Mathematicians in Nice. He was from 1981 a corresponding member and from 2006 a full member of the Russian Academy of Sciences. He was on the editorial board of Matematicheskii Sbornik. He was the founder of the International Saratov Winter School "Contemporary Problems of Function Theory and Their Applications". His doctoral students include Sergei Viktorovich Bochkarev, Boris Kashin, and Evgenii Nikishin. == References == D'yachenko, M. I.; Potapov, M. K.; Kashin, B. S. (2008), "Petr Lavrent'evich Ul'yanov (on the 80th anniversary of his birth, May 3, 1928–November 13, 2006)", Vestnik Moskovskogo Universiteta. Seriya I. Matematika, Mekhanika (3): 3–5, ISSN 0201-7385, MR 2517000 Pyotr Ulyanov at the Mathematics Genealogy Project Ульянов Петр Лаврентьевич == External links == picture of Ulyanov
|
Wikipedia:Pythagoras tree (fractal)#0
|
The Pythagoras tree is a plane fractal constructed from squares. Invented by the Dutch mathematics teacher Albert E. Bosman in 1942, it is named after the ancient Greek mathematician Pythagoras because each triple of touching squares encloses a right triangle, in a configuration traditionally used to depict the Pythagorean theorem. If the largest square has a size of L × L, the entire Pythagoras tree fits snugly inside a box of size 6L × 4L. The finer details of the tree resemble the Lévy C curve. == Construction == The construction of the Pythagoras tree begins with a square. Upon this square are constructed two squares, each scaled down by a linear factor of √2/2, such that the corners of the squares coincide pairwise. The same procedure is then applied recursively to the two smaller squares, ad infinitum. The illustration below shows the first few iterations in the construction process. This construction uses the simplest symmetric right triangle. Alternatively, the sides of the triangle can be chosen in recursively equal proportions, leading to the sides being proportional to the square root of the inverse golden ratio, and the areas of the squares being in golden ratio proportion. == Area == Iteration n in the construction adds 2 n {\displaystyle 2^{n}} squares of area 1 2 n {\displaystyle {\tfrac {1}{2^{n}}}} , for a total area of 1. Thus the area of the tree might seem to grow without bound in the limit as n → ∞. However, some of the squares overlap starting at the order 5 iteration, and the tree actually has a finite area because it fits inside a 6×4 box. It can be shown easily that the area A of the Pythagoras tree must be in the range 5 < A < 18, which can be narrowed down further with extra effort. Little seems to be known about the actual value of A. == Varying the angle == An interesting set of variations can be constructed by maintaining an isosceles triangle but changing the base angle (90 degrees for the standard Pythagoras tree). 
In particular, when the base half-angle is set to (30°) = arcsin(0.5), it is easily seen that the size of the squares remains constant. The first overlap occurs at the fourth iteration. The general pattern produced is the rhombitrihexagonal tiling, an array of hexagons bordered by the constructing squares. In the limit where the half-angle is 90 degrees, there is obviously no overlap, and the total area is twice the area of the base square. The Pythagoras tree was first constructed by Albert E. Bosman (1891–1961), a Dutch mathematics teacher, in 1942. == See also == Lévy C curve == References == == External links == Gallery of Pythagoras trees Filled Pythagoras Tree using VB6 by Edward Bole (Boleeman) Interactive generator with code "Pythagoras tree with different geometries as well as in 3D". Archived from the original on 2008-01-15. Pythagoras Tree by Enrique Zeleny based on a program by Eric W. Weisstein, The Wolfram Demonstrations Project. Weisstein, Eric W. "Pythagoras Tree". MathWorld. Three-dimensional Pythagoras tree MatLab script to generate Pythagoras Tree Construction step by step in the virtual reality software Neotrie VR Pourahmadazar, J.; Ghobadi, C.; Nourinia, J. (2011). "Novel Modified Pythagorean Tree Fractal Monopole Antennas for UWB Applications". IEEE Antennas and Wireless Propagation Letters. 10. New York: IEEE: 484–487. Bibcode:2011IAWPL..10..484P. doi:10.1109/LAWP.2011.2154354.
|
Wikipedia:Pythagorean means#0
|
In mathematics, the three classical Pythagorean means are the arithmetic mean (AM), the geometric mean (GM), and the harmonic mean (HM). These means were studied with proportions by Pythagoreans and later generations of Greek mathematicians because of their importance in geometry and music. == Definition == They are defined by: AM ( x 1 , … , x n ) = x 1 + ⋯ + x n n GM ( x 1 , … , x n ) = | x 1 × ⋯ × x n | n HM ( x 1 , … , x n ) = n 1 x 1 + ⋯ + 1 x n QM ( x 1 , … , x n ) = 1 n ( x 1 2 + x 2 2 + ⋯ + x n 2 ) {\displaystyle {\begin{aligned}\operatorname {AM} \left(x_{1},\;\ldots ,\;x_{n}\right)&={\frac {x_{1}+\;\cdots \;+x_{n}}{n}}\\[9pt]\operatorname {GM} \left(x_{1},\;\ldots ,\;x_{n}\right)&={\sqrt[{n}]{\left\vert x_{1}\times \,\cdots \,\times x_{n}\right\vert }}\\[9pt]\operatorname {HM} \left(x_{1},\;\ldots ,\;x_{n}\right)&={\frac {n}{\displaystyle {\frac {1}{x_{1}}}+\;\cdots \;+{\frac {1}{x_{n}}}}}\\[9pt]\operatorname {QM} \left(x_{1},\;\ldots ,\;x_{n}\right)&={\sqrt {{\frac {1}{n}}\left({x_{1}}^{2}+{x_{2}}^{2}+\cdots +{x_{n}}^{2}\right)}}\end{aligned}}} == Properties == Each mean, M {\textstyle \operatorname {M} } , has the following properties: First-order homogeneity M ( b x 1 , … , b x n ) = b M ( x 1 , … , x n ) {\displaystyle \operatorname {M} (bx_{1},\,\ldots ,\,bx_{n})=b\operatorname {M} (x_{1},\,\ldots ,\,x_{n})} Invariance under exchange M ( … , x i , … , x j , … ) = M ( … , x j , … , x i , … ) {\displaystyle \operatorname {M} (\ldots ,\,x_{i},\,\ldots ,\,x_{j},\,\ldots )=\operatorname {M} (\ldots ,\,x_{j},\,\ldots ,\,x_{i},\,\ldots )} for any i {\displaystyle i} and j {\displaystyle j} . 
Monotonicity a ≤ b → M ( a , x 1 , x 2 , … x n ) ≤ M ( b , x 1 , x 2 , … x n ) {\displaystyle a\leq b\rightarrow \operatorname {M} (a,x_{1},x_{2},\ldots x_{n})\leq \operatorname {M} (b,x_{1},x_{2},\ldots x_{n})} Idempotence ∀ x , M ( x , x , … x ) = x {\displaystyle \forall x,\;M(x,x,\ldots x)=x} Monotonicity and idempotence together imply that a mean of a set always lies between the extremes of the set: min ( x 1 , … , x n ) ≤ M ( x 1 , … , x n ) ≤ max ( x 1 , … , x n ) . {\displaystyle \min(x_{1},\,\ldots ,\,x_{n})\leq \operatorname {M} (x_{1},\,\ldots ,\,x_{n})\leq \max(x_{1},\,\ldots ,\,x_{n}).} The harmonic and arithmetic means are reciprocal duals of each other for positive arguments, HM ( 1 x 1 , … , 1 x n ) = 1 AM ( x 1 , … , x n ) , {\displaystyle \operatorname {HM} \left({\frac {1}{x_{1}}},\,\ldots ,\,{\frac {1}{x_{n}}}\right)={\frac {1}{\operatorname {AM} \left(x_{1},\,\ldots ,\,x_{n}\right)}},} while the geometric mean is its own reciprocal dual: GM ( 1 x 1 , … , 1 x n ) = 1 GM ( x 1 , … , x n ) . {\displaystyle \operatorname {GM} \left({\frac {1}{x_{1}}},\,\ldots ,\,{\frac {1}{x_{n}}}\right)={\frac {1}{\operatorname {GM} \left(x_{1},\,\ldots ,\,x_{n}\right)}}.} == Inequalities among means == There is an ordering to these means (if all of the x i {\displaystyle x_{i}} are positive) min ≤ HM ≤ GM ≤ AM ≤ max {\displaystyle \min \leq \operatorname {HM} \leq \operatorname {GM} \leq \operatorname {AM} \leq \max } with equality holding if and only if the x i {\displaystyle x_{i}} are all equal. This is a generalization of the inequality of arithmetic and geometric means and a special case of an inequality for generalized means. The proof follows from the arithmetic–geometric mean inequality, AM ≤ max {\displaystyle \operatorname {AM} \leq \max } , and reciprocal duality ( min {\displaystyle \min } and max {\displaystyle \max } are also reciprocal dual to each other). 
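The definitions and the HM ≤ GM ≤ AM ordering above can be checked numerically; the following is a minimal Python sketch (illustrative only — the function names are chosen here for the example and are not part of any standard):

```python
import math

def am(xs):
    # Arithmetic mean: sum divided by count.
    return sum(xs) / len(xs)

def gm(xs):
    # Geometric mean: n-th root of the product (positive inputs assumed).
    return math.prod(xs) ** (1 / len(xs))

def hm(xs):
    # Harmonic mean: reciprocal of the arithmetic mean of reciprocals.
    return len(xs) / sum(1 / x for x in xs)

xs = [4.0, 36.0]
print(hm(xs), gm(xs), am(xs))        # HM <= GM <= AM: 7.2, 12.0, 20.0 up to rounding
assert hm(xs) <= gm(xs) <= am(xs)

# Reciprocal duality: HM(1/x1, ..., 1/xn) == 1/AM(x1, ..., xn)
assert math.isclose(hm([1 / x for x in xs]), 1 / am(xs))
```

Python's standard statistics module also provides mean, geometric_mean, and harmonic_mean for the same quantities.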
The study of the Pythagorean means is closely related to the study of majorization and Schur-convex functions. The harmonic and geometric means are concave symmetric functions of their arguments, and hence Schur-concave, while the arithmetic mean is a linear function of its arguments and hence is both concave and convex. == History == Almost everything that we know about the Pythagorean means came from arithmetic handbooks written in the first and second century. Nicomachus of Gerasa says that they were "acknowledged by all the ancients, Pythagoras, Plato and Aristotle." Their earliest known use is a fragment of the Pythagorean philosopher Archytas of Tarentum: There are three means in music: one is arithmetic, second is the geometric, third is sub-contrary, which they call harmonic. The mean is arithmetic when three terms are in proportion such that the excess by which the first exceeds the second is that by which the second exceeds the third. In this proportion it turns out that the interval of the greater terms is less, but that of the lesser terms greater. The mean is the geometric when they are such that as the first is to the second, so the second is to the third. Of these terms the greater and the lesser have the interval between them equal. Subcontrary, which we call harmonic, is the mean when they are such that, by whatever part of itself the first term exceeds the second, by that part of the third the middle term exceeds the third. It turns out that in this proportion the interval between the greater terms is greater and that between the lesser terms is less. The name "harmonic mean", according to Iamblichus, was coined by Archytas and Hippasus. The Pythagorean means also appear in Plato's Timaeus. Another evidence of their early use is a commentary by Pappus. It was [...] 
Theaetetus who distinguished the powers which are commensurable in length from those which are incommensurable, and who divided the more generally known irrational lines according to the different means, assigning the medial lines to geometry, the binomial to arithmetic, and the apotome to harmony, as is stated by Eudemus, the Peripatetic. The term "mean" (Ancient Greek μεσότης, mesótēs) appears in the Neopythagorean arithmetic handbooks in connection with the term "proportion" (Ancient Greek ἀναλογία, analogía). == Smallest distinct positive integer means == Of all pairs of different natural numbers of the form (a, b) such that a < b, the smallest (as defined by least value of a + b) for which the arithmetic, geometric and harmonic means are all also natural numbers are (5, 45) and (10, 40). == See also == Arithmetic–geometric mean Average Golden ratio Kepler triangle == Notes == == References == == External links == Cantrell, David W. "Pythagorean Means". MathWorld.
|
Wikipedia:Pythagorean theorem#0
|
In mathematics, the Pythagorean theorem or Pythagoras' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. The theorem can be written as an equation relating the lengths of the sides a, b and the hypotenuse c, sometimes called the Pythagorean equation: a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} The theorem is named for the Greek philosopher Pythagoras, born around 570 BC. The theorem has been proved numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years. When Euclidean space is represented by a Cartesian coordinate system in analytic geometry, Euclidean distance satisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points. The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all but n-dimensional solids. == Proofs using constructed squares == === Rearrangement proofs === In one rearrangement proof, two squares are used whose sides have a measure of a + b {\displaystyle a+b} and which contain four right triangles whose sides are a, b and c, with the hypotenuse being c. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are length c. 
Each outer square has an area of ( a + b ) 2 {\displaystyle (a+b)^{2}} as well as 2 a b + c 2 {\displaystyle 2ab+c^{2}} , with 2 a b {\displaystyle 2ab} representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of length a and b. These rectangles in their new position have now delineated two new squares, one of side length a in the bottom-left corner and another of side length b in the top-right corner. In this new position, the left side now has a square of area ( a + b ) 2 {\displaystyle (a+b)^{2}} as well as 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . Since both outer squares have the area ( a + b ) 2 {\displaystyle (a+b)^{2}} , it follows that the two other expressions for this area are equal: 2 a b + c 2 {\displaystyle 2ab+c^{2}} = 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . With the area of the four triangles removed from both sides of the equation, what remains is a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} In another proof, the rectangles in the second box can also be placed such that both have one corner that corresponds to consecutive corners of the square. In this way they also form two boxes, this time in consecutive corners, with areas a 2 {\displaystyle a^{2}} and b 2 {\displaystyle b^{2}} , which will again lead to a second square with area 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . English mathematician Sir Thomas Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of German mathematicians Carl Anton Bretschneider and Hermann Hankel that Pythagoras may have known this proof. 
Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him." Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues. === Algebraic proofs === The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with side c, as shown in the lower part of the diagram. This results in a larger square, with side a + b and area (a + b)2. The four triangles and the square side c must have the same area as the larger square, ( b + a ) 2 = c 2 + 4 a b 2 = c 2 + 2 a b , {\displaystyle (b+a)^{2}=c^{2}+4{\frac {ab}{2}}=c^{2}+2ab,} giving c 2 = ( b + a ) 2 − 2 a b = b 2 + 2 a b + a 2 − 2 a b = a 2 + b 2 . {\displaystyle c^{2}=(b+a)^{2}-2ab=b^{2}+2ab+a^{2}-2ab=a^{2}+b^{2}.} A similar proof uses four copies of a right triangle with sides a, b and c, arranged inside a square with side c as in the top half of the diagram. The triangles are similar with area 1 2 a b {\displaystyle {\tfrac {1}{2}}ab} , while the small square has side b − a and area (b − a)2. The area of the large square is therefore ( b − a ) 2 + 4 a b 2 = ( b − a ) 2 + 2 a b = b 2 − 2 a b + a 2 + 2 a b = a 2 + b 2 . {\displaystyle (b-a)^{2}+4{\frac {ab}{2}}=(b-a)^{2}+2ab=b^{2}-2ab+a^{2}+2ab=a^{2}+b^{2}.} But this is a square with side c and area c2, so c 2 = a 2 + b 2 . {\displaystyle c^{2}=a^{2}+b^{2}.} == Other proofs of the theorem == This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs. 
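The algebraic bookkeeping in these two proofs reduces to the identities (a + b)² = 2ab + c² and (b − a)² + 2ab = a² + b²; a quick numeric sanity check for one concrete right triangle (a sketch in Python, using the 3–4–5 triangle purely as an illustration):

```python
import math

a, b = 3.0, 4.0
c = math.hypot(a, b)          # hypotenuse of the right triangle: sqrt(a^2 + b^2)

# First proof: big square of side (a + b) = four triangles plus the square on c.
assert math.isclose((a + b) ** 2, 4 * (a * b / 2) + c ** 2)

# Second proof: square of side c = four triangles plus inner square of side (b - a).
assert math.isclose(c ** 2, 4 * (a * b / 2) + (b - a) ** 2)

print(c)  # 5.0
```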
=== Proof using similar triangles === This proof is based on the proportionality of the sides of three similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles. Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse c into parts d and e. The new triangle, ACH, is similar to triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By a similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to the parallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: B C A B = B H B C and A C A B = A H A C . {\displaystyle {\frac {BC}{AB}}={\frac {BH}{BC}}{\text{ and }}{\frac {AC}{AB}}={\frac {AH}{AC}}.} The first result equates the cosines of the angles θ, whereas the second result equates their sines. These ratios can be written as B C 2 = A B × B H and A C 2 = A B × A H . {\displaystyle BC^{2}=AB\times BH{\text{ and }}AC^{2}=AB\times AH.} Summing these two equalities results in B C 2 + A C 2 = A B × B H + A B × A H = A B ( A H + B H ) = A B 2 , {\displaystyle BC^{2}+AC^{2}=AB\times BH+AB\times AH=AB(AH+BH)=AB^{2},} which, after simplification, demonstrates the Pythagorean theorem: B C 2 + A C 2 = A B 2 . {\displaystyle BC^{2}+AC^{2}=AB^{2}.} The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. 
One conjecture is that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in the Elements, and that the theory of proportions needed further development at that time. === Einstein's proof by dissection without rearrangement === Albert Einstein gave a proof by dissection in which the pieces do not need to be moved. Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. === Euclid's proof === In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. 
The details follow. Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, we require four elementary lemmata: If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are congruent (side-angle-side). The area of a triangle is half the area of any parallelogram on the same base and having the same altitude. The area of a rectangle is equal to the product of two adjacent sides. The area of a square is equal to the product of two of its sides (follows from 3). Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square. The proof is as follows: Let ACB be a right-angled triangle with right angle CAB. On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order. The construction of squares requires the immediately preceding theorems in Euclid, and depends upon the parallel postulate. From A, draw a line parallel to BD and CE. It will perpendicularly intersect BC and DE at K and L, respectively. Join CF and AD, to form the triangles BCF and BDA. Angles CAB and BAG are both right angles; therefore C, A, and G are collinear. Angles CBD and FBA are both right angles; therefore angle ABD equals angle FBC, since both are the sum of a right angle and angle ABC. Since AB is equal to FB, BD is equal to BC and angle ABD equals angle FBC, triangle ABD must be congruent to triangle FBC. 
Since A-K-L is a straight line parallel to BD, rectangle BDLK has twice the area of triangle ABD because they share the base BD and have the same altitude BK, i.e., a line normal to their common base, connecting the parallel lines BD and AL (lemma 2). Since C is collinear with A and G, and this line is parallel to FB, square BAGF must be twice the area of triangle FBC. Therefore, rectangle BDLK must have the same area as square BAGF = AB2. By applying steps 3 to 10 to the other side of the figure, it can be similarly shown that rectangle CKLE must have the same area as square ACIH = AC2. Adding these two results gives AB2 + AC2 = BD × BK + KL × KC. Since BD = KL, BD × BK + KL × KC = BD(BK + KC) = BD × BC. Therefore, AB2 + AC2 = BC2, since CBDE is a square. This proof, which appears in Euclid's Elements as that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares. This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used. === Proofs by dissection and rearrangement === Another proof by rearrangement is given by the middle animation. A large square is formed with area c2, from four identical right triangles with sides a, b and c, fitted around a small central square. Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas a2 and b2, which must have the same area as the initial large square. The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. 
This shows the area of the large square equals that of the two smaller ones. === Proof by area-preserving shearing === As shown in the accompanying animation, area-preserving shear mappings and translations can transform the squares on the sides adjacent to the right angle onto the square on the hypotenuse, together covering it exactly. Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse. === Other algebraic proofs === A related proof by U.S. President James A. Garfield was published while he was a U.S. Representative, before he was elected president. Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is 1 2 ( b + a ) 2 . {\displaystyle {\frac {1}{2}}(b+a)^{2}.} The inner square is similarly halved, and there are only two triangles so the proof proceeds as above except for a factor of 1 2 {\displaystyle {\frac {1}{2}}} , which is removed by multiplying by two to give the result. === Proof using differentials === One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus. The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length y, the side AC of length x and the side AB of length a, as seen in the lower diagram part. If x is increased by a small amount dx by extending the side AC slightly to D, then y also increases by dy. 
These form two sides of a triangle, CDE, which (with E chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. Therefore, the ratios of their sides must be the same, that is: d y d x = x y . {\displaystyle {\frac {dy}{dx}}={\frac {x}{y}}.} This can be rewritten as y d y = x d x {\displaystyle y\,dy=x\,dx} , which is a differential equation that can be solved by direct integration: ∫ y d y = ∫ x d x , {\displaystyle \int y\,dy=\int x\,dx\,,} giving y 2 = x 2 + C . {\displaystyle y^{2}=x^{2}+C.} The constant can be deduced from x = 0, y = a to give the equation y 2 = x 2 + a 2 . {\displaystyle y^{2}=x^{2}+a^{2}.} This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy. == Converse == The converse of the theorem is also true: Given a triangle with sides of length a, b, and c, if a2 + b2 = c2, then the angle between sides a and b is a right angle. For any three positive real numbers a, b, and c such that a2 + b2 = c2, there exists a triangle with sides a, b and c as a consequence of the converse of the triangle inequality. This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right." It can be proved using the law of cosines or as follows: Let ABC be a triangle with side lengths a, b, and c, with a2 + b2 = c2. Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length c = √a2 + b2, the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. 
Therefore, the angle between the side of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem. A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply: If a2 + b2 = c2, then the triangle is right. If a2 + b2 > c2, then the triangle is acute. If a2 + b2 < c2, then the triangle is obtuse. Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a2 + b2 − c2), where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function. == Consequences and uses of the theorem == === Pythagorean triples === A Pythagorean triple has three positive integers a, b, and c, such that a2 + b2 = c2. In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths. Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1). The following is a list of primitive Pythagorean triples with values less than 100: (3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (11, 60, 61), (12, 35, 37), (13, 84, 85), (16, 63, 65), (20, 21, 29), (28, 45, 53), (33, 56, 65), (36, 77, 85), (39, 80, 89), (48, 55, 73), (65, 72, 97) There are many formulas for generating Pythagorean triples. 
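Dijkstra's sign-based test for acute, right, and obtuse triangles can be sketched in a few lines of Python (an illustrative sketch; the function name is invented for this example):

```python
def classify(a, b, c):
    """Classify a triangle by the sign of a^2 + b^2 - c^2, c being the longest side."""
    a, b, c = sorted((a, b, c))          # ensure c is the longest side
    if a + b <= c:
        raise ValueError("not a triangle (violates the triangle inequality)")
    s = a * a + b * b - c * c
    return "right" if s == 0 else ("acute" if s > 0 else "obtuse")

print(classify(3, 4, 5))    # right   (9 + 16 == 25)
print(classify(2, 3, 4))    # obtuse  (4 + 9 < 16)
print(classify(4, 5, 6))    # acute   (16 + 25 > 36)
```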
Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n, the formula states that the integers a = m 2 − n 2 , b = 2 m n , c = m 2 + n 2 {\displaystyle a=m^{2}-n^{2},\quad \,b=2mn,\quad \,c=m^{2}+n^{2}} form a Pythagorean triple. === Inverse Pythagorean theorem === Given a right triangle with sides a , b , c {\displaystyle a,b,c} and altitude d {\displaystyle d} (a line from the right angle, perpendicular to the hypotenuse c {\displaystyle c} ), the Pythagorean theorem gives a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} while the inverse Pythagorean theorem relates the two legs a , b {\displaystyle a,b} to the altitude d {\displaystyle d} , 1 a 2 + 1 b 2 = 1 d 2 {\displaystyle {\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}={\frac {1}{d^{2}}}} The equation can be transformed to 1 ( x z ) 2 + 1 ( y z ) 2 = 1 ( x y ) 2 {\displaystyle {\frac {1}{(xz)^{2}}}+{\frac {1}{(yz)^{2}}}={\frac {1}{(xy)^{2}}}} where x 2 + y 2 = z 2 {\displaystyle x^{2}+y^{2}=z^{2}} for any non-zero real x , y , z {\displaystyle x,y,z} . If a , b , d {\displaystyle a,b,d} are to be integers, the smallest solution a > b > d {\displaystyle a>b>d} is then 1 20 2 + 1 15 2 = 1 12 2 {\displaystyle {\frac {1}{20^{2}}}+{\frac {1}{15^{2}}}={\frac {1}{12^{2}}}} using the smallest Pythagorean triple 3 , 4 , 5 {\displaystyle 3,4,5} . The reciprocal Pythagorean theorem is a special case of the optic equation 1 p + 1 q = 1 r {\displaystyle {\frac {1}{p}}+{\frac {1}{q}}={\frac {1}{r}}} in which the denominators are squares, and also of a heptagonal triangle whose sides p , q , r {\displaystyle p,q,r} are square numbers. === Incommensurable lengths === One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so that their ratio is not a rational number) can be constructed using a straightedge and compass. 
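Both Euclid's triple formula and the reciprocal relation above lend themselves to a quick numerical check. The following Python sketch is illustrative only; the function name is not from the source:

```python
def euclid_triple(m, n):
    """Euclid's formula: integers m > n > 0 yield a Pythagorean triple."""
    return m * m - n * n, 2 * m * n, m * m + n * n

# Check a^2 + b^2 = c^2 for a few generated triples.
for m in range(2, 6):
    for n in range(1, m):
        a, b, c = euclid_triple(m, n)
        assert a * a + b * b == c * c

# Reciprocal Pythagorean relation: scaling the 3-4-5 triangle gives the
# smallest integer solution, 1/20^2 + 1/15^2 = 1/12^2.
x, y, z = 3, 4, 5
a, b, d = y * z, x * z, x * y  # legs 20, 15 and altitude 12
assert abs(1 / a**2 + 1 / b**2 - 1 / d**2) < 1e-12
```

Note that Euclid's formula as stated does not always produce a primitive triple: m = 3, n = 1 gives (8, 6, 10), a multiple of (3, 4, 5).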
Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation. The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer. Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as √2, √3, √5. For more detail, see Quadratic irrational. Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit. According to one legend, Hippasus of Metapontum (ca. 470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable. A careful discussion of Hippasus's contributions is found in Fritz. === Complex numbers === For any complex number z = x + i y , {\displaystyle z=x+iy,} the absolute value or modulus is given by r = | z | = x 2 + y 2 . {\displaystyle r=|z|={\sqrt {x^{2}+y^{2}}}.} So the three quantities r, x and y are related by the Pythagorean equation, r 2 = x 2 + y 2 . {\displaystyle r^{2}=x^{2}+y^{2}.} Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically r is the distance of z from zero, or the origin O, in the complex plane. This can be generalised to find the distance between two points, z1 and z2 say. 
The required distance is given by | z 1 − z 2 | = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 , {\displaystyle |z_{1}-z_{2}|={\sqrt {(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}},} so again they are related by a version of the Pythagorean equation, | z 1 − z 2 | 2 = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 . {\displaystyle |z_{1}-z_{2}|^{2}=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}.} === Euclidean distance === The distance formula in Cartesian coordinates is derived from the Pythagorean theorem. If (x1, y1) and (x2, y2) are points in the plane, then the distance between them, also called the Euclidean distance, is given by ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 . {\displaystyle {\sqrt {(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}}.} More generally, in Euclidean n-space, the Euclidean distance between two points, A = ( a 1 , a 2 , … , a n ) {\displaystyle A\,=\,(a_{1},a_{2},\dots ,a_{n})} and B = ( b 1 , b 2 , … , b n ) {\displaystyle B\,=\,(b_{1},b_{2},\dots ,b_{n})} , is defined, by generalization of the Pythagorean theorem, as: ( a 1 − b 1 ) 2 + ( a 2 − b 2 ) 2 + ⋯ + ( a n − b n ) 2 = ∑ i = 1 n ( a i − b i ) 2 . {\displaystyle {\sqrt {(a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}+\cdots +(a_{n}-b_{n})^{2}}}={\sqrt {\sum _{i=1}^{n}(a_{i}-b_{i})^{2}}}.} If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates: ( a 1 − b 1 ) 2 + ( a 2 − b 2 ) 2 + ⋯ + ( a n − b n ) 2 = ∑ i = 1 n ( a i − b i ) 2 . {\displaystyle (a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}+\cdots +(a_{n}-b_{n})^{2}=\sum _{i=1}^{n}(a_{i}-b_{i})^{2}.} The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares. 
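The n-dimensional distance formula translates directly into code; a minimal Python sketch (the function names are illustrative):

```python
import math

def euclidean_distance(A, B):
    """Distance between two points of R^n, by the generalized Pythagorean theorem."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)))

def squared_distance(A, B):
    """Squared Euclidean distance (SED): no square root, as used in least squares."""
    return sum((a - b) ** 2 for a, b in zip(A, B))

print(euclidean_distance((0, 0), (3, 4)))      # 5.0, the 3-4-5 right triangle
print(squared_distance((1, 2, 3), (4, 6, 3)))  # 25
```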
=== Euclidean distance in other coordinate systems === If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates (r, θ) can be introduced as: x = r cos θ , y = r sin θ . {\displaystyle x=r\cos \theta ,\ y=r\sin \theta .} Then two points with locations (r1, θ1) and (r2, θ2) are separated by a distance s: s 2 = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 = ( r 1 cos θ 1 − r 2 cos θ 2 ) 2 + ( r 1 sin θ 1 − r 2 sin θ 2 ) 2 . {\displaystyle s^{2}=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}=(r_{1}\cos \theta _{1}-r_{2}\cos \theta _{2})^{2}+(r_{1}\sin \theta _{1}-r_{2}\sin \theta _{2})^{2}.} Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as: s 2 = r 1 2 + r 2 2 − 2 r 1 r 2 ( cos θ 1 cos θ 2 + sin θ 1 sin θ 2 ) = r 1 2 + r 2 2 − 2 r 1 r 2 cos ( θ 1 − θ 2 ) = r 1 2 + r 2 2 − 2 r 1 r 2 cos Δ θ , {\displaystyle {\begin{aligned}s^{2}&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\left(\cos \theta _{1}\cos \theta _{2}+\sin \theta _{1}\sin \theta _{2}\right)\\&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos \left(\theta _{1}-\theta _{2}\right)\\&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos \Delta \theta ,\end{aligned}}} using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem. 
From this result, for the case where the radii to the two locations are at right angles, the enclosed angle Δθ = π/2, and the form corresponding to Pythagoras' theorem is regained: s 2 = r 1 2 + r 2 2 . {\displaystyle s^{2}=r_{1}^{2}+r_{2}^{2}.} The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. === Pythagorean trigonometric identity === In a right triangle with sides a, b and hypotenuse c, trigonometry determines the sine and cosine of the angle θ between side a and the hypotenuse as: sin θ = b c , cos θ = a c . {\displaystyle \sin \theta ={\frac {b}{c}},\quad \cos \theta ={\frac {a}{c}}.} From that it follows: cos 2 θ + sin 2 θ = a 2 + b 2 c 2 = 1 , {\displaystyle {\cos }^{2}\theta +{\sin }^{2}\theta ={\frac {a^{2}+b^{2}}{c^{2}}}=1,} where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity. In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse. === Relation to the cross product === The Pythagorean theorem relates the cross product and dot product in a similar way: ‖ a × b ‖ 2 + ( a ⋅ b ) 2 = ‖ a ‖ 2 ‖ b ‖ 2 . {\displaystyle \|\mathbf {a} \times \mathbf {b} \|^{2}+(\mathbf {a} \cdot \mathbf {b} )^{2}=\|\mathbf {a} \|^{2}\|\mathbf {b} \|^{2}.} This can be seen from the definitions of the cross product and dot product, as a × b = a b n sin θ a ⋅ b = a b cos θ , {\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} &=ab\mathbf {n} \sin {\theta }\\\mathbf {a} \cdot \mathbf {b} &=ab\cos {\theta },\end{aligned}}} with n a unit vector normal to both a and b. 
The relationship follows from these definitions and the Pythagorean trigonometric identity. This can also be used to define the cross product. By rearranging the following equation is obtained ‖ a × b ‖ 2 = ‖ a ‖ 2 ‖ b ‖ 2 − ( a ⋅ b ) 2 . {\displaystyle \|\mathbf {a} \times \mathbf {b} \|^{2}=\|\mathbf {a} \|^{2}\|\mathbf {b} \|^{2}-(\mathbf {a} \cdot \mathbf {b} )^{2}.} This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions. === As an axiom === If the first four of the Euclidean geometry axioms are assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice-versa. == Generalizations == === Similar figures on the three sides === The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC, and was included by Euclid in his Elements: If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side. This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a:b:c). While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle). The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. 
Thus, if similar figures with areas A, B and C are erected on sides with corresponding lengths a, b and c then: A a 2 = B b 2 = C c 2 , {\displaystyle {\frac {A}{a^{2}}}={\frac {B}{b^{2}}}={\frac {C}{c^{2}}}\,,} ⇒ A + B = a 2 c 2 C + b 2 c 2 C . {\displaystyle \Rightarrow A+B={\frac {a^{2}}{c^{2}}}C+{\frac {b^{2}}{c^{2}}}C\,.} But, by the Pythagorean theorem, a2 + b2 = c2, so A + B = C. Conversely, if we can prove that A + B = C for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B ) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C and reversing the above logic leads to the Pythagorean theorem a2 + b2 = c2. (See also Einstein's proof by dissection without rearrangement) === Law of cosines === The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that a 2 + b 2 − 2 a b cos θ = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos {\theta }=c^{2}} where θ {\displaystyle \theta } is the angle between sides a {\displaystyle a} and b {\displaystyle b} . When θ {\displaystyle \theta } is π 2 {\displaystyle {\frac {\pi }{2}}} radians or 90°, then cos θ = 0 {\displaystyle \cos {\theta }=0} , and the formula reduces to the usual Pythagorean theorem. === Arbitrary triangle === At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. 
A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as: a 2 + b 2 = c ( r + s ) . {\displaystyle a^{2}+b^{2}=c(r+s)\ .} As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained. One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.) Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ, c b = b r . {\displaystyle {\frac {c}{b}}={\frac {b}{r}}\ .} Likewise, for the reflection of the other triangle, c a = a s . {\displaystyle {\frac {c}{a}}={\frac {a}{s}}\ .} Clearing fractions and adding these two relations: c s + c r = a 2 + b 2 , {\displaystyle cs+cr=a^{2}+b^{2}\ ,} the required result. The theorem remains valid if the angle θ {\displaystyle \theta } is obtuse so the lengths r and s are non-overlapping. === General triangles using parallelograms === Pappus's area theorem is a further generalization, that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). 
This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the 4th century AD. The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. === Solid geometry === In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of face diagonal AC is found from Pythagoras' theorem as: A C ¯ 2 = A B ¯ 2 + B C ¯ 2 , {\displaystyle {\overline {AC}}^{\,2}={\overline {AB}}^{\,2}+{\overline {BC}}^{\,2}\,,} where these three sides form a right triangle. Using diagonal AC and the horizontal edge CD, the length of body diagonal AD then is found by a second application of Pythagoras' theorem as: A D ¯ 2 = A C ¯ 2 + C D ¯ 2 , {\displaystyle {\overline {AD}}^{\,2}={\overline {AC}}^{\,2}+{\overline {CD}}^{\,2}\,,} or, doing it all in one step: A D ¯ 2 = A B ¯ 2 + B C ¯ 2 + C D ¯ 2 . {\displaystyle {\overline {AD}}^{\,2}={\overline {AB}}^{\,2}+{\overline {BC}}^{\,2}+{\overline {CD}}^{\,2}\,.} This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components {vk} (the three mutually perpendicular sides): ‖ v ‖ 2 = ∑ k = 1 3 ‖ v k ‖ 2 . 
{\displaystyle \|\mathbf {v} \|^{2}=\sum _{k=1}^{3}\|\mathbf {v} _{k}\|^{2}.} This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: If a tetrahedron has a right angle corner (like a corner of a cube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem": Let x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} be orthogonal vectors in Rn. Consider the n-dimensional simplex S with vertices 0 , x 1 , … , x n {\displaystyle 0,x_{1},\ldots ,x_{n}} . (Think of the (n − 1)-dimensional simplex with vertices x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} not including the origin as the "hypotenuse" of S and the remaining (n − 1)-dimensional faces of S as its "legs".) Then the square of the volume of the hypotenuse of S is the sum of the squares of the volumes of the n legs. This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. 
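De Gua's theorem can be checked numerically for a concrete right-corner tetrahedron. In the Python sketch below (values chosen purely for illustration), the right-angle corner sits at the origin with the three legs along the coordinate axes:

```python
import math

# Right-corner tetrahedron with vertices at the origin and at
# (p,0,0), (0,q,0), (0,0,r): the three "leg" faces are right triangles.
p, q, r = 2.0, 3.0, 6.0
leg_areas = [p * q / 2, p * r / 2, q * r / 2]

# Area of the "hypotenuse" face opposite the right-angle corner,
# via half the magnitude of the cross product of two edge vectors.
u = (-p, q, 0.0)   # (0,q,0) - (p,0,0)
v = (-p, 0.0, r)   # (0,0,r) - (p,0,0)
cross = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
hyp_area = 0.5 * math.sqrt(sum(c * c for c in cross))

# De Gua: the square of the hypotenuse-face area equals the sum of
# the squares of the three leg-face areas.
assert math.isclose(hyp_area ** 2, sum(A * A for A in leg_areas))
```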
In a different wording: Given an n-rectangular n-dimensional simplex, the square of the (n − 1)-content of the facet opposing the right vertex will equal the sum of the squares of the (n − 1)-contents of the remaining facets. === Inner product spaces === The Pythagorean theorem can be generalized to inner product spaces, which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis. In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product ⟨ v , w ⟩ {\displaystyle \langle \mathbf {v} ,\mathbf {w} \rangle } is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible. The concept of length is replaced by the concept of the norm ‖v‖ of a vector v, defined as: ‖ v ‖ ≡ ⟨ v , v ⟩ . {\displaystyle \lVert \mathbf {v} \rVert \equiv {\sqrt {\langle \mathbf {v} ,\mathbf {v} \rangle }}\,.} In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have ‖ v + w ‖ 2 = ‖ v ‖ 2 + ‖ w ‖ 2 . {\displaystyle \left\|\mathbf {v} +\mathbf {w} \right\|^{2}=\left\|\mathbf {v} \right\|^{2}+\left\|\mathbf {w} \right\|^{2}.} Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. 
This form of the Pythagorean theorem is a consequence of the properties of the inner product: ‖ v + w ‖ 2 = ⟨ v + w , v + w ⟩ = ⟨ v , v ⟩ + ⟨ w , w ⟩ + ⟨ v , w ⟩ + ⟨ w , v ⟩ = ‖ v ‖ 2 + ‖ w ‖ 2 , {\displaystyle {\begin{aligned}\left\|\mathbf {v} +\mathbf {w} \right\|^{2}&=\langle \mathbf {v+w} ,\ \mathbf {v+w} \rangle \\[3mu]&=\langle \mathbf {v} ,\ \mathbf {v} \rangle +\langle \mathbf {w} ,\ \mathbf {w} \rangle +\langle \mathbf {v,\ w} \rangle +\langle \mathbf {w,\ v} \rangle \\[3mu]&=\left\|\mathbf {v} \right\|^{2}+\left\|\mathbf {w} \right\|^{2},\end{aligned}}} where ⟨ v , w ⟩ = ⟨ w , v ⟩ = 0 {\displaystyle \langle \mathbf {v,\ w} \rangle =\langle \mathbf {w,\ v} \rangle =0} because of orthogonality. A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law: 2 ‖ v ‖ 2 + 2 ‖ w ‖ 2 = ‖ v + w ‖ 2 + ‖ v − w ‖ 2 , {\displaystyle 2\|\mathbf {v} \|^{2}+2\|\mathbf {w} \|^{2}=\|\mathbf {v+w} \|^{2}+\|\mathbf {v-w} \|^{2}\ ,} which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals. Any norm that satisfies this equality is ipso facto a norm corresponding to an inner product. The Pythagorean identity can be extended to sums of more than two orthogonal vectors. If v1, v2, ..., vn are pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3-dimensions in the section on solid geometry) results in the equation ‖ ∑ k = 1 n v k ‖ 2 = ∑ k = 1 n ‖ v k ‖ 2 {\displaystyle {\biggl \|}\sum _{k=1}^{n}\mathbf {v} _{k}{\biggr \|}^{2}=\sum _{k=1}^{n}\|\mathbf {v} _{k}\|^{2}} === Sets of m-dimensional objects in n-dimensional space === Another generalization of the Pythagorean theorem applies to Lebesgue-measurable sets of objects in any number of dimensions. 
Specifically, the square of the measure of an m-dimensional set of objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space is equal to the sum of the squares of the measures of the orthogonal projections of the object(s) onto all m-dimensional coordinate subspaces. In mathematical terms: μ m s 2 = ∑ i = 1 x μ 2 m p i {\displaystyle \mu _{ms}^{2}=\sum _{i=1}^{x}\mathbf {\mu ^{2}} _{mp_{i}}} where: μ m {\displaystyle \mu _{m}} is a measure in m-dimensions (a length in one dimension, an area in two dimensions, a volume in three dimensions, etc.). s {\displaystyle s} is a set of one or more non-overlapping m-dimensional objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space. μ m s {\displaystyle \mu _{ms}} is the total measure (sum) of the set of m-dimensional objects. p {\displaystyle p} represents an m-dimensional projection of the original set onto an orthogonal coordinate subspace. μ m p i {\displaystyle \mu _{mp_{i}}} is the measure of the m-dimensional set projection onto m-dimensional coordinate subspace i {\displaystyle i} . Because object projections can overlap on a coordinate subspace, the measure of each object projection in the set must be calculated individually, then measures of all projections added together to provide the total measure for the set of projections on the given coordinate subspace. x {\displaystyle x} is the number of orthogonal, m-dimensional coordinate subspaces in n-dimensional space (Rn) onto which the m-dimensional objects are projected (m ≤ n): x = ( n m ) = n ! m ! ( n − m ) ! {\displaystyle x={\binom {n}{m}}={\frac {n!}{m!(n-m)!}}} === Non-Euclidean geometry === The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. 
More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate. Thus, right triangles in a non-Euclidean geometry do not satisfy the Pythagorean theorem. For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because a 2 + b 2 = 2 c 2 > c 2 {\displaystyle a^{2}+b^{2}=2c^{2}>c^{2}} . Here two cases of non-Euclidean geometry are considered—spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A+B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c. ==== Spherical geometry ==== For any right triangle on a sphere of radius R (for example, if γ in the figure is a right angle), with sides a, b, c, the relation between the sides takes the form: cos c R = cos a R cos b R . {\displaystyle \cos {\frac {c}{R}}=\cos {\frac {a}{R}}\,\cos {\frac {b}{R}}.} This equation can be derived as a special case of the spherical law of cosines that applies to all spherical triangles: cos c R = cos a R cos b R + sin a R sin b R cos γ . {\displaystyle \cos {\frac {c}{R}}=\cos {\frac {a}{R}}\,\cos {\frac {b}{R}}+\sin {\frac {a}{R}}\,\sin {\frac {b}{R}}\,\cos {\gamma }.} For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. 
To see how, assume we have a spherical triangle of fixed side lengths a, b, and c on a sphere with expanding radius R. As R approaches infinity the quantities a/R, b/R, and c/R tend to zero and the spherical Pythagorean identity reduces to 1 = 1 , {\displaystyle 1=1,} so we must look at its asymptotic expansion. The Maclaurin series for the cosine function can be written as cos x = 1 − 1 2 x 2 + O ( x 4 ) {\textstyle \cos x=1-{\tfrac {1}{2}}x^{2}+O{\left(x^{4}\right)}} with the remainder term in big O notation. Letting x = c / R {\displaystyle x=c/R} be a side of the triangle, and treating the expression as an asymptotic expansion in terms of R for a fixed c, cos c R = 1 − c 2 2 R 2 + O ( R − 4 ) {\displaystyle {\begin{aligned}\cos {\frac {c}{R}}=1-{\frac {c^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\end{aligned}}} and likewise for a and b. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields 1 − c 2 2 R 2 + O ( R − 4 ) = ( 1 − a 2 2 R 2 + O ( R − 4 ) ) ( 1 − b 2 2 R 2 + O ( R − 4 ) ) = 1 − a 2 2 R 2 − b 2 2 R 2 + O ( R − 4 ) . {\displaystyle {\begin{aligned}1-{\frac {c^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}&=\left(1-{\frac {a^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\right)\left(1-{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\right)\\&=1-{\frac {a^{2}}{2R^{2}}}-{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}.\end{aligned}}} Subtracting 1 and then negating each side, c 2 2 R 2 = a 2 2 R 2 + b 2 2 R 2 + O ( R − 4 ) . {\displaystyle {\frac {c^{2}}{2R^{2}}}={\frac {a^{2}}{2R^{2}}}+{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}.} Multiplying through by 2R2, the asymptotic expansion for c in terms of fixed a, b and variable R is c 2 = a 2 + b 2 + O ( R − 2 ) . {\displaystyle c^{2}=a^{2}+b^{2}+O{\left(R^{-2}\right)}.} The Euclidean Pythagorean relationship c 2 = a 2 + b 2 {\textstyle c^{2}=a^{2}+b^{2}} is recovered in the limit, as the remainder vanishes when the radius R approaches infinity. 
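This limiting behaviour is easy to observe numerically: solving the spherical relation for c and letting R grow reproduces the Euclidean hypotenuse. A Python sketch, illustrative only:

```python
import math

def spherical_hypotenuse(a, b, R):
    """Solve cos(c/R) = cos(a/R) * cos(b/R) for the hypotenuse c."""
    return R * math.acos(math.cos(a / R) * math.cos(b / R))

a, b = 3.0, 4.0
euclidean_c = math.hypot(a, b)  # 5.0
for R in (10.0, 100.0, 10000.0):
    c = spherical_hypotenuse(a, b, R)
    print(R, c, abs(c - euclidean_c))
```

For fixed legs on a sphere, the computed hypotenuse comes out slightly shorter than the Euclidean value and approaches it from below as R grows, consistent with the asymptotic expansion above.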
For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identity cos 2 θ = 1 − 2 sin 2 θ {\displaystyle \cos {2\theta }=1-2\sin ^{2}{\theta }} to avoid loss of significance. Then the spherical Pythagorean theorem can alternately be written as sin 2 c 2 R = sin 2 a 2 R + sin 2 b 2 R − 2 sin 2 a 2 R sin 2 b 2 R . {\displaystyle \sin ^{2}{\frac {c}{2R}}=\sin ^{2}{\frac {a}{2R}}+\sin ^{2}{\frac {b}{2R}}-2\sin ^{2}{\frac {a}{2R}}\,\sin ^{2}{\frac {b}{2R}}.} ==== Hyperbolic geometry ==== In a hyperbolic space with uniform Gaussian curvature −1/R2, for a right triangle with legs a, b, and hypotenuse c, the relation between the sides takes the form: cosh c R = cosh a R cosh b R {\displaystyle \cosh {\frac {c}{R}}=\cosh {\frac {a}{R}}\,\cosh {\frac {b}{R}}} where cosh is the hyperbolic cosine. This formula is a special form of the hyperbolic law of cosines that applies to all hyperbolic triangles: cosh c R = cosh a R cosh b R − sinh a R sinh b R cos γ , {\displaystyle \cosh {\frac {c}{R}}=\cosh {\frac {a}{R}}\ \cosh {\frac {b}{R}}-\sinh {\frac {a}{R}}\ \sinh {\frac {b}{R}}\ \cos \gamma \ ,} with γ the angle at the vertex opposite the side c. By using the Maclaurin series for the hyperbolic cosine, cosh x ≈ 1 + x2/2, it can be shown that as a hyperbolic triangle becomes very small (that is, as a, b, and c all approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. For small right triangles (a, b << R), the hyperbolic cosines can be eliminated to avoid loss of significance, giving sinh 2 c 2 R = sinh 2 a 2 R + sinh 2 b 2 R + 2 sinh 2 a 2 R sinh 2 b 2 R . 
{\displaystyle \sinh ^{2}{\frac {c}{2R}}=\sinh ^{2}{\frac {a}{2R}}+\sinh ^{2}{\frac {b}{2R}}+2\sinh ^{2}{\frac {a}{2R}}\sinh ^{2}{\frac {b}{2R}}\,.} ==== Very small triangles ==== For any uniform curvature K (positive, zero, or negative), in very small right triangles (|K|a2, |K|b2 << 1) with hypotenuse c, it can be shown that c 2 = a 2 + b 2 − K 3 a 2 b 2 − K 2 45 a 2 b 2 ( a 2 + b 2 ) − 2 K 3 945 a 2 b 2 ( a 2 − b 2 ) 2 + O ( K 4 c 10 ) . {\displaystyle c^{2}=a^{2}+b^{2}-{\frac {K}{3}}a^{2}b^{2}-{\frac {K^{2}}{45}}a^{2}b^{2}(a^{2}+b^{2})-{\frac {2K^{3}}{945}}a^{2}b^{2}(a^{2}-b^{2})^{2}+O(K^{4}c^{10})\,.} === Differential geometry === The Pythagorean theorem applies to infinitesimal triangles seen in differential geometry. In three dimensional space, the distance between two infinitesimally separated points satisfies d s 2 = d x 2 + d y 2 + d z 2 , {\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2},} with ds the element of distance and (dx, dy, dz) the components of the vector separating the two points. Such a space is called a Euclidean space. However, in Riemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form: d s 2 = ∑ i , j n g i j d x i d x j {\displaystyle ds^{2}=\sum _{i,j}^{n}g_{ij}\,dx_{i}\,dx_{j}} which is called the metric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficients gij.) It may be a function of position, and often describes curved space. A simple example is Euclidean (flat) space expressed in curvilinear coordinates. For example, in polar coordinates: d s 2 = d r 2 + r 2 d θ 2 . {\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2}\ .} == History == There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. 
Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born. The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system. Written c. 1800 BC, the Egyptian Middle Kingdom Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written near Larsa also c. 1800 BC, contains many entries closely related to Pythagorean triples. In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC, contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC). Byzantine Neoplatonic philosopher and mathematician Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras", for generating special Pythagorean triples. The rule attributed to Pythagoras (c. 570 – c. 495 BC) starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived. 
However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted. Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period of Pythagorean mathematics." Around 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented. With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经), (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle — in China it is called the "Gougu theorem" (勾股定理). During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art, together with a mention of right triangles. Some believe the theorem arose first in China in the 11th century BC, where it is alternatively known as the "Shang Gao theorem" (商高定理), named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing. == See also == == Notes and references == === Notes === === References === === Works cited === == External links == Euclid (1997) [c. 300 BC]. David E. Joyce (ed.). Elements. Retrieved 2006-08-30. In HTML with Java-based interactive figures. "Pythagorean theorem". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. History topic: Pythagoras's theorem in Babylonian mathematics Interactive links: Interactive proof in Java of the Pythagorean theorem Another interactive proof in Java of the Pythagorean theorem Pythagorean theorem with interactive animation Animated, non-algebraic, and user-paced Pythagorean theorem Pythagorean theorem water demo on YouTube Pythagorean theorem (more than 70 proofs from cut-the-knot) Weisstein, Eric W. 
"Pythagorean theorem". MathWorld.
|
Wikipedia:Pythagorean trigonometric identity#0
|
The Pythagorean trigonometric identity, also called simply the Pythagorean identity, is an identity expressing the Pythagorean theorem in terms of trigonometric functions. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions. The identity is sin 2 θ + cos 2 θ = 1. {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1.} As usual, sin 2 θ {\displaystyle \sin ^{2}\theta } means ( sin θ ) 2 {\textstyle (\sin \theta )^{2}} . == Proofs and their relationships to the Pythagorean theorem == === Proof based on right-angle triangles === Any similar triangles have the property that if we select the same angle in all of them, the ratio of the two sides defining the angle is the same regardless of which similar triangle is selected, regardless of its actual size: the ratios depend upon the three angles, not the lengths of the sides. Thus for either of the similar right triangles in the figure, the ratio of its horizontal side to its hypotenuse is the same, namely cos θ. The elementary definitions of the sine and cosine functions in terms of the sides of a right triangle are: sin θ = o p p o s i t e h y p o t e n u s e = b c cos θ = a d j a c e n t h y p o t e n u s e = a c {\displaystyle {\begin{alignedat}{3}\sin \theta &={\frac {\mathrm {opposite} }{\mathrm {hypotenuse} }}={\frac {b}{c}}\\\cos \theta &={\frac {\mathrm {adjacent} }{\mathrm {hypotenuse} }}={\frac {a}{c}}\end{alignedat}}} The Pythagorean identity follows by squaring both definitions above, and adding; the left-hand side of the identity then becomes o p p o s i t e 2 + a d j a c e n t 2 h y p o t e n u s e 2 {\displaystyle {\frac {\mathrm {opposite} ^{2}+\mathrm {adjacent} ^{2}}{\mathrm {hypotenuse} ^{2}}}} which by the Pythagorean theorem is equal to 1. 
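The right-triangle argument above can be illustrated numerically (an author's sketch, not from the article): for a 3-4-5 right triangle, defining sine and cosine as the side ratios makes the squares sum to 1, and the same holds for the library trigonometric functions at the corresponding angle.

```python
import math

# Sine and cosine defined as side ratios of a 3-4-5 right triangle.
a, b, c = 3.0, 4.0, 5.0      # adjacent, opposite, hypotenuse
sin_theta = b / c            # opposite / hypotenuse
cos_theta = a / c            # adjacent / hypotenuse
assert abs(sin_theta**2 + cos_theta**2 - 1.0) < 1e-12

# Recover the same angle and check the identity for math.sin / math.cos.
theta = math.atan2(b, a)
assert math.isclose(math.sin(theta)**2 + math.cos(theta)**2, 1.0, rel_tol=1e-12)
```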
This definition is valid for all angles, by defining x = cos θ and y = sin θ for the unit circle and thus x = c cos θ and y = c sin θ for a circle of radius c, reflecting our triangle in the y-axis and setting a = x and b = y. Alternatively, the identities found at Trigonometric symmetry, shifts, and periodicity may be employed. By the periodicity identities we can say if the formula is true for −π < θ ≤ π then it is true for all real θ. Next we prove the identity in the range π/2 < θ ≤ π. To do this we let t = θ − π/2; t will now be in the range 0 < t ≤ π/2. We can then make use of squared versions of some basic shift identities (squaring conveniently removes the minus signs): sin 2 θ + cos 2 θ = sin 2 ( t + 1 2 π ) + cos 2 ( t + 1 2 π ) = cos 2 t + sin 2 t = 1. {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =\sin ^{2}\left(t+{\tfrac {1}{2}}\pi \right)+\cos ^{2}\left(t+{\tfrac {1}{2}}\pi \right)=\cos ^{2}t+\sin ^{2}t=1.} Finally, it remains to prove the formula for −π < θ < 0; this can be done by squaring the symmetry identities to get sin 2 θ = sin 2 ( − θ ) and cos 2 θ = cos 2 ( − θ ) . {\displaystyle \sin ^{2}\theta =\sin ^{2}(-\theta ){\text{ and }}\cos ^{2}\theta =\cos ^{2}(-\theta ).} ==== Related identities ==== The two identities 1 + tan 2 θ = sec 2 θ 1 + cot 2 θ = csc 2 θ {\displaystyle {\begin{aligned}1+\tan ^{2}\theta &=\sec ^{2}\theta \\1+\cot ^{2}\theta &=\csc ^{2}\theta \end{aligned}}} are also called Pythagorean trigonometric identities. If one leg of a right triangle has length 1, then the tangent of the angle adjacent to that leg is the length of the other leg, and the secant of the angle is the length of the hypotenuse. tan θ = b a , sec θ = c a . {\displaystyle {\begin{aligned}\tan \theta &={\frac {b}{a}}\,,\\\sec \theta &={\frac {c}{a}}\,.\end{aligned}}} In this way, this trigonometric identity involving the tangent and the secant follows from the Pythagorean theorem.
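The two related identities can be spot-checked numerically (an author's illustration, not from the article), evaluating both at a few angles away from the zeros of sine and cosine where tan, cot, sec and csc are all defined:

```python
import math

# Check 1 + tan^2 = sec^2 and 1 + cot^2 = csc^2 at several angles.
for theta in (0.3, 1.0, 2.5, -0.7):
    sec2 = 1.0 / math.cos(theta)**2
    csc2 = 1.0 / math.sin(theta)**2
    cot = math.cos(theta) / math.sin(theta)
    assert math.isclose(1 + math.tan(theta)**2, sec2, rel_tol=1e-12)
    assert math.isclose(1 + cot**2, csc2, rel_tol=1e-12)
```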
The angle opposite the leg of length 1 (this angle can be labeled φ = π/2 − θ) has cotangent equal to the length of the other leg, and cosecant equal to the length of the hypotenuse. In that way, this trigonometric identity involving the cotangent and the cosecant also follows from the Pythagorean theorem. The following table gives the identities with the factor or divisor that relates them to the main identity. === Proof using the unit circle === The unit circle centered at the origin in the Euclidean plane is defined by the equation: x 2 + y 2 = 1. {\displaystyle x^{2}+y^{2}=1.} Given an angle θ, there is a unique point P on the unit circle at an anticlockwise angle of θ from the x-axis, and the x- and y-coordinates of P are: x = cos θ and y = sin θ . {\displaystyle x=\cos \theta \ {\text{ and }}\ y=\sin \theta .} Consequently, from the equation for the unit circle, cos 2 θ + sin 2 θ = 1 , {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1,} the Pythagorean identity. In the figure, the point P has a negative x-coordinate, and is appropriately given by x = cos θ, which is a negative number: cos θ = −cos(π − θ). Point P has a positive y-coordinate, and sin θ = sin(π − θ) > 0. As θ increases from zero to the full circle θ = 2π, the sine and cosine change signs in the various quadrants to keep x and y with the correct signs. The figure shows how the sign of the sine function varies as the angle changes quadrant. Because the x- and y-axes are perpendicular, this Pythagorean identity is equivalent to the Pythagorean theorem for triangles with hypotenuse of length 1 (which is in turn equivalent to the full Pythagorean theorem by applying a similar-triangles argument). See Unit circle for a short explanation. === Proof using power series === The trigonometric functions may also be defined using power series, namely for x (an angle measured in radians): sin x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 , cos x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n . 
{\displaystyle {\begin{aligned}\sin x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1},\\\cos x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}.\end{aligned}}} Using the multiplication formula for power series at Multiplication and division of power series (suitably modified to account for the form of the series here) we obtain sin 2 x = ∑ i = 0 ∞ ∑ j = 0 ∞ ( − 1 ) i ( 2 i + 1 ) ! ( − 1 ) j ( 2 j + 1 ) ! x ( 2 i + 1 ) + ( 2 j + 1 ) = ∑ n = 1 ∞ ( ∑ i = 0 n − 1 ( − 1 ) n − 1 ( 2 i + 1 ) ! ( 2 ( n − i − 1 ) + 1 ) ! ) x 2 n = ∑ n = 1 ∞ ( ∑ i = 0 n − 1 ( 2 n 2 i + 1 ) ) ( − 1 ) n − 1 ( 2 n ) ! x 2 n , cos 2 x = ∑ i = 0 ∞ ∑ j = 0 ∞ ( − 1 ) i ( 2 i ) ! ( − 1 ) j ( 2 j ) ! x ( 2 i ) + ( 2 j ) = ∑ n = 0 ∞ ( ∑ i = 0 n ( − 1 ) n ( 2 i ) ! ( 2 ( n − i ) ) ! ) x 2 n = ∑ n = 0 ∞ ( ∑ i = 0 n ( 2 n 2 i ) ) ( − 1 ) n ( 2 n ) ! x 2 n . {\displaystyle {\begin{aligned}\sin ^{2}x&=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }{\frac {(-1)^{i}}{(2i+1)!}}{\frac {(-1)^{j}}{(2j+1)!}}x^{(2i+1)+(2j+1)}\\&=\sum _{n=1}^{\infty }\left(\sum _{i=0}^{n-1}{\frac {(-1)^{n-1}}{(2i+1)!(2(n-i-1)+1)!}}\right)x^{2n}\\&=\sum _{n=1}^{\infty }\left(\sum _{i=0}^{n-1}{2n \choose 2i+1}\right){\frac {(-1)^{n-1}}{(2n)!}}x^{2n},\\\cos ^{2}x&=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }{\frac {(-1)^{i}}{(2i)!}}{\frac {(-1)^{j}}{(2j)!}}x^{(2i)+(2j)}\\&=\sum _{n=0}^{\infty }\left(\sum _{i=0}^{n}{\frac {(-1)^{n}}{(2i)!(2(n-i))!}}\right)x^{2n}\\&=\sum _{n=0}^{\infty }\left(\sum _{i=0}^{n}{2n \choose 2i}\right){\frac {(-1)^{n}}{(2n)!}}x^{2n}.\end{aligned}}} In the expression for sin2, n must be at least 1, while in the expression for cos2, the constant term is equal to 1. The remaining terms of their sum are (with common factors removed) ∑ i = 0 n ( 2 n 2 i ) − ∑ i = 0 n − 1 ( 2 n 2 i + 1 ) = ∑ j = 0 2 n ( − 1 ) j ( 2 n j ) = ( 1 − 1 ) 2 n = 0 {\displaystyle \sum _{i=0}^{n}{2n \choose 2i}-\sum _{i=0}^{n-1}{2n \choose 2i+1}=\sum _{j=0}^{2n}(-1)^{j}{2n \choose j}=(1-1)^{2n}=0} by the binomial theorem. 
Consequently, sin 2 x + cos 2 x = 1 , {\displaystyle \sin ^{2}x+\cos ^{2}x=1,} which is the Pythagorean trigonometric identity. When the trigonometric functions are defined in this way, the identity in combination with the Pythagorean theorem shows that these power series parameterize the unit circle, which we used in the previous section. This definition constructs the sine and cosine functions in a rigorous fashion and proves that they are differentiable, so that in fact it subsumes the previous two. === Proof using the differential equation === Sine and cosine can be defined as the two solutions to the differential equation: y ″ + y = 0 {\displaystyle y''+y=0} satisfying respectively y(0) = 0, y′(0) = 1 and y(0) = 1, y′(0) = 0. It follows from the theory of ordinary differential equations that the first solution, sine, has the second, cosine, as its derivative, and it follows from this that the derivative of cosine is the negative of the sine. The identity is equivalent to the assertion that the function z = sin 2 x + cos 2 x {\displaystyle z=\sin ^{2}x+\cos ^{2}x} is constant and equal to 1. Differentiating using the chain rule gives: d d x z = 2 sin x cos x + 2 cos x ( − sin x ) = 0 , {\displaystyle {\frac {d}{dx}}z=2\sin x\cos x+2\cos x(-\sin x)=0,} so z is constant. A calculation confirms that z(0) = 1, and z is a constant so z = 1 for all x, so the Pythagorean identity is established. A similar proof can be completed using power series as above to establish that the sine has as its derivative the cosine, and the cosine has as its derivative the negative sine. In fact, the definitions by ordinary differential equation and by power series lead to similar derivations of most identities. This proof of the identity has no direct connection with Euclid's demonstration of the Pythagorean theorem. 
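Two of the ingredients used above can be verified computationally (an author's sketch, not part of the article): the binomial cancellation at the heart of the power-series proof, and the differential-equation argument's claim that z(x) = sin²x + cos²x is constant with z(0) = 1.

```python
import math

# (1) For every n >= 1, the even and odd binomial sums of (1-1)^(2n) cancel:
#     sum_i C(2n, 2i) = sum_i C(2n, 2i+1).
for n in range(1, 15):
    even = sum(math.comb(2 * n, 2 * i) for i in range(n + 1))
    odd = sum(math.comb(2 * n, 2 * i + 1) for i in range(n))
    assert even == odd

# (2) z(x) = sin^2 x + cos^2 x: central finite difference of the derivative
#     is numerically zero, and z itself stays at 1.
z = lambda x: math.sin(x)**2 + math.cos(x)**2
h = 1e-5
for x in (0.0, 0.5, 1.7, -3.0):
    assert abs((z(x + h) - z(x - h)) / (2 * h)) < 1e-9
    assert abs(z(x) - 1.0) < 1e-12
```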
=== Proof using Euler's formula === Using Euler's formula e i θ = cos θ + i sin θ {\displaystyle e^{i\theta }=\cos \theta +i\sin \theta } and factoring cos 2 θ + sin 2 θ {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta } as the complex difference of two squares, 1 = e i θ e − i θ = ( cos θ + i sin θ ) ( cos θ − i sin θ ) = cos 2 θ + sin 2 θ . {\displaystyle {\begin{aligned}1&=e^{i\theta }e^{-i\theta }\\[3mu]&=(\cos \theta +i\sin \theta )(\cos \theta -i\sin \theta )\\[3mu]&=\cos ^{2}\theta +\sin ^{2}\theta .\end{aligned}}} == See also == Pythagorean theorem List of trigonometric identities Unit circle Power series Differential equation == Notes ==
|
Wikipedia:Pātīgaṇita#0
|
Pātīgaṇita is the term used in pre-modern Indian mathematical literature to denote the area of mathematics dealing with arithmetic and mensuration. The term is a compound word formed by combining the words pātī and gaṇita. The former is a non-Sanskrit word meaning a "board" and the latter is a Sanskrit word meaning "science of calculation". Thus the term pātīgaṇita literally means the science of calculations which requires a board (on which dust or sand is spread out) for performing the calculations, or "board-computation" in short. The usage of the term became popular among authors of Indian mathematical works about the beginning of the seventh century CE. It may be noted that Brahmagupta (c. 598 – c. 668 CE) did not use this term. Instead, he uses the term dhūlīkarma (dhūlī is the Sanskrit term for dust). The terminology pātīgaṇita may be contrasted with "bījagaṇita" which denotes the area of mathematics referred to as algebra. The term Pātīgaṇita is also the title of a work composed by Sridhara, an Indian mathematician who flourished during the 8th-9th century CE. == Topics discussed in pātīgaṇita == According to Brahmagupta there are 20 operations (parikarma-s) and 8 determinations (also called logistics) (vyavahāra-s) that come under pātīgaṇita. He states this in his Brahma-sphuṭa-siddhānta without specifying what these are. The commentators of Brahmasphuṭa-siddhānta have listed the following as the 20 operations and the 8 determinations. === Parikarma (Operations) === === Vyavahāra-s (determinations/logistics) === Miśrakah (mixture): Computations involving mixtures of several things. Sreḍhi (progression or series): A sreḍhi is that which has a beginning (first term) and an increase (common difference). Kṣetram (plane figures): Calculations of the area of a figure having several angles. Khātam (excavation): Finding the volumes of excavations. Citih (stock): Computing the measure of a pile of bricks.
Krākacikah (saw): Finding the measure of the timber sawn. Rāśih (mound): Calculations to find the amount of a heap of grain, etc. Chāyā (shadow): Finding the time from the shadow of a gnomon, etc. == Works dealing with pāṭīgaṇita == The earliest work dealing with the topics that come under pāṭīgaṇita that has survived to the present day is the Bakhshali manuscript, some portions of which have been carbon-dated to 224–383 CE. The following are the currently available texts which deal with arithmetic and mensuration. They may contain more material than the 20 operations and the eight determinations that are listed as the topics that come under pāṭīgaṇita. Gaṇita-sāra-sañgraha of Mahavira (850 CE) Pātīgaṇita and Pātīgaṇita-sāra (or Trisātikā) of Śrīdharācarya Gaṇita-tilaka of Srīpati (1039 CE) (incomplete) Līlāvatī of Bhāskara II (1150 CE) Gaṇita-kaumudī of Nārāyaṇa (1356 CE) In these works one can see references to several older works, but none of them have survived to the present day. The lost works include Pātīgaṇita of Lalla (8th century CE) and Govindakṛti of Govindasvāmi (9th century CE). The following astronomical treatises deal with arithmetic and mensuration in one of the chapters: Brahma-sphuṭa-siddhānta of Brahmagupta (628 CE) (the twelfth chapter, entitled Gaṇitāddhyāya) Mahā-siddhānta of Āryabhaṭa II (c. 950 CE) (the fifteenth chapter, entitled Pātīgaṇita) Siddhānta-sekhara of Śrīpati (1039 CE) (the thirteenth chapter, entitled Vyakta-gaṇitāddhyāya) == Śrīdhara's Pāṭīgaṇita == In Indian mathematical literature, Śrīdhara is the only author who has composed a work titled Pāṭīgaṇita. He has composed another work titled Pāṭīgaṇita-sāra which is a short summary of his Pāṭīgaṇita. At the very beginning of the work, the author has listed the operations and the determinations that he is going to discuss in the work. According to Śrīdhara, there are 29 operations and nine determinations whereas Brahmagupta talks about only 20 operations and eight determinations.
The operations specified in Śrīdhara's Pāṭīgaṇita are the following: The first eight operations specified by Brahmagupta These eight operations in respect of fractions Six operations involving reductions of fractions The five operations specified in items 12–17 in Brahmagupta's list Bhāṇḍa-pratibhāṇḍa (barter of commodities) Jīva-vikraya (sale of living beings) The nine determinations specified by Śrīdhara are the eight determinations specified by Brahmagupta and śūnya-tatva (mathematics of zero). Only one manuscript of Pāṭīgaṇita is currently available and it is incomplete. Discussions on some of the 29 operations and some of the nine determinations are missing from the extant manuscript. == Full texts of Śrīdhara's works == Full text of Śrīdhara's Pāṭīgaṇita is available in Internet Archive: Kripa Shankar Shukla (1959). The Patiganita of Sridharacharya with an ancient Sanskrit Commentary (with Introduction and English translation). Lucknow University: Department of Astronomy and Mathematics. Retrieved 27 August 2024. Full text of Śrīdhara's Pāṭīgaṇita-sāra is available in Internet Archive: Sridhara (2004). Pati-ganita-sara (with translation and commentary in Hindi by Sdyumna Acarya). New Delhi: Central Sanskrit University. Retrieved 3 September 2024. == References ==
|
Wikipedia:Q-Vandermonde identity#0
|
In combinatorics, Vandermonde's identity (or Vandermonde's convolution) is the following identity for binomial coefficients: ( m + n r ) = ∑ k = 0 r ( m k ) ( n r − k ) {\displaystyle {m+n \choose r}=\sum _{k=0}^{r}{m \choose k}{n \choose r-k}} for any nonnegative integers r, m, n. The identity is named after Alexandre-Théophile Vandermonde (1772), although it was already known in 1303 by the Chinese mathematician Zhu Shijie. There is a q-analog to this theorem called the q-Vandermonde identity. Vandermonde's identity can be generalized in numerous ways, including to the identity ( n 1 + ⋯ + n p m ) = ∑ k 1 + ⋯ + k p = m ( n 1 k 1 ) ( n 2 k 2 ) ⋯ ( n p k p ) . {\displaystyle {n_{1}+\dots +n_{p} \choose m}=\sum _{k_{1}+\cdots +k_{p}=m}{n_{1} \choose k_{1}}{n_{2} \choose k_{2}}\cdots {n_{p} \choose k_{p}}.} == Proofs == === Algebraic proof === In general, the product of two polynomials with degrees m and n, respectively, is given by ( ∑ i = 0 m a i x i ) ( ∑ j = 0 n b j x j ) = ∑ r = 0 m + n ( ∑ k = 0 r a k b r − k ) x r , {\displaystyle {\biggl (}\sum _{i=0}^{m}a_{i}x^{i}{\biggr )}{\biggl (}\sum _{j=0}^{n}b_{j}x^{j}{\biggr )}=\sum _{r=0}^{m+n}{\biggl (}\sum _{k=0}^{r}a_{k}b_{r-k}{\biggr )}x^{r},} where we use the convention that ai = 0 for all integers i > m and bj = 0 for all integers j > n. By the binomial theorem, ( 1 + x ) m + n = ∑ r = 0 m + n ( m + n r ) x r . 
{\displaystyle (1+x)^{m+n}=\sum _{r=0}^{m+n}{m+n \choose r}x^{r}.} Using the binomial theorem also for the exponents m and n, and then the above formula for the product of polynomials, we obtain ∑ r = 0 m + n ( m + n r ) x r = ( 1 + x ) m + n = ( 1 + x ) m ( 1 + x ) n = ( ∑ i = 0 m ( m i ) x i ) ( ∑ j = 0 n ( n j ) x j ) = ∑ r = 0 m + n ( ∑ k = 0 r ( m k ) ( n r − k ) ) x r , {\displaystyle {\begin{aligned}\sum _{r=0}^{m+n}{m+n \choose r}x^{r}&=(1+x)^{m+n}\\&=(1+x)^{m}(1+x)^{n}\\&={\biggl (}\sum _{i=0}^{m}{m \choose i}x^{i}{\biggr )}{\biggl (}\sum _{j=0}^{n}{n \choose j}x^{j}{\biggr )}\\&=\sum _{r=0}^{m+n}{\biggl (}\sum _{k=0}^{r}{m \choose k}{n \choose r-k}{\biggr )}x^{r},\end{aligned}}} where the above convention for the coefficients of the polynomials agrees with the definition of the binomial coefficients, because both give zero for all i > m and j > n, respectively. By comparing coefficients of x r, Vandermonde's identity follows for all integers r with 0 ≤ r ≤ m + n. For larger integers r, both sides of Vandermonde's identity are zero due to the definition of binomial coefficients. === Combinatorial proof === Vandermonde's identity also admits a combinatorial double counting proof, as follows. Suppose a committee consists of m men and n women. In how many ways can a subcommittee of r members be formed? The answer is ( m + n r ) . {\displaystyle {m+n \choose r}.} The answer is also the sum over all possible values of k, of the number of subcommittees consisting of k men and r − k women: ∑ k = 0 r ( m k ) ( n r − k ) . {\displaystyle \sum _{k=0}^{r}{m \choose k}{n \choose r-k}.} === Geometrical proof === Take a rectangular grid of r x (m+n−r) squares. 
There are ( r + ( m + n − r ) r ) = ( m + n r ) {\displaystyle {\binom {r+(m+n-r)}{r}}={\binom {m+n}{r}}} paths that start on the bottom left vertex and, moving only upwards or rightwards, end at the top right vertex (this is because r right moves and m+n-r up moves must be made (or vice versa) in any order, and the total path length is m + n). Call the bottom left vertex (0, 0). There are ( m k ) {\displaystyle {\binom {m}{k}}} paths starting at (0, 0) that end at (k, m−k), as k right moves and m−k upward moves must be made (and the path length is m). Similarly, there are ( n r − k ) {\displaystyle {\binom {n}{r-k}}} paths starting at (k, m−k) that end at (r, m+n−r), as a total of r−k right moves and (m+n−r) − (m−k) upward moves must be made and the path length must be r−k + (m+n−r) − (m−k) = n. Thus there are ( m k ) ( n r − k ) {\displaystyle {\binom {m}{k}}{\binom {n}{r-k}}} paths that start at (0, 0), end at (r, m+n−r), and go through (k, m−k). This is a subset of all paths that start at (0, 0) and end at (r, m+n−r), so sum from k = 0 to k = r (as the point (k, m−k) is confined to be within the square) to obtain the total number of paths that start at (0, 0) and end at (r, m+n−r). == Generalizations == === Generalized Vandermonde's identity === One can generalize Vandermonde's identity as follows: ∑ k 1 + ⋯ + k p = m ( n 1 k 1 ) ( n 2 k 2 ) ⋯ ( n p k p ) = ( n 1 + ⋯ + n p m ) . {\displaystyle \sum _{k_{1}+\cdots +k_{p}=m}{n_{1} \choose k_{1}}{n_{2} \choose k_{2}}\cdots {n_{p} \choose k_{p}}={n_{1}+\dots +n_{p} \choose m}.} This identity can be obtained through the algebraic derivation above when more than two polynomials are used, or through a simple double counting argument. 
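All three proofs establish the same finite identity, which can be confirmed by brute force over small parameters (an author's check, not from the article). Python's `math.comb` already returns 0 when the lower index exceeds the upper one, matching the convention used in the algebraic proof:

```python
from math import comb

# Verify C(m+n, r) = sum_k C(m, k) C(n, r-k) for all small m, n, r.
for m in range(8):
    for n in range(8):
        for r in range(m + n + 1):
            rhs = sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))
            assert comb(m + n, r) == rhs
```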
On the one hand, one chooses k 1 {\displaystyle \textstyle k_{1}} elements out of a first set of n 1 {\displaystyle \textstyle n_{1}} elements; then k 2 {\displaystyle \textstyle k_{2}} out of another set, and so on, through p {\displaystyle \textstyle p} such sets, until a total of m {\displaystyle \textstyle m} elements have been chosen from the p {\displaystyle \textstyle p} sets. One therefore chooses m {\displaystyle \textstyle m} elements out of n 1 + ⋯ + n p {\displaystyle \textstyle n_{1}+\dots +n_{p}} in the left-hand side, which is also exactly what is done in the right-hand side. === Chu–Vandermonde identity === The identity generalizes to non-integer arguments. In this case, it is known as the Chu–Vandermonde identity (see Askey 1975, pp. 59–60) and takes the form ( s + t n ) = ∑ k = 0 n ( s k ) ( t n − k ) {\displaystyle {s+t \choose n}=\sum _{k=0}^{n}{s \choose k}{t \choose n-k}} for general complex-valued s and t and any non-negative integer n. It can be proved along the lines of the algebraic proof above by multiplying the binomial series for ( 1 + x ) s {\displaystyle (1+x)^{s}} and ( 1 + x ) t {\displaystyle (1+x)^{t}} and comparing terms with the binomial series for ( 1 + x ) s + t {\displaystyle (1+x)^{s+t}} . This identity may be rewritten in terms of the falling Pochhammer symbols as ( s + t ) n = ∑ k = 0 n ( n k ) ( s ) k ( t ) n − k {\displaystyle (s+t)_{n}=\sum _{k=0}^{n}{n \choose k}(s)_{k}(t)_{n-k}} in which form it is clearly recognizable as an umbral variant of the binomial theorem (for more on umbral variants of the binomial theorem, see binomial type). The Chu–Vandermonde identity can also be seen to be a special case of Gauss's hypergeometric theorem, which states that 2 F 1 ( a , b ; c ; 1 ) = Γ ( c ) Γ ( c − a − b ) Γ ( c − a ) Γ ( c − b ) {\displaystyle \;_{2}F_{1}(a,b;c;1)={\frac {\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}}} where 2 F 1 {\displaystyle \;_{2}F_{1}} is the hypergeometric function and Γ ( n + 1 ) = n ! 
{\displaystyle \Gamma (n+1)=n!} is the gamma function. One regains the Chu–Vandermonde identity by taking a = −n and applying the identity ( n k ) = ( − 1 ) k ( k − n − 1 k ) {\displaystyle {n \choose k}=(-1)^{k}{k-n-1 \choose k}} liberally. The Rothe–Hagen identity is a further generalization of this identity. == The hypergeometric probability distribution == When both sides have been divided by the expression on the left, so that the sum is 1, then the terms of the sum may be interpreted as probabilities. The resulting probability distribution is the hypergeometric distribution. That is the probability distribution of the number of red marbles in r draws without replacement from an urn containing n red and m blue marbles. == See also == Pascal's identity Hockey-stick identity Rothe–Hagen identity == References ==
|
Wikipedia:Qaiser Mushtaq#0
|
Qaiser Mushtaq (born 28 February 1954), (D.Phil.(Oxon), ASA, KIA), is a Pakistani mathematician and academic who has made numerous contributions in the field of Group theory and Semigroup. He has been vice-chancellor of The Islamia University Bahawalpur from December 2014 to December 2018. Mushtaq is one of the leading mathematicians and educationists in Pakistan. Through his research and writings, he has exercised a profound influence on mathematics in Pakistan. Mushtaq is an honorary full professor at the Mathematics Division of the Institute for Basic Research, Florida, US. His research contributions in the fields of group theory and Left Almost Semigroup (LA-semigroup) theory have won him recognition at both national and international levels. In Graham Higman's words, "he has laid the foundation of coset diagrams for the modular group", to study the actions of groups on various spaces and projective lines over Galois fields. This work has been cited in the Encyclopedia of Design Theory. == Biography == Qaiser Mushtaq was born in Sheikhupura, Pakistan to Pir Mushtaq Ali and Begum Saghira Akhter, and belongs to the Qureshi family of Gujranwala. He is a descendant of Shah Jamal Nuri. Mushtaq married Aileen Qaiser, a senior journalist educated from the National University of Singapore and Wolfson College, Oxford. They have two daughters, Shayyan Qaiser and Zara Qaiser. He received primary education from the Convent of Jesus and Mary, Sialkot, and secondary education from Government Pilot Secondary School, Sialkot. Mushtaq studied for a certain period at Murray College till his family moved to Rawalpindi, where he studied at Gordon College. He did his MSc and M.Phil. from Quaid-i-Azam University, Islamabad. He then joined Bahauddin Zakariya University, Multan, as a lecturer for a period of one year before returning to Quaid-i-Azam University in 1979. Later, in 1980, he received the Royal Scholarship to do his D.Phil. at Wolfson College, Oxford. 
He was a doctoral student of Graham Higman and was awarded a doctorate in 1983 for a thesis entitled Coset Diagrams for the Modular Group. In 1990 he was at the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy, as a visiting mathematician. He also worked as an associate professor at the Universiti Brunei Darussalam from 1993 to 1999, after which he returned to Pakistan. Mushtaq is a tenured professor at Quaid-i-Azam University, and a former syndicate member, Quaid-i-Azam University, Islamabad. He is also an honorary full professor at the Mathematics Division, Institute for Basic Research, Florida, US. He served as vice chancellor of The Islamia University of Bahawalpur from 19 December 2014 to December 2018. == Research in mathematics == He has parametrised actions of the modular group on projective lines over Galois fields. This method has proven so effective that it is widely used in Combinatorial Group Theory, Algebraic Number Theory, and Theory of Group Graphs. His graphical technique helped to solve George Abram Miller's problem (1901) on alternating groups as homomorphic images of the modular group. He has also invented a new algebraic structure known as Locally Associative LA-semigroup, and has done some fundamental research on LA-semigroups producing some significant results in this theory. Consequently, a number of useful mathematical results have emerged which otherwise were applicable under restricted conditions only. Mushtaq has collaborated in research with the late Graham Higman FRS (Oxford) and the late Gian-Carlo Rota (MIT). He has been an invited speaker at Oxford University, MSRI Berkeley, Harvard University, Massachusetts Institute of Technology and Southampton University; and an invited speaker at several international conferences. He has supervised, as a sole supervisor, the highest number of M.Phil. and PhD students in Pure Mathematics in Pakistan (see the Mathematics Genealogy Project).
As a result, he has established a research group in Pakistan, the largest of its kind, which is producing high level original research in mathematics. === Research papers === Mushtaq has over a hundred research papers to his credit. He has written and edited several books, some of which are, Mathematics: The Islamic Legacy (which received a prize from the National Book Council of Pakistan), published by UNESCO and other international publishers; A Course in Group Theory, and Discrete Lectures in Mathematics. He has also written books on topics other than mathematics. They are, Focus on Pakistan, and Pakistan: An Introduction. He was also an invited writer for the monumental book, comprising six volumes, entitled the History of Civilizations of Central Asia, published by UNESCO (translated into several foreign languages). He is also known for his analytical writings and research articles on history, mathematics, science, education, and philosophy. He has been an active opposer of the use of impact factors and citation counts of the Higher Education Commission. He led the movement against its use which he believed has damaged the growth of mathematics in Pakistan. One of his essays was published by the American Mathematical Society. The International Mathematical Union has included it in its report on the use of impact factors. Mushtaq also founded the mathematical quarterly, PakMS Newsletter, in Pakistan. === Journals and bulletins === He is an editor of the Asian-European Journal of Mathematics (World Scientific). He is an associate editor of the Bulletin of the Southeast Asian Mathematical Society (Springer-Verlag). 
Additionally, he is an editor of the Quasigroups and Related Systems (Moldova Academy of Sciences) and the Bulletin of the Malaysian Mathematical Society, an advisory editor of the Journal of Interdisciplinary Mathematics, an associate editor of the journal Advances in Algebra and Analysis, a reviewer for the Mathematical Reviews of the American Mathematical Society (US) and the Zentralblatt für Mathematik (Springer-Verlag, Germany). == Honours and awards == He was an overseas scholar of the Royal Commission for the Exhibition of 1851 in 1980 and a senior Fulbright Scholar in 1990. Mushtaq was elected an associate member of the International Centre for Theoretical Physics, Trieste, Italy in 1991. Mushtaq has received several awards for his contributions to the mathematical sciences. Chowla Medal (1977) Salam Prize in Mathematics (1987) Mathematician of the Year Award (1987) by the National Book Council of Pakistan Gold Medal of Honour (1987) from United States Mathematician of the Year Award (1990) by the National Book Council of Pakistan M. Raziuddin Siddiqi Gold Medal (1991) from the Pakistan Academy of Sciences First Khwārizmī Award (1992) from the President of Iran Young Scientist of the South Award (1993) from Third World Academy of Sciences, Italy 5th National Education Award (1999) by the National Education Forum Gold Medal in Mathematics (2000) from the Pakistan Academy of Sciences.
He was its vice-president and secretary from 1980 to 1983. He also helped to rejuvenate the Oxford University Pakistan Society, of which he was the vice-president from 1981 to 1982. In Pakistan, he started the 'Mathematical Seminar Series' at Quaid-i-Azam University in 1983 and developed it into a nationally recognised institution. At Quaid-i-Azam University, he founded the 'Algebra Forum', which has held advanced-level seminars on algebra in particular and on various academic topics of general interest. He reformed and restructured the Mathematical Society of Brunei Darussalam as its president. == References ==
|
Wikipedia:Qiang Du#0
|
Qiang Du (Chinese: 杜强), the Fu Foundation Professor of Applied Mathematics at Columbia University, is a Chinese mathematician and computational scientist. Prior to moving to Columbia, he was the Verne M. Willaman Professor of Mathematics at Pennsylvania State University, affiliated with its departments of Mathematics and Materials Science. == Education == After completing his BS degree at the University of Science and Technology of China in 1983, Du earned his Ph.D. degree from Carnegie Mellon University in 1988. His thesis was written under the direction of Max D. Gunzburger. == Selected publications == His two most often cited papers are Du, Qiang; Faber, Vance; Gunzburger, Max (1999). "Centroidal Voronoi Tessellations: Applications and Algorithms". SIAM Review. 41 (4). Society for Industrial & Applied Mathematics (SIAM): 637–676. Bibcode:1999SIAMR..41..637D. CiteSeerX 10.1.1.407.146. doi:10.1137/s0036144599352836. ISSN 0036-1445. Du, Qiang; Gunzburger, Max D.; Peterson, Janet S. (1992). "Analysis and Approximation of the Ginzburg–Landau Model of Superconductivity". SIAM Review. 34 (1). Society for Industrial & Applied Mathematics (SIAM): 54–81. doi:10.1137/1034003. ISSN 0036-1445. == Students and post-doctorates == As of June 2018, 17 students had completed their Ph.D. degrees under Du's supervision. He had also supported 10 postdoctoral researchers. == Recognition == Du was elected a fellow of the Society for Industrial and Applied Mathematics in 2013 for "contributions to applied and computational mathematics with applications in material science, computational geometry, and biology." In 2017 he was elected as a Fellow of the American Association for the Advancement of Science. He was elected as a Fellow of the American Mathematical Society in the 2020 Class, for "contributions to applied and computational mathematics with applications in materials science, computational geometry, and biology". 
== References == == External links == Qiang Du at the Mathematics Genealogy Project Qiang Du's home page at Columbia University Qiang Du's home page at Penn State
|
Wikipedia:Quadratic Lie algebra#0
|
A quadratic Lie algebra is a Lie algebra together with a compatible symmetric bilinear form. Compatibility means that it is invariant under the adjoint representation. Examples include semisimple Lie algebras, such as su(n) and sl(n,R). == Definition == A quadratic Lie algebra is a Lie algebra (g,[.,.]) together with a non-degenerate symmetric bilinear form ( . , . ) : g ⊗ g → R {\displaystyle (.,.)\colon {\mathfrak {g}}\otimes {\mathfrak {g}}\to \mathbb {R} } that is invariant under the adjoint action, i.e. ([X,Y],Z)+(Y,[X,Z])=0, where X,Y,Z are elements of the Lie algebra g. A localization/generalization is the concept of a Courant algebroid, where the vector space g is replaced by (sections of) a vector bundle. == Examples == As a first example, consider Rn with the zero bracket and standard inner product ( ( x 1 , … , x n ) , ( y 1 , … , y n ) ) := ∑ j x j y j {\displaystyle ((x_{1},\dots ,x_{n}),(y_{1},\dots ,y_{n})):=\sum _{j}x_{j}y_{j}} . Since the bracket is trivial, the invariance is trivially fulfilled. As a more elaborate example, consider so(3), i.e. R3 with basis X,Y,Z, standard inner product, and Lie bracket [ X , Y ] = Z , [ Y , Z ] = X , [ Z , X ] = Y {\displaystyle [X,Y]=Z,\quad [Y,Z]=X,\quad [Z,X]=Y} . Straightforward computation shows that the inner product is indeed preserved. A generalization is the following. === Semisimple Lie algebras === A large class of examples comes from semisimple Lie algebras, i.e. Lie algebras whose adjoint representation is faithful. Examples are sl(n,R) and su(n), as well as direct sums of them. Thus let g be a semisimple Lie algebra with adjoint representation ad, i.e. a d : g → E n d ( g ) : X ↦ ( a d X : Y ↦ [ X , Y ] ) {\displaystyle \mathrm {ad} \colon {\mathfrak {g}}\to \mathrm {End} ({\mathfrak {g}}):X\mapsto (\mathrm {ad} _{X}\colon Y\mapsto [X,Y])} . 
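The invariance in the so(3) example above can be spot-checked numerically: under the basis identification given, the bracket is the vector cross product and the form is the Euclidean dot product. A minimal sketch using NumPy (the random sampling is purely illustrative):

```python
import numpy as np

# so(3) with [X, Y] = X x Y (cross product) and the standard inner product:
# verify the invariance condition ([X, Y], Z) + (Y, [X, Z]) = 0.
rng = np.random.default_rng(0)
for _ in range(1000):
    X, Y, Z = rng.standard_normal((3, 3))
    lhs = np.dot(np.cross(X, Y), Z) + np.dot(Y, np.cross(X, Z))
    assert abs(lhs) < 1e-10
print("invariance holds on 1000 random triples")
```

That the sum vanishes also follows directly from the antisymmetry of the scalar triple product: (X × Y) · Z = det[X, Y, Z] = −det[X, Z, Y] = −(X × Z) · Y.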
Now define the Killing form k : g ⊗ g → R : X ⊗ Y ↦ − t r ( a d X ∘ a d Y ) {\displaystyle k\colon {\mathfrak {g}}\otimes {\mathfrak {g}}\to \mathbb {R} :X\otimes Y\mapsto -\mathrm {tr} (\mathrm {ad} _{X}\circ \mathrm {ad} _{Y})} . By the Cartan criterion, the Killing form is non-degenerate if and only if the Lie algebra is semisimple. If g is in addition a simple Lie algebra, then the Killing form is, up to rescaling, the only invariant symmetric bilinear form. == References == This article incorporates material from Quadratic Lie algebra on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
Wikipedia:Quadratic algebra#0
|
In mathematics, a Clifford algebra is an algebra generated by a vector space with a quadratic form, and is a unital associative algebra with the additional structure of a distinguished subspace. As K-algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems. The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English mathematician William Kingdon Clifford (1845–1879). The most familiar Clifford algebras, the orthogonal Clifford algebras, are also referred to as (pseudo-)Riemannian Clifford algebras, as distinct from symplectic Clifford algebras. == Introduction and basic properties == A Clifford algebra is a unital associative algebra that contains and is generated by a vector space V over a field K, where V is equipped with a quadratic form Q : V → K. The Clifford algebra Cl(V, Q) is the "freest" unital associative algebra generated by V subject to the condition v 2 = Q ( v ) 1 for all v ∈ V , {\displaystyle v^{2}=Q(v)1\ {\text{ for all }}v\in V,} where the product on the left is that of the algebra, and the 1 on the right is the algebra's multiplicative identity (not to be confused with the multiplicative identity of K). The idea of being the "freest" or "most general" algebra subject to this identity can be formally expressed through the notion of a universal property, as done below. When V is a finite-dimensional real vector space and Q is nondegenerate, Cl(V, Q) may be identified by the label Clp,q(R), indicating that V has an orthogonal basis with p elements with ei2 = +1, q with ei2 = −1, and where R indicates that this is a Clifford algebra over the reals; i.e. coefficients of elements of the algebra are real numbers. 
This basis may be found by orthogonal diagonalization. The free algebra generated by V may be written as the tensor algebra ⨁n≥0 V ⊗ ⋯ ⊗ V, that is, the direct sum of the tensor product of n copies of V over all n. Therefore one obtains a Clifford algebra as the quotient of this tensor algebra by the two-sided ideal generated by elements of the form v ⊗ v − Q(v)1 for all elements v ∈ V. The product induced by the tensor product in the quotient algebra is written using juxtaposition (e.g. uv). Its associativity follows from the associativity of the tensor product. The Clifford algebra has a distinguished subspace V, being the image of the embedding map. Such a subspace cannot in general be uniquely determined given only a K-algebra that is isomorphic to the Clifford algebra. If 2 is invertible in the ground field K, then one can rewrite the fundamental identity above in the form u v + v u = 2 ⟨ u , v ⟩ 1 for all u , v ∈ V , {\displaystyle uv+vu=2\langle u,v\rangle 1\ {\text{ for all }}u,v\in V,} where ⟨ u , v ⟩ = 1 2 ( Q ( u + v ) − Q ( u ) − Q ( v ) ) {\displaystyle \langle u,v\rangle ={\frac {1}{2}}\left(Q(u+v)-Q(u)-Q(v)\right)} is the symmetric bilinear form associated with Q, via the polarization identity. Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case in this respect. In particular, if char(K) = 2 it is not true that a quadratic form necessarily or uniquely determines a symmetric bilinear form that satisfies Q(v) = ⟨v, v⟩. Many of the statements in this article include the condition that the characteristic is not 2, and are false if this condition is removed. === As a quantization of the exterior algebra === Clifford algebras are closely related to exterior algebras. Indeed, if Q = 0 then the Clifford algebra Cl(V, Q) is just the exterior algebra ⋀V. Whenever 2 is invertible in the ground field K, there exists a canonical linear isomorphism between ⋀V and Cl(V, Q). 
That is, they are naturally isomorphic as vector spaces, but with different multiplications (in the case of characteristic two, they are still isomorphic as vector spaces, just not naturally). Clifford multiplication together with the distinguished subspace is strictly richer than the exterior product since it makes use of the extra information provided by Q. The Clifford algebra is a filtered algebra; the associated graded algebra is the exterior algebra. More precisely, Clifford algebras may be thought of as quantizations (cf. quantum group) of the exterior algebra, in the same way that the Weyl algebra is a quantization of the symmetric algebra. Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras. == Universal property and construction == Let V be a vector space over a field K, and let Q : V → K be a quadratic form on V. In most cases of interest the field K is either the field of real numbers R, or the field of complex numbers C, or a finite field. A Clifford algebra Cl(V, Q) is a pair (B, i), where B is a unital associative algebra over K and i is a linear map i : V → B that satisfies i(v)2 = Q(v)1B for all v in V, defined by the following universal property: given any unital associative algebra A over K and any linear map j : V → A such that j ( v ) 2 = Q ( v ) 1 A for all v ∈ V {\displaystyle j(v)^{2}=Q(v)1_{A}{\text{ for all }}v\in V} (where 1A denotes the multiplicative identity of A), there is a unique algebra homomorphism f : B → A such that the following diagram commutes (i.e. such that f ∘ i = j): The quadratic form Q may be replaced by a (not necessarily symmetric) bilinear form ⟨⋅,⋅⟩ that has the property ⟨v, v⟩ = Q(v), v ∈ V, in which case an equivalent requirement on j is j ( v ) j ( v ) = ⟨ v , v ⟩ 1 A for all v ∈ V . 
{\displaystyle j(v)j(v)=\langle v,v\rangle 1_{A}\quad {\text{ for all }}v\in V.} When the characteristic of the field is not 2, this may be replaced by what is then an equivalent requirement, j ( v ) j ( w ) + j ( w ) j ( v ) = ( ⟨ v , w ⟩ + ⟨ w , v ⟩ ) 1 A for all v , w ∈ V , {\displaystyle j(v)j(w)+j(w)j(v)=(\langle v,w\rangle +\langle w,v\rangle )1_{A}\quad {\text{ for all }}v,w\in V,} where the bilinear form may additionally be restricted to being symmetric without loss of generality. A Clifford algebra as described above always exists and can be constructed as follows: start with the most general algebra that contains V, namely the tensor algebra T(V), and then enforce the fundamental identity by taking a suitable quotient. In our case we want to take the two-sided ideal IQ in T(V) generated by all elements of the form v ⊗ v − Q ( v ) 1 {\displaystyle v\otimes v-Q(v)1} for all v ∈ V {\displaystyle v\in V} and define Cl(V, Q) as the quotient algebra Cl ( V , Q ) = T ( V ) / I Q . {\displaystyle \operatorname {Cl} (V,Q)=T(V)/I_{Q}.} The ring product inherited by this quotient is sometimes referred to as the Clifford product to distinguish it from the exterior product and the scalar product. It is then straightforward to show that Cl(V, Q) contains V and satisfies the above universal property, so that Cl is unique up to a unique isomorphism; thus one speaks of "the" Clifford algebra Cl(V, Q). It also follows from this construction that i is injective. One usually drops the i and considers V as a linear subspace of Cl(V, Q). The universal characterization of the Clifford algebra shows that the construction of Cl(V, Q) is functorial in nature. Namely, Cl can be considered as a functor from the category of vector spaces with quadratic forms (whose morphisms are linear maps that preserve the quadratic form) to the category of associative algebras. 
The universal property guarantees that linear maps between vector spaces (that preserve the quadratic form) extend uniquely to algebra homomorphisms between the associated Clifford algebras. == Basis and dimension == Since V comes equipped with a quadratic form Q, in characteristic not equal to 2 there exist bases for V that are orthogonal. An orthogonal basis is one such that for a symmetric bilinear form ⟨ e i , e j ⟩ = 0 {\displaystyle \langle e_{i},e_{j}\rangle =0} for i ≠ j {\displaystyle i\neq j} , and ⟨ e i , e i ⟩ = Q ( e i ) . {\displaystyle \langle e_{i},e_{i}\rangle =Q(e_{i}).} The fundamental Clifford identity implies that for an orthogonal basis e i e j = − e j e i {\displaystyle e_{i}e_{j}=-e_{j}e_{i}} for i ≠ j {\displaystyle i\neq j} , and e i 2 = Q ( e i ) . {\displaystyle e_{i}^{2}=Q(e_{i}).} This makes manipulation of orthogonal basis vectors quite simple. Given a product e i 1 e i 2 ⋯ e i k {\displaystyle e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}} of distinct orthogonal basis vectors of V, one can put them into a standard order while including an overall sign determined by the number of pairwise swaps needed to do so (i.e. the signature of the ordering permutation). If the dimension of V over K is n and {e1, ..., en} is an orthogonal basis of (V, Q), then Cl(V, Q) is free over K with a basis { e i 1 e i 2 ⋯ e i k ∣ 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n and 0 ≤ k ≤ n } . {\displaystyle \{e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}\mid 1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n{\text{ and }}0\leq k\leq n\}.} The empty product (k = 0) is defined as being the multiplicative identity element. For each value of k there are n choose k basis elements, so the total dimension of the Clifford algebra is dim Cl ( V , Q ) = ∑ k = 0 n ( n k ) = 2 n . 
{\displaystyle \dim \operatorname {Cl} (V,Q)=\sum _{k=0}^{n}{\binom {n}{k}}=2^{n}.} == Examples: real and complex Clifford algebras == The most important Clifford algebras are those over real and complex vector spaces equipped with nondegenerate quadratic forms. Each of the algebras Clp,q(R) and Cln(C) is isomorphic to A or A ⊕ A, where A is a full matrix ring with entries from R, C, or H. For a complete classification of these algebras see Classification of Clifford algebras. === Real numbers === Clifford algebras are also sometimes referred to as geometric algebras, most often over the real numbers. Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form: Q ( v ) = v 1 2 + ⋯ + v p 2 − v p + 1 2 − ⋯ − v p + q 2 , {\displaystyle Q(v)=v_{1}^{2}+\dots +v_{p}^{2}-v_{p+1}^{2}-\dots -v_{p+q}^{2},} where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted Rp,q. The Clifford algebra on Rp,q is denoted Clp,q(R). The symbol Cln(R) means either Cln,0(R) or Cl0,n(R), depending on whether the author prefers positive-definite or negative-definite spaces. A standard basis {e1, ..., en} for Rp,q consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. Of such a basis, the algebra Clp,q(R) will therefore have p vectors that square to +1 and q vectors that square to −1. A few low-dimensional cases are: Cl0,0(R) is naturally isomorphic to R since there are no nonzero vectors. Cl0,1(R) is a two-dimensional algebra generated by e1 that squares to −1, and is algebra-isomorphic to C, the field of complex numbers. Cl1,0(R) is a two-dimensional algebra generated by e1 that squares to 1, and is algebra-isomorphic to the split-complex numbers. Cl0,2(R) is a four-dimensional algebra spanned by {1, e1, e2, e1e2}. 
The latter three elements all square to −1 and anticommute, and so the algebra is isomorphic to the quaternions H. Cl2,0(R) ≅ Cl1,1(R) is isomorphic to the algebra of split-quaternions. Cl0,3(R) is an 8-dimensional algebra isomorphic to the direct sum H ⊕ H, the split-biquaternions. Cl3,0(R) ≅ Cl1,2(R), also called the Pauli algebra, is isomorphic to the algebra of biquaternions. === Complex numbers === One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space of dimension n is equivalent to the standard diagonal form Q ( z ) = z 1 2 + z 2 2 + ⋯ + z n 2 . {\displaystyle Q(z)=z_{1}^{2}+z_{2}^{2}+\dots +z_{n}^{2}.} Thus, for each dimension n, up to isomorphism there is only one Clifford algebra of a complex vector space with a nondegenerate quadratic form. We will denote the Clifford algebra on Cn with the standard quadratic form by Cln(C). For the first few cases one finds that Cl0(C) ≅ C, the complex numbers Cl1(C) ≅ C ⊕ C, the bicomplex numbers Cl2(C) ≅ M2(C), the biquaternions where Mn(C) denotes the algebra of n × n matrices over C. == Examples: constructing quaternions and dual quaternions == === Quaternions === In this section, Hamilton's quaternions are constructed as the even subalgebra of the Clifford algebra Cl3,0(R). Let the vector space V be real three-dimensional space R3, and the quadratic form be the usual quadratic form. Then, for v, w in R3 we have the bilinear form (or scalar product) v ⋅ w = v 1 w 1 + v 2 w 2 + v 3 w 3 . {\displaystyle v\cdot w=v_{1}w_{1}+v_{2}w_{2}+v_{3}w_{3}.} Now introduce the Clifford product of vectors v and w given by v w + w v = 2 ( v ⋅ w ) . 
{\displaystyle vw+wv=2(v\cdot w).} Denote a set of orthogonal unit vectors of R3 as {e1, e2, e3}, then the Clifford product yields the relations e 2 e 3 = − e 3 e 2 , e 1 e 3 = − e 3 e 1 , e 1 e 2 = − e 2 e 1 , {\displaystyle e_{2}e_{3}=-e_{3}e_{2},\,\,\,e_{1}e_{3}=-e_{3}e_{1},\,\,\,e_{1}e_{2}=-e_{2}e_{1},} and e 1 2 = e 2 2 = e 3 2 = 1. {\displaystyle e_{1}^{2}=e_{2}^{2}=e_{3}^{2}=1.} The general element of the Clifford algebra Cl3,0(R) is given by A = a 0 + a 1 e 1 + a 2 e 2 + a 3 e 3 + a 4 e 2 e 3 + a 5 e 1 e 3 + a 6 e 1 e 2 + a 7 e 1 e 2 e 3 . {\displaystyle A=a_{0}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{4}e_{2}e_{3}+a_{5}e_{1}e_{3}+a_{6}e_{1}e_{2}+a_{7}e_{1}e_{2}e_{3}.} The linear combination of the even degree elements of Cl3,0(R) defines the even subalgebra Cl[0]3,0(R) with the general element q = q 0 + q 1 e 2 e 3 + q 2 e 1 e 3 + q 3 e 1 e 2 . {\displaystyle q=q_{0}+q_{1}e_{2}e_{3}+q_{2}e_{1}e_{3}+q_{3}e_{1}e_{2}.} The basis elements can be identified with the quaternion basis elements i, j, k as i = e 2 e 3 , j = e 1 e 3 , k = e 1 e 2 , {\displaystyle i=e_{2}e_{3},j=e_{1}e_{3},k=e_{1}e_{2},} which shows that the even subalgebra Cl[0]3,0(R) is Hamilton's real quaternion algebra. To see this, compute i 2 = ( e 2 e 3 ) 2 = e 2 e 3 e 2 e 3 = − e 2 e 2 e 3 e 3 = − 1 , {\displaystyle i^{2}=(e_{2}e_{3})^{2}=e_{2}e_{3}e_{2}e_{3}=-e_{2}e_{2}e_{3}e_{3}=-1,} and i j = e 2 e 3 e 1 e 3 = − e 2 e 3 e 3 e 1 = − e 2 e 1 = e 1 e 2 = k . {\displaystyle ij=e_{2}e_{3}e_{1}e_{3}=-e_{2}e_{3}e_{3}e_{1}=-e_{2}e_{1}=e_{1}e_{2}=k.} Finally, i j k = e 2 e 3 e 1 e 3 e 1 e 2 = − 1. {\displaystyle ijk=e_{2}e_{3}e_{1}e_{3}e_{1}e_{2}=-1.} === Dual quaternions === In this section, dual quaternions are constructed as the even subalgebra of a Clifford algebra of real four-dimensional space with a degenerate quadratic form. Let the vector space V be real four-dimensional space R4, and let the quadratic form Q be a degenerate form derived from the Euclidean metric on R3. 
For v, w in R4 introduce the degenerate bilinear form d ( v , w ) = v 1 w 1 + v 2 w 2 + v 3 w 3 . {\displaystyle d(v,w)=v_{1}w_{1}+v_{2}w_{2}+v_{3}w_{3}.} This degenerate scalar product projects distance measurements in R4 onto the R3 hyperplane. The Clifford product of vectors v and w is given by v w + w v = − 2 d ( v , w ) . {\displaystyle vw+wv=-2\,d(v,w).} Note the negative sign is introduced to simplify the correspondence with quaternions. Denote a set of mutually orthogonal unit vectors of R4 as {e1, e2, e3, e4}, then the Clifford product yields the relations e m e n = − e n e m , m ≠ n , {\displaystyle e_{m}e_{n}=-e_{n}e_{m},\,\,\,m\neq n,} and e 1 2 = e 2 2 = e 3 2 = − 1 , e 4 2 = 0. {\displaystyle e_{1}^{2}=e_{2}^{2}=e_{3}^{2}=-1,\,\,e_{4}^{2}=0.} The general element of the Clifford algebra Cl(R4, d) has 16 components. The linear combination of the even degree elements defines the even subalgebra Cl[0](R4, d) with the general element H = h 0 + h 1 e 2 e 3 + h 2 e 3 e 1 + h 3 e 1 e 2 + h 4 e 4 e 1 + h 5 e 4 e 2 + h 6 e 4 e 3 + h 7 e 1 e 2 e 3 e 4 . {\displaystyle H=h_{0}+h_{1}e_{2}e_{3}+h_{2}e_{3}e_{1}+h_{3}e_{1}e_{2}+h_{4}e_{4}e_{1}+h_{5}e_{4}e_{2}+h_{6}e_{4}e_{3}+h_{7}e_{1}e_{2}e_{3}e_{4}.} The basis elements can be identified with the quaternion basis elements i, j, k and the dual unit ε as i = e 2 e 3 , j = e 3 e 1 , k = e 1 e 2 , ε = e 1 e 2 e 3 e 4 . {\displaystyle i=e_{2}e_{3},j=e_{3}e_{1},k=e_{1}e_{2},\,\,\varepsilon =e_{1}e_{2}e_{3}e_{4}.} This provides the correspondence of Cl[0]0,3,1(R) with dual quaternion algebra. To see this, compute ε 2 = ( e 1 e 2 e 3 e 4 ) 2 = e 1 e 2 e 3 e 4 e 1 e 2 e 3 e 4 = − e 1 e 2 e 3 ( e 4 e 4 ) e 1 e 2 e 3 = 0 , {\displaystyle \varepsilon ^{2}=(e_{1}e_{2}e_{3}e_{4})^{2}=e_{1}e_{2}e_{3}e_{4}e_{1}e_{2}e_{3}e_{4}=-e_{1}e_{2}e_{3}(e_{4}e_{4})e_{1}e_{2}e_{3}=0,} and ε i = ( e 1 e 2 e 3 e 4 ) e 2 e 3 = e 1 e 2 e 3 e 4 e 2 e 3 = e 2 e 3 ( e 1 e 2 e 3 e 4 ) = i ε . 
{\displaystyle \varepsilon i=(e_{1}e_{2}e_{3}e_{4})e_{2}e_{3}=e_{1}e_{2}e_{3}e_{4}e_{2}e_{3}=e_{2}e_{3}(e_{1}e_{2}e_{3}e_{4})=i\varepsilon .} The exchanges of e1 and e4 alternate signs an even number of times, and show the dual unit ε commutes with the quaternion basis elements i, j, k. == Examples: in small dimension == Let K be any field of characteristic not 2. === Dimension 1 === For dim V = 1, if Q has diagonalization diag(a), that is, there is a non-zero vector x such that Q(x) = a, then Cl(V, Q) is algebra-isomorphic to a K-algebra generated by an element x that satisfies x2 = a, the quadratic algebra K[X] / (X2 − a). In particular, if a = 0 (that is, Q is the zero quadratic form) then Cl(V, Q) is algebra-isomorphic to the dual numbers algebra over K. If a is a non-zero square in K, then Cl(V, Q) ≃ K ⊕ K. Otherwise, Cl(V, Q) is isomorphic to the quadratic field extension K(√a) of K. === Dimension 2 === For dim V = 2, if Q has diagonalization diag(a, b) with non-zero a and b (which always exists if Q is non-degenerate), then Cl(V, Q) is isomorphic to a K-algebra generated by elements x and y that satisfy x2 = a, y2 = b and xy = −yx. Thus Cl(V, Q) is isomorphic to the (generalized) quaternion algebra (a, b)K. We retrieve Hamilton's quaternions when a = b = −1, since H = (−1, −1)R. As a special case, if some x in V satisfies Q(x) = 1, then Cl(V, Q) ≃ M2(K). == Properties == === Relation to the exterior algebra === Given a vector space V, one can construct the exterior algebra ⋀V, whose definition is independent of any quadratic form on V. It turns out that if K does not have characteristic 2 then there is a natural isomorphism between ⋀V and Cl(V, Q) considered as vector spaces (and there exists an isomorphism in characteristic two, which may not be natural). This is an algebra isomorphism if and only if Q = 0. One can thus consider the Clifford algebra Cl(V, Q) as an enrichment (or more precisely, a quantization, cf. 
the Introduction) of the exterior algebra on V with a multiplication that depends on Q (one can still define the exterior product independently of Q). The easiest way to establish the isomorphism is to choose an orthogonal basis {e1, ..., en} for V and extend it to a basis for Cl(V, Q) as described above. The map Cl(V, Q) → ⋀V is determined by e i 1 e i 2 ⋯ e i k ↦ e i 1 ∧ e i 2 ∧ ⋯ ∧ e i k . {\displaystyle e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}\mapsto e_{i_{1}}\wedge e_{i_{2}}\wedge \cdots \wedge e_{i_{k}}.} Note that this works only if the basis {e1, ..., en} is orthogonal. One can show that this map is independent of the choice of orthogonal basis and so gives a natural isomorphism. If the characteristic of K is 0, one can also establish the isomorphism by antisymmetrizing. Define functions fk : V × ⋯ × V → Cl(V, Q) by f k ( v 1 , … , v k ) = 1 k ! ∑ σ ∈ S k sgn ( σ ) v σ ( 1 ) ⋯ v σ ( k ) {\displaystyle f_{k}(v_{1},\ldots ,v_{k})={\frac {1}{k!}}\sum _{\sigma \in \mathrm {S} _{k}}\operatorname {sgn}(\sigma )\,v_{\sigma (1)}\cdots v_{\sigma (k)}} where the sum is taken over the symmetric group on k elements, Sk. Since fk is alternating, it induces a unique linear map ⋀k V → Cl(V, Q). The direct sum of these maps gives a linear map between ⋀V and Cl(V, Q). This map can be shown to be a linear isomorphism, and it is natural. A more sophisticated way to view the relationship is to construct a filtration on Cl(V, Q). Recall that the tensor algebra T(V) has a natural filtration: F0 ⊂ F1 ⊂ F2 ⊂ ⋯, where Fk contains sums of tensors with order ≤ k. Projecting this down to the Clifford algebra gives a filtration on Cl(V, Q). The associated graded algebra Gr F Cl ( V , Q ) = ⨁ k F k / F k − 1 {\displaystyle \operatorname {Gr} _{F}\operatorname {Cl} (V,Q)=\bigoplus _{k}F^{k}/F^{k-1}} is naturally isomorphic to the exterior algebra ⋀V. 
Since the associated graded algebra of a filtered algebra is always isomorphic to the filtered algebra as filtered vector spaces (by choosing complements of Fk in Fk+1 for all k), this provides an isomorphism (although not a natural one) in any characteristic, even two. === Grading === In the following, assume that the characteristic is not 2. Clifford algebras are Z2-graded algebras (also known as superalgebras). Indeed, the linear map on V defined by v ↦ −v (reflection through the origin) preserves the quadratic form Q and so by the universal property of Clifford algebras extends to an algebra automorphism α : Cl ( V , Q ) → Cl ( V , Q ) . {\displaystyle \alpha :\operatorname {Cl} (V,Q)\to \operatorname {Cl} (V,Q).} Since α is an involution (i.e. it squares to the identity) one can decompose Cl(V, Q) into positive and negative eigenspaces of α Cl ( V , Q ) = Cl [ 0 ] ( V , Q ) ⊕ Cl [ 1 ] ( V , Q ) {\displaystyle \operatorname {Cl} (V,Q)=\operatorname {Cl} ^{[0]}(V,Q)\oplus \operatorname {Cl} ^{[1]}(V,Q)} where Cl [ i ] ( V , Q ) = { x ∈ Cl ( V , Q ) ∣ α ( x ) = ( − 1 ) i x } . {\displaystyle \operatorname {Cl} ^{[i]}(V,Q)=\left\{x\in \operatorname {Cl} (V,Q)\mid \alpha (x)=(-1)^{i}x\right\}.} Since α is an automorphism it follows that: Cl [ i ] ( V , Q ) Cl [ j ] ( V , Q ) = Cl [ i + j ] ( V , Q ) {\displaystyle \operatorname {Cl} ^{[i]}(V,Q)\operatorname {Cl} ^{[j]}(V,Q)=\operatorname {Cl} ^{[i+j]}(V,Q)} where the bracketed superscripts are read modulo 2. This gives Cl(V, Q) the structure of a Z2-graded algebra. The subspace Cl[0](V, Q) forms a subalgebra of Cl(V, Q), called the even subalgebra. The subspace Cl[1](V, Q) is called the odd part of Cl(V, Q) (it is not a subalgebra). This Z2-grading plays an important role in the analysis and application of Clifford algebras. The automorphism α is called the main involution or grade involution. Elements that are pure in this Z2-grading are simply said to be even or odd. Remark. 
The Clifford algebra is not a Z-graded algebra, but is Z-filtered, where Cl≤i(V, Q) is the subspace spanned by all products of at most i elements of V. Cl ⩽ i ( V , Q ) ⋅ Cl ⩽ j ( V , Q ) ⊂ Cl ⩽ i + j ( V , Q ) . {\displaystyle \operatorname {Cl} ^{\leqslant i}(V,Q)\cdot \operatorname {Cl} ^{\leqslant j}(V,Q)\subset \operatorname {Cl} ^{\leqslant i+j}(V,Q).} The degree of a Clifford number usually refers to the degree in the Z-grading. The even subalgebra Cl[0](V, Q) of a Clifford algebra is itself isomorphic to a Clifford algebra. If V is the orthogonal direct sum of a vector a of nonzero norm Q(a) and a subspace U, then Cl[0](V, Q) is isomorphic to Cl(U, −Q(a)Q|U), where Q|U is the form Q restricted to U. In particular over the reals this implies that: Cl p , q [ 0 ] ( R ) ≅ { Cl p , q − 1 ( R ) q > 0 Cl q , p − 1 ( R ) p > 0 {\displaystyle \operatorname {Cl} _{p,q}^{[0]}(\mathbf {R} )\cong {\begin{cases}\operatorname {Cl} _{p,q-1}(\mathbf {R} )&q>0\\\operatorname {Cl} _{q,p-1}(\mathbf {R} )&p>0\end{cases}}} In the negative-definite case this gives an inclusion Cl0,n − 1(R) ⊂ Cl0,n(R), which extends the sequence Likewise, in the complex case, one can show that the even subalgebra of Cln(C) is isomorphic to Cln−1(C). === Antiautomorphisms === In addition to the automorphism α, there are two antiautomorphisms that play an important role in the analysis of Clifford algebras. Recall that the tensor algebra T(V) comes with an antiautomorphism that reverses the order in all products of vectors: v 1 ⊗ v 2 ⊗ ⋯ ⊗ v k ↦ v k ⊗ ⋯ ⊗ v 2 ⊗ v 1 . {\displaystyle v_{1}\otimes v_{2}\otimes \cdots \otimes v_{k}\mapsto v_{k}\otimes \cdots \otimes v_{2}\otimes v_{1}.} Since the ideal IQ is invariant under this reversal, this operation descends to an antiautomorphism of Cl(V, Q) called the transpose or reversal operation, denoted by xt. The transpose is an antiautomorphism: (xy)t = yt xt. 
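The transpose (reversal) can be exercised concretely with a small basis-blade implementation of the Clifford product. The sketch below (all names are illustrative, not from any particular library) encodes a basis blade ei1⋯eik as the bitmask with bits i1, …, ik set, computes the reordering sign by counting pairwise swaps, and checks the antiautomorphism identity (xy)t = ytxt in Cl3,0(R) on random integer multivectors:

```python
import random

def blade_mul(a, b, sig):
    # Product of basis blades given as bitmasks; sig[i] = e_i^2 (+1 or -1).
    # Count the pairwise swaps needed to bring e_a e_b into canonical order.
    s, t = 0, a >> 1
    while t:
        s += bin(t & b).count("1")
        t >>= 1
    sign = -1 if s % 2 else 1
    for i, q in enumerate(sig):
        if (a >> i) & (b >> i) & 1:   # e_i occurs in both: e_i^2 = sig[i]
            sign *= q
    return a ^ b, sign

def mul(x, y, sig):
    # Multivectors as dicts {blade_bitmask: coefficient}.
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            m, s = blade_mul(a, b, sig)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def transpose(x):
    # Reversal acts on a grade-k blade as (-1)^(k(k-1)/2).
    def sgn(m):
        k = bin(m).count("1")
        return (-1) ** (k * (k - 1) // 2)
    return {m: sgn(m) * c for m, c in x.items() if c}

sig = (1, 1, 1)                                   # Cl_{3,0}(R)
random.seed(0)
rand = lambda: {m: random.randint(-3, 3) for m in range(8)}
for _ in range(100):
    x, y = rand(), rand()
    assert transpose(mul(x, y, sig)) == mul(transpose(y), transpose(x), sig)
print("(xy)^t = y^t x^t verified on 100 random pairs")
```

The same routine recovers the quaternion relations of the even subalgebra: with i = e2e3 (mask 0b110), j = e1e3 (0b101), k = e1e2 (0b011), one finds i2 = −1, ij = k and ijk = −1.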
The transpose operation makes no use of the Z2-grading, so we define a second antiautomorphism by composing α and the transpose. We call this operation Clifford conjugation, denoted x ¯ {\displaystyle {\bar {x}}} x ¯ = α ( x t ) = α ( x ) t . {\displaystyle {\bar {x}}=\alpha (x^{\mathrm {t} })=\alpha (x)^{\mathrm {t} }.} Of the two antiautomorphisms, the transpose is the more fundamental. Note that all of these operations are involutions. One can show that they act as ±1 on elements that are pure in the Z-grading. In fact, all three operations depend on only the degree modulo 4. That is, if x is pure with degree k then α ( x ) = ± x x t = ± x x ¯ = ± x {\displaystyle \alpha (x)=\pm x\qquad x^{\mathrm {t} }=\pm x\qquad {\bar {x}}=\pm x} where the signs depend only on k mod 4: for k ≡ 0, 1, 2, 3 (mod 4), the sign of α(x) is +, −, +, −, that of xt is +, +, −, −, and that of x̄ is +, −, −, +. Equivalently, α(x) = (−1)kx, xt = (−1)k(k−1)/2x, and x̄ = (−1)k(k+1)/2x. === Clifford scalar product === When the characteristic is not 2, the quadratic form Q on V can be extended to a quadratic form on all of Cl(V, Q) (which we also denote by Q). A basis-independent definition of one such extension is Q ( x ) = ⟨ x t x ⟩ 0 {\displaystyle Q(x)=\left\langle x^{\mathrm {t} }x\right\rangle _{0}} where ⟨a⟩0 denotes the scalar part of a (the degree-0 part in the Z-grading). One can show that Q ( v 1 v 2 ⋯ v k ) = Q ( v 1 ) Q ( v 2 ) ⋯ Q ( v k ) {\displaystyle Q(v_{1}v_{2}\cdots v_{k})=Q(v_{1})Q(v_{2})\cdots Q(v_{k})} where the vi are elements of V – this identity is not true for arbitrary elements of Cl(V, Q). The associated symmetric bilinear form on Cl(V, Q) is given by ⟨ x , y ⟩ = ⟨ x t y ⟩ 0 . {\displaystyle \langle x,y\rangle =\left\langle x^{\mathrm {t} }y\right\rangle _{0}.} One can check that this reduces to the original bilinear form when restricted to V. The bilinear form on all of Cl(V, Q) is nondegenerate if and only if it is nondegenerate on V. 
The operator of left (respectively right) Clifford multiplication by the transpose at of an element a is the adjoint of left (respectively right) Clifford multiplication by a with respect to this inner product. That is, ⟨ a x , y ⟩ = ⟨ x , a t y ⟩ , {\displaystyle \langle ax,y\rangle =\left\langle x,a^{\mathrm {t} }y\right\rangle ,} and ⟨ x a , y ⟩ = ⟨ x , y a t ⟩ . {\displaystyle \langle xa,y\rangle =\left\langle x,ya^{\mathrm {t} }\right\rangle .} == Structure of Clifford algebras == In this section we assume that the characteristic is not 2, that the vector space V is finite-dimensional, and that the associated symmetric bilinear form of Q is nondegenerate. A central simple algebra over K is a matrix algebra over a (finite-dimensional) division algebra with center K. For example, the central simple algebras over the reals are matrix algebras over either the reals or the quaternions. If V has even dimension then Cl(V, Q) is a central simple algebra over K. If V has even dimension then the even subalgebra Cl[0](V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K. If V has odd dimension then Cl(V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K. If V has odd dimension then the even subalgebra Cl[0](V, Q) is a central simple algebra over K. The structure of Clifford algebras can be worked out explicitly using the following result. Suppose that U has even dimension and a non-singular bilinear form with discriminant d, and suppose that V is another vector space with a quadratic form. The Clifford algebra of U + V is isomorphic to the tensor product of the Clifford algebras of U and (−1)dim(U)/2dV, which is the space V with its quadratic form multiplied by (−1)dim(U)/2d. 
Over the reals, this implies in particular that Cl p + 2 , q ( R ) = M 2 ( R ) ⊗ Cl q , p ( R ) {\displaystyle \operatorname {Cl} _{p+2,q}(\mathbf {R} )=\mathrm {M} _{2}(\mathbf {R} )\otimes \operatorname {Cl} _{q,p}(\mathbf {R} )} Cl p + 1 , q + 1 ( R ) = M 2 ( R ) ⊗ Cl p , q ( R ) {\displaystyle \operatorname {Cl} _{p+1,q+1}(\mathbf {R} )=\mathrm {M} _{2}(\mathbf {R} )\otimes \operatorname {Cl} _{p,q}(\mathbf {R} )} Cl p , q + 2 ( R ) = H ⊗ Cl q , p ( R ) . {\displaystyle \operatorname {Cl} _{p,q+2}(\mathbf {R} )=\mathbf {H} \otimes \operatorname {Cl} _{q,p}(\mathbf {R} ).} These formulas can be used to find the structure of all real Clifford algebras and all complex Clifford algebras; see the classification of Clifford algebras. Notably, the Morita equivalence class of a Clifford algebra (its representation theory: the equivalence class of the category of modules over it) depends on only the signature (p − q) mod 8. This is an algebraic form of Bott periodicity. == Lipschitz group == The class of Lipschitz groups (a.k.a. Clifford groups or Clifford–Lipschitz groups) was discovered by Rudolf Lipschitz. In this section we assume that V is finite-dimensional and the quadratic form Q is nondegenerate. An action on the elements of a Clifford algebra by its group of units may be defined in terms of a twisted conjugation: twisted conjugation by x maps y ↦ α(x) y x−1, where α is the main involution defined above. The Lipschitz group Γ is defined to be the set of invertible elements x that stabilize the set of vectors under this action, meaning that for all v in V we have: α ( x ) v x − 1 ∈ V . {\displaystyle \alpha (x)vx^{-1}\in V.} This formula also defines an action of the Lipschitz group on the vector space V that preserves the quadratic form Q, and so gives a homomorphism from the Lipschitz group to the orthogonal group. 
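As an aside, the mod-8 pattern noted above (Bott periodicity) can be tabulated concretely. The following sketch is not part of the article; the lookup table is the standard classification of real Clifford algebras, stated here without derivation. It reports Cl_{p,q}(R) as a matrix algebra over R, C, or H, determined by (p − q) mod 8 together with the total real dimension 2^(p+q):

```python
import math

# Standard classification table, keyed by (p - q) mod 8:
# (division algebra, its real dimension, number of direct summands).
TABLE = {0: ("R", 1, 1), 1: ("R", 1, 2), 2: ("R", 1, 1), 3: ("C", 2, 1),
         4: ("H", 4, 1), 5: ("H", 4, 2), 6: ("H", 4, 1), 7: ("C", 2, 1)}

def classify(p, q):
    """Cl_{p,q}(R) as (division algebra D, n, copies): a direct sum of
    `copies` copies of the matrix algebra M_n(D)."""
    algebra, d, copies = TABLE[(p - q) % 8]
    # Matrix size n is recovered from the total real dimension 2^(p+q).
    n = math.isqrt(2 ** (p + q) // (d * copies))
    return algebra, n, copies

assert classify(0, 1) == ("C", 1, 1)  # Cl_{0,1} = C
assert classify(0, 2) == ("H", 1, 1)  # Cl_{0,2} = H, the quaternions
assert classify(2, 0) == ("R", 2, 1)  # Cl_{2,0} = M_2(R)
assert classify(1, 3) == ("H", 2, 1)  # spacetime algebra Cl_{1,3} = M_2(H)
```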
The Lipschitz group contains all elements r of V for which Q(r) is invertible in K, and these act on V by the corresponding reflections that take v to v − (⟨r, v⟩ + ⟨v, r⟩)r/Q(r). (In characteristic 2 these are called orthogonal transvections rather than reflections.) If V is a finite-dimensional real vector space with a non-degenerate quadratic form then the Lipschitz group maps onto the orthogonal group of V with respect to the form (by the Cartan–Dieudonné theorem) and the kernel consists of the nonzero elements of the field K. This leads to exact sequences 1 → K × → Γ → O V ( K ) → 1 , {\displaystyle 1\rightarrow K^{\times }\rightarrow \Gamma \rightarrow \operatorname {O} _{V}(K)\rightarrow 1,} 1 → K × → Γ 0 → SO V ( K ) → 1. {\displaystyle 1\rightarrow K^{\times }\rightarrow \Gamma ^{0}\rightarrow \operatorname {SO} _{V}(K)\rightarrow 1.} Over other fields or with indefinite forms, the map is not in general onto, and the failure is captured by the spinor norm. === Spinor norm === In arbitrary characteristic, the spinor norm Q is defined on the Lipschitz group by Q ( x ) = x t x . {\displaystyle Q(x)=x^{\mathrm {t} }x.} It is a homomorphism from the Lipschitz group to the group K× of non-zero elements of K. It coincides with the quadratic form Q of V when V is identified with a subspace of the Clifford algebra. Several authors define the spinor norm slightly differently, so that it differs from the one here by a factor of −1, 2, or −2 on Γ1. The difference is not very important in characteristic other than 2. The nonzero elements of K have spinor norm in the group (K×)2 of squares of nonzero elements of the field K. So when V is finite-dimensional and non-singular we get an induced map from the orthogonal group of V to the group K×/(K×)2, also called the spinor norm. The spinor norm of the reflection about r⊥, for any vector r, has image Q(r) in K×/(K×)2, and this property uniquely defines it on the orthogonal group. 
This gives exact sequences: 1 → { ± 1 } → Pin V ( K ) → O V ( K ) → K × / ( K × ) 2 , 1 → { ± 1 } → Spin V ( K ) → SO V ( K ) → K × / ( K × ) 2 . {\displaystyle {\begin{aligned}1\to \{\pm 1\}\to \operatorname {Pin} _{V}(K)&\to \operatorname {O} _{V}(K)\to K^{\times }/\left(K^{\times }\right)^{2},\\1\to \{\pm 1\}\to \operatorname {Spin} _{V}(K)&\to \operatorname {SO} _{V}(K)\to K^{\times }/\left(K^{\times }\right)^{2}.\end{aligned}}} Note that in characteristic 2 the group {±1} has just one element. From the point of view of Galois cohomology of algebraic groups, the spinor norm is a connecting homomorphism on cohomology. Writing μ2 for the algebraic group of square roots of 1 (over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action), the short exact sequence 1 → μ 2 → Pin V → O V → 1 {\displaystyle 1\to \mu _{2}\rightarrow \operatorname {Pin} _{V}\rightarrow \operatorname {O} _{V}\rightarrow 1} yields a long exact sequence on cohomology, which begins 1 → H 0 ( μ 2 ; K ) → H 0 ( Pin V ; K ) → H 0 ( O V ; K ) → H 1 ( μ 2 ; K ) . {\displaystyle 1\to H^{0}(\mu _{2};K)\to H^{0}(\operatorname {Pin} _{V};K)\to H^{0}(\operatorname {O} _{V};K)\to H^{1}(\mu _{2};K).} The 0th Galois cohomology group of an algebraic group with coefficients in K is just the group of K-valued points: H0(G; K) = G(K), and H1(μ2; K) ≅ K×/(K×)2, which recovers the previous sequence 1 → { ± 1 } → Pin V ( K ) → O V ( K ) → K × / ( K × ) 2 , {\displaystyle 1\to \{\pm 1\}\to \operatorname {Pin} _{V}(K)\to \operatorname {O} _{V}(K)\to K^{\times }/\left(K^{\times }\right)^{2},} where the spinor norm is the connecting homomorphism H0(OV; K) → H1(μ2; K). == Spin and pin groups == In this section we assume that V is finite-dimensional and its bilinear form is non-singular. 
The pin group PinV(K) is the subgroup of the Lipschitz group Γ of elements of spinor norm 1, and similarly the spin group SpinV(K) is the subgroup of elements of Dickson invariant 0 in PinV(K). When the characteristic is not 2, these are the elements of determinant 1. The spin group usually has index 2 in the pin group. Recall from the previous section that there is a homomorphism from the Lipschitz group onto the orthogonal group. We define the special orthogonal group to be the image of Γ0. If K does not have characteristic 2 this is just the group of elements of the orthogonal group of determinant 1. If K does have characteristic 2, then all elements of the orthogonal group have determinant 1, and the special orthogonal group is the set of elements of Dickson invariant 0. There is a homomorphism from the pin group to the orthogonal group. The image consists of the elements of spinor norm 1 ∈ K×/(K×)2. The kernel consists of the elements +1 and −1, and has order 2 unless K has characteristic 2. Similarly there is a homomorphism from the Spin group to the special orthogonal group of V. In the common case when V is a positive or negative definite space over the reals, the spin group maps onto the special orthogonal group, and is simply connected when V has dimension at least 3. Further the kernel of this homomorphism consists of 1 and −1. So in this case the spin group, Spin(n), is a double cover of SO(n). Note, however, that the simple connectedness of the spin group is not true in general: if V is Rp,q for p and q both at least 2 then the spin group is not simply connected. In this case the algebraic group Spinp,q is simply connected as an algebraic group, even though its group of real valued points Spinp,q(R) is not simply connected. This is a rather subtle point, which completely confused the authors of at least one standard book about spin groups. 
== Spinors == Clifford algebras Clp,q(C), with p + q = 2n even, are matrix algebras that have a complex representation of dimension 2n. By restricting to the group Pinp,q(R) we get a complex representation of the Pin group of the same dimension, called the spin representation. If we restrict this to the spin group Spinp,q(R) then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2n−1. If p + q = 2n + 1 is odd then the Clifford algebra Clp,q(C) is a sum of two matrix algebras, each of which has a representation of dimension 2n, and these are also both representations of the pin group Pinp,q(R). On restriction to the spin group Spinp,q(R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2n. More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see the article on spinors. === Real spinors === To describe the real spin representations, one must know how the spin group sits inside its Clifford algebra. The pin group, Pinp,q is the set of invertible elements in Clp,q that can be written as a product of unit vectors: P i n p , q = { v 1 v 2 ⋯ v r ∣ ∀ i ‖ v i ‖ = ± 1 } . {\displaystyle \mathrm {Pin} _{p,q}=\left\{v_{1}v_{2}\cdots v_{r}\mid \forall i\,\|v_{i}\|=\pm 1\right\}.} Comparing with the above concrete realizations of the Clifford algebras, the pin group corresponds to the products of arbitrarily many reflections: it is a cover of the full orthogonal group O(p, q). The spin group consists of those elements of Pinp,q that are products of an even number of unit vectors. 
Thus by the Cartan–Dieudonné theorem Spin is a cover of the group of proper rotations SO(p, q). Let α : Cl → Cl be the automorphism that is given by the mapping v ↦ −v acting on pure vectors. Then in particular, Spinp,q is the subgroup of Pinp,q whose elements are fixed by α. Let Cl p , q [ 0 ] = { x ∈ Cl p , q ∣ α ( x ) = x } . {\displaystyle \operatorname {Cl} _{p,q}^{[0]}=\{x\in \operatorname {Cl} _{p,q}\mid \alpha (x)=x\}.} (These are precisely the elements of even degree in Clp,q.) Then the spin group lies within Cl[0]p,q. The irreducible representations of Clp,q restrict to give representations of the pin group. Conversely, since the pin group is generated by unit vectors, all of its irreducible representations are induced in this manner. Thus the two sets of irreducible representations coincide. For the same reasons, the irreducible representations of the spin group coincide with the irreducible representations of Cl[0]p,q. To classify the pin representations, one need only appeal to the classification of Clifford algebras. To find the spin representations (which are representations of the even subalgebra), one can first make use of either of the isomorphisms (see above) Cl p , q [ 0 ] ≈ Cl p , q − 1 , for q > 0 {\displaystyle \operatorname {Cl} _{p,q}^{[0]}\approx \operatorname {Cl} _{p,q-1},{\text{ for }}q>0} Cl p , q [ 0 ] ≈ Cl q , p − 1 , for p > 0 {\displaystyle \operatorname {Cl} _{p,q}^{[0]}\approx \operatorname {Cl} _{q,p-1},{\text{ for }}p>0} and realize a spin representation in signature (p, q) as a pin representation in either signature (p, q − 1) or (q, p − 1). == Applications == === Differential geometry === One of the principal applications of the exterior algebra is in differential geometry, where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric.
Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a number of important applications in Riemannian geometry. Perhaps more important is the link to a spin manifold, its associated spinor bundle and spinc manifolds. === Physics === Clifford algebras have numerous important applications in physics. Physicists usually consider a Clifford algebra to be an algebra that has a basis that is generated by the matrices γ0, ..., γ3, called Dirac matrices, which have the property that γ i γ j + γ j γ i = 2 η i j , {\displaystyle \gamma _{i}\gamma _{j}+\gamma _{j}\gamma _{i}=2\eta _{ij},} where η is the matrix of a quadratic form of signature (1, 3) (or (3, 1) corresponding to the two equivalent choices of metric signature). These are exactly the defining relations for the Clifford algebra Cl1,3(R), whose complexification is Cl1,3(R)C, which, by the classification of Clifford algebras, is isomorphic to the algebra of 4 × 4 complex matrices Cl4(C) ≈ M4(C). However, it is best to retain the notation Cl1,3(R)C, since any transformation that takes the bilinear form to the canonical form is not a Lorentz transformation of the underlying spacetime. The Clifford algebra of spacetime used in physics thus has more structure than Cl4(C). It has in addition a set of preferred transformations – Lorentz transformations. Whether complexification is necessary to begin with depends in part on conventions used and in part on how much one wants to incorporate straightforwardly, but complexification is most often necessary in quantum mechanics where the spin representation of the Lie algebra so(1, 3) sitting inside the Clifford algebra conventionally requires a complex Clifford algebra. For reference, the spin Lie algebra is given by σ μ ν = − i 4 [ γ μ , γ ν ] , [ σ μ ν , σ ρ τ ] = i ( η τ μ σ ρ ν + η ν τ σ μ ρ − η ρ μ σ τ ν − η ν ρ σ μ τ ) . 
{\displaystyle {\begin{aligned}\sigma ^{\mu \nu }&=-{\frac {i}{4}}\left[\gamma ^{\mu },\,\gamma ^{\nu }\right],\\\left[\sigma ^{\mu \nu },\,\sigma ^{\rho \tau }\right]&=i\left(\eta ^{\tau \mu }\sigma ^{\rho \nu }+\eta ^{\nu \tau }\sigma ^{\mu \rho }-\eta ^{\rho \mu }\sigma ^{\tau \nu }-\eta ^{\nu \rho }\sigma ^{\mu \tau }\right).\end{aligned}}} This is in the (3, 1) convention, hence fits in Cl3,1(R)C. The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron, and give an explicit isomorphism from the Clifford algebra to the algebra of complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form of Dirac field bilinears. The use of Clifford algebras to describe quantum theory has been advanced among others by Mario Schönberg, by David Hestenes in terms of geometric calculus, by David Bohm and Basil Hiley and co-workers in the form of a hierarchy of Clifford algebras, and by Elio Conte et al. === Computer vision === Clifford algebras have been applied in the problem of action recognition and classification in computer vision. Rodriguez et al. propose a Clifford embedding to generalize traditional MACH filters to video (3D spatiotemporal volume), and vector-valued data such as optical flow. Vector-valued data is analyzed using the Clifford Fourier Transform. Based on these vectors, action filters are synthesized in the Clifford Fourier domain, and recognition of actions is performed using Clifford correlation. The authors demonstrate the effectiveness of the Clifford embedding by recognizing actions typically performed in classic feature films and sports broadcast television. == Generalizations == While this article focuses on a Clifford algebra of a vector space over a field, the definition extends without change to a module over any unital, associative, commutative ring.
Clifford algebras may be generalized to a form of degree higher than quadratic over a vector space. == History == == See also == == Notes == == Citations == == References == == Further reading == == External links == "Clifford algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Planetmath entry on Clifford algebras Archived 2005-04-15 at the Wayback Machine A history of Clifford algebras (unverified) John Baez on Clifford algebras Clifford Algebra: A Visual Introduction Clifford Algebra Explorer: A Pedagogical Tool
|
Wikipedia:Quadratic eigenvalue problem#0
|
In mathematics, the quadratic eigenvalue problem (QEP) is to find scalar eigenvalues λ {\displaystyle \lambda } , left eigenvectors y {\displaystyle y} and right eigenvectors x {\displaystyle x} such that Q ( λ ) x = 0 and y ∗ Q ( λ ) = 0 , {\displaystyle Q(\lambda )x=0~{\text{ and }}~y^{\ast }Q(\lambda )=0,} where Q ( λ ) = λ 2 M + λ C + K {\displaystyle Q(\lambda )=\lambda ^{2}M+\lambda C+K} , with matrix coefficients M , C , K ∈ C n × n {\displaystyle M,\,C,K\in \mathbb {C} ^{n\times n}} and we require that M ≠ 0 {\displaystyle M\,\neq 0} (so that we have a nonzero leading coefficient). There are 2 n {\displaystyle 2n} eigenvalues, which may be finite or infinite, and possibly zero. This is a special case of a nonlinear eigenproblem. Q ( λ ) {\displaystyle Q(\lambda )} is also known as a quadratic polynomial matrix. == Spectral theory == A QEP is said to be regular if det ( Q ( λ ) ) ≢ 0 {\displaystyle {\text{det}}(Q(\lambda ))\not \equiv 0} identically. The coefficient of the λ 2 n {\displaystyle \lambda ^{2n}} term in det ( Q ( λ ) ) {\displaystyle {\text{det}}(Q(\lambda ))} is det ( M ) {\displaystyle {\text{det}}(M)} , implying that the QEP is regular if M {\displaystyle M} is nonsingular. Eigenvalues at infinity and eigenvalues at 0 may be exchanged by considering the reversed polynomial, λ 2 Q ( λ − 1 ) = λ 2 K + λ C + M {\displaystyle \lambda ^{2}Q(\lambda ^{-1})=\lambda ^{2}K+\lambda C+M} . As there are 2 n {\displaystyle 2n} eigenvectors in an n {\displaystyle n} -dimensional space, the eigenvectors cannot be linearly independent, let alone orthogonal. It is possible to have the same eigenvector attached to different eigenvalues.
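As a concrete numerical illustration (a minimal sketch with hypothetical data, not drawn from the article), take n = 2 with M = I, C = 0 and K = −diag(1, 4); then det Q(λ) = (λ² − 1)(λ² − 4), so the 2n = 4 eigenvalues are ±1 and ±2, with the standard basis vectors as right eigenvectors:

```python
import numpy as np

# A small quadratic eigenvalue problem Q(lam) = lam^2 M + lam C + K.
M = np.eye(2)
C = np.zeros((2, 2))
K = np.diag([-1.0, -4.0])

def Q(lam):
    """Evaluate the quadratic matrix polynomial at lam."""
    return lam**2 * M + lam * C + K

# The 2n = 4 quadratic eigenvalues are +-1 and +-2; e1 and e2 are
# right eigenvectors, each attached to two different eigenvalues.
e1, e2 = np.eye(2)
for lam, x in [(1.0, e1), (-1.0, e1), (2.0, e2), (-2.0, e2)]:
    assert np.allclose(Q(lam) @ x, 0)

# The problem is regular: det(M) != 0, so det(Q(lam)) is not identically zero.
assert abs(np.linalg.det(M)) > 0
```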
== Applications == === Systems of differential equations === Quadratic eigenvalue problems arise naturally in the solution of systems of second-order linear differential equations without forcing: M q ″ ( t ) + C q ′ ( t ) + K q ( t ) = 0 {\displaystyle Mq''(t)+Cq'(t)+Kq(t)=0} where q ( t ) ∈ R n {\displaystyle q(t)\in \mathbb {R} ^{n}} , and M , C , K ∈ R n × n {\displaystyle M,C,K\in \mathbb {R} ^{n\times n}} . If all quadratic eigenvalues of Q ( λ ) = λ 2 M + λ C + K {\displaystyle Q(\lambda )=\lambda ^{2}M+\lambda C+K} are distinct, then the solution can be written in terms of the quadratic eigenvalues and right quadratic eigenvectors as q ( t ) = ∑ j = 1 2 n α j x j e λ j t = X e Λ t α {\displaystyle q(t)=\sum _{j=1}^{2n}\alpha _{j}x_{j}e^{\lambda _{j}t}=Xe^{\Lambda t}\alpha } where Λ = Diag ( [ λ 1 , … , λ 2 n ] ) ∈ R 2 n × 2 n {\displaystyle \Lambda ={\text{Diag}}([\lambda _{1},\ldots ,\lambda _{2n}])\in \mathbb {R} ^{2n\times 2n}} is the diagonal matrix of quadratic eigenvalues, X = [ x 1 , … , x 2 n ] ∈ R n × 2 n {\displaystyle X=[x_{1},\ldots ,x_{2n}]\in \mathbb {R} ^{n\times 2n}} has the 2 n {\displaystyle 2n} right quadratic eigenvectors as its columns, and α = [ α 1 , ⋯ , α 2 n ] ⊤ ∈ R 2 n {\displaystyle \alpha =[\alpha _{1},\cdots ,\alpha _{2n}]^{\top }\in \mathbb {R} ^{2n}} is a parameter vector determined from the initial conditions on q {\displaystyle q} and q ′ {\displaystyle q'} . Stability theory for linear systems can now be applied, as the behavior of a solution depends explicitly on the (quadratic) eigenvalues. === Finite element methods === A QEP can arise as part of the dynamic analysis of structures discretized by the finite element method. In this case the quadratic Q ( λ ) {\displaystyle Q(\lambda )} has the form Q ( λ ) = λ 2 M + λ C + K {\displaystyle Q(\lambda )=\lambda ^{2}M+\lambda C+K} , where M {\displaystyle M} is the mass matrix, C {\displaystyle C} is the damping matrix, and K {\displaystyle K} is the stiffness matrix.
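The solution formula q(t) = X e^{Λt} α for the second-order system above can be checked numerically on a small hypothetical example (a sketch with eigenpairs chosen to be known in closed form): since Q(λj)xj = 0 for every eigenpair, the residual Mq'' + Cq' + Kq vanishes term by term:

```python
import numpy as np

# Undamped system M q'' + C q' + K q = 0 with known quadratic eigenpairs:
# Q(lam) = lam^2 I + diag(-1, -4) has eigenvalues +-1 and +-2.
M, C, K = np.eye(2), np.zeros((2, 2)), np.diag([-1.0, -4.0])

lams = np.array([1.0, -1.0, 2.0, -2.0])   # quadratic eigenvalues (diagonal of Lambda)
X = np.array([[1.0, 1.0, 0.0, 0.0],       # right quadratic eigenvectors as columns
              [0.0, 0.0, 1.0, 1.0]])
alpha = np.array([1.0, 2.0, -1.0, 0.5])   # fixed by the initial conditions

def q(t, deriv=0):
    """q(t) = X e^{Lambda t} alpha and its derivatives (Lambda is diagonal,
    so differentiating just multiplies each mode by lam_j per derivative)."""
    return X @ (lams**deriv * np.exp(lams * t) * alpha)

# The residual vanishes for every t because Q(lam_j) x_j = 0 for each mode.
t = 0.3
assert np.allclose(M @ q(t, 2) + C @ q(t, 1) + K @ q(t, 0), 0)
```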
Other applications include vibro-acoustics and fluid dynamics. == Methods of solution == Direct methods for solving the standard or generalized eigenvalue problems A x = λ x {\displaystyle Ax=\lambda x} and A x = λ B x {\displaystyle Ax=\lambda Bx} are based on transforming the problem to Schur or Generalized Schur form. However, there is no analogous form for quadratic matrix polynomials. One approach is to transform the quadratic matrix polynomial to a linear matrix pencil ( A − λ B {\displaystyle A-\lambda B} ), and solve a generalized eigenvalue problem. Once eigenvalues and eigenvectors of the linear problem have been determined, eigenvectors and eigenvalues of the quadratic can be determined. The most common linearization is the first companion linearization L 1 ( λ ) = [ 0 N − K − C ] − λ [ N 0 0 M ] , {\displaystyle L1(\lambda )={\begin{bmatrix}0&N\\-K&-C\end{bmatrix}}-\lambda {\begin{bmatrix}N&0\\0&M\end{bmatrix}},} with corresponding eigenvector z = [ x λ x ] . {\displaystyle z={\begin{bmatrix}x\\\lambda x\end{bmatrix}}.} For convenience, one often takes N {\displaystyle N} to be the n × n {\displaystyle n\times n} identity matrix. We solve L ( λ ) z = 0 {\displaystyle L(\lambda )z=0} for λ {\displaystyle \lambda } and z {\displaystyle z} , for example by computing the Generalized Schur form. We can then take the first n {\displaystyle n} components of z {\displaystyle z} as the eigenvector x {\displaystyle x} of the original quadratic Q ( λ ) {\displaystyle Q(\lambda )} . Another common linearization is given by L 2 ( λ ) = [ − K 0 0 N ] − λ [ C M N 0 ] . {\displaystyle L2(\lambda )={\begin{bmatrix}-K&0\\0&N\end{bmatrix}}-\lambda {\begin{bmatrix}C&M\\N&0\end{bmatrix}}.} In the case when either A {\displaystyle A} or B {\displaystyle B} is a Hamiltonian matrix and the other is a skew-Hamiltonian matrix, the following linearizations can be used. L 3 ( λ ) = [ K 0 C K ] − λ [ 0 K − M 0 ] . 
{\displaystyle L3(\lambda )={\begin{bmatrix}K&0\\C&K\end{bmatrix}}-\lambda {\begin{bmatrix}0&K\\-M&0\end{bmatrix}}.} L 4 ( λ ) = [ 0 − K M 0 ] − λ [ M C 0 M ] . {\displaystyle L4(\lambda )={\begin{bmatrix}0&-K\\M&0\end{bmatrix}}-\lambda {\begin{bmatrix}M&C\\0&M\end{bmatrix}}.} == References ==
|
Wikipedia:Quadratic equation#0
|
In mathematics, a quadratic equation (from Latin quadratus 'square') is an equation that can be rearranged in standard form as a x 2 + b x + c = 0 , {\displaystyle ax^{2}+bx+c=0\,,} where the variable x represents an unknown number, and a, b, and c represent known numbers, where a ≠ 0. (If a = 0 and b ≠ 0 then the equation is linear, not quadratic.) The numbers a, b, and c are the coefficients of the equation and may be distinguished by respectively calling them the quadratic coefficient, the linear coefficient, and the constant coefficient or free term. The values of x that satisfy the equation are called solutions of the equation, and roots or zeros of the quadratic function on its left-hand side. A quadratic equation has at most two solutions. If there is only one solution, one says that it is a double root. If all the coefficients are real numbers, there are either two real solutions, or a single real double root, or two complex solutions that are complex conjugates of each other. A quadratic equation always has two roots, if complex roots are included and a double root is counted as two. A quadratic equation can be factored into an equivalent equation a x 2 + b x + c = a ( x − r ) ( x − s ) = 0 {\displaystyle ax^{2}+bx+c=a(x-r)(x-s)=0} where r and s are the solutions for x. The quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} expresses the solutions in terms of a, b, and c. Completing the square is one of several ways of deriving the formula. Solutions to problems that can be expressed in terms of quadratic equations were known as early as 2000 BC. Because the quadratic equation involves only one unknown, it is called "univariate". The quadratic equation contains only powers of x that are non-negative integers, and therefore it is a polynomial equation. In particular, it is a second-degree polynomial equation, since the greatest power is two.
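As a short sketch (not part of the article), the quadratic formula translates directly into code; using complex arithmetic handles a negative discriminant uniformly:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 (requires a != 0),
    via the quadratic formula x = (-b +- sqrt(b^2 - 4ac)) / (2a)."""
    d = cmath.sqrt(b * b - 4 * a * c)  # complex sqrt handles negative discriminants
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r, s = quadratic_roots(1, -3, 2)               # x^2 - 3x + 2 = (x - 1)(x - 2)
assert {r, s} == {1, 2}
assert quadratic_roots(1, 0, 1) == (1j, -1j)   # x^2 + 1 = 0: complex conjugate roots
```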
== Solving the quadratic equation == A quadratic equation whose coefficients are real numbers can have either zero, one, or two distinct real-valued solutions, also called roots. When there is only one distinct root, it can be interpreted as two roots with the same value, called a double root. When there are no real roots, the coefficients can be considered as complex numbers with zero imaginary part, and the quadratic equation still has two complex-valued roots, complex conjugates of each other with a non-zero imaginary part. A quadratic equation whose coefficients are arbitrary complex numbers always has two complex-valued roots, which may or may not be distinct. The solutions of a quadratic equation can be found by several alternative methods. === Factoring by inspection === It may be possible to express a quadratic equation ax2 + bx + c = 0 as a product (px + q)(rx + s) = 0. In some cases, it is possible, by simple inspection, to determine values of p, q, r, and s that make the two forms equivalent to one another. If the quadratic equation is written in the second form, then the "Zero Factor Property" states that the quadratic equation is satisfied if px + q = 0 or rx + s = 0. Solving these two linear equations provides the roots of the quadratic. For most students, factoring by inspection is the first method of solving quadratic equations to which they are exposed.: 202–207 If one is given a quadratic equation in the form x2 + bx + c = 0, the sought factorization has the form (x + q)(x + s), and one has to find two numbers q and s that add up to b and whose product is c (this is sometimes called "Vieta's rule" and is related to Vieta's formulas). As an example, x2 + 5x + 6 factors as (x + 3)(x + 2). The more general case where a does not equal 1 can require a considerable effort in trial-and-error guess-and-check, assuming that it can be factored at all by inspection.
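The search for two numbers q and s with q + s = b and qs = c can be sketched as a scan over the integer divisors of c (a hypothetical helper, not from the article, assuming integer coefficients with c ≠ 0):

```python
def factor_by_inspection(b, c):
    """Find integers q, s with q + s = b and q * s = c, so that
    x^2 + b*x + c = (x + q)(x + s); return None if no such pair exists."""
    for q in range(-abs(c), abs(c) + 1):
        if q != 0 and c % q == 0:
            s = c // q
            if q + s == b:
                return q, s
    return None

assert factor_by_inspection(5, 6) == (2, 3)   # x^2 + 5x + 6 = (x + 2)(x + 3)
assert factor_by_inspection(1, 1) is None     # x^2 + x + 1 has no integer factors
```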
Except for special cases such as where b = 0 or c = 0, factoring by inspection only works for quadratic equations that have rational roots. This means that the great majority of quadratic equations that arise in practical applications cannot be solved by factoring by inspection.: 207 === Completing the square === The process of completing the square makes use of the algebraic identity x 2 + 2 h x + h 2 = ( x + h ) 2 , {\displaystyle x^{2}+2hx+h^{2}=(x+h)^{2},} which represents a well-defined algorithm that can be used to solve any quadratic equation.: 207 Starting with a quadratic equation in standard form, ax2 + bx + c = 0 Divide each side by a, the coefficient of the squared term. Subtract the constant term c/a from both sides. Add the square of one-half of b/a, the coefficient of x, to both sides. This "completes the square", converting the left side into a perfect square. Write the left side as a square and simplify the right side if necessary. Produce two linear equations by equating the square root of the left side with the positive and negative square roots of the right side. Solve each of the two linear equations. We illustrate use of this algorithm by solving 2x2 + 4x − 4 = 0 2 x 2 + 4 x − 4 = 0 {\displaystyle 2x^{2}+4x-4=0} x 2 + 2 x − 2 = 0 {\displaystyle \ x^{2}+2x-2=0} x 2 + 2 x = 2 {\displaystyle \ x^{2}+2x=2} x 2 + 2 x + 1 = 2 + 1 {\displaystyle \ x^{2}+2x+1=2+1} ( x + 1 ) 2 = 3 {\displaystyle \left(x+1\right)^{2}=3} x + 1 = ± 3 {\displaystyle \ x+1=\pm {\sqrt {3}}} x = − 1 ± 3 {\displaystyle \ x=-1\pm {\sqrt {3}}} The plus–minus symbol "±" indicates that both x = − 1 + 3 {\textstyle x=-1+{\sqrt {3}}} and x = − 1 − 3 {\textstyle x=-1-{\sqrt {3}}} are solutions of the quadratic equation. === Quadratic formula and its derivation === Completing the square can be used to derive a general formula for solving quadratic equations, called the quadratic formula. The mathematical proof will now be briefly summarized. 
It can easily be seen, by polynomial expansion, that the following equation is equivalent to the quadratic equation: ( x + b 2 a ) 2 = b 2 − 4 a c 4 a 2 . {\displaystyle \left(x+{\frac {b}{2a}}\right)^{2}={\frac {b^{2}-4ac}{4a^{2}}}.} Taking the square root of both sides, and isolating x, gives: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} Some sources, particularly older ones, use alternative parameterizations of the quadratic equation such as ax2 + 2bx + c = 0 or ax2 − 2bx + c = 0 , where b has a magnitude one half of the more common one, possibly with opposite sign. These result in slightly different forms for the solution, but are otherwise equivalent. A number of alternative derivations can be found in the literature. These proofs are simpler than the standard completing the square method, represent interesting applications of other frequently used techniques in algebra, or offer insight into other areas of mathematics. A lesser known quadratic formula, as used in Muller's method, provides the same roots via the equation x = 2 c − b ± b 2 − 4 a c . {\displaystyle x={\frac {2c}{-b\pm {\sqrt {b^{2}-4ac}}}}.} This can be deduced from the standard quadratic formula by Vieta's formulas, which assert that the product of the roots is c/a. It also follows from dividing the quadratic equation by x 2 {\displaystyle x^{2}} giving c x − 2 + b x − 1 + a = 0 , {\displaystyle cx^{-2}+bx^{-1}+a=0,} solving this for x − 1 , {\displaystyle x^{-1},} and then inverting. One property of this form is that it yields one valid root when a = 0, while the other root contains division by zero, because when a = 0, the quadratic equation becomes a linear equation, which has one root. By contrast, in this case, the more common formula has a division by zero for one root and an indeterminate form 0/0 for the other root. 
On the other hand, when c = 0, the more common formula yields two correct roots whereas this form yields the zero root and an indeterminate form 0/0. When neither a nor c is zero, the equality between the standard quadratic formula and Muller's method, 2 c − b − b 2 − 4 a c = − b + b 2 − 4 a c 2 a , {\displaystyle {\frac {2c}{-b-{\sqrt {b^{2}-4ac}}}}={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\,,} can be verified by cross multiplication, and similarly for the other choice of signs. === Reduced quadratic equation === It is sometimes convenient to reduce a quadratic equation so that its leading coefficient is one. This is done by dividing both sides by a, which is always possible since a is non-zero. This produces the reduced quadratic equation: x 2 + p x + q = 0 , {\displaystyle x^{2}+px+q=0,} where p = b/a and q = c/a. This monic polynomial equation has the same solutions as the original. The quadratic formula for the solutions of the reduced quadratic equation, written in terms of its coefficients, is x = − p 2 ± ( p 2 ) 2 − q . {\displaystyle x=-{\frac {p}{2}}\pm {\sqrt {\left({\frac {p}{2}}\right)^{2}-q}}\,.} === Discriminant === In the quadratic formula, the expression underneath the square root sign is called the discriminant of the quadratic equation, and is often represented using an upper case D or an upper case Greek delta: Δ = b 2 − 4 a c . {\displaystyle \Delta =b^{2}-4ac.} A quadratic equation with real coefficients can have either one or two distinct real roots, or two distinct complex roots. In this case the discriminant determines the number and nature of the roots. There are three cases: If the discriminant is positive, then there are two distinct roots − b + Δ 2 a and − b − Δ 2 a , {\displaystyle {\frac {-b+{\sqrt {\Delta }}}{2a}}\quad {\text{and}}\quad {\frac {-b-{\sqrt {\Delta }}}{2a}},} both of which are real numbers. 
For quadratic equations with rational coefficients, if the discriminant is a square number, then the roots are rational—in other cases they may be quadratic irrationals. If the discriminant is zero, then there is exactly one real root − b 2 a , {\displaystyle -{\frac {b}{2a}},} sometimes called a repeated or double root or two equal roots. If the discriminant is negative, then there are no real roots. Rather, there are two distinct (non-real) complex roots − b 2 a + i − Δ 2 a and − b 2 a − i − Δ 2 a , {\displaystyle -{\frac {b}{2a}}+i{\frac {\sqrt {-\Delta }}{2a}}\quad {\text{and}}\quad -{\frac {b}{2a}}-i{\frac {\sqrt {-\Delta }}{2a}},} which are complex conjugates of each other. In these expressions i is the imaginary unit. Thus the roots are distinct if and only if the discriminant is non-zero, and the roots are real if and only if the discriminant is non-negative. === Geometric interpretation === The function f(x) = ax2 + bx + c is a quadratic function. The graph of any quadratic function has the same general shape, which is called a parabola. The location and size of the parabola, and how it opens, depend on the values of a, b, and c. If a > 0, the parabola has a minimum point and opens upward. If a < 0, the parabola has a maximum point and opens downward. The extreme point of the parabola, whether minimum or maximum, corresponds to its vertex. The x-coordinate of the vertex will be located at x = − b 2 a {\displaystyle \scriptstyle x={\tfrac {-b}{2a}}} , and the y-coordinate of the vertex may be found by substituting this x-value into the function. The y-intercept is located at the point (0, c). The solutions of the quadratic equation ax2 + bx + c = 0 correspond to the roots of the function f(x) = ax2 + bx + c, since they are the values of x for which f(x) = 0. If a, b, and c are real numbers and the domain of f is the set of real numbers, then the roots of f are exactly the x-coordinates of the points where the graph touches the x-axis. 
If the discriminant is positive, the graph crosses the x-axis at two points; if zero, the graph touches the x-axis at one point; and if negative, the graph does not touch the x-axis. === Quadratic factorization === The term x − r {\displaystyle x-r} is a factor of the polynomial a x 2 + b x + c {\displaystyle ax^{2}+bx+c} if and only if r is a root of the quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} It follows from the quadratic formula that a x 2 + b x + c = a ( x − − b + b 2 − 4 a c 2 a ) ( x − − b − b 2 − 4 a c 2 a ) . {\displaystyle ax^{2}+bx+c=a\left(x-{\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\right)\left(x-{\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}\right).} In the special case b2 = 4ac where the quadratic has only one distinct root (i.e. the discriminant is zero), the quadratic polynomial can be factored as a x 2 + b x + c = a ( x + b 2 a ) 2 . {\displaystyle ax^{2}+bx+c=a\left(x+{\frac {b}{2a}}\right)^{2}.} === Graphical solution === The solutions of the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} may be deduced from the graph of the quadratic function f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} which is a parabola. If the parabola intersects the x-axis in two points, there are two real roots, which are the x-coordinates of these two points (also called x-intercepts). If the parabola is tangent to the x-axis, there is a double root, which is the x-coordinate of the contact point between the graph and the x-axis. If the parabola does not intersect the x-axis, there are two complex conjugate roots. Although these roots cannot be visualized on the graph, their real and imaginary parts can be. Let h and k be respectively the x-coordinate and the y-coordinate of the vertex of the parabola (that is, the point with maximal or minimal y-coordinate). The quadratic function may be rewritten y = a ( x − h ) 2 + k . 
{\displaystyle y=a(x-h)^{2}+k.} Let d be the distance between the point of y-coordinate 2k on the axis of the parabola, and a point on the parabola with the same y-coordinate (see the figure; there are two such points, which give the same distance, because of the symmetry of the parabola). Then the real part of the roots is h, and their imaginary parts are ±d. That is, the roots are h + i d and h − i d , {\displaystyle h+id\quad {\text{and}}\quad h-id,} or in the case of the example of the figure 5 + 3 i and 5 − 3 i . {\displaystyle 5+3i\quad {\text{and}}\quad 5-3i.} === Avoiding loss of significance === Although the quadratic formula provides an exact solution, the result is not exact if real numbers are approximated during the computation, as is usual in numerical analysis, where real numbers are approximated by floating point numbers (called "reals" in many programming languages). In this context, the quadratic formula is not completely stable. This occurs when the roots have different orders of magnitude, or, equivalently, when b2 and b2 − 4ac are close in magnitude. In this case, the subtraction of two nearly equal numbers will cause loss of significance or catastrophic cancellation in the smaller root. To avoid this, the root that is smaller in magnitude, r, can be computed as ( c / a ) / R {\displaystyle (c/a)/R} where R is the root that is bigger in magnitude. This is equivalent to using the formula x = − 2 c b ± b 2 − 4 a c {\displaystyle x={\frac {-2c}{b\pm {\sqrt {b^{2}-4ac}}}}} using the plus sign if b > 0 {\displaystyle b>0} and the minus sign if b < 0. {\displaystyle b<0.} A second form of cancellation can occur between the terms b2 and 4ac of the discriminant, that is, when the two roots are very close. This can lead to loss of up to half of the correct significant figures in the roots. == Examples and applications == The golden ratio is found as the positive solution of the quadratic equation x 2 − x − 1 = 0. 
{\displaystyle x^{2}-x-1=0.} The equations of the circle and the other conic sections—ellipses, parabolas, and hyperbolas—are quadratic equations in two variables. Given the cosine or sine of an angle, finding the cosine or sine of the angle that is half as large involves solving a quadratic equation. Simplifying a nested radical (an expression containing the square root of an expression that itself involves another square root) involves finding the two solutions of a quadratic equation. Descartes' theorem states that for every four kissing (mutually tangent) circles, their radii satisfy a particular quadratic equation. The equation given by Fuss' theorem, relating the radius of a bicentric quadrilateral's inscribed circle, the radius of its circumscribed circle, and the distance between the centers of those circles, can be expressed as a quadratic equation in which the distance between the two circles' centers is one of the solutions. The other solution of the same equation in terms of the relevant radii gives the distance between the circumscribed circle's center and the center of the excircle of an ex-tangential quadrilateral. Critical points of a cubic function and inflection points of a quartic function are found by solving a quadratic equation. In physics, for motion with constant acceleration a {\displaystyle a} , the displacement or position x {\displaystyle x} of a moving body can be expressed as a quadratic function of time t {\displaystyle t} given the initial position x 0 {\displaystyle x_{0}} and initial velocity v 0 {\displaystyle v_{0}} : x = x 0 + v 0 t + 1 2 a t 2 {\textstyle x=x_{0}+v_{0}t+{\frac {1}{2}}at^{2}} . In chemistry, the pH of a solution of a weak acid can be calculated from the negative base-10 logarithm of the positive root of a quadratic equation in terms of the acidity constant and the analytical concentration of the acid. 
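The cancellation-avoiding computation described under "Avoiding loss of significance" can be sketched in Python (a sketch, not a library routine; the larger-magnitude root is computed from the formula, and the smaller from Vieta's product x1x2 = c/a):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, avoiding catastrophic
    cancellation between -b and the square root of the discriminant."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    sqrt_disc = math.sqrt(disc)
    # Choose the sign so that magnitudes add instead of cancelling.
    if b >= 0:
        big = (-b - sqrt_disc) / (2 * a)
    else:
        big = (-b + sqrt_disc) / (2 * a)
    # The other root follows from the product of roots, x1 * x2 = c / a.
    small = c / (a * big) if big != 0 else 0.0
    return big, small
```

For an equation like x² − 10⁸x + 1 = 0, the naive formula loses most significant figures in the small root, while the division by the large root preserves them.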
== History == Babylonian mathematicians, as early as 2000 BC (displayed on Old Babylonian clay tablets), could solve problems relating the areas and sides of rectangles. There is evidence dating this algorithm as far back as the Third Dynasty of Ur. In modern notation, the problems typically involved solving a pair of simultaneous equations of the form: x + y = p , x y = q , {\displaystyle x+y=p,\ \ xy=q,} which is equivalent to the statement that x and y are the roots of the equation:: 86 z 2 + q = p z . {\displaystyle z^{2}+q=pz.} The steps given by Babylonian scribes for solving the above rectangle problem, in terms of x and y, were as follows: (1) Compute half of p. (2) Square the result. (3) Subtract q. (4) Find the (positive) square root using a table of squares. (5) Add together the results of steps (1) and (4) to give x. In modern notation this means calculating x = p 2 + ( p 2 ) 2 − q {\displaystyle x={\frac {p}{2}}+{\sqrt {\left({\frac {p}{2}}\right)^{2}-q}}} , which is equivalent to the modern-day quadratic formula for the larger real root (if any) x = − b + b 2 − 4 a c 2 a {\displaystyle x={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}} with a = 1, b = −p, and c = q. Geometric methods were used to solve quadratic equations in Babylonia, Egypt, Greece, China, and India. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation. Babylonian mathematicians from circa 400 BC and Chinese mathematicians from circa 200 BC used geometric methods of dissection to solve quadratic equations with positive roots. Rules for quadratic equations were given in The Nine Chapters on the Mathematical Art, a Chinese treatise on mathematics. These early geometric methods do not appear to have had a general formula. Euclid, the Greek mathematician, produced a more abstract geometrical method around 300 BC. 
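The Babylonian five-step procedure (half of p, square it, subtract q, take the square root, add back half of p) can be transcribed directly; in this sketch math.sqrt stands in for the scribes' table of squares:

```python
import math

def babylonian_root(p, q):
    """Larger root of z^2 + q = p*z, i.e. x with x + y = p, x*y = q,
    following the scribes' steps."""
    half = p / 2            # step (1): half of p
    square = half * half    # step (2): square the result
    diff = square - q       # step (3): subtract q
    root = math.sqrt(diff)  # step (4): square root (originally a table lookup)
    return half + root      # step (5): add the results of (1) and (4)
```

For the rectangle with x + y = 10 and xy = 21 this recovers the side x = 7.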
With a purely geometric approach, Pythagoras and Euclid created a general procedure to find solutions of the quadratic equation. In his work Arithmetica, the Greek mathematician Diophantus solved the quadratic equation, but gave only one root, even when both roots were positive. In 628 AD, Brahmagupta, an Indian mathematician, gave in his book Brāhmasphuṭasiddhānta the first explicit (although still not completely general) solution of the quadratic equation ax2 + bx = c as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." This is equivalent to x = 4 a c + b 2 − b 2 a . {\displaystyle x={\frac {{\sqrt {4ac+b^{2}}}-b}{2a}}.} The Bakhshali Manuscript, written in India in the 7th century AD, contained an algebraic formula for solving quadratic equations, as well as linear indeterminate equations (originally of type ax/c = y). Muhammad ibn Musa al-Khwarizmi (9th century) developed a set of formulas that worked for positive solutions. Al-Khwarizmi goes further in providing a full solution to the general quadratic equation, accepting one or two numerical answers for every quadratic equation, while providing geometric proofs in the process. He also described the method of completing the square and recognized that the discriminant must be positive,: 230 which was proven by his contemporary 'Abd al-Hamīd ibn Turk (Central Asia, 9th century), who gave geometric figures to prove that if the discriminant is negative, a quadratic equation has no solution.: 234 While al-Khwarizmi himself did not accept negative solutions, later Islamic mathematicians who succeeded him accepted negative solutions,: 191 as well as irrational numbers as solutions. 
Abū Kāmil Shujā ibn Aslam (Egypt, 10th century) in particular was the first to accept irrational numbers (often in the form of a square root, cube root or fourth root) as solutions to quadratic equations or as coefficients in an equation. The 9th century Indian mathematician Sridhara wrote down rules for solving quadratic equations. The Jewish mathematician Abraham bar Hiyya Ha-Nasi (12th century, Spain) authored the first European book to include the full solution to the general quadratic equation. His solution was largely based on Al-Khwarizmi's work. The writing of the Chinese mathematician Yang Hui (1238–1298 AD) is the first known one in which quadratic equations with negative coefficients of 'x' appear, although he attributes this to the earlier Liu Yi. By 1545 Gerolamo Cardano compiled the works related to the quadratic equations. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published La Géométrie containing the quadratic formula in the form we know today. == Advanced topics == === Alternative methods of root calculation === ==== Vieta's formulas ==== Vieta's formulas (named after François Viète) are the relations x 1 + x 2 = − b a , x 1 x 2 = c a {\displaystyle x_{1}+x_{2}=-{\frac {b}{a}},\quad x_{1}x_{2}={\frac {c}{a}}} between the roots of a quadratic polynomial and its coefficients. They result from comparing term by term the relation ( x − x 1 ) ( x − x 2 ) = x 2 − ( x 1 + x 2 ) x + x 1 x 2 = 0 {\displaystyle \left(x-x_{1}\right)\left(x-x_{2}\right)=x^{2}-\left(x_{1}+x_{2}\right)x+x_{1}x_{2}=0} with the equation x 2 + b a x + c a = 0. {\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {c}{a}}=0.} The first Vieta's formula is useful for graphing a quadratic function. Since the graph is symmetric with respect to a vertical line through the vertex, the vertex's x-coordinate is located at the average of the roots (or intercepts). Thus the x-coordinate of the vertex is x V = x 1 + x 2 2 = − b 2 a . 
{\displaystyle x_{V}={\frac {x_{1}+x_{2}}{2}}=-{\frac {b}{2a}}.} The y-coordinate can be obtained by substituting the above result into the given quadratic equation, giving y V = − b 2 4 a + c = − b 2 − 4 a c 4 a . {\displaystyle y_{V}=-{\frac {b^{2}}{4a}}+c=-{\frac {b^{2}-4ac}{4a}}.} Also, these formulas for the vertex can be deduced directly from the formula (see Completing the square) a x 2 + b x + c = a ( x + b 2 a ) 2 − b 2 − 4 a c 4 a . {\displaystyle ax^{2}+bx+c=a\left(x+{\frac {b}{2a}}\right)^{2}-{\frac {b^{2}-4ac}{4a}}.} For numerical computation, Vieta's formulas provide a useful method for finding the roots of a quadratic equation in the case where one root is much smaller than the other. If |x2| << |x1|, then x1 + x2 ≈ x1, and we have the estimate: x 1 ≈ − b a . {\displaystyle x_{1}\approx -{\frac {b}{a}}.} The second Vieta's formula then provides: x 2 = c a x 1 ≈ − c b . {\displaystyle x_{2}={\frac {c}{ax_{1}}}\approx -{\frac {c}{b}}.} These formulas are much easier to evaluate than the quadratic formula under the condition of one large and one small root, because the quadratic formula evaluates the small root as the difference of two very nearly equal numbers (the case of large b), which causes round-off error in a numerical evaluation. The figure shows the difference between (i) a direct evaluation using the quadratic formula (accurate when the roots are near each other in value) and (ii) an evaluation based upon the above approximation of Vieta's formulas (accurate when the roots are widely spaced). As the linear coefficient b increases, initially the quadratic formula is accurate, and the approximate formula improves in accuracy, leading to a smaller difference between the methods as b increases. However, at some point the quadratic formula begins to lose accuracy because of round off error, while the approximate method continues to improve. 
Consequently, the difference between the methods begins to increase as the quadratic formula becomes worse and worse. This situation arises commonly in amplifier design, where widely separated roots are desired to ensure a stable operation (see Step response). ==== Trigonometric solution ==== In the days before calculators, people would use mathematical tables—lists of numbers showing the results of calculation with varying arguments—to simplify and speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks. Specialized tables were published for applications such as astronomy, celestial navigation and statistics. Methods of numerical approximation existed, called prosthaphaeresis, that offered shortcuts around time-consuming operations such as multiplication and taking powers and roots. Astronomers, especially, were concerned with methods that could speed up the long series of computations involved in celestial mechanics calculations. It is within this context that we may understand the development of means of solving quadratic equations by the aid of trigonometric substitution. Consider the following alternate form of the quadratic equation, a x 2 + b x ± c = 0 , [ 1 ] {\displaystyle ax^{2}+bx\pm c=0,\qquad [1]} where the sign of the ± symbol is chosen so that a and c may both be positive. By substituting x = c / a tan ⁡ θ [ 2 ] {\displaystyle x={\sqrt {c/a}}\tan \theta \qquad [2]} and then multiplying through by cos2(θ) / c, we obtain sin 2 ⁡ θ + b a c sin ⁡ θ cos ⁡ θ ± cos 2 ⁡ θ = 0. [ 3 ] {\displaystyle \sin ^{2}\theta +{\frac {b}{\sqrt {ac}}}\sin \theta \cos \theta \pm \cos ^{2}\theta =0.\qquad [3]} Introducing functions of 2θ and rearranging, we obtain tan ⁡ 2 θ n = + 2 a c b , [ 4 ] {\displaystyle \tan 2\theta _{n}=+2{\frac {\sqrt {ac}}{b}},\qquad [4]} sin ⁡ 2 θ p = − 2 a c b , [ 5 ] {\displaystyle \sin 2\theta _{p}=-2{\frac {\sqrt {ac}}{b}},\qquad [5]} where the subscripts n and p correspond, respectively, to the use of a negative or positive sign in equation [1]. Substituting the two values of θn or θp found from equations [4] or [5] into [2] gives the required roots of [1]. Complex roots occur in the solution based on equation [5] if the absolute value of sin 2θp exceeds unity. The amount of effort involved in solving quadratic equations using this mixed trigonometric and logarithmic table look-up strategy was two-thirds the effort using logarithmic tables alone. 
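As a sketch of this substitution method for the case with a negative constant term (so that a, b and c are all positive in ax² + bx − c = 0), with direct calls to tan and arctan standing in for the historical table lookups; the two branches of the arctangent yield the two roots:

```python
import math

def trig_roots(a, b, c):
    """Roots of a*x^2 + b*x - c = 0 with a, b, c > 0, via the
    substitution x = sqrt(c/a) * tan(theta), which leads to
    tan(2*theta) = 2*sqrt(a*c)/b."""
    t = 2 * math.sqrt(a * c) / b
    theta1 = math.atan(t) / 2
    theta2 = (math.atan(t) - math.pi) / 2  # second branch of the arctangent
    scale = math.sqrt(c / a)
    return scale * math.tan(theta1), scale * math.tan(theta2)
```

Applied to the worked example below, 4.16130x² + 9.15933x − 11.4207 = 0, this reproduces the roots 0.888353 and −3.08943.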
Calculating complex roots would require using a different trigonometric form. To illustrate, let us assume we had available seven-place logarithm and trigonometric tables, and wished to solve the following to six-significant-figure accuracy: 4.16130 x 2 + 9.15933 x − 11.4207 = 0 {\displaystyle 4.16130x^{2}+9.15933x-11.4207=0} A seven-place lookup table might have only 100,000 entries, and computing intermediate results to seven places would generally require interpolation between adjacent entries. log a = 0.6192290 , log b = 0.9618637 , log c = 1.0576927 {\displaystyle \log a=0.6192290,\log b=0.9618637,\log c=1.0576927} 2 a c / b = 2 × 10 ( 0.6192290 + 1.0576927 ) / 2 − 0.9618637 = 1.505314 {\displaystyle 2{\sqrt {ac}}/b=2\times 10^{(0.6192290+1.0576927)/2-0.9618637}=1.505314} θ = ( tan − 1 1.505314 ) / 2 = 28.20169 ∘ or − 61.79831 ∘ {\displaystyle \theta =(\tan ^{-1}1.505314)/2=28.20169^{\circ }{\text{ or }}-61.79831^{\circ }} log | tan θ | = − 0.2706462 or 0.2706462 {\displaystyle \log |\tan \theta |=-0.2706462{\text{ or }}0.2706462} log c / a = ( 1.0576927 − 0.6192290 ) / 2 = 0.2192318 {\displaystyle \log {\textstyle {\sqrt {c/a}}}=(1.0576927-0.6192290)/2=0.2192318} x 1 = 10 0.2192318 − 0.2706462 = 0.888353 {\displaystyle x_{1}=10^{0.2192318-0.2706462}=0.888353} (rounded to six significant figures) x 2 = − 10 0.2192318 + 0.2706462 = − 3.08943 {\displaystyle x_{2}=-10^{0.2192318+0.2706462}=-3.08943} ==== Solution for complex roots in polar coordinates ==== If the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} with real coefficients has two complex roots—the case where b 2 − 4 a c < 0 , {\displaystyle b^{2}-4ac<0,} requiring a and c to have the same sign as each other—then the solutions for the roots can be expressed in polar form as x 1 , x 2 = r ( cos θ ± i sin θ ) , {\displaystyle x_{1},\,x_{2}=r(\cos \theta \pm i\sin \theta ),} where r = c a {\displaystyle r={\sqrt {\tfrac {c}{a}}}} and θ = cos − 1 ( − b 2 a c ) . 
{\displaystyle \theta =\cos ^{-1}\left({\tfrac {-b}{2{\sqrt {ac}}}}\right).} ==== Geometric solution ==== The quadratic equation may be solved geometrically in a number of ways. One way is via Lill's method. The three coefficients a, b, c are drawn with right angles between them as in SA, AB, and BC in Figure 6. A circle is drawn with the start and end point SC as a diameter. If this cuts the middle line AB of the three, then the equation has a solution, and the solutions are given by the negative of the distance along this line from A divided by the first coefficient a or SA. If a is 1 the coefficients may be read off directly. Thus the solutions in the diagram are −AX1/SA and −AX2/SA. The Carlyle circle, named after Thomas Carlyle, has the property that the solutions of the quadratic equation are the horizontal coordinates of the intersections of the circle with the horizontal axis. Carlyle circles have been used to develop ruler-and-compass constructions of regular polygons. === Generalization of quadratic equation === The formula and its derivation remain correct if the coefficients a, b and c are complex numbers, or more generally members of any field whose characteristic is not 2. (In a field of characteristic 2, the element 2a is zero and it is impossible to divide by it.) The symbol ± b 2 − 4 a c {\displaystyle \pm {\sqrt {b^{2}-4ac}}} in the formula should be understood as "either of the two elements whose square is b2 − 4ac, if such elements exist". In some fields, some elements have no square roots and some have two; only zero has just one square root, except in fields of characteristic 2. Even if a field does not contain a square root of some number, there is always a quadratic extension field which does, so the quadratic formula will always make sense as a formula in that extension field. ==== Characteristic 2 ==== In a field of characteristic 2, the quadratic formula, which relies on 2 being a unit, does not hold. 
Consider the monic quadratic polynomial x 2 + b x + c {\displaystyle x^{2}+bx+c} over a field of characteristic 2. If b = 0, then the solution reduces to extracting a square root, so the solution is x = c {\displaystyle x={\sqrt {c}}} and there is only one root since − c = − c + 2 c = c . {\displaystyle -{\sqrt {c}}=-{\sqrt {c}}+2{\sqrt {c}}={\sqrt {c}}.} In summary, x 2 + c = ( x + c ) 2 . {\displaystyle \displaystyle x^{2}+c=(x+{\sqrt {c}})^{2}.} See quadratic residue for more information about extracting square roots in finite fields. In the case that b ≠ 0, there are two distinct roots, but if the polynomial is irreducible, they cannot be expressed in terms of square roots of numbers in the coefficient field. Instead, define the 2-root R(c) of c to be a root of the polynomial x2 + x + c, an element of the splitting field of that polynomial. One verifies that R(c) + 1 is also a root. In terms of the 2-root operation, the two roots of the (non-monic) quadratic ax2 + bx + c are b a R ( a c b 2 ) {\displaystyle {\frac {b}{a}}R\left({\frac {ac}{b^{2}}}\right)} and b a ( R ( a c b 2 ) + 1 ) . {\displaystyle {\frac {b}{a}}\left(R\left({\frac {ac}{b^{2}}}\right)+1\right).} For example, let a denote a multiplicative generator of the group of units of F4, the Galois field of order four (thus a and a + 1 are roots of x2 + x + 1 over F4). Because (a + 1)2 = a, a + 1 is the unique solution of the quadratic equation x2 + a = 0. On the other hand, the polynomial x2 + ax + 1 is irreducible over F4, but it splits over F16, where it has the two roots ab and ab + a, where b is a root of x2 + x + a in F16. This is a special case of Artin–Schreier theory. == See also == Solving quadratic equations with continued fractions Linear equation Cubic function Quartic equation Quintic equation Fundamental theorem of algebra == References == == External links == "Quadratic equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Quadratic equations". MathWorld. 
101 uses of a quadratic equation Archived 2007-11-10 at the Wayback Machine 101 uses of a quadratic equation: Part II Archived 2007-10-22 at the Wayback Machine
Wikipedia:Quadratic form#0
In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example, 4 x 2 + 2 x y − 3 y 2 {\displaystyle 4x^{2}+2xy-3y^{2}} is a quadratic form in the variables x and y. The coefficients usually belong to a fixed field K, such as the real or complex numbers, and one speaks of a quadratic form over K. Over the reals, a quadratic form is said to be definite if it takes the value zero only when all its variables are simultaneously zero; otherwise it is isotropic. Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of manifolds, especially four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form − x T Σ − 1 x {\displaystyle -\mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Sigma }}^{-1}\mathbf {x} } ). Quadratic forms are not to be confused with quadratic equations, which have only one variable and may include terms of degree less than two. A quadratic form is a specific instance of the more general concept of forms. == Introduction == Quadratic forms are homogeneous quadratic polynomials in n variables. In the cases of one, two, and three variables they are called unary, binary, and ternary and have the following explicit form: q ( x ) = a x 2 (unary) q ( x , y ) = a x 2 + b x y + c y 2 (binary) q ( x , y , z ) = a x 2 + b x y + c y 2 + d y z + e z 2 + f x z (ternary) {\displaystyle {\begin{aligned}q(x)&=ax^{2}&&{\textrm {(unary)}}\\q(x,y)&=ax^{2}+bxy+cy^{2}&&{\textrm {(binary)}}\\q(x,y,z)&=ax^{2}+bxy+cy^{2}+dyz+ez^{2}+fxz&&{\textrm {(ternary)}}\end{aligned}}} where a, ..., f are the coefficients. 
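As a small sketch (with arbitrarily chosen coefficients), the ternary form above can be evaluated directly, and its degree-2 homogeneity — scaling every variable by t scales the value by t² — checked numerically:

```python
def q(x, y, z, a=4, b=2, c=-3, d=1, e=5, f=0):
    """A ternary quadratic form with illustrative coefficients,
    following the explicit expansion q(x, y, z) = a*x^2 + b*x*y
    + c*y^2 + d*y*z + e*z^2 + f*x*z."""
    return a*x*x + b*x*y + c*y*y + d*y*z + e*z*z + f*x*z

# Homogeneity of degree 2: q(t*x, t*y, t*z) == t**2 * q(x, y, z)
```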
The theory of quadratic forms and methods used in their study depend in a large measure on the nature of the coefficients, which may be real or complex numbers, rational numbers, or integers. In linear algebra, analytic geometry, and in the majority of applications of quadratic forms, the coefficients are real or complex numbers. In the algebraic theory of quadratic forms, the coefficients are elements of a certain field. In the arithmetic theory of quadratic forms, the coefficients belong to a fixed commutative ring, frequently the integers Z or the p-adic integers Zp. Binary quadratic forms have been extensively studied in number theory, in particular, in the theory of quadratic fields, continued fractions, and modular forms. The theory of integral quadratic forms in n variables has important applications to algebraic topology. Using homogeneous coordinates, a non-zero quadratic form in n variables defines an (n − 2)-dimensional quadric in the (n − 1)-dimensional projective space. This is a basic construction in projective geometry. In this way one may visualize 3-dimensional real quadratic forms as conic sections. An example is given by the three-dimensional Euclidean space and the square of the Euclidean norm expressing the distance between a point with coordinates (x, y, z) and the origin: q ( x , y , z ) = d ( ( x , y , z ) , ( 0 , 0 , 0 ) ) 2 = ‖ ( x , y , z ) ‖ 2 = x 2 + y 2 + z 2 . {\displaystyle q(x,y,z)=d((x,y,z),(0,0,0))^{2}=\left\|(x,y,z)\right\|^{2}=x^{2}+y^{2}+z^{2}.} A closely related notion with geometric overtones is a quadratic space, which is a pair (V, q), with V a vector space over a field K, and q : V → K a quadratic form on V. See § Definitions below for the definition of a quadratic form on a vector space. == History == The study of quadratic forms, in particular the question of whether a given integer can be the value of a quadratic form over the integers, dates back many centuries. 
One such case is Fermat's theorem on sums of two squares, which determines when an integer may be expressed in the form x2 + y2, where x, y are integers. This problem is related to the problem of finding Pythagorean triples, which appeared in the second millennium BCE. In 628, the Indian mathematician Brahmagupta wrote Brāhmasphuṭasiddhānta, which includes, among many other things, a study of equations of the form x2 − ny2 = c. He considered what is now called Pell's equation, x2 − ny2 = 1, and found a method for its solution. In Europe this problem was studied by Brouncker, Euler and Lagrange. In 1801 Gauss published Disquisitiones Arithmeticae, a major portion of which was devoted to a complete theory of binary quadratic forms over the integers. Since then, the concept has been generalized, and the connections with quadratic number fields, the modular group, and other areas of mathematics have been further elucidated. == Associated symmetric matrix == Any n × n matrix A determines a quadratic form qA in n variables by q A ( x 1 , … , x n ) = ∑ i = 1 n ∑ j = 1 n a i j x i x j = x T A x , {\displaystyle q_{A}(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{x_{i}}{x_{j}}=\mathbf {x} ^{\mathsf {T}}A\mathbf {x} ,} where A = (aij). === Example === Consider the case of quadratic forms in three variables x, y, z. The matrix A has the form A = [ a b c d e f g h k ] . {\displaystyle A={\begin{bmatrix}a&b&c\\d&e&f\\g&h&k\end{bmatrix}}.} The above formula gives q A ( x , y , z ) = a x 2 + e y 2 + k z 2 + ( b + d ) x y + ( c + g ) x z + ( f + h ) y z . {\displaystyle q_{A}(x,y,z)=ax^{2}+ey^{2}+kz^{2}+(b+d)xy+(c+g)xz+(f+h)yz.} So, two different matrices define the same quadratic form if and only if they have the same elements on the diagonal and the same values for the sums b + d, c + g and f + h. In particular, the quadratic form qA is defined by a unique symmetric matrix A = [ a b + d 2 c + g 2 b + d 2 e f + h 2 c + g 2 f + h 2 k ] . 
{\displaystyle A={\begin{bmatrix}a&{\frac {b+d}{2}}&{\frac {c+g}{2}}\\{\frac {b+d}{2}}&e&{\frac {f+h}{2}}\\{\frac {c+g}{2}}&{\frac {f+h}{2}}&k\end{bmatrix}}.} This generalizes to any number of variables as follows. === General case === Given a quadratic form qA over the real numbers, defined by the matrix A = (aij), the matrix B = ( a i j + a j i 2 ) = 1 2 ( A + A T ) {\displaystyle B=\left({\frac {a_{ij}+a_{ji}}{2}}\right)={\frac {1}{2}}(A+A^{\text{T}})} is symmetric, defines the same quadratic form as A, and is the unique symmetric matrix that defines qA. So, over the real numbers (and, more generally, over a field of characteristic different from two), there is a one-to-one correspondence between quadratic forms and symmetric matrices that determine them. == Real quadratic forms == A fundamental problem is the classification of real quadratic forms under a linear change of variables. Jacobi proved that, for every real quadratic form, there is an orthogonal diagonalization; that is, an orthogonal change of variables that puts the quadratic form in a "diagonal form" λ 1 x ~ 1 2 + λ 2 x ~ 2 2 + ⋯ + λ n x ~ n 2 , {\displaystyle \lambda _{1}{\tilde {x}}_{1}^{2}+\lambda _{2}{\tilde {x}}_{2}^{2}+\cdots +\lambda _{n}{\tilde {x}}_{n}^{2},} where the associated symmetric matrix is diagonal. Moreover, the coefficients λ1, λ2, ..., λn are determined uniquely up to a permutation. If the change of variables is given by an invertible matrix that is not necessarily orthogonal, one can suppose that all coefficients λi are 0, 1, or −1. Sylvester's law of inertia states that the numbers of each 0, 1, and −1 are invariants of the quadratic form, in the sense that any other diagonalization will contain the same number of each. The signature of the quadratic form is the triple (n0, n+, n−), where these components count the number of 0s, number of 1s, and the number of −1s, respectively. 
Sylvester's law of inertia shows that this is a well-defined quantity attached to the quadratic form. The case when all λi have the same sign is especially important: in this case the quadratic form is called positive definite (all 1) or negative definite (all −1). If none of the terms are 0, then the form is called nondegenerate; this includes positive definite, negative definite, and isotropic quadratic form (a mix of 1 and −1); equivalently, a nondegenerate quadratic form is one whose associated symmetric form is a nondegenerate bilinear form. A real vector space with an indefinite nondegenerate quadratic form of index (p, q) (denoting p 1s and q −1s) is often denoted as Rp,q particularly in the physical theory of spacetime. The discriminant of a quadratic form, concretely the class of the determinant of a representing matrix in K / (K×)2 (up to non-zero squares) can also be defined, and for a real quadratic form is a cruder invariant than signature, taking values of only "positive, zero, or negative". Zero corresponds to degenerate, while for a non-degenerate form it is the parity of the number of negative coefficients, (−1)n−. These results are reformulated in a different way below. Let q be a quadratic form defined on an n-dimensional real vector space. Let A be the matrix of the quadratic form q in a given basis. This means that A is a symmetric n × n matrix such that q ( v ) = x T A x , {\displaystyle q(v)=x^{\mathsf {T}}Ax,} where x is the column vector of coordinates of v in the chosen basis. Under a change of basis, the column x is multiplied on the left by an n × n invertible matrix S, and the symmetric square matrix A is transformed into another symmetric square matrix B of the same size according to the formula A → B = S T A S . 
{\displaystyle A\to B=S^{\mathsf {T}}AS.} Any symmetric matrix A can be transformed into a diagonal matrix B = ( λ 1 0 ⋯ 0 0 λ 2 ⋯ 0 ⋮ ⋮ ⋱ 0 0 0 ⋯ λ n ) {\displaystyle B={\begin{pmatrix}\lambda _{1}&0&\cdots &0\\0&\lambda _{2}&\cdots &0\\\vdots &\vdots &\ddots &0\\0&0&\cdots &\lambda _{n}\end{pmatrix}}} by a suitable choice of an orthogonal matrix S, and the diagonal entries of B are uniquely determined – this is Jacobi's theorem. If S is allowed to be any invertible matrix then B can be made to have only 0, 1, and −1 on the diagonal, and the number of the entries of each type (n0 for 0, n+ for 1, and n− for −1) depends only on A. This is one of the formulations of Sylvester's law of inertia and the numbers n+ and n− are called the positive and negative indices of inertia. Although their definition involved a choice of basis and consideration of the corresponding real symmetric matrix A, Sylvester's law of inertia means that they are invariants of the quadratic form q. The quadratic form q is positive definite if q(v) > 0 (similarly, negative definite if q(v) < 0) for every nonzero vector v. When q(v) assumes both positive and negative values, q is an isotropic quadratic form. The theorems of Jacobi and Sylvester show that any positive definite quadratic form in n variables can be brought to the sum of n squares by a suitable invertible linear transformation: geometrically, there is only one positive definite real quadratic form of every dimension. Its isometry group is a compact orthogonal group O(n). This stands in contrast with the case of isotropic forms, when the corresponding group, the indefinite orthogonal group O(p, q), is non-compact. Further, the isometry groups of Q and −Q are the same (O(p, q) ≈ O(q, p)), but the associated Clifford algebras (and hence pin groups) are different. 
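These diagonalization statements can be checked numerically: symmetrize a representing matrix, compute its eigenvalues, and count signs to read off the signature (n0, n+, n−). A sketch using NumPy (the function name and tolerance are illustrative):

```python
import numpy as np

def signature(A, tol=1e-9):
    """Signature (n0, n_plus, n_minus) of the quadratic form x^T A x.
    By Sylvester's law of inertia this triple is invariant under any
    invertible change of variables."""
    S = (A + A.T) / 2             # the unique symmetric representative
    eigs = np.linalg.eigvalsh(S)  # real eigenvalues of a symmetric matrix
    n0 = int(np.sum(np.abs(eigs) <= tol))
    n_plus = int(np.sum(eigs > tol))
    n_minus = int(np.sum(eigs < -tol))
    return n0, n_plus, n_minus
```

For instance, the Minkowski form diag(1, 1, 1, −1) has signature (0, 3, 1), and replacing A by SᵀAS for any invertible S leaves the triple unchanged.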
== Definitions == A quadratic form over a field K is a map q : V → K from a finite-dimensional K-vector space to K such that q(av) = a2q(v) for all a ∈ K, v ∈ V and the function q(u + v) − q(u) − q(v) is bilinear. More concretely, an n-ary quadratic form over a field K is a homogeneous polynomial of degree 2 in n variables with coefficients in K: q ( x 1 , … , x n ) = ∑ i = 1 n ∑ j = 1 n a i j x i x j , a i j ∈ K . {\displaystyle q(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{x_{i}}{x_{j}},\quad a_{ij}\in K.} This formula may be rewritten using matrices: let x be the column vector with components x1, ..., xn and A = (aij) be the n × n matrix over K whose entries are the coefficients of q. Then q ( x ) = x T A x . {\displaystyle q(x)=x^{\mathsf {T}}Ax.} A vector v = (x1, ..., xn) is a null vector if q(v) = 0. Two n-ary quadratic forms φ and ψ over K are equivalent if there exists a nonsingular linear transformation C ∈ GL(n, K) such that ψ ( x ) = φ ( C x ) . {\displaystyle \psi (x)=\varphi (Cx).} Let the characteristic of K be different from 2. The coefficient matrix A of q may be replaced by the symmetric matrix (A + AT)/2 with the same quadratic form, so it may be assumed from the outset that A is symmetric. Moreover, a symmetric matrix A is uniquely determined by the corresponding quadratic form. Under an equivalence C, the symmetric matrix A of φ and the symmetric matrix B of ψ are related as follows: B = C T A C . {\displaystyle B=C^{\mathsf {T}}AC.} The associated bilinear form of a quadratic form q is defined by b q ( x , y ) = 1 2 ( q ( x + y ) − q ( x ) − q ( y ) ) = x T A y = y T A x . {\displaystyle b_{q}(x,y)={\tfrac {1}{2}}(q(x+y)-q(x)-q(y))=x^{\mathsf {T}}Ay=y^{\mathsf {T}}Ax.} Thus, bq is a symmetric bilinear form over K with matrix A. Conversely, any symmetric bilinear form b defines a quadratic form q ( x ) = b ( x , x ) , {\displaystyle q(x)=b(x,x),} and these two processes are the inverses of each other. 
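Numerically, the passage between a quadratic form and its symmetric bilinear form looks like this; the coefficient matrix below is a hypothetical example, chosen only to show that a non-symmetric representative and its symmetric part give the same form, and that polarization recovers the bilinear form:

```python
import numpy as np

# q(x, y) = x^2 + 3xy + 2y^2, written with a non-symmetric matrix.
A = np.array([[1.0, 3.0],
              [0.0, 2.0]])
A_sym = (A + A.T) / 2          # symmetric representative of the same form

def q(v):
    return v @ A_sym @ v

def b(x, y):
    # Polarization: b_q(x, y) = (q(x + y) - q(x) - q(y)) / 2.
    return (q(x + y) - q(x) - q(y)) / 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(x @ A @ x, q(x))         # 15.0 15.0 -- both matrices give the same q
print(b(x, y), x @ A_sym @ y)  # 6.5 6.5  -- polarization recovers A_sym
```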
As a consequence, over a field of characteristic not equal to 2, the theories of symmetric bilinear forms and of quadratic forms in n variables are essentially the same. === Quadratic space === Given an n-dimensional vector space V over a field K, a quadratic form on V is a function Q : V → K that has the following property: for some basis, the function q that maps the coordinates of v ∈ V to Q(v) is a quadratic form. In particular, if V = Kn with its standard basis, one has q ( v 1 , … , v n ) = Q ( [ v 1 , … , v n ] ) for [ v 1 , … , v n ] ∈ K n . {\displaystyle q(v_{1},\ldots ,v_{n})=Q([v_{1},\ldots ,v_{n}])\quad {\text{for}}\quad [v_{1},\ldots ,v_{n}]\in K^{n}.} The change of basis formulas show that the property of being a quadratic form does not depend on the choice of a specific basis in V, although the quadratic form q depends on the choice of the basis. A finite-dimensional vector space with a quadratic form is called a quadratic space. The map Q is a homogeneous function of degree 2, which means that it has the property that, for all a in K and v in V: Q ( a v ) = a 2 Q ( v ) . {\displaystyle Q(av)=a^{2}Q(v).} When the characteristic of K is not 2, the bilinear map B : V × V → K over K is defined: B ( v , w ) = 1 2 ( Q ( v + w ) − Q ( v ) − Q ( w ) ) . {\displaystyle B(v,w)={\tfrac {1}{2}}(Q(v+w)-Q(v)-Q(w)).} This bilinear form B is symmetric. That is, B(x, y) = B(y, x) for all x, y in V, and it determines Q: Q(x) = B(x, x) for all x in V. When the characteristic of K is 2, so that 2 is not a unit, it is still possible to use a quadratic form to define a symmetric bilinear form B′(x, y) = Q(x + y) − Q(x) − Q(y). However, Q(x) can no longer be recovered from this B′ in the same way, since B′(x, x) = 0 for all x (and is thus alternating). Alternatively, there always exists a bilinear form B″ (not in general either unique or symmetric) such that B″(x, x) = Q(x). 
The pair (V, Q) consisting of a finite-dimensional vector space V over K and a quadratic map Q from V to K is called a quadratic space, and B as defined here is the associated symmetric bilinear form of Q. The notion of a quadratic space is a coordinate-free version of the notion of quadratic form. Sometimes, Q is also called a quadratic form. Two n-dimensional quadratic spaces (V, Q) and (V′, Q′) are isometric if there exists an invertible linear transformation T : V → V′ (isometry) such that Q ( v ) = Q ′ ( T v ) for all v ∈ V . {\displaystyle Q(v)=Q'(Tv){\text{ for all }}v\in V.} The isometry classes of n-dimensional quadratic spaces over K correspond to the equivalence classes of n-ary quadratic forms over K. === Generalization === Let R be a commutative ring, M be an R-module, and b : M × M → R be an R-bilinear form. A mapping q : M → R : v ↦ b(v, v) is the associated quadratic form of b, and B : M × M → R : (u, v) ↦ q(u + v) − q(u) − q(v) is the polar form of q. A quadratic form q : M → R may be characterized in the following equivalent ways: There exists an R-bilinear form b : M × M → R such that q(v) is the associated quadratic form. q(av) = a2q(v) for all a ∈ R and v ∈ M, and the polar form of q is R-bilinear. === Related concepts === Two elements v and w of V are called orthogonal if B(v, w) = 0. The kernel of a bilinear form B consists of the elements that are orthogonal to every element of V. Q is non-singular if the kernel of its associated bilinear form is {0}. If there exists a non-zero v in V such that Q(v) = 0, the quadratic form Q is isotropic, otherwise it is definite. This terminology also applies to vectors and subspaces of a quadratic space. If the restriction of Q to a subspace U of V is identically zero, then U is totally singular. The orthogonal group of a non-singular quadratic form Q is the group of the linear automorphisms of V that preserve Q: that is, the group of isometries of (V, Q) into itself. 
If a quadratic space (A, Q) has a product so that A is an algebra over a field, and satisfies ∀ x , y ∈ A Q ( x y ) = Q ( x ) Q ( y ) , {\displaystyle \forall x,y\in A\quad Q(xy)=Q(x)Q(y),} then it is a composition algebra. == Equivalence of forms == Every quadratic form q in n variables over a field of characteristic not equal to 2 is equivalent to a diagonal form q ( x ) = a 1 x 1 2 + a 2 x 2 2 + ⋯ + a n x n 2 . {\displaystyle q(x)=a_{1}x_{1}^{2}+a_{2}x_{2}^{2}+\cdots +a_{n}x_{n}^{2}.} Such a diagonal form is often denoted by ⟨a1, ..., an⟩. Classification of all quadratic forms up to equivalence can thus be reduced to the case of diagonal forms. == Geometric meaning == Using Cartesian coordinates in three dimensions, let x = (x, y, z)T, and let A be a symmetric 3-by-3 matrix. Then the geometric nature of the solution set of the equation xTAx + bTx = 1 depends on the eigenvalues of the matrix A. If all eigenvalues of A are non-zero, then the solution set is an ellipsoid or a hyperboloid. If all the eigenvalues are positive, then it is an ellipsoid; if all the eigenvalues are negative, then it is an imaginary ellipsoid (we get the equation of an ellipsoid but with imaginary radii); if some eigenvalues are positive and some are negative, then it is a hyperboloid. If there exist one or more eigenvalues λi = 0, then the shape depends on the corresponding bi. If the corresponding bi ≠ 0, then the solution set is a paraboloid (either elliptic or hyperbolic); if the corresponding bi = 0, then the dimension i degenerates and does not come into play, and the geometric meaning will be determined by other eigenvalues and other components of b. When the solution set is a paraboloid, whether it is elliptic or hyperbolic is determined by whether all other non-zero eigenvalues are of the same sign: if they are, then it is elliptic; otherwise, it is hyperbolic. 
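The case analysis above can be mechanized; this NumPy sketch covers only the cases enumerated in the text (at most one zero eigenvalue) and lumps the remaining degenerate cases together:

```python
import numpy as np

def classify_quadric(A, b, tol=1e-9):
    """Classify {x in R^3 : x^T A x + b^T x = 1} from the eigenvalues
    of the symmetric matrix A (sketch; follows the case analysis above)."""
    w, V = np.linalg.eigh(A)
    pos = int(np.sum(w > tol))
    neg = int(np.sum(w < -tol))
    zero = 3 - pos - neg
    if zero == 0:
        if neg == 0:
            return "ellipsoid"
        if pos == 0:
            return "imaginary ellipsoid"
        return "hyperboloid"
    if zero == 1:
        # Component of b along the eigenvector of the zero eigenvalue.
        bi = (V.T @ b)[np.abs(w) <= tol][0]
        if abs(bi) > tol:
            # Elliptic if the two non-zero eigenvalues share a sign.
            return ("elliptic paraboloid" if pos == 2 or neg == 2
                    else "hyperbolic paraboloid")
    return "degenerate case"

print(classify_quadric(np.diag([1.0, 2.0, 3.0]), np.zeros(3)))            # ellipsoid
print(classify_quadric(np.diag([1.0, 2.0, -3.0]), np.zeros(3)))           # hyperboloid
print(classify_quadric(np.diag([1.0, 2.0, 0.0]), np.array([0, 0, 1.0])))  # elliptic paraboloid
```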
== Integral quadratic forms == Quadratic forms over the ring of integers are called integral quadratic forms, whereas the corresponding modules are quadratic lattices (sometimes, simply lattices). They play an important role in number theory and topology. An integral quadratic form has integer coefficients, such as x2 + xy + y2; equivalently, given a lattice Λ in a vector space V (over a field with characteristic 0, such as Q or R), a quadratic form Q is integral with respect to Λ if and only if it is integer-valued on Λ, meaning Q(x, y) ∈ Z if x, y ∈ Λ. This is the current use of the term; in the past it was sometimes used differently, as detailed below. === Historical use === Historically there was some confusion and controversy over whether the notion of integral quadratic form should mean: twos in the quadratic form associated to a symmetric matrix with integer coefficients twos out a polynomial with integer coefficients (so the associated symmetric matrix may have half-integer coefficients off the diagonal) This debate was due to the confusion of quadratic forms (represented by polynomials) and symmetric bilinear forms (represented by matrices), and "twos out" is now the accepted convention; "twos in" is instead the theory of integral symmetric bilinear forms (integral symmetric matrices). In "twos in", binary quadratic forms are of the form ax2 + 2bxy + cy2, represented by the symmetric matrix ( a b b c ) {\displaystyle {\begin{pmatrix}a&b\\b&c\end{pmatrix}}} This is the convention Gauss uses in Disquisitiones Arithmeticae. In "twos out", binary quadratic forms are of the form ax2 + bxy + cy2, represented by the symmetric matrix ( a b / 2 b / 2 c ) . {\displaystyle {\begin{pmatrix}a&b/2\\b/2&c\end{pmatrix}}.} Several points of view mean that twos out has been adopted as the standard convention. 
Those include: better understanding of the 2-adic theory of quadratic forms, the 'local' source of the difficulty; the lattice point of view, which was generally adopted by the experts in the arithmetic of quadratic forms during the 1950s; the actual needs for integral quadratic form theory in topology for intersection theory; the Lie group and algebraic group aspects. === Universal quadratic forms === An integral quadratic form whose image consists of all the positive integers is sometimes called universal. Lagrange's four-square theorem shows that w2 + x2 + y2 + z2 is universal. Ramanujan generalized this to aw2 + bx2 + cy2 + dz2 and found 54 multisets {a, b, c, d} that can each generate all positive integers. There are also forms whose image consists of all but one of the positive integers. For example, {1, 2, 5, 5} has 15 as the exception. Recently, the 15 and 290 theorems have completely characterized universal integral quadratic forms: if all coefficients are integers, then it represents all positive integers if and only if it represents all integers up through 290; if it has an integral matrix, it represents all positive integers if and only if it represents all integers up through 15. == See also == ε-quadratic form Cubic form Discriminant of a quadratic form Hasse–Minkowski theorem Quadric Ramanujan's ternary quadratic form Square class Witt group Witt's theorem == Notes == == References == O'Meara, O.T. (2000), Introduction to Quadratic Forms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-66564-9 Conway, John Horton; Fung, Francis Y. C. (1997), The Sensual (Quadratic) Form, Carus Mathematical Monographs, The Mathematical Association of America, ISBN 978-0-88385-030-5 Shafarevich, I. R.; Remizov, A. O. (2012). Linear Algebra and Geometry. Springer. ISBN 978-3-642-30993-9. == Further reading == Cassels, J.W.S. (1978). Rational Quadratic Forms. London Mathematical Society Monographs. Vol. 13. Academic Press. ISBN 0-12-163260-1. Zbl 0395.10029. 
Kitaoka, Yoshiyuki (1993). Arithmetic of quadratic forms. Cambridge Tracts in Mathematics. Vol. 106. Cambridge University Press. ISBN 0-521-40475-4. Zbl 0785.11021. Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023. Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 73. Springer-Verlag. ISBN 3-540-06009-X. Zbl 0292.10016. O'Meara, O.T. (1973). Introduction to quadratic forms. Die Grundlehren der mathematischen Wissenschaften. Vol. 117. Springer-Verlag. ISBN 3-540-66564-1. Zbl 0259.10018. Pfister, Albrecht (1995). Quadratic Forms with Applications to Algebraic Geometry and Topology. London Mathematical Society lecture note series. Vol. 217. Cambridge University Press. ISBN 0-521-46755-1. Zbl 0847.11014. == External links == A.V.Malyshev (2001) [1994], "Quadratic form", Encyclopedia of Mathematics, EMS Press A.V.Malyshev (2001) [1994], "Binary quadratic form", Encyclopedia of Mathematics, EMS Press
|
Wikipedia:Quadratic formula#0
|
In elementary algebra, the quadratic formula is a closed-form expression describing the solutions of a quadratic equation. Other ways of solving quadratic equations, such as completing the square, yield the same solutions. Given a general quadratic equation of the form a x 2 + b x + c = 0 {\displaystyle \textstyle ax^{2}+bx+c=0} , with x {\displaystyle x} representing an unknown, and coefficients a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} representing known real or complex numbers with a ≠ 0 {\displaystyle a\neq 0} , the values of x {\displaystyle x} satisfying the equation, called the roots or zeros, can be found using the quadratic formula, x = − b ± b 2 − 4 a c 2 a , {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}},} where the plus–minus symbol " ± {\displaystyle \pm } " indicates that the equation has two roots. Written separately, these are: x 1 = − b + b 2 − 4 a c 2 a , x 2 = − b − b 2 − 4 a c 2 a . {\displaystyle x_{1}={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}},\qquad x_{2}={\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}.} The quantity Δ = b 2 − 4 a c {\displaystyle \textstyle \Delta =b^{2}-4ac} is known as the discriminant of the quadratic equation. If the coefficients a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} are real numbers then when Δ > 0 {\displaystyle \Delta >0} , the equation has two distinct real roots; when Δ = 0 {\displaystyle \Delta =0} , the equation has one repeated real root; and when Δ < 0 {\displaystyle \Delta <0} , the equation has no real roots but has two distinct complex roots, which are complex conjugates of each other. Geometrically, the roots represent the x {\displaystyle x} values at which the graph of the quadratic function y = a x 2 + b x + c {\displaystyle \textstyle y=ax^{2}+bx+c} , a parabola, crosses the x {\displaystyle x} -axis: the graph's x {\displaystyle x} -intercepts. The quadratic formula can also be used to identify the parabola's axis of symmetry. 
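A direct transcription of the formula, using complex square roots so that all three discriminant cases come out as described (a minimal sketch):

```python
import cmath

def solve_quadratic(a, b, c):
    """Both roots of ax^2 + bx + c = 0 (a != 0) via the quadratic
    formula; cmath.sqrt handles a negative discriminant."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(solve_quadratic(1, -3, 2))  # discriminant 1 > 0: two real roots, 2 and 1
print(solve_quadratic(1, -2, 1))  # discriminant 0: repeated root 1
print(solve_quadratic(1, 0, 1))   # discriminant -4 < 0: conjugate pair i, -i
```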
== Derivation by completing the square == The standard way to derive the quadratic formula is to apply the method of completing the square to the generic quadratic equation a x 2 + b x + c = 0 {\displaystyle \textstyle ax^{2}+bx+c=0} . The idea is to manipulate the equation into the form ( x + k ) 2 = s {\displaystyle \textstyle (x+k)^{2}=s} for some expressions k {\displaystyle k} and s {\displaystyle s} written in terms of the coefficients; take the square root of both sides; and then isolate x {\displaystyle x} . We start by dividing the equation by the quadratic coefficient a {\displaystyle a} , which is allowed because a {\displaystyle a} is non-zero. Afterwards, we subtract the constant term c / a {\displaystyle c/a} to isolate it on the right-hand side: a x 2 | + b x + c = 0 x 2 + b a x + c a = 0 x 2 + b a x = − c a . {\displaystyle {\begin{aligned}ax^{2{\vphantom {|}}}+bx+c&=0\\[3mu]x^{2}+{\frac {b}{a}}x+{\frac {c}{a}}&=0\\[3mu]x^{2}+{\frac {b}{a}}x&=-{\frac {c}{a}}.\end{aligned}}} The left-hand side is now of the form x 2 + 2 k x {\displaystyle \textstyle x^{2}+2kx} , and we can "complete the square" by adding a constant k 2 {\displaystyle \textstyle k^{2}} to obtain a squared binomial x 2 + 2 k x + k 2 = {\displaystyle \textstyle x^{2}+2kx+k^{2}={}} ( x + k ) 2 {\displaystyle \textstyle (x+k)^{2}} . In this example we add ( b / 2 a ) 2 {\displaystyle \textstyle (b/2a)^{2}} to both sides so that the left-hand side can be factored (see the figure): x 2 + 2 ( b 2 a ) x + ( b 2 a ) 2 = − c a + ( b 2 a ) 2 ( x + b 2 a ) 2 = b 2 − 4 a c 4 a 2 . {\displaystyle {\begin{aligned}x^{2}+2\left({\frac {b}{2a}}\right)x+\left({\frac {b}{2a}}\right)^{2}&=-{\frac {c}{a}}+\left({\frac {b}{2a}}\right)^{2}\\[5mu]\left(x+{\frac {b}{2a}}\right)^{2}&={\frac {b^{2}-4ac}{4a^{2}}}.\end{aligned}}} Because the left-hand side is now a perfect square, we can easily take the square root of both sides: x + b 2 a = ± b 2 − 4 a c 2 a . 
{\displaystyle x+{\frac {b}{2a}}=\pm {\frac {\sqrt {b^{2}-4ac}}{2a}}.} Finally, subtracting b / 2 a {\displaystyle b/2a} from both sides to isolate x {\displaystyle x} produces the quadratic formula: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} == Equivalent formulations == The quadratic formula can equivalently be written using various alternative expressions, for instance x = − b 2 a ± ( b 2 a ) 2 − c a , {\displaystyle x=-{\frac {b}{2a}}\pm {\sqrt {\left({\frac {b}{2a}}\right)^{2}-{\frac {c}{a}}}},} which can be derived by first dividing a quadratic equation by 2 a {\displaystyle 2a} , resulting in 1 2 x 2 + b 2 a x + c 2 a = 0 {\displaystyle \textstyle {\tfrac {1}{2}}x^{2}+{\tfrac {b}{2a}}x+{\tfrac {c}{2a}}=0} , then substituting the new coefficients into the standard quadratic formula. Because this variant allows re-use of the intermediately calculated quantity b 2 a {\displaystyle {\tfrac {b}{2a}}} , it can slightly reduce the arithmetic involved. === Square root in the denominator === A lesser known quadratic formula, first mentioned by Giulio Fagnano, describes the same roots via an equation with the square root in the denominator (assuming c ≠ 0 {\displaystyle c\neq 0} ): x = 2 c − b ∓ b 2 − 4 a c . {\displaystyle x={\frac {2c}{-b\mp {\sqrt {b^{2}-4ac}}}}.} Here the minus–plus symbol " ∓ {\displaystyle \mp } " indicates that the two roots of the quadratic equation, in the same order as the standard quadratic formula, are x 1 = 2 c − b − b 2 − 4 a c , x 2 = 2 c − b + b 2 − 4 a c . {\displaystyle x_{1}={\frac {2c}{-b-{\sqrt {b^{2}-4ac}}}},\qquad x_{2}={\frac {2c}{-b+{\sqrt {b^{2}-4ac}}}}.} This variant has been jokingly called the "citardauq" formula ("quadratic" spelled backwards). 
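The two versions can be compared side by side; in exact arithmetic they agree root for root (a sketch assuming real roots, a ≠ 0 and c ≠ 0):

```python
import math

def roots_standard(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def roots_citardauq(a, b, c):
    # Square root in the denominator; the minus-plus pairing keeps the
    # roots in the same order as the standard formula.
    d = math.sqrt(b * b - 4 * a * c)
    return (2 * c / (-b - d), 2 * c / (-b + d))

# x^2 - 3x + 2 = (x - 1)(x - 2):
print(roots_standard(1, -3, 2))   # (2.0, 1.0)
print(roots_citardauq(1, -3, 2))  # (2.0, 1.0)
```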
When − b {\displaystyle -b} has the opposite sign as either + b 2 − 4 a c {\displaystyle \textstyle +{\sqrt {b^{2}-4ac}}} or − b 2 − 4 a c {\displaystyle \textstyle -{\sqrt {b^{2}-4ac}}} , subtraction can cause catastrophic cancellation, resulting in poor accuracy in numerical calculations; choosing between the version of the quadratic formula with the square root in the numerator or denominator depending on the sign of b {\displaystyle b} can avoid this problem. See § Numerical calculation below. This version of the quadratic formula is used in Muller's method for finding the roots of general functions. It can be derived from the standard formula using the identity x 1 x 2 = c / a {\displaystyle x_{1}x_{2}=c/a} , one of Vieta's formulas. Alternately, it can be derived by dividing each side of the equation a x 2 + b x + c = 0 {\displaystyle \textstyle ax^{2}+bx+c=0} by x 2 {\displaystyle \textstyle x^{2}} to get c x − 2 + b x − 1 + a = 0 {\displaystyle \textstyle cx^{-2}+bx^{-1}+a=0} , applying the standard formula to find the two roots x − 1 {\displaystyle \textstyle x^{-1}\!} , and then taking the reciprocal to find the roots x {\displaystyle x} of the original equation. == Other derivations == Any generic method or algorithm for solving quadratic equations can be applied to an equation with symbolic coefficients and used to derive some closed-form expression equivalent to the quadratic formula. Alternative methods are sometimes simpler than completing the square, and may offer interesting insight into other areas of mathematics. === Completing the square by Śrīdhara's method === Instead of dividing by a {\displaystyle a} to isolate x 2 {\displaystyle \textstyle x^{2}\!} , it can be slightly simpler to multiply by 4 a {\displaystyle 4a} instead to produce ( 2 a x ) 2 {\displaystyle \textstyle (2ax)^{2}\!} , which allows us to complete the square without need for fractions. Then the steps of the derivation are: Multiply each side by 4 a {\displaystyle 4a} . 
Add b 2 − 4 a c {\displaystyle \textstyle b^{2}-4ac} to both sides to complete the square. Take the square root of both sides. Isolate x {\displaystyle x} . Applying this method to a generic quadratic equation with symbolic coefficients yields the quadratic formula: a x 2 + b x + c = 0 4 a 2 x 2 + 4 a b x + 4 a c = 0 4 a 2 x 2 + 4 a b x + b 2 = b 2 − 4 a c ( 2 a x + b ) 2 = b 2 − 4 a c 2 a x + b = ± b 2 − 4 a c x = − b ± b 2 − 4 a c 2 a . ) {\displaystyle {\begin{aligned}ax^{2}+bx+c&=0\\[3mu]4a^{2}x^{2}+4abx+4ac&=0\\[3mu]4a^{2}x^{2}+4abx+b^{2}&=b^{2}-4ac\\[3mu](2ax+b)^{2}&=b^{2}-4ac\\[3mu]2ax+b&=\pm {\sqrt {b^{2}-4ac}}\\[5mu]x&={\dfrac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.{\vphantom {\bigg )}}\end{aligned}}} This method for completing the square is ancient and was known to the 8th–9th century Indian mathematician Śrīdhara. Compared with the modern standard method for completing the square, this alternate method avoids fractions until the last step and hence does not require a rearrangement after step 3 to obtain a common denominator in the right side. === By substitution === Another derivation uses a change of variables to eliminate the linear term. Then the equation takes the form u 2 = s {\displaystyle \textstyle u^{2}=s} in terms of a new variable u {\displaystyle u} and some constant expression s {\displaystyle s} , whose roots are then u = ± s {\displaystyle u=\pm {\sqrt {s}}} . By substituting x = u − b 2 a {\displaystyle x=u-{\tfrac {b}{2a}}} into a x 2 + b x + c = 0 {\displaystyle \textstyle ax^{2}+bx+c=0} , expanding the products and combining like terms, and then solving for u 2 {\displaystyle \textstyle u^{2}\!} , we have: a ( u − b 2 a ) 2 + b ( u − b 2 a ) + c = 0 a ( u 2 − b a u + b 2 4 a 2 ) + b ( u − b 2 a ) + c = 0 a u 2 − b u + b 2 4 a + b u − b 2 2 a + c = 0 a u 2 + 4 a c − b 2 4 a = 0 u 2 = b 2 − 4 a c 4 a 2 . 
{\displaystyle {\begin{aligned}a\left(u-{\frac {b}{2a}}\right)^{2}+b\left(u-{\frac {b}{2a}}\right)+c&=0\\[5mu]a\left(u^{2}-{\frac {b}{a}}u+{\frac {b^{2}}{4a^{2}}}\right)+b\left(u-{\frac {b}{2a}}\right)+c&=0\\[5mu]au^{2}-bu+{\frac {b^{2}}{4a}}+bu-{\frac {b^{2}}{2a}}+c&=0\\[5mu]au^{2}+{\frac {4ac-b^{2}}{4a}}&=0\\[5mu]u^{2}&={\frac {b^{2}-4ac}{4a^{2}}}.\end{aligned}}} Finally, after taking a square root of both sides and substituting the resulting expression for u {\displaystyle u} back into x = u − b 2 a , {\displaystyle x=u-{\tfrac {b}{2a}},} the familiar quadratic formula emerges: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} === By using algebraic identities === The following method was used by many historical mathematicians: Let the roots of the quadratic equation a x 2 + b x + c = 0 {\displaystyle \textstyle ax^{2}+bx+c=0} be α {\displaystyle \alpha } and β {\displaystyle \beta } . The derivation starts from an identity for the square of a difference (valid for any two complex numbers), of which we can take the square root on both sides: ( α − β ) 2 = ( α + β ) 2 − 4 α β α − β = ± ( α + β ) 2 − 4 α β . {\displaystyle {\begin{aligned}(\alpha -\beta )^{2}&=(\alpha +\beta )^{2}-4\alpha \beta \\[3mu]\alpha -\beta &=\pm {\sqrt {(\alpha +\beta )^{2}-4\alpha \beta }}.\end{aligned}}} Since the coefficient a ≠ 0 {\displaystyle a\neq 0} , we can divide the quadratic equation by a {\displaystyle a} to obtain a monic polynomial with the same roots. Namely, x 2 + b a x + c a = ( x − α ) ( x − β ) = x 2 − ( α + β ) x + α β . {\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {c}{a}}=(x-\alpha )(x-\beta )=x^{2}-(\alpha +\beta )x+\alpha \beta .} This implies that the sum α + β = − b a {\displaystyle \alpha +\beta =-{\tfrac {b}{a}}} and the product α β = c a {\displaystyle \alpha \beta ={\tfrac {c}{a}}} . Thus the identity can be rewritten: α − β = ± ( − b a ) 2 − 4 c a = ± b 2 − 4 a c a . 
{\displaystyle \alpha -\beta =\pm {\sqrt {\left(-{\frac {b}{a}}\right)^{2}-4{\frac {c}{a}}}}=\pm {\frac {\sqrt {b^{2}-4ac}}{a}}.} Therefore, α = 1 2 ( α + β ) + 1 2 ( α − β ) = − b 2 a ± b 2 − 4 a c 2 a , β = 1 2 ( α + β ) − 1 2 ( α − β ) = − b 2 a ∓ b 2 − 4 a c 2 a . {\displaystyle {\begin{aligned}\alpha &={\tfrac {1}{2}}(\alpha +\beta )+{\tfrac {1}{2}}(\alpha -\beta )=-{\frac {b}{2a}}\pm {\frac {\sqrt {b^{2}-4ac}}{2a}},\\[10mu]\beta &={\tfrac {1}{2}}(\alpha +\beta )-{\tfrac {1}{2}}(\alpha -\beta )=-{\frac {b}{2a}}\mp {\frac {\sqrt {b^{2}-4ac}}{2a}}.\end{aligned}}} The two possibilities for each of α {\displaystyle \alpha } and β {\displaystyle \beta } are the same two roots in opposite order, so we can combine them into the standard quadratic formula: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} === By Lagrange resolvents === An alternative way of deriving the quadratic formula is via the method of Lagrange resolvents, which is an early part of Galois theory. This method can be generalized to give the roots of cubic polynomials and quartic polynomials, and leads to Galois theory, which allows one to understand the solution of algebraic equations of any degree in terms of the symmetry group of their roots, the Galois group. This approach focuses on the roots themselves rather than algebraically rearranging the original equation. Given a monic quadratic polynomial x 2 + p x + q {\displaystyle \textstyle x^{2}+px+q} , assume that α {\displaystyle \alpha } and β {\displaystyle \beta } are the two roots. So the polynomial factors as x 2 + p x + q = ( x − α ) ( x − β ) = x 2 − ( α + β ) x + α β {\displaystyle {\begin{aligned}x^{2}+px+q&=(x-\alpha )(x-\beta )\\[3mu]&=x^{2}-(\alpha +\beta )x+\alpha \beta \end{aligned}}} which implies p = − ( α + β ) {\displaystyle p=-(\alpha +\beta )} and q = α β {\displaystyle q=\alpha \beta } . 
Since multiplication and addition are both commutative, exchanging the roots α {\displaystyle \alpha } and β {\displaystyle \beta } will not change the coefficients p {\displaystyle p} and q {\displaystyle q} : one can say that p {\displaystyle p} and q {\displaystyle q} are symmetric polynomials in α {\displaystyle \alpha } and β {\displaystyle \beta } . Specifically, they are the elementary symmetric polynomials – any symmetric polynomial in α {\displaystyle \alpha } and β {\displaystyle \beta } can be expressed in terms of α + β {\displaystyle \alpha +\beta } and α β {\displaystyle \alpha \beta } instead. The Galois theory approach to analyzing and solving polynomials is to ask whether, given coefficients of a polynomial each of which is a symmetric function in the roots, one can "break" the symmetry and thereby recover the roots. Using this approach, solving a polynomial of degree n {\displaystyle n} is related to the ways of rearranging ("permuting") n {\displaystyle n} terms, called the symmetric group on n {\displaystyle n} letters and denoted S n {\displaystyle S_{n}} . For the quadratic polynomial, the only ways to rearrange two roots are to either leave them be or to transpose them, so solving a quadratic polynomial is simple. To find the roots α {\displaystyle \alpha } and β {\displaystyle \beta } , consider their sum and difference: r 1 = α + β , r 2 = α − β . {\displaystyle r_{1}=\alpha +\beta ,\quad r_{2}=\alpha -\beta .} These are called the Lagrange resolvents of the polynomial, from which the roots can be recovered as α = 1 2 ( r 1 + r 2 ) , β = 1 2 ( r 1 − r 2 ) . {\displaystyle \alpha ={\tfrac {1}{2}}(r_{1}+r_{2}),\quad \beta ={\tfrac {1}{2}}(r_{1}-r_{2}).} Because r 1 = α + β {\displaystyle r_{1}=\alpha +\beta } is a symmetric function in α {\displaystyle \alpha } and β {\displaystyle \beta } , it can be expressed in terms of p {\displaystyle p} and q , {\displaystyle q,} specifically r 1 = − p {\displaystyle r_{1}=-p} as described above. 
However, r 2 = α − β {\displaystyle r_{2}=\alpha -\beta } is not symmetric, since exchanging α {\displaystyle \alpha } and β {\displaystyle \beta } yields the additive inverse − r 2 = β − α {\displaystyle -r_{2}=\beta -\alpha } . So r 2 {\displaystyle r_{2}} cannot be expressed in terms of the symmetric polynomials. However, its square r 2 2 = ( α − β ) 2 {\displaystyle \textstyle r_{2}^{2}=(\alpha -\beta )^{2}} is symmetric in the roots, expressible in terms of p {\displaystyle p} and q {\displaystyle q} . Specifically r 2 2 = ( α − β ) 2 = {\displaystyle \textstyle r_{2}^{2}=(\alpha -\beta )^{2}={}} ( α + β ) 2 − 4 α β = {\displaystyle \textstyle (\alpha +\beta )^{2}-4\alpha \beta ={}} p 2 − 4 q {\displaystyle \textstyle p^{2}-4q} , which implies r 2 = ± p 2 − 4 q {\displaystyle \textstyle r_{2}=\pm {\sqrt {p^{2}-4q}}} . Taking the positive root "breaks" the symmetry, resulting in r 1 = − p , r 2 = p 2 − 4 q {\displaystyle r_{1}=-p,\qquad r_{2}={\textstyle {\sqrt {p^{2}-4q}}}} from which the roots α {\displaystyle \alpha } and β {\displaystyle \beta } are recovered as x = 1 2 ( r 1 ± r 2 ) = 1 2 ( − p ± p 2 − 4 q ) {\displaystyle x={\tfrac {1}{2}}(r_{1}\pm r_{2})={\tfrac {1}{2}}{\bigl (}{-p}\pm {\textstyle {\sqrt {p^{2}-4q}}}\,{\bigr )}} which is the quadratic formula for a monic polynomial. Substituting p = b / a {\displaystyle p=b/a} , q = c / a {\displaystyle q=c/a} yields the usual expression for an arbitrary quadratic polynomial. The resolvents can be recognized as 1 2 r 1 = − 1 2 p = − b 2 a , r 2 2 = p 2 − 4 q = b 2 − 4 a c a 2 , {\displaystyle {\tfrac {1}{2}}r_{1}=-{\tfrac {1}{2}}p=-{\frac {b}{2a}},\qquad r_{2}^{2}=p^{2}-4q={\frac {b^{2}-4ac}{a^{2}}},} respectively the vertex and the discriminant of the monic polynomial. 
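For a concrete monic example the resolvent recipe can be followed step by step (a sketch, real-root case):

```python
import math

def roots_via_resolvents(p, q):
    """Roots of x^2 + p x + q via Lagrange resolvents: r1 = alpha + beta
    is symmetric in the roots and equals -p; r2 = alpha - beta is not,
    but r2^2 = p^2 - 4q is, and taking its square root 'breaks' the
    symmetry (real-root case)."""
    r1 = -p
    r2 = math.sqrt(p * p - 4 * q)
    return ((r1 + r2) / 2, (r1 - r2) / 2)

# x^2 - 5x + 6 = (x - 2)(x - 3), so p = -5, q = 6:
print(roots_via_resolvents(-5, 6))  # (3.0, 2.0)
```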
A similar but more complicated method works for cubic equations, which have three resolvents and a quadratic equation (the "resolving polynomial") relating r 2 {\displaystyle r_{2}} and r 3 {\displaystyle r_{3}} , which one can solve by the quadratic formula, and similarly for a quartic equation (degree 4), whose resolving polynomial is a cubic, which can in turn be solved. The same method for a quintic equation yields a polynomial of degree 24, which does not simplify the problem, and, in fact, solutions to quintic equations in general cannot be expressed using only radicals. == Numerical calculation == The quadratic formula is exactly correct when performed using the idealized arithmetic of real numbers, but when approximate arithmetic is used instead, for example pen-and-paper arithmetic carried out to a fixed number of decimal places or the floating-point binary arithmetic available on computers, the limitations of the number representation can lead to substantially inaccurate results unless great care is taken in the implementation. Specific difficulties include catastrophic cancellation in computing the sum − b ± Δ {\displaystyle \textstyle -b\pm {\sqrt {\Delta }}} if b ≈ ± Δ {\displaystyle \textstyle b\approx \pm {\sqrt {\Delta }}} ; catastrophic cancellation in computing the discriminant Δ = b 2 − 4 a c {\displaystyle \textstyle \Delta =b^{2}-4ac} itself in cases where b 2 ≈ 4 a c {\displaystyle \textstyle b^{2}\approx 4ac} ; degeneration of the formula when a {\displaystyle a} , b {\displaystyle b} , or c {\displaystyle c} is represented as zero or infinite; and possible overflow or underflow when multiplying or dividing extremely large or small numbers, even in cases where the roots can be accurately represented. Catastrophic cancellation occurs when two numbers which are approximately equal are subtracted. 
While each of the numbers may independently be representable to a certain number of digits of precision, the identical leading digits of each number cancel, resulting in a difference of lower relative precision. When b ≈ Δ {\displaystyle \textstyle b\approx {\sqrt {\Delta }}} , evaluation of − b + Δ {\displaystyle \textstyle -b+{\sqrt {\Delta }}} causes catastrophic cancellation, as does the evaluation of − b − Δ {\displaystyle \textstyle -b-{\sqrt {\Delta }}} when b ≈ − Δ {\displaystyle \textstyle b\approx -{\sqrt {\Delta }}} . When using the standard quadratic formula, calculating one of the two roots always involves addition, which preserves the working precision of the intermediate calculations, while calculating the other root involves subtraction, which compromises it. Therefore, naïvely following the standard quadratic formula often yields one result with less relative precision than expected. Unfortunately, introductory algebra textbooks typically do not address this problem, even though it causes students to obtain inaccurate results in other school subjects such as introductory chemistry. For example, if trying to solve the equation x 2 − 1634 x + 2 = 0 {\displaystyle \textstyle x^{2}-1634x+2=0} using a pocket calculator, the result of the quadratic formula x = 817 ± 667 487 {\displaystyle \textstyle x=817\pm {\sqrt {667\,487}}} might be approximately calculated as: x 1 = 817 + 816.998 776 0 = 1.633 998 776 × 10 3 , x 2 = 817 − 816.998 776 0 = 1.224 × 10 − 3 . {\displaystyle {\begin{alignedat}{3}x_{1}&=817+816.998\,776\,0&&=1.633\,998\,776\times 10^{3},\\x_{2}&=817-816.998\,776\,0&&=1.224\times 10^{-3}.\end{alignedat}}} Even though the calculator used ten decimal digits of precision for each step, calculating the difference between two approximately equal numbers has yielded a result for x 2 {\displaystyle x_{2}} with only four correct digits. One way to recover an accurate result is to use the identity x 1 x 2 = c / a {\displaystyle x_{1}x_{2}=c/a} . 
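The following sketch shows both the failure mode and the x1x2 = c/a recovery. IEEE double precision carries roughly 16 significant digits, so the calculator example above loses only a few of them there; the second equation is a hypothetical, more extreme example chosen so the cancellation is visible in double precision as well:

```python
import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def small_root_stable(a, b, c):
    # Compute the root where -b and the square root add constructively,
    # then recover the other one from Vieta's identity x1 * x2 = c/a.
    d = math.sqrt(b * b - 4 * a * c)
    big = (-b - d) / (2 * a) if b > 0 else (-b + d) / (2 * a)
    return c / (a * big)

# The text's example x^2 - 1634x + 2 = 0, now accurate to ten digits:
print(f"{small_root_stable(1, -1634, 2):.9e}")  # 1.223991125e-03

# Hypothetical extreme case x^2 - 1e8 x + 1 = 0 (small root near 1e-8):
print(roots_naive(1, -1e8, 1)[1])     # off by tens of percent
print(small_root_stable(1, -1e8, 1))  # accurate
```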
In this example x 2 {\displaystyle x_{2}} can be calculated as x 2 = 2 / x 1 = {\displaystyle x_{2}=2/x_{1}={}} 1.223 991 125 × 10 − 3 {\displaystyle 1.223\,991\,125\times 10^{-3}\!} , which is correct to the full ten digits. Another more or less equivalent approach is to use the version of the quadratic formula with the square root in the denominator to calculate one of the roots (see § Square root in the denominator above). Practical computer implementations of the solution of quadratic equations commonly choose which formula to use for each root depending on the sign of b {\displaystyle b} . These methods do not prevent possible overflow or underflow of the floating-point exponent in computing b 2 {\displaystyle \textstyle b^{2}} or 4 a c {\displaystyle 4ac} , which can lead to numerically representable roots not being computed accurately. A more robust but computationally expensive strategy is to start with the substitution x = − u sgn ( b ) | c | / | a | {\displaystyle \textstyle x=-u\operatorname {sgn}(b){\sqrt {\vert c\vert }}{\big /}\!{\sqrt {\vert a\vert }}} , turning the quadratic equation into u 2 − 2 | b | 2 | a | | c | u + sgn ( c ) = 0 , {\displaystyle u^{2}-2{\frac {|b|}{2{\sqrt {|a|}}{\sqrt {|c|}}}}u+\operatorname {sgn}(c)=0,} where sgn {\displaystyle \operatorname {sgn} } is the sign function. Letting d = | b | / 2 | a | | c | {\displaystyle \textstyle d=\vert b\vert {\big /}2{\sqrt {\vert a\vert }}{\sqrt {\vert c\vert }}} , this equation has the form u 2 − 2 d u ± 1 = 0 {\displaystyle \textstyle u^{2}-2du\pm 1=0} , for which one solution is u 1 = d + d 2 ∓ 1 {\displaystyle \textstyle u_{1}=d+{\sqrt {d^{2}\mp 1}}} and the other solution is u 2 = ± 1 / u 1 {\displaystyle \textstyle u_{2}=\pm 1/u_{1}} . 
The roots of the original equation are then x 1 = − sgn ( b ) ( | c | / | a | ) u 1 {\displaystyle \textstyle x_{1}=-\operatorname {sgn}(b){\bigl (}{\sqrt {\vert c\vert }}{\big /}\!{\sqrt {\vert a\vert }}~\!{\bigr )}u_{1}} and x 2 = − sgn ( b ) ( | c | / | a | ) u 2 {\displaystyle \textstyle x_{2}=-\operatorname {sgn}(b){\bigl (}{\sqrt {\vert c\vert }}{\big /}\!{\sqrt {\vert a\vert }}~\!{\bigr )}u_{2}} . With additional complication the expense and extra rounding of the square roots can be avoided by approximating them as powers of two, while still avoiding exponent overflow for representable roots. == Historical development == The earliest methods for solving quadratic equations were geometric. Babylonian cuneiform tablets contain problems reducible to solving quadratic equations. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation. The Greek mathematician Euclid (circa 300 BC) used geometric methods to solve quadratic equations in Book 2 of his Elements, an influential mathematical treatise. Rules for quadratic equations appear in the Chinese The Nine Chapters on the Mathematical Art circa 200 BC. In his work Arithmetica, the Greek mathematician Diophantus (circa 250 AD) solved quadratic equations with a method more recognizably algebraic than the geometric algebra of Euclid. His solution gives only one root, even when both roots are positive. The Indian mathematician Brahmagupta included a generic method for finding one root of a quadratic equation in his treatise Brāhmasphuṭasiddhānta (circa 628 AD), written out in words in the style of that time. 
His solution of the quadratic equation a x 2 + b x = c {\displaystyle \textstyle ax^{2}+bx=c} was as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." In modern notation, this can be written x = ( c ⋅ 4 a + b 2 − b ) / 2 a {\displaystyle \textstyle x={\bigl (}{\sqrt {c\cdot 4a+b^{2}}}-b{\bigr )}{\big /}2a} . The Indian mathematician Śrīdhara (8th–9th century) came up with a similar algorithm for solving quadratic equations in a now-lost work on algebra quoted by Bhāskara II. The modern quadratic formula is sometimes called Sridharacharya's formula in India and Bhaskara's formula in Brazil. The 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī solved quadratic equations algebraically. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published La Géométrie containing special cases of the quadratic formula in the form we know today. == Geometric significance == In terms of coordinate geometry, an axis-aligned parabola is a curve whose ( x , y ) {\displaystyle (x,y)} -coordinates are the graph of a second-degree polynomial, of the form y = a x 2 + b x + c {\displaystyle \textstyle y=ax^{2}+bx+c} , where a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} are real-valued constant coefficients with a ≠ 0 {\displaystyle a\neq 0} . Geometrically, the quadratic formula defines the points ( x , 0 ) {\displaystyle (x,0)} on the graph, where the parabola crosses the x {\displaystyle x} -axis. Furthermore, it can be separated into two terms, x = − b ± b 2 − 4 a c 2 a = − b 2 a ± b 2 − 4 a c 2 a . 
{\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}=-{\frac {b}{2a}}\pm {\frac {\sqrt {b^{2}-4ac}}{2a}}.} The first term describes the axis of symmetry, the line x = − b 2 a {\displaystyle x=-{\tfrac {b}{2a}}} . The second term, b 2 − 4 a c / 2 a {\displaystyle \textstyle {\sqrt {b^{2}-4ac}}{\big /}2a} , gives the distance the roots are away from the axis of symmetry. If the parabola's vertex is on the x {\displaystyle x} -axis, then the corresponding equation has a single repeated root on the line of symmetry, and this distance term is zero; algebraically, the discriminant b 2 − 4 a c = 0 {\displaystyle \textstyle b^{2}-4ac=0} . If the discriminant is positive, then the vertex is not on the x {\displaystyle x} -axis but the parabola opens in the direction of the x {\displaystyle x} -axis, crossing it twice, so the corresponding equation has two real roots. If the discriminant is negative, then the parabola opens in the opposite direction, never crossing the x {\displaystyle x} -axis, and the equation has no real roots; in this case the two complex-valued roots will be complex conjugates whose real part is the x {\displaystyle x} value of the axis of symmetry. == Dimensional analysis == If the constants a {\displaystyle a} , b {\displaystyle b} , and/or c {\displaystyle c} are not unitless then the quantities x {\displaystyle x} and b a {\displaystyle {\tfrac {b}{a}}} must have the same units, because the terms a x 2 {\displaystyle \textstyle ax^{2}} and b x {\displaystyle bx} agree on their units. By the same logic, the coefficient c {\displaystyle c} must have the same units as b 2 a {\displaystyle {\tfrac {b^{2}}{a}}} , irrespective of the units of x {\displaystyle x} . This can be a powerful tool for verifying that a quadratic expression of physical quantities has been set up correctly. == See also == Fundamental theorem of algebra Vieta's formulas == Notes == == References == Smith, David Eugene (1923), History of Mathematics, vol. 
2, Boston: Ginn. Irving, Ron (2013), Beyond the Quadratic Formula, MAA, ISBN 978-0-88385-783-0
Wikipedia:Quadratic growth#0
In mathematics, a function or sequence is said to exhibit quadratic growth when its values are proportional to the square of the function argument or sequence position. "Quadratic growth" often means more generally "quadratic growth in the limit", as the argument or sequence position goes to infinity – in big Theta notation, f ( x ) = Θ ( x 2 ) {\displaystyle f(x)=\Theta (x^{2})} . This can be defined either continuously (for a real-valued function of a real variable) or discretely (for a sequence of real numbers, i.e., real-valued function of an integer or natural number variable). == Examples == Examples of quadratic growth include: Any quadratic polynomial. Certain integer sequences such as the triangular numbers. The n {\displaystyle n} th triangular number has value n ( n + 1 ) / 2 {\displaystyle n(n+1)/2} , approximately n 2 / 2 {\displaystyle n^{2}/2} . For a real function of a real variable, quadratic growth is equivalent to the second derivative being constant (i.e., the third derivative being zero), and thus functions with quadratic growth are exactly the quadratic polynomials, as these are the kernel of the third derivative operator D 3 {\displaystyle D^{3}} . Similarly, for a sequence (a real function of an integer or natural number variable), quadratic growth is equivalent to the second finite difference being constant (the third finite difference being zero), and thus a sequence with quadratic growth is also a quadratic polynomial. Indeed, an integer-valued sequence with quadratic growth is a polynomial in the zeroth, first, and second binomial coefficient with integer values. The coefficients can be determined by taking the Taylor polynomial (if continuous) or Newton polynomial (if discrete). Algorithmic examples include: The amount of time taken in the worst case by certain algorithms, such as insertion sort, as a function of the input length. 
The numbers of live cells in space-filling cellular automaton patterns such as the breeder, as a function of the number of time steps for which the pattern is simulated. Metcalfe's law stating that the value of a communications network grows quadratically as a function of its number of users. == See also == Exponential growth == References ==
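The constant-second-difference characterization described above can be checked directly for the triangular numbers (a small illustrative sketch):

```python
def triangular(n):
    # n-th triangular number, n*(n+1)/2, approximately n**2/2
    return n * (n + 1) // 2

values = [triangular(n) for n in range(12)]
first = [b - a for a, b in zip(values, values[1:])]
second = [b - a for a, b in zip(first, first[1:])]
# a constant second finite difference signals quadratic growth
assert set(second) == {1}
```

The third finite difference of such a sequence is identically zero, consistent with the sequence being a quadratic polynomial in n.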
Wikipedia:Quadratic-linear algebra#0
In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example, 4 x 2 + 2 x y − 3 y 2 {\displaystyle 4x^{2}+2xy-3y^{2}} is a quadratic form in the variables x and y. The coefficients usually belong to a fixed field K, such as the real or complex numbers, and one speaks of a quadratic form over K. Over the reals, a quadratic form is said to be definite if it takes the value zero only when all its variables are simultaneously zero; otherwise it is isotropic. Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of manifolds, especially four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form − x T Σ − 1 x {\displaystyle -\mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Sigma }}^{-1}\mathbf {x} } ). Quadratic forms are not to be confused with quadratic equations, which have only one variable and may include terms of degree less than two. A quadratic form is a specific instance of the more general concept of forms. == Introduction == Quadratic forms are homogeneous quadratic polynomials in n variables. In the cases of one, two, and three variables they are called unary, binary, and ternary and have the following explicit form: q ( x ) = a x 2 (unary) q ( x , y ) = a x 2 + b x y + c y 2 (binary) q ( x , y , z ) = a x 2 + b x y + c y 2 + d y z + e z 2 + f x z (ternary) {\displaystyle {\begin{aligned}q(x)&=ax^{2}&&{\textrm {(unary)}}\\q(x,y)&=ax^{2}+bxy+cy^{2}&&{\textrm {(binary)}}\\q(x,y,z)&=ax^{2}+bxy+cy^{2}+dyz+ez^{2}+fxz&&{\textrm {(ternary)}}\end{aligned}}} where a, ..., f are the coefficients. 
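For instance, the binary form 4x² + 2xy − 3y² mentioned above can be evaluated directly, and homogeneity of degree two checked (a small sketch):

```python
def q(x, y):
    # the binary quadratic form 4x^2 + 2xy - 3y^2 from the text
    return 4 * x**2 + 2 * x * y - 3 * y**2

# every term has degree two, so q(t*x, t*y) == t**2 * q(x, y)
for t in range(-3, 4):
    assert q(t * 1, t * 2) == t * t * q(1, 2)
```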
The theory of quadratic forms and methods used in their study depend in a large measure on the nature of the coefficients, which may be real or complex numbers, rational numbers, or integers. In linear algebra, analytic geometry, and in the majority of applications of quadratic forms, the coefficients are real or complex numbers. In the algebraic theory of quadratic forms, the coefficients are elements of a certain field. In the arithmetic theory of quadratic forms, the coefficients belong to a fixed commutative ring, frequently the integers Z or the p-adic integers Zp. Binary quadratic forms have been extensively studied in number theory, in particular, in the theory of quadratic fields, continued fractions, and modular forms. The theory of integral quadratic forms in n variables has important applications to algebraic topology. Using homogeneous coordinates, a non-zero quadratic form in n variables defines an (n − 2)-dimensional quadric in the (n − 1)-dimensional projective space. This is a basic construction in projective geometry. In this way one may visualize 3-dimensional real quadratic forms as conic sections. An example is given by the three-dimensional Euclidean space and the square of the Euclidean norm expressing the distance between a point with coordinates (x, y, z) and the origin: q ( x , y , z ) = d ( ( x , y , z ) , ( 0 , 0 , 0 ) ) 2 = ‖ ( x , y , z ) ‖ 2 = x 2 + y 2 + z 2 . {\displaystyle q(x,y,z)=d((x,y,z),(0,0,0))^{2}=\left\|(x,y,z)\right\|^{2}=x^{2}+y^{2}+z^{2}.} A closely related notion with geometric overtones is a quadratic space, which is a pair (V, q), with V a vector space over a field K, and q : V → K a quadratic form on V. See § Definitions below for the definition of a quadratic form on a vector space. == History == The study of quadratic forms, in particular the question of whether a given integer can be the value of a quadratic form over the integers, dates back many centuries. 
One such case is Fermat's theorem on sums of two squares, which determines when an integer may be expressed in the form x2 + y2, where x, y are integers. This problem is related to the problem of finding Pythagorean triples, which appeared in the second millennium BCE. In 628, the Indian mathematician Brahmagupta wrote Brāhmasphuṭasiddhānta, which includes, among many other things, a study of equations of the form x2 − ny2 = c. He considered what is now called Pell's equation, x2 − ny2 = 1, and found a method for its solution. In Europe this problem was studied by Brouncker, Euler and Lagrange. In 1801 Gauss published Disquisitiones Arithmeticae, a major portion of which was devoted to a complete theory of binary quadratic forms over the integers. Since then, the concept has been generalized, and the connections with quadratic number fields, the modular group, and other areas of mathematics have been further elucidated. == Associated symmetric matrix == Any n × n matrix A determines a quadratic form qA in n variables by q A ( x 1 , … , x n ) = ∑ i = 1 n ∑ j = 1 n a i j x i x j = x T A x , {\displaystyle q_{A}(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{x_{i}}{x_{j}}=\mathbf {x} ^{\mathsf {T}}A\mathbf {x} ,} where A = (aij). === Example === Consider the case of quadratic forms in three variables x, y, z. The matrix A has the form A = [ a b c d e f g h k ] . {\displaystyle A={\begin{bmatrix}a&b&c\\d&e&f\\g&h&k\end{bmatrix}}.} The above formula gives q A ( x , y , z ) = a x 2 + e y 2 + k z 2 + ( b + d ) x y + ( c + g ) x z + ( f + h ) y z . {\displaystyle q_{A}(x,y,z)=ax^{2}+ey^{2}+kz^{2}+(b+d)xy+(c+g)xz+(f+h)yz.} So, two different matrices define the same quadratic form if and only if they have the same elements on the diagonal and the same values for the sums b + d, c + g and f + h. In particular, the quadratic form qA is defined by a unique symmetric matrix A = [ a b + d 2 c + g 2 b + d 2 e f + h 2 c + g 2 f + h 2 k ] . 
{\displaystyle A={\begin{bmatrix}a&{\frac {b+d}{2}}&{\frac {c+g}{2}}\\{\frac {b+d}{2}}&e&{\frac {f+h}{2}}\\{\frac {c+g}{2}}&{\frac {f+h}{2}}&k\end{bmatrix}}.} This generalizes to any number of variables as follows. === General case === Given a quadratic form qA over the real numbers, defined by the matrix A = (aij), the matrix B = ( a i j + a j i 2 ) = 1 2 ( A + A T ) {\displaystyle B=\left({\frac {a_{ij}+a_{ji}}{2}}\right)={\frac {1}{2}}(A+A^{\text{T}})} is symmetric, defines the same quadratic form as A, and is the unique symmetric matrix that defines qA. So, over the real numbers (and, more generally, over a field of characteristic different from two), there is a one-to-one correspondence between quadratic forms and symmetric matrices that determine them. == Real quadratic forms == A fundamental problem is the classification of real quadratic forms under a linear change of variables. Jacobi proved that, for every real quadratic form, there is an orthogonal diagonalization; that is, an orthogonal change of variables that puts the quadratic form in a "diagonal form" λ 1 x ~ 1 2 + λ 2 x ~ 2 2 + ⋯ + λ n x ~ n 2 , {\displaystyle \lambda _{1}{\tilde {x}}_{1}^{2}+\lambda _{2}{\tilde {x}}_{2}^{2}+\cdots +\lambda _{n}{\tilde {x}}_{n}^{2},} where the associated symmetric matrix is diagonal. Moreover, the coefficients λ1, λ2, ..., λn are determined uniquely up to a permutation. If the change of variables is given by an invertible matrix that is not necessarily orthogonal, one can suppose that all coefficients λi are 0, 1, or −1. Sylvester's law of inertia states that the numbers of each 0, 1, and −1 are invariants of the quadratic form, in the sense that any other diagonalization will contain the same number of each. The signature of the quadratic form is the triple (n0, n+, n−), where these components count the number of 0s, number of 1s, and the number of −1s, respectively. 
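The signature can be computed by simultaneous row-and-column (congruence) elimination, a sketch of Lagrange's reduction; exact rational arithmetic is used so the sign counts are reliable, and the input is assumed to be a symmetric matrix:

```python
from fractions import Fraction

def signature(A):
    """Signature (n0, n_plus, n_minus) of a symmetric rational matrix,
    computed by congruence operations (each row operation is mirrored
    on the columns, so the result stays congruent to A)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]

    def add_multiple(dst, src, f):
        # row_dst += f * row_src, then the same on columns: one congruence step
        for k in range(n):
            M[dst][k] += f * M[src][k]
        for k in range(n):
            M[k][dst] += f * M[k][src]

    def swap(i, j):
        # swap rows i, j and columns i, j: also a congruence
        M[i], M[j] = M[j], M[i]
        for row in M:
            row[i], row[j] = row[j], row[i]

    for i in range(n):
        if M[i][i] == 0:
            for j in range(i + 1, n):
                if M[j][j] != 0:
                    swap(i, j)
                    break
            else:
                for j in range(i + 1, n):
                    if M[i][j] != 0:
                        add_multiple(i, j, Fraction(1))  # M[i][i] becomes 2*M[i][j]
                        break
        if M[i][i] == 0:
            continue  # row i is zero in the remaining block
        for j in range(i + 1, n):
            add_multiple(j, i, -M[j][i] / M[i][i])

    diag = [M[i][i] for i in range(n)]
    return (sum(d == 0 for d in diag),
            sum(d > 0 for d in diag),
            sum(d < 0 for d in diag))
```

For example, the hyperbolic form 2xy, with matrix [[0, 1], [1, 0]], yields (0, 1, 1), matching its diagonalization as x² − y².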
Sylvester's law of inertia shows that this is a well-defined quantity attached to the quadratic form. The case when all λi have the same sign is especially important: in this case the quadratic form is called positive definite (all 1) or negative definite (all −1). If none of the terms are 0, then the form is called nondegenerate; this includes positive definite, negative definite, and isotropic quadratic form (a mix of 1 and −1); equivalently, a nondegenerate quadratic form is one whose associated symmetric form is a nondegenerate bilinear form. A real vector space with an indefinite nondegenerate quadratic form of index (p, q) (denoting p 1s and q −1s) is often denoted as Rp,q particularly in the physical theory of spacetime. The discriminant of a quadratic form, concretely the class of the determinant of a representing matrix in K / (K×)2 (up to non-zero squares) can also be defined, and for a real quadratic form is a cruder invariant than signature, taking values of only "positive, zero, or negative". Zero corresponds to degenerate, while for a non-degenerate form it is the parity of the number of negative coefficients, (−1)n−. These results are reformulated in a different way below. Let q be a quadratic form defined on an n-dimensional real vector space. Let A be the matrix of the quadratic form q in a given basis. This means that A is a symmetric n × n matrix such that q ( v ) = x T A x , {\displaystyle q(v)=x^{\mathsf {T}}Ax,} where x is the column vector of coordinates of v in the chosen basis. Under a change of basis, the column x is multiplied on the left by an n × n invertible matrix S, and the symmetric square matrix A is transformed into another symmetric square matrix B of the same size according to the formula A → B = S T A S . 
{\displaystyle A\to B=S^{\mathsf {T}}AS.} Any symmetric matrix A can be transformed into a diagonal matrix B = ( λ 1 0 ⋯ 0 0 λ 2 ⋯ 0 ⋮ ⋮ ⋱ 0 0 0 ⋯ λ n ) {\displaystyle B={\begin{pmatrix}\lambda _{1}&0&\cdots &0\\0&\lambda _{2}&\cdots &0\\\vdots &\vdots &\ddots &0\\0&0&\cdots &\lambda _{n}\end{pmatrix}}} by a suitable choice of an orthogonal matrix S, and the diagonal entries of B are uniquely determined – this is Jacobi's theorem. If S is allowed to be any invertible matrix then B can be made to have only 0, 1, and −1 on the diagonal, and the number of the entries of each type (n0 for 0, n+ for 1, and n− for −1) depends only on A. This is one of the formulations of Sylvester's law of inertia and the numbers n+ and n− are called the positive and negative indices of inertia. Although their definition involved a choice of basis and consideration of the corresponding real symmetric matrix A, Sylvester's law of inertia means that they are invariants of the quadratic form q. The quadratic form q is positive definite if q(v) > 0 (similarly, negative definite if q(v) < 0) for every nonzero vector v. When q(v) assumes both positive and negative values, q is an isotropic quadratic form. The theorems of Jacobi and Sylvester show that any positive definite quadratic form in n variables can be brought to the sum of n squares by a suitable invertible linear transformation: geometrically, there is only one positive definite real quadratic form of every dimension. Its isometry group is a compact orthogonal group O(n). This stands in contrast with the case of isotropic forms, when the corresponding group, the indefinite orthogonal group O(p, q), is non-compact. Further, the isometry groups of Q and −Q are the same (O(p, q) ≈ O(q, p)), but the associated Clifford algebras (and hence pin groups) are different. 
== Definitions == A quadratic form over a field K is a map q : V → K from a finite-dimensional K-vector space to K such that q(av) = a2q(v) for all a ∈ K, v ∈ V and the function q(u + v) − q(u) − q(v) is bilinear. More concretely, an n-ary quadratic form over a field K is a homogeneous polynomial of degree 2 in n variables with coefficients in K: q ( x 1 , … , x n ) = ∑ i = 1 n ∑ j = 1 n a i j x i x j , a i j ∈ K . {\displaystyle q(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{x_{i}}{x_{j}},\quad a_{ij}\in K.} This formula may be rewritten using matrices: let x be the column vector with components x1, ..., xn and A = (aij) be the n × n matrix over K whose entries are the coefficients of q. Then q ( x ) = x T A x . {\displaystyle q(x)=x^{\mathsf {T}}Ax.} A vector v = (x1, ..., xn) is a null vector if q(v) = 0. Two n-ary quadratic forms φ and ψ over K are equivalent if there exists a nonsingular linear transformation C ∈ GL(n, K) such that ψ ( x ) = φ ( C x ) . {\displaystyle \psi (x)=\varphi (Cx).} Let the characteristic of K be different from 2. The coefficient matrix A of q may be replaced by the symmetric matrix (A + AT)/2 with the same quadratic form, so it may be assumed from the outset that A is symmetric. Moreover, a symmetric matrix A is uniquely determined by the corresponding quadratic form. Under an equivalence C, the symmetric matrix A of φ and the symmetric matrix B of ψ are related as follows: B = C T A C . {\displaystyle B=C^{\mathsf {T}}AC.} The associated bilinear form of a quadratic form q is defined by b q ( x , y ) = 1 2 ( q ( x + y ) − q ( x ) − q ( y ) ) = x T A y = y T A x . {\displaystyle b_{q}(x,y)={\tfrac {1}{2}}(q(x+y)-q(x)-q(y))=x^{\mathsf {T}}Ay=y^{\mathsf {T}}Ax.} Thus, bq is a symmetric bilinear form over K with matrix A. Conversely, any symmetric bilinear form b defines a quadratic form q ( x ) = b ( x , x ) , {\displaystyle q(x)=b(x,x),} and these two processes are the inverses of each other. 
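The polarization identity above can be checked on a small example (a sketch using an assumed symmetric matrix A = [[1, 2], [2, −3]], with exact rational arithmetic):

```python
from fractions import Fraction

A = [[Fraction(1), Fraction(2)], [Fraction(2), Fraction(-3)]]

def q(v):
    # q(v) = v^T A v for the symmetric matrix A above
    return sum(A[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

def b(u, v):
    # associated bilinear form: b_q(u, v) = (q(u + v) - q(u) - q(v)) / 2
    w = [u[0] + v[0], u[1] + v[1]]
    return (q(w) - q(u) - q(v)) / 2

u, v = [1, 0], [0, 1]
# b_q is symmetric, recovers the off-diagonal entry of A on basis vectors,
# and recovers q on the diagonal: q(u) = b_q(u, u)
assert b(u, v) == A[0][1] == b(v, u)
assert b(u, u) == q(u)
```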
As a consequence, over a field of characteristic not equal to 2, the theories of symmetric bilinear forms and of quadratic forms in n variables are essentially the same. === Quadratic space === Given an n-dimensional vector space V over a field K, a quadratic form on V is a function Q : V → K that has the following property: for some basis, the function q that maps the coordinates of v ∈ V to Q(v) is a quadratic form. In particular, if V = Kn with its standard basis, one has q ( v 1 , … , v n ) = Q ( [ v 1 , … , v n ] ) for [ v 1 , … , v n ] ∈ K n . {\displaystyle q(v_{1},\ldots ,v_{n})=Q([v_{1},\ldots ,v_{n}])\quad {\text{for}}\quad [v_{1},\ldots ,v_{n}]\in K^{n}.} The change of basis formulas show that the property of being a quadratic form does not depend on the choice of a specific basis in V, although the quadratic form q depends on the choice of the basis. A finite-dimensional vector space with a quadratic form is called a quadratic space. The map Q is a homogeneous function of degree 2, which means that it has the property that, for all a in K and v in V: Q ( a v ) = a 2 Q ( v ) . {\displaystyle Q(av)=a^{2}Q(v).} When the characteristic of K is not 2, the bilinear map B : V × V → K over K is defined: B ( v , w ) = 1 2 ( Q ( v + w ) − Q ( v ) − Q ( w ) ) . {\displaystyle B(v,w)={\tfrac {1}{2}}(Q(v+w)-Q(v)-Q(w)).} This bilinear form B is symmetric. That is, B(x, y) = B(y, x) for all x, y in V, and it determines Q: Q(x) = B(x, x) for all x in V. When the characteristic of K is 2, so that 2 is not a unit, it is still possible to use a quadratic form to define a symmetric bilinear form B′(x, y) = Q(x + y) − Q(x) − Q(y). However, Q(x) can no longer be recovered from this B′ in the same way, since B′(x, x) = 0 for all x (and is thus alternating). Alternatively, there always exists a bilinear form B″ (not in general either unique or symmetric) such that B″(x, x) = Q(x). 
The pair (V, Q) consisting of a finite-dimensional vector space V over K and a quadratic map Q from V to K is called a quadratic space, and B as defined here is the associated symmetric bilinear form of Q. The notion of a quadratic space is a coordinate-free version of the notion of quadratic form. Sometimes, Q is also called a quadratic form. Two n-dimensional quadratic spaces (V, Q) and (V′, Q′) are isometric if there exists an invertible linear transformation T : V → V′ (isometry) such that Q ( v ) = Q ′ ( T v ) for all v ∈ V . {\displaystyle Q(v)=Q'(Tv){\text{ for all }}v\in V.} The isometry classes of n-dimensional quadratic spaces over K correspond to the equivalence classes of n-ary quadratic forms over K. === Generalization === Let R be a commutative ring, M be an R-module, and b : M × M → R be an R-bilinear form. A mapping q : M → R : v ↦ b(v, v) is the associated quadratic form of b, and B : M × M → R : (u, v) ↦ q(u + v) − q(u) − q(v) is the polar form of q. A quadratic form q : M → R may be characterized in the following equivalent ways: There exists an R-bilinear form b : M × M → R such that q(v) is the associated quadratic form. q(av) = a2q(v) for all a ∈ R and v ∈ M, and the polar form of q is R-bilinear. === Related concepts === Two elements v and w of V are called orthogonal if B(v, w) = 0. The kernel of a bilinear form B consists of the elements that are orthogonal to every element of V. Q is non-singular if the kernel of its associated bilinear form is {0}. If there exists a non-zero v in V such that Q(v) = 0, the quadratic form Q is isotropic, otherwise it is definite. This terminology also applies to vectors and subspaces of a quadratic space. If the restriction of Q to a subspace U of V is identically zero, then U is totally singular. The orthogonal group of a non-singular quadratic form Q is the group of the linear automorphisms of V that preserve Q: that is, the group of isometries of (V, Q) into itself. 
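As an illustration of the orthogonal group, a plane rotation preserves the positive definite form Q(x, y) = x² + y², so it is an isometry of (R², Q) and an element of O(2) (a small sketch):

```python
import math

def Q(v):
    # the positive definite quadratic form Q(x, y) = x^2 + y^2
    x, y = v
    return x * x + y * y

def rotate(v, theta):
    # rotation by theta, an isometry of (R^2, Q)
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

v = (3.0, 4.0)
# Q is preserved up to floating-point rounding
assert abs(Q(rotate(v, 0.7)) - Q(v)) < 1e-9
```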
If a quadratic space (A, Q) has a product so that A is an algebra over a field, and satisfies ∀ x , y ∈ A Q ( x y ) = Q ( x ) Q ( y ) , {\displaystyle \forall x,y\in A\quad Q(xy)=Q(x)Q(y),} then it is a composition algebra. == Equivalence of forms == Every quadratic form q in n variables over a field of characteristic not equal to 2 is equivalent to a diagonal form q ( x ) = a 1 x 1 2 + a 2 x 2 2 + ⋯ + a n x n 2 . {\displaystyle q(x)=a_{1}x_{1}^{2}+a_{2}x_{2}^{2}+\cdots +a_{n}x_{n}^{2}.} Such a diagonal form is often denoted by ⟨a1, ..., an⟩. Classification of all quadratic forms up to equivalence can thus be reduced to the case of diagonal forms. == Geometric meaning == Using Cartesian coordinates in three dimensions, let x = (x, y, z)T, and let A be a symmetric 3-by-3 matrix. Then the geometric nature of the solution set of the equation xTAx + bTx = 1 depends on the eigenvalues of the matrix A. If all eigenvalues of A are non-zero, then the solution set is an ellipsoid or a hyperboloid. If all the eigenvalues are positive, then it is an ellipsoid; if all the eigenvalues are negative, then it is an imaginary ellipsoid (we get the equation of an ellipsoid but with imaginary radii); if some eigenvalues are positive and some are negative, then it is a hyperboloid. If there exist one or more eigenvalues λi = 0, then the shape depends on the corresponding bi. If the corresponding bi ≠ 0, then the solution set is a paraboloid (either elliptic or hyperbolic); if the corresponding bi = 0, then the dimension i degenerates and does not come into play, and the geometric meaning will be determined by other eigenvalues and other components of b. When the solution set is a paraboloid, whether it is elliptic or hyperbolic is determined by whether all other non-zero eigenvalues are of the same sign: if they are, then it is elliptic; otherwise, it is hyperbolic. 
== Integral quadratic forms == Quadratic forms over the ring of integers are called integral quadratic forms, whereas the corresponding modules are quadratic lattices (sometimes, simply lattices). They play an important role in number theory and topology. An integral quadratic form has integer coefficients, such as x2 + xy + y2; equivalently, given a lattice Λ in a vector space V (over a field with characteristic 0, such as Q or R), a quadratic form Q is integral with respect to Λ if and only if it is integer-valued on Λ, meaning Q(x, y) ∈ Z if x, y ∈ Λ. This is the current use of the term; in the past it was sometimes used differently, as detailed below. === Historical use === Historically there was some confusion and controversy over whether the notion of integral quadratic form should mean: "twos in", the quadratic form associated to a symmetric matrix with integer coefficients; or "twos out", a polynomial with integer coefficients (so the associated symmetric matrix may have half-integer coefficients off the diagonal). This debate was due to the confusion of quadratic forms (represented by polynomials) and symmetric bilinear forms (represented by matrices), and "twos out" is now the accepted convention; "twos in" is instead the theory of integral symmetric bilinear forms (integral symmetric matrices). In "twos in", binary quadratic forms are of the form ax2 + 2bxy + cy2, represented by the symmetric matrix ( a b b c ) {\displaystyle {\begin{pmatrix}a&b\\b&c\end{pmatrix}}} This is the convention Gauss uses in Disquisitiones Arithmeticae. In "twos out", binary quadratic forms are of the form ax2 + bxy + cy2, represented by the symmetric matrix ( a b / 2 b / 2 c ) . {\displaystyle {\begin{pmatrix}a&b/2\\b/2&c\end{pmatrix}}.} Several points of view mean that twos out has been adopted as the standard convention. 
Those include: better understanding of the 2-adic theory of quadratic forms, the 'local' source of the difficulty; the lattice point of view, which was generally adopted by the experts in the arithmetic of quadratic forms during the 1950s; the actual needs for integral quadratic form theory in topology for intersection theory; the Lie group and algebraic group aspects. === Universal quadratic forms === An integral quadratic form whose image consists of all the positive integers is sometimes called universal. Lagrange's four-square theorem shows that w2 + x2 + y2 + z2 is universal. Ramanujan generalized this to aw2 + bx2 + cy2 + dz2 and found 54 multisets {a, b, c, d} that can each generate all positive integers. There are also forms whose image consists of all but one of the positive integers. For example, {1, 2, 5, 5} has 15 as the exception. Recently, the 15 and 290 theorems have completely characterized universal integral quadratic forms: if all coefficients are integers, then it represents all positive integers if and only if it represents all integers up through 290; if it has an integral matrix, it represents all positive integers if and only if it represents all integers up through 15. == See also == ε-quadratic form Cubic form Discriminant of a quadratic form Hasse–Minkowski theorem Quadric Ramanujan's ternary quadratic form Square class Witt group Witt's theorem == Notes == == References == O'Meara, O.T. (2000), Introduction to Quadratic Forms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-66564-9 Conway, John Horton; Fung, Francis Y. C. (1997), The Sensual (Quadratic) Form, Carus Mathematical Monographs, The Mathematical Association of America, ISBN 978-0-88385-030-5 Shafarevich, I. R.; Remizov, A. O. (2012). Linear Algebra and Geometry. Springer. ISBN 978-3-642-30993-9. == Further reading == Cassels, J.W.S. (1978). Rational Quadratic Forms. London Mathematical Society Monographs. Vol. 13. Academic Press. ISBN 0-12-163260-1. Zbl 0395.10029. 
Kitaoka, Yoshiyuki (1993). Arithmetic of quadratic forms. Cambridge Tracts in Mathematics. Vol. 106. Cambridge University Press. ISBN 0-521-40475-4. Zbl 0785.11021. Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023. Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 73. Springer-Verlag. ISBN 3-540-06009-X. Zbl 0292.10016. O'Meara, O.T. (1973). Introduction to quadratic forms. Die Grundlehren der mathematischen Wissenschaften. Vol. 117. Springer-Verlag. ISBN 3-540-66564-1. Zbl 0259.10018. Pfister, Albrecht (1995). Quadratic Forms with Applications to Algebraic Geometry and Topology. London Mathematical Society lecture note series. Vol. 217. Cambridge University Press. ISBN 0-521-46755-1. Zbl 0847.11014. == External links == A.V.Malyshev (2001) [1994], "Quadratic form", Encyclopedia of Mathematics, EMS Press A.V.Malyshev (2001) [1994], "Binary quadratic form", Encyclopedia of Mathematics, EMS Press
|
Wikipedia:Quadratrix of Hippias#0
|
The quadratrix or trisectrix of Hippias (also called the quadratrix of Dinostratus) is a curve which is created by a uniform motion. It is traced out by the crossing point of two lines, one moving by translation at a uniform speed, and the other moving by rotation around one of its points at a uniform speed. An alternative definition as a parametric curve leads to an equivalence between the quadratrix, the image of the Lambert W function, and the graph of the function y = x cot x {\displaystyle y=x\cot x} . The discovery of this curve is attributed to the Greek sophist Hippias of Elis, who used it around 420 BC in an attempt to solve the angle trisection problem, hence its name as a trisectrix. Later around 350 BC Dinostratus used it in an attempt to solve the problem of squaring the circle, hence its name as a quadratrix. Dinostratus's theorem, used in this attempt, relates an endpoint of the curve to the value of π. Both angle trisection and squaring the circle can be solved using a compass, a straightedge, and a given copy of this curve, but not by compass and straightedge alone. Although a dense set of points on the curve can be constructed by compass and straightedge, allowing these problems to be approximated, the whole curve cannot be constructed in this way. The quadratrix of Hippias is a transcendental curve. It is one of several curves used in Greek mathematics for squaring the circle. == Definitions == === By moving lines === Consider a square A B C D {\displaystyle ABCD} , and an inscribed quarter circle arc centered at A {\displaystyle A} with radius equal to the side of the square. 
Let E {\displaystyle E} be a point that travels with a constant angular velocity along the arc from D {\displaystyle D} to B {\displaystyle B} , and let F {\displaystyle F} be a point that travels simultaneously with a constant velocity from D {\displaystyle D} to A {\displaystyle A} along line segment A D ¯ {\displaystyle {\overline {AD}}} , so that E {\displaystyle E} and F {\displaystyle F} start at the same time at D {\displaystyle D} and arrive at the same time at B {\displaystyle B} and A {\displaystyle A} . Then the quadratrix is defined as the locus of the intersection of line segment A E ¯ {\displaystyle {\overline {AE}}} with the parallel line to A B ¯ {\displaystyle {\overline {AB}}} through F {\displaystyle F} . === Helicoid section === If a line in three-dimensional space, perpendicular to and intersecting the z {\displaystyle z} -axis, rotates at a constant rate at the same time that its intersection with the z {\displaystyle z} -axis moves upward at a constant rate, it will trace out a helicoid. As Pappus of Alexandria observed, the curve formed by intersecting this helicoid with a non-vertical plane that contains one of the generating lines of the helicoid, when projected onto the x y {\displaystyle xy} -plane, forms a quadratrix. 
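This moving-lines definition can be simulated directly. A minimal numerical sketch (with side a = 1, vertex A at the origin, B on the x-axis, D on the y-axis, and s the common time fraction of the two motions; the function name is illustrative):

```python
import math

def quadratrix_point(s, a=1.0):
    # At time fraction s, the rotating line AE makes angle theta = (1 - s)*pi/2
    # with side AB, and the translating line sits at height y = a*(1 - s).
    theta = (1.0 - s) * math.pi / 2.0
    y = a * (1.0 - s)
    # Intersection of the line y = x*tan(theta) with the horizontal line at height y:
    x = y / math.tan(theta)
    return x, y

# Midway (s = 1/2): E is at 45 degrees, so the crossing point lies on y = x.
x, y = quadratrix_point(0.5)
assert abs(x - y) < 1e-12

# As both motions finish (s -> 1), the crossing point approaches (2a/pi, 0) on side AB.
x, y = quadratrix_point(1.0 - 1e-9)
assert abs(x - 2.0 / math.pi) < 1e-6
print(x)
```

At s = 0 the two lines meet at the corner D, and as the motions finish the crossing point approaches the endpoint on side AB at distance 2a/π from A, a fact taken up again later in the article.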
=== Parametric equation === If one places square A B C D {\displaystyle ABCD} with side length a {\displaystyle a} in a (Cartesian) coordinate system with the side A B ¯ {\displaystyle {\overline {AB}}} on the x {\displaystyle x} -axis and with vertex A {\displaystyle A} at the origin, then the quadratrix is described by a parametric equation that gives the coordinates of each point on the curve as a function of a time parameter t {\displaystyle t} , as γ ( t ) = ( x ( t ) y ( t ) ) = ( 2 a π t cot ( t ) 2 a π t ) {\displaystyle \gamma (t)={\begin{pmatrix}x(t)\\y(t)\end{pmatrix}}={\begin{pmatrix}{\frac {2a}{\pi }}t\cot(t)\\{\frac {2a}{\pi }}t\end{pmatrix}}} This description can also be used to give an analytical rather than a geometric definition of the quadratrix and to extend it beyond the ( 0 , π 2 ] {\displaystyle (0,{\tfrac {\pi }{2}}]} interval. It does however remain undefined at the points where cot ( t ) {\displaystyle \cot(t)} is singular, except for the case of t = 0 {\displaystyle t=0} . At t = 0 {\displaystyle t=0} , the singularity is removable by evaluating it using the limit lim t → 0 t cot ( t ) = 1 {\displaystyle \lim _{t\to 0}t\cot(t)=1} , obtained as the ratio of the identity function and tangent function using l'Hôpital's rule. Removing the singularity in this way and extending the parametric definition to negative values of t {\displaystyle t} yields a continuous planar curve on the range of parameter values − π < t < π {\displaystyle -\pi <t<\pi } . === As the graph of a function === When reflected left to right and scaled appropriately in the complex plane, the quadratrix forms the image of the real axis for one branch of Lambert W function. The images for other branches consist of curves above and below the quadratrix, and the real axis itself. 
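The removable singularity of the parametric equation at t = 0 can be checked numerically; the sketch below (with a = 1) evaluates t·cot(t) near zero and substitutes its limit value 1 at t = 0 itself:

```python
import math

def x_coord(t, a=1.0):
    # x(t) = (2a/pi) * t * cot(t), with the t = 0 singularity removed
    # by substituting the limit value lim t*cot(t) = 1.
    return (2.0 * a / math.pi) * (t / math.tan(t) if t != 0.0 else 1.0)

for t in (0.1, 0.01, 0.001):
    print(t, x_coord(t))   # tends to 2/pi ~ 0.6366 as t -> 0

# The extended function is continuous at t = 0:
assert abs(x_coord(0.0) - 2.0 / math.pi) < 1e-15
assert abs(x_coord(1e-6) - x_coord(0.0)) < 1e-9
```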
To describe the quadratrix as the graph of an unbranched function, it is advantageous to swap the y {\displaystyle y} -axis and the x {\displaystyle x} -axis, that is to place the side A B ¯ {\displaystyle {\overline {AB}}} on the y {\displaystyle y} -axis rather than on the x {\displaystyle x} -axis. Then the quadratrix forms the graph of the function f ( x ) = x ⋅ cot ( π 2 a ⋅ x ) . {\displaystyle f(x)=x\cdot \cot \left({\frac {\pi }{2a}}\cdot x\right).} == Angle trisection == The trisection of an arbitrary angle using only compass and straightedge is impossible. However, if the quadratrix is allowed as an additional tool, it is possible to divide an arbitrary angle into n {\displaystyle n} equal segments and hence a trisection ( n = 3 {\displaystyle n=3} ) becomes possible. In practical terms the quadratrix can be drawn with the help of a template or a quadratrix compass (see drawing). By the definition of the quadratrix, the traversed angle is proportional to the traversed segment of the associated squares' side. Therefore, dividing that segment on the side into n {\displaystyle n} equal parts yields a partition of the associated angle into n {\displaystyle n} equal parts as well. Dividing the line segment into n {\displaystyle n} equal parts with ruler and compass is possible due to the intercept theorem. In more detail, to divide a given angle ∠ B A E {\displaystyle \angle BAE} (at most 90°) into any desired number of equal parts, construct a square A B C D {\displaystyle ABCD} over its leg A B ¯ {\displaystyle {\overline {AB}}} . The other leg of the angle intersects the quadratrix of the square in a point G {\displaystyle G} and the parallel line to the leg A B ¯ {\displaystyle {\overline {AB}}} through G {\displaystyle G} intersects the side A D ¯ {\displaystyle {\overline {AD}}} of the square in F {\displaystyle F} . 
Now the segment A F ¯ {\displaystyle {\overline {AF}}} corresponds to the angle ∠ B A E {\displaystyle \angle BAE} and due to the definition of the quadratrix any division of the segment A F ¯ {\displaystyle {\overline {AF}}} into n {\displaystyle n} equal segments yields a corresponding division of the angle ∠ B A E {\displaystyle \angle BAE} into n {\displaystyle n} equal angles. To divide the segment A F ¯ {\displaystyle {\overline {AF}}} into n {\displaystyle n} equal segments, draw any ray starting at A {\displaystyle A} with n {\displaystyle n} equal segments (of arbitrary length) on it. Connect the endpoint O {\displaystyle O} of the last segment to F {\displaystyle F} and draw lines parallel to O F ¯ {\displaystyle {\overline {OF}}} through all the endpoints of the remaining n − 1 {\displaystyle n-1} segments on A O ¯ {\displaystyle {\overline {AO}}} . These parallel lines divide the segment A F ¯ {\displaystyle {\overline {AF}}} into n {\displaystyle n} equal segments. Now draw parallel lines to A B ¯ {\displaystyle {\overline {AB}}} through the endpoints of those segments on A F ¯ {\displaystyle {\overline {AF}}} , intersecting the trisectrix. Connecting their points of intersection to A {\displaystyle A} yields a partition of angle ∠ B A E {\displaystyle \angle BAE} into n {\displaystyle n} equal angles. Since not all points of the trisectrix can be constructed with circle and compass alone, it is really required as an additional tool beyond the compass and straightedge. However it is possible to construct a dense subset of the trisectrix by compass and straightedge. In this way, while one cannot assure an exact division of an angle into n {\displaystyle n} parts without a given trisectrix, one can construct an arbitrarily close approximation to the trisectrix and therefore also to the division of the angle by compass and straightedge alone. == Squaring the circle == Squaring the circle with compass and straightedge alone is impossible. 
However, if one allows the quadratrix of Hippias as an additional construction tool, the squaring of the circle becomes possible due to Dinostratus's theorem relating an endpoint of this curve to the value of π. One can use this theorem to construct a square with the same area as a quarter circle. Another square with twice the side length has the same area as the full circle. === Dinostratus's theorem === According to Dinostratus's theorem the quadratrix divides one of the sides of the associated square in a ratio of 2 π {\displaystyle {\tfrac {2}{\pi }}} . More precisely, for the square A B C D {\displaystyle ABCD} used to define the curve, let J {\displaystyle J} be the endpoint of the curve on edge A B {\displaystyle AB} . Then A J ¯ A B ¯ = 2 π , {\displaystyle {\frac {\overline {AJ}}{\overline {AB}}}={\frac {2}{\pi }},} as can be seen from the parametric equation for the quadratrix at t = 0 {\displaystyle t=0} and the limiting behavior of the function controlling its x {\displaystyle x} -coordinate at that parameter value, lim t → 0 t cot t = 1 {\displaystyle \lim _{t\to 0}t\cot t=1} . The point J {\displaystyle J} , where the quadratrix meets the side A B ¯ {\displaystyle {\overline {AB}}} of the associated square, is one of the points of the quadratrix that cannot be constructed with ruler and compass alone and not even with the help of the quadratrix compass. This is due to the fact that (as Sporus of Nicaea already observed) the two uniformly moving lines coincide and hence there exists no unique intersection point. However relying on the generalized definition of the quadratrix as a function or planar curve allows for J {\displaystyle J} being a point on the quadratrix. === Construction === For a given quarter circle with radius r {\displaystyle r} one constructs the associated square A B C D {\displaystyle ABCD} with side length r {\displaystyle r} . 
The quadratrix intersects the side A B ¯ {\displaystyle {\overline {AB}}} in J {\displaystyle J} with | A J ¯ | = 2 π r {\displaystyle \left|{\overline {AJ}}\right|={\tfrac {2}{\pi }}r} . Now one constructs a line segment J K ¯ {\displaystyle {\overline {JK}}} of length r {\displaystyle r} being perpendicular to A B ¯ {\displaystyle {\overline {AB}}} . Then the line through A {\displaystyle A} and K {\displaystyle K} intersects the extension of the side B C ¯ {\displaystyle {\overline {BC}}} in L {\displaystyle L} and from the intercept theorem follows | B L ¯ | = π 2 r {\displaystyle \left|{\overline {BL}}\right|={\tfrac {\pi }{2}}r} . Extending A B ¯ {\displaystyle {\overline {AB}}} to the right by a new line segment | B O ¯ | = r 2 {\displaystyle \left|{\overline {BO}}\right|={\tfrac {r}{2}}} yields the rectangle B L N O {\displaystyle BLNO} with sides B L ¯ {\displaystyle {\overline {BL}}} and B O ¯ {\displaystyle {\overline {BO}}} , the area of which matches the area of the quarter circle. This rectangle can be transformed into a square of the same area with the help of Euclid's geometric mean theorem. One extends the side O N ¯ {\displaystyle {\overline {ON}}} by a line segment | O Q ¯ | = | B O ¯ | = r 2 {\displaystyle \left|{\overline {OQ}}\right|=\left|{\overline {BO}}\right|={\tfrac {r}{2}}} and draws a half circle to the right of N Q ¯ {\displaystyle {\overline {NQ}}} , which has N Q ¯ {\displaystyle {\overline {NQ}}} as its diameter. The extension of B O ¯ {\displaystyle {\overline {BO}}} meets the half circle in R {\displaystyle R} and due to Thales' theorem the line segment O R ¯ {\displaystyle {\overline {OR}}} is the altitude of the right-angled triangle Q N R {\displaystyle QNR} . Hence the geometric mean theorem can be applied, which means that O R ¯ {\displaystyle {\overline {OR}}} forms the side of a square O U S R {\displaystyle OUSR} with the same area as the rectangle B L N O {\displaystyle BLNO} and hence as the quarter circle. 
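The chain of lengths in this construction can be verified numerically. The following sketch (with r = 1) is a check of the relations derived above, not a substitute for the geometric construction:

```python
import math

r = 1.0
AJ = 2.0 * r / math.pi          # Dinostratus's theorem: where the quadratrix meets AB
JK = r                          # perpendicular segment of length r erected at J
BL = r * JK / AJ                # intercept theorem on line AKL gives BL = (pi/2)*r
BO = r / 2.0
rect_area = BL * BO             # area of rectangle BLNO
assert math.isclose(rect_area, math.pi * r**2 / 4.0)   # equals the quarter circle's area

# Geometric mean theorem: OR^2 = ON * OQ gives the side of the equal-area square.
ON, OQ = BL, BO
OR = math.sqrt(ON * OQ)
assert math.isclose(OR**2, rect_area)
print(OR)   # side length of a square with the area of the quarter circle
```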
== Other properties == For a quadratrix constructed from a unit square, the area under the quadratrix is 2 ln 2 π ≈ 0.44127. {\displaystyle {\frac {2\ln 2}{\pi }}\approx 0.44127.} Inverting the quadratrix by a circle centered at the axis of the rotating line that defines it produces a cochleoid, and in the same way inverting the cochleoid produces a quadratrix. == History == The quadratrix of Hippias is one of several curves used in Greek mathematics for squaring the circle, and the best known for this purpose. Another is the Archimedean spiral, used to square the circle by Archimedes. It is mentioned in the works of Proclus (412–485), Pappus of Alexandria (3rd and 4th centuries) and Iamblichus (c. 240 – c. 325). Proclus names Hippias as the inventor of a curve called a quadratrix and describes elsewhere how Hippias applied the curve to the trisection problem. Pappus only mentions how a curve named a quadratrix was used by Dinostratus, Nicomedes and others to square the circle. He relays the objections of Sporus of Nicaea to this construction, but neither mentions Hippias nor attributes the invention of the quadratrix to a particular person. Iamblichus writes only in a single line that a curve called a quadratrix was used by Nicomedes to square the circle. From Proclus' name for the curve, it is conceivable that Hippias himself used it for squaring the circle or some other curvilinear figure. However, most historians of mathematics assume that Hippias invented the curve, but used it only for the trisection of angles. According to this theory, its use for squaring the circle only occurred decades later and was due to mathematicians like Dinostratus and Nicomedes. This interpretation of the historical sources goes back to the German mathematician and historian Moritz Cantor. 
Rüdiger Thiele claims that François Viète used the trisectrix to derive Viète's formula, an infinite product of nested radicals published by Viète in 1593 that converges to 2 / π {\displaystyle 2/\pi } . However, other sources instead view Viète's formula as an elaboration of a method of nested polygons used by Archimedes to approximate π {\displaystyle \pi } . In his 1637 book La Géométrie, René Descartes classified curves either as "geometric", admitting a precise geometric construction, or if not as "mechanical"; he gave the quadratrix as an example of a mechanical curve. In modern terminology, roughly the same distinction may be expressed by saying that it is a transcendental curve rather than an algebraic curve. Isaac Newton used trigonometric series to determine the area enclosed by the quadratrix. == Related phenomena == When a camera with a rolling shutter takes a photograph of a quickly rotating object, such as a propeller, curves resembling the quadratrix of Hippias may appear, generated in an analogous way to the quadratrix: these curves are traced out by the points of intersection of the rotating propeller blade and the linearly moving scan line of the camera. Different curves may be generated depending on the angle of the propeller at the time when the scan line crosses its axis of rotation (rather than coinciding with the scan line at that time for the quadratrix). A similar visual phenomenon was also observed in the 19th century by Peter Mark Roget when the spoked wheel of a moving cart or train is viewed through the vertical slats of a fence or palisade; it is called Roget’s palisade illusion. == References == == Further reading == Alsina, Claudi; Nelsen, Roger B. (2010), Charming Proofs: A Journey Into Elegant Mathematics, Mathematical Association of America, pp. 
146–147, ISBN 978-0-88385-348-1 Venner, Nicole (March 13, 2023), Algebraic properties of Euclidean geometry with transcendental curves, arXiv:2303.12514; this unpublished preprint includes the conjecture that the quadratrix cannot be used for doubling the cube, another problem unsolvable with compass and straightedge == External links == Michael D. Huberty, Ko Hayashi, Chia Vang: Hippias' Quadratrix Weisstein, Eric W., "Quadratrix of Hippias", MathWorld{{cite web}}: CS1 maint: overridden setting (link)
|
Wikipedia:Quadrature (geometry)#0
|
In mathematics, quadrature is a historic term for the computation of areas and is thus used for computation of integrals. The word is derived from the Latin quadratus meaning "square". The reason is that, for Ancient Greek mathematicians, the computation of an area consisted of constructing a square of the same area. In this sense, the modern term is squaring. For example, the quadrature of the circle (or squaring the circle) is a famous old problem that was shown in the 19th century to be impossible with the methods available to the Ancient Greeks. Integral calculus, introduced in the 17th century, is a general method for computation of areas. Quadrature came to refer to the computation of any integral; such a computation is presently called more often "integral" or "integration". However, the computation of solutions of differential equations and differential systems is also called integration, and quadrature remains useful for distinguishing integrals from solutions of differential equations, in contexts where both problems are considered. This is the case in numerical analysis; see numerical quadrature. Also, reduction to quadratures and solving by quadratures means expressing solutions of differential equations in terms of integrals. The remainder of this article is devoted to the original meaning of quadrature, namely, computation of areas. == History == === Antiquity === Greek mathematicians understood the determination of an area of a figure as the process of geometrically constructing a square having the same area (squaring), thus the name quadrature for this process. The Greek geometers were not always successful (see squaring the circle), but they did carry out quadratures of some figures whose sides were not simply line segments, such as the lune of Hippocrates and the parabola. By a certain Greek tradition, these constructions had to be performed using only a compass and straightedge, though not all Greek mathematicians adhered to this dictum. 
For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side x = a b {\displaystyle x={\sqrt {ab}}} (the geometric mean of a and b). For this purpose it is possible to use the following: if one draws the circle with diameter made from joining line segments of lengths a and b, then the height (BH in the diagram) of the line segment drawn perpendicular to the diameter, from the point of their connection to the point where it crosses the circle, equals the geometric mean of a and b. A similar geometrical construction solves the problems of quadrature of a parallelogram and of a triangle. Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures a quadrature can be performed. The quadratures of the surface of a sphere and a parabola segment discovered by Archimedes became the highest achievement of analysis in antiquity. The area of the surface of a sphere is equal to four times the area of the circle formed by a great circle of this sphere. The area of a segment of a parabola determined by a straight line cutting it is 4/3 the area of a triangle inscribed in this segment. For the proofs of these results, Archimedes used the method of exhaustion attributed to Eudoxus. === Medieval mathematics === In medieval Europe, quadrature meant the calculation of area by any method. Most often the method of indivisibles was used; it was less rigorous than the geometric constructions of the Greeks, but it was simpler and more powerful. 
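The height construction described above amounts to computing a geometric mean, which can be checked with coordinates: place the joining point of the two segments at the origin, so the circle has center ((b − a)/2, 0) and radius (a + b)/2. A minimal sketch (the function name is illustrative):

```python
import math

def quadrature_height(a, b):
    # Circle whose diameter is the two joined segments of lengths a and b;
    # the joining point sits at x = 0.
    center = (b - a) / 2.0
    radius = (a + b) / 2.0
    # Height of the circle directly above the joining point:
    return math.sqrt(radius**2 - center**2)

for a, b in [(2.0, 8.0), (3.0, 5.0), (1.0, 7.0)]:
    h = quadrature_height(a, b)
    assert math.isclose(h, math.sqrt(a * b))   # the height BH equals the geometric mean
    print(a, b, h)
```

Algebraically, radius² − center² = ((a + b)² − (b − a)²)/4 = ab, which is why the height is exactly the side of the square with the rectangle's area.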
With its help, Galileo Galilei and Gilles de Roberval found the area of a cycloid arch, Grégoire de Saint-Vincent investigated the area under a hyperbola (Opus Geometricum, 1647),: 491 and Alphonse Antonio de Sarasa, de Saint-Vincent's pupil and commentator, noted the relation of this area to logarithms.: 492 === Integral calculus === John Wallis algebrised this method; he wrote in his Arithmetica Infinitorum (1656) some series which are equivalent to what is now called the definite integral, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves and spirals. Christiaan Huygens successfully performed a quadrature of the surface area of some solids of revolution. The quadrature of the hyperbola by Gregoire de Saint-Vincent and A. A. de Sarasa provided a new function, the natural logarithm, of critical importance. With the invention of integral calculus came a universal method for area calculation. In response, the term quadrature has become traditional, and instead the modern phrase finding the area is more commonly used for what is technically the computation of a univariate definite integral. == See also == Gaussian quadrature Hyperbolic angle Numerical integration Quadratrix Tanh-sinh quadrature == Notes == == References == Boyer, C. B. (1989) A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, ISBN 0-471-09763-2 (1991 pbk ed. ISBN 0-471-54397-7). Thomas Heath (1921) A History of Greek Mathematics, Oxford, Clarendon Press, via Internet Archive: Volume I, From Thales to Euclid, Volume II, From Aristarchus to Diophantus Eves, Howard (1990) An Introduction to the History of Mathematics, Saunders, ISBN 0-03-029558-0, Christiaan Huygens (1651) Theoremata de Quadratura Hyperboles, Ellipsis et Circuli Jean-Etienne Montucla (1873) History of the Quadrature of the Circle, J. Babin translator, William Alexander Myers editor, link from HathiTrust. 
Christoph Scriba (1983) "Gregory's Converging Double Sequence: a new look at the controversy between Huygens and Gregory over the 'analytical' quadrature of the circle", Historia Mathematica 10:274–85.
|
Wikipedia:Quantized enveloping algebra#0
|
In mathematics, a quantum or quantized enveloping algebra is a q-analog of a universal enveloping algebra. Given a Lie algebra g {\displaystyle {\mathfrak {g}}} , the quantum enveloping algebra is typically denoted as U q ( g ) {\displaystyle U_{q}({\mathfrak {g}})} . The notation was introduced by Drinfeld and independently by Jimbo. Among the applications, studying the q → 0 {\displaystyle q\to 0} limit led to the discovery of crystal bases. == The case of s l 2 {\displaystyle {\mathfrak {sl}}_{2}} == Michio Jimbo considered the algebras with three generators related by the three commutators [ h , e ] = 2 e , [ h , f ] = − 2 f , [ e , f ] = sinh ( η h ) / sinh η . {\displaystyle [h,e]=2e,\ [h,f]=-2f,\ [e,f]=\sinh(\eta h)/\sinh \eta .} When η → 0 {\displaystyle \eta \to 0} , these reduce to the commutators that define the special linear Lie algebra s l 2 {\displaystyle {\mathfrak {sl}}_{2}} . In contrast, for nonzero η {\displaystyle \eta } , the algebra defined by these relations is not a Lie algebra but instead an associative algebra that can be regarded as a deformation of the universal enveloping algebra of s l 2 {\displaystyle {\mathfrak {sl}}_{2}} . == See also == Quantum group == Notes == == References == Drinfel'd, V. G. (1987), "Quantum Groups", Proceedings of the International Congress of Mathematicians 1986, 1, American Mathematical Society: 798–820 Tjin, T. (10 October 1992). "An introduction to quantized Lie groups and algebras". International Journal of Modern Physics A. 07 (25): 6175–6213. arXiv:hep-th/9111043. Bibcode:1992IJMPA...7.6175T. doi:10.1142/S0217751X92002805. ISSN 0217-751X. S2CID 119087306. == External links == Quantized enveloping algebra at the nLab Quantized enveloping algebras at q = 1 {\displaystyle q=1} at MathOverflow Does there exist any "quantum Lie algebra" imbedded into the quantum enveloping algebra U q ( g ) {\displaystyle U_{q}(g)} ? at MathOverflow
|
Wikipedia:Quantum algebra#0
|
Quantum algebra is one of the top-level mathematics categories used by the arXiv. It is the study of noncommutative analogues and generalizations of commutative algebras, especially those arising in Lie theory. Subjects include: Quantum groups Skein theories Operadic algebra Diagrammatic algebra Quantum field theory Racks and quandles == See also == == References == == External links == Quantum algebra at arxiv.org
|
Wikipedia:Quantum groupoid#0
|
In mathematics, a quantum groupoid is any of a number of notions in noncommutative geometry analogous to the notion of groupoid. In usual geometry, the information of a groupoid can be contained in its monoidal category of representations (by a version of Tannaka–Krein duality), in its groupoid algebra or in the commutative Hopf algebroid of functions on the groupoid. Thus formalisms trying to capture quantum groupoids include certain classes of (autonomous) monoidal categories, Hopf algebroids etc. == References == Ross Street, Brian Day, "Quantum categories, star autonomy, and quantum groupoids", in "Galois Theory, Hopf Algebras, and Semiabelian Categories", Fields Institute Communications 43 (American Math. Soc. 2004) 187–226; arXiv:math/0301209 Gabriella Böhm, "Hopf algebroids", (a chapter of) Handbook of algebra, Vol. 6, ed. by M. Hazewinkel, Elsevier 2009, 173–236 arXiv:0805.3806 Jiang-Hua Lu, "Hopf algebroids and quantum groupoids", Int. J. Math. 7, n. 1 (1996) pp. 47–70, arXiv:q-alg/9505024, MR1369905, doi:10.1142/S0129167X96000050
|
Wikipedia:Quartic equation#0
|
In mathematics, a quartic equation is one which can be expressed as a quartic function equaling zero. The general form of a quartic equation is a x 4 + b x 3 + c x 2 + d x + e = 0 {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0\,} where a ≠ 0. The quartic is the highest order polynomial equation that can be solved by radicals in the general case. == History == Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna (1545). The proof that this was the highest order general polynomial for which such solutions could be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois before his death in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result. == Solving a quartic equation, special cases == Consider a quartic equation expressed in the form a 0 x 4 + a 1 x 3 + a 2 x 2 + a 3 x + a 4 = 0 {\displaystyle a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{3}x+a_{4}=0} : There exists a general formula for finding the roots of quartic equations, provided the coefficient of the leading term is non-zero. However, since the general method is quite complex and susceptible to errors in execution, it is better to apply one of the special cases listed below if possible. === Degenerate case === If the constant term a4 = 0, then one of the roots is x = 0, and the other roots can be found by dividing by x, and solving the resulting cubic equation, a 0 x 3 + a 1 x 2 + a 2 x + a 3 = 0. 
{\displaystyle a_{0}x^{3}+a_{1}x^{2}+a_{2}x+a_{3}=0.\,} === Evident roots: 1 and −1 and −k === Call our quartic polynomial Q(x). Since 1 raised to any power is 1, Q ( 1 ) = a 0 + a 1 + a 2 + a 3 + a 4 . {\displaystyle Q(1)=a_{0}+a_{1}+a_{2}+a_{3}+a_{4}\ .} Thus if a 0 + a 1 + a 2 + a 3 + a 4 = 0 , {\displaystyle \ a_{0}+a_{1}+a_{2}+a_{3}+a_{4}=0\ ,} Q(1) = 0 and so x = 1 is a root of Q(x). It can similarly be shown that if a 0 + a 2 + a 4 = a 1 + a 3 , {\displaystyle \ a_{0}+a_{2}+a_{4}=a_{1}+a_{3}\ ,} x = −1 is a root. In either case the full quartic can then be divided by the factor (x − 1) or (x + 1) respectively yielding a new cubic polynomial, which can be solved to find the quartic's other roots. If a 1 = a 0 k , {\displaystyle \ a_{1}=a_{0}k\ ,} a 2 = 0 {\displaystyle \ a_{2}=0\ } and a 4 = a 3 k , {\displaystyle \ a_{4}=a_{3}k\ ,} then x = − k {\displaystyle \ x=-k\ } is a root of the equation. The full quartic can then be factorized this way: a 0 x 4 + a 0 k x 3 + a 3 x + a 3 k = a 0 x 3 ( x + k ) + a 3 ( x + k ) = ( a 0 x 3 + a 3 ) ( x + k ) . {\displaystyle \ a_{0}x^{4}+a_{0}kx^{3}+a_{3}x+a_{3}k=a_{0}x^{3}(x+k)+a_{3}(x+k)=(a_{0}x^{3}+a_{3})(x+k)\ .} Alternatively, if a 1 = a 0 k , {\displaystyle \ a_{1}=a_{0}k\ ,} a 3 = a 2 k , {\displaystyle \ a_{3}=a_{2}k\ ,} and a 4 = 0 , {\displaystyle \ a_{4}=0\ ,} then x = 0 and x = −k become two known roots. Q(x) divided by x(x + k) is a quadratic polynomial. 
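These coefficient tests can be sketched in a few lines (the helper names are illustrative; evaluation uses Horner's scheme and the division by the known linear factor uses synthetic division):

```python
def eval_poly(coeffs, x):
    # coeffs = [a0, a1, a2, a3, a4] for a0*x^4 + a1*x^3 + ... + a4 (Horner's scheme)
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def deflate(coeffs, root):
    # Synthetic division of the quartic by (x - root); returns the cubic's coefficients.
    out = [coeffs[0]]
    for c in coeffs[1:-1]:
        out.append(c + root * out[-1])
    return out

# x = 1 is a root whenever the coefficients sum to zero:
q = [1.0, -3.0, 1.0, 4.0, -3.0]          # 1 - 3 + 1 + 4 - 3 = 0
assert eval_poly(q, 1.0) == 0.0
cubic = deflate(q, 1.0)
print(cubic)   # coefficients of the remaining cubic factor
```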
=== Biquadratic equations === A quartic equation where a3 and a1 are equal to 0 takes the form a 0 x 4 + a 2 x 2 + a 4 = 0 {\displaystyle a_{0}x^{4}+a_{2}x^{2}+a_{4}=0\,\!} and thus is a biquadratic equation, which is easy to solve: let z = x 2 {\displaystyle z=x^{2}} , so our equation becomes a 0 z 2 + a 2 z + a 4 = 0 {\displaystyle a_{0}z^{2}+a_{2}z+a_{4}=0\,\!} which is a simple quadratic equation, whose solutions are easily found using the quadratic formula: z = − a 2 ± a 2 2 − 4 a 0 a 4 2 a 0 {\displaystyle z={\frac {-a_{2}\pm {\sqrt {a_{2}^{2}-4a_{0}a_{4}}}}{2a_{0}}}\,\!} When we've solved it (i.e. found these two z values), we can extract x from them x 1 = + z + {\displaystyle x_{1}=+{\sqrt {z_{+}}}\,\!} x 2 = − z + {\displaystyle x_{2}=-{\sqrt {z_{+}}}\,\!} x 3 = + z − {\displaystyle x_{3}=+{\sqrt {z_{-}}}\,\!} x 4 = − z − {\displaystyle x_{4}=-{\sqrt {z_{-}}}\,\!} If either of the z solutions were negative or complex numbers, then some of the x solutions are complex numbers. === Quasi-symmetric equations === a 0 x 4 + a 1 x 3 + a 2 x 2 + a 1 m x + a 0 m 2 = 0 {\displaystyle a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{1}mx+a_{0}m^{2}=0\,} Steps: Divide by x 2. Use variable change z = x + m/x. So, z 2 = x 2 + (m/x) 2 + 2m. This leads to: a 0 ( x 2 + m 2 / x 2 ) + a 1 ( x + m / x ) + a 2 = 0 {\displaystyle a_{0}(x^{2}+m^{2}/x^{2})+a_{1}(x+m/x)+a_{2}=0} , a 0 ( z 2 − 2 m ) + a 1 ( z ) + a 2 = 0 {\displaystyle a_{0}(z^{2}-2m)+a_{1}(z)+a_{2}=0} , z 2 + ( a 1 / a 0 ) z + ( a 2 / a 0 − 2 m ) = 0 {\displaystyle z^{2}+(a_{1}/a_{0})z+(a_{2}/a_{0}-2m)=0} (a quadratic in z = x + m/x) === Multiple roots === If the quartic has a double root, it can be found by taking the polynomial greatest common divisor with its derivative. Then they can be divided out and the resulting quadratic equation solved. 
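The greatest-common-divisor test for a double root can be sketched with SymPy, assuming it is available:

```python
from sympy import symbols, gcd, diff, expand, quo

x = symbols('x')
# A quartic with the double root x = 1:
Q = expand((x - 1)**2 * (x - 2) * (x - 3))

g = gcd(Q, diff(Q, x))   # the shared factor exposes the repeated root
print(g)                  # x - 1

# Dividing Q by g**2 leaves a quadratic carrying the remaining simple roots.
remaining = quo(Q, g**2)
print(remaining)          # x**2 - 5*x + 6
```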
In general, there exist only four possible cases of quartic equations with multiple roots, which are listed below: Multiplicity-4 (M4): when the general quartic equation can be expressed as a ( x − l ) 4 = 0 {\displaystyle a(x-l)^{4}=0} , for some real number l {\displaystyle l} . This case can always be reduced to a biquadratic equation. Multiplicity-3 (M3): when the general quartic equation can be expressed as a ( x − l ) 3 ( x − m ) = 0 {\displaystyle a(x-l)^{3}(x-m)=0} , where l {\displaystyle l} and m {\displaystyle m} are two different real numbers. This is the only case that can never be reduced to a biquadratic equation. Double Multiplicity-2 (DM2): when the general quartic equation can be expressed as a ( x − l ) 2 ( x − m ) 2 = 0 {\displaystyle a(x-l)^{2}(x-m)^{2}=0} , where l {\displaystyle l} and m {\displaystyle m} are two different real numbers or a pair of non-real complex conjugate numbers. This case can also always be reduced to a biquadratic equation. Single Multiplicity-2 (SM2): when the general quartic equation can be expressed as a ( x − l ) 2 ( x − m ) ( x − n ) = 0 {\displaystyle a(x-l)^{2}(x-m)(x-n)=0} , where l {\displaystyle l} , m {\displaystyle m} , and n {\displaystyle n} are three different real numbers or l {\displaystyle l} is a real number and m {\displaystyle m} and n {\displaystyle n} are a pair of non-real complex conjugate numbers. This case is divided into two subcases, those that can be reduced to a biquadratic equation and those that can't. 
So, if the three non-monic coefficients of the depressed quartic equation, x 4 + p x 2 + q x + r = 0 {\displaystyle x^{4}+px^{2}+qx+r=0} , in terms of the five coefficients of the general quartic equation are given as follows: p = 8 a c − 3 b 2 8 a 2 {\displaystyle p={\frac {8ac-3b^{2}}{8a^{2}}}} , q = b 3 − 4 a b c + 8 a 2 d 8 a 3 {\displaystyle q={\frac {b^{3}-4abc+8a^{2}d}{8a^{3}}}} and r = 16 a b 2 c − 64 a 2 b d − 3 b 4 + 256 a 3 e 256 a 4 {\displaystyle r={\frac {16ab^{2}c-64a^{2}bd-3b^{4}+256a^{3}e}{256a^{4}}}} , then the criteria to identify a priori each case of quartic equations with multiple roots and their respective solutions are shown below. M4. The general quartic equation corresponds to this case whenever p = q = r = 0 {\displaystyle p=q=r=0} , so the four roots of this equation are given as follows: x 1 = x 2 = x 3 = x 4 = − b 4 a {\displaystyle x_{1}=x_{2}=x_{3}=x_{4}=-{\frac {b}{4a}}} . M3. The general quartic equation corresponds to this case whenever p 2 = − 12 r > 0 {\displaystyle p^{2}=-12r>0} and 27 q 2 = − 8 p 3 > 0 {\displaystyle 27q^{2}=-8p^{3}>0} , so the four roots of this equation are given as follows: x 1 = x 2 = x 3 = − p 6 − b 4 a {\displaystyle x_{1}=x_{2}=x_{3}={\sqrt {-{\frac {p}{6}}}}-{\frac {b}{4a}}} and x 4 = − − 3 p 2 − b 4 a {\displaystyle x_{4}=-{\sqrt {-{\frac {3p}{2}}}}-{\frac {b}{4a}}} , whether q > 0 {\displaystyle q>0} ; otherwise, x 1 = x 2 = x 3 = − − p 6 − b 4 a {\displaystyle x_{1}=x_{2}=x_{3}=-{\sqrt {-{\frac {p}{6}}}}-{\frac {b}{4a}}} and x 4 = − 3 p 2 − b 4 a {\displaystyle x_{4}={\sqrt {-{\frac {3p}{2}}}}-{\frac {b}{4a}}} . DM2. The general quartic equation corresponds to this case whenever p 2 = 4 r > 0 = q {\displaystyle p^{2}=4r>0=q} , so the four roots of this equation are given as follows: x 1 = x 3 = − p 2 − b 4 a {\displaystyle x_{1}=x_{3}={\sqrt {-{\frac {p}{2}}}}-{\frac {b}{4a}}} and x 2 = x 4 = − − p 2 − b 4 a {\displaystyle x_{2}=x_{4}=-{\sqrt {-{\frac {p}{2}}}}-{\frac {b}{4a}}} . Biquadratic SM2. 
The general quartic equation corresponds to this subcase of the SM2 equations whenever p ≠ q = r = 0 {\displaystyle p\neq q=r=0} , so the four roots of this equation are given as follows: x 1 = x 2 = − b 4 a {\displaystyle x_{1}=x_{2}=-{\frac {b}{4a}}} , x 3 = − p − b 4 a {\displaystyle x_{3}={\sqrt {-p}}-{\frac {b}{4a}}} and x 4 = − − p − b 4 a {\displaystyle x_{4}=-{\sqrt {-p}}-{\frac {b}{4a}}} . Non-Biquadratic SM2. The general quartic equation corresponds to this subcase of the SM2 equations whenever ( p 2 + 12 r ) 3 = [ p ( p 2 − 36 r ) + 27 2 q 2 ] 2 > 0 ≠ q {\displaystyle (p^{2}+12r)^{3}=[p(p^{2}-36r)+{\frac {27}{2}}q^{2}]^{2}>0\neq {q}} , so the four roots of this equation are given by the following formula: x = 1 2 [ ξ s 1 ± 2 ( s 2 − ξ q s 1 ) ] − b 4 a {\displaystyle x={\frac {1}{2}}\left[\xi {\sqrt {s_{1}}}\pm {\sqrt {2{\biggl (}s_{2}-{\frac {\xi q}{\sqrt {s_{1}}}}{\biggr )}}}\right]-{\frac {b}{4a}}} , where s 1 = 9 q 2 − 32 p r p 2 + 12 r > 0 {\displaystyle s_{1}={\frac {9q^{2}-32pr}{p^{2}+12r}}>0} , s 2 = − 2 p ( p 2 − 4 r ) + 9 q 2 2 ( p 2 + 12 r ) ≠ 0 {\displaystyle s_{2}=-{\frac {2p(p^{2}-4r)+9q^{2}}{2(p^{2}+12r)}}\neq 0} and ξ = ± 1 {\displaystyle \xi =\pm 1} . == The general case == To begin, the quartic must first be converted to a depressed quartic. === Converting to a depressed quartic === Let A x 4 + B x 3 + C x 2 + D x + E = 0 {\displaystyle \ Ax^{4}+Bx^{3}+Cx^{2}+Dx+E=0\ } be the general quartic equation which it is desired to solve. Divide both sides by A, x 4 + B A x 3 + C A x 2 + D A x + E A = 0 . {\displaystyle \ x^{4}+{B \over A}x^{3}+{C \over A}x^{2}+{D \over A}x+{E \over A}=0\ .} The first step, if B is not already zero, should be to eliminate the x3 term. To do this, change variables from x to u, such that x = u − B 4 A . {\displaystyle \ x=u-{B \over 4A}\ .} Then ( u − B 4 A ) 4 + B A ( u − B 4 A ) 3 + C A ( u − B 4 A ) 2 + D A ( u − B 4 A ) + E A = 0 . 
{\displaystyle \ \left(u-{B \over 4A}\right)^{4}+{B \over A}\left(u-{B \over 4A}\right)^{3}+{C \over A}\left(u-{B \over 4A}\right)^{2}+{D \over A}\left(u-{B \over 4A}\right)+{E \over A}=0\ .} Expanding the powers of the binomials produces ( u 4 − B A u 3 + 6 u 2 B 2 16 A 2 − 4 u B 3 64 A 3 + B 4 256 A 4 ) + B A ( u 3 − 3 u 2 B 4 A + 3 u B 2 16 A 2 − B 3 64 A 3 ) + C A ( u 2 − u B 2 A + B 2 16 A 2 ) + D A ( u − B 4 A ) + E A = 0 . {\displaystyle \ \left(u^{4}-{B \over A}u^{3}+{6u^{2}B^{2} \over 16A^{2}}-{4uB^{3} \over 64A^{3}}+{B^{4} \over 256A^{4}}\right)+{B \over A}\left(u^{3}-{3u^{2}B \over 4A}+{3uB^{2} \over 16A^{2}}-{B^{3} \over 64A^{3}}\right)+{C \over A}\left(u^{2}-{uB \over 2A}+{B^{2} \over 16A^{2}}\right)+{D \over A}\left(u-{B \over 4A}\right)+{E \over A}=0\ .} Collecting the same powers of u yields u 4 + ( − 3 B 2 8 A 2 + C A ) u 2 + ( B 3 8 A 3 − B C 2 A 2 + D A ) u + ( − 3 B 4 256 A 4 + C B 2 16 A 3 − B D 4 A 2 + E A ) = 0 . {\displaystyle \ u^{4}+\left({-3B^{2} \over 8A^{2}}+{C \over A}\right)u^{2}+\left({B^{3} \over 8A^{3}}-{BC \over 2A^{2}}+{D \over A}\right)u+\left({-3B^{4} \over 256A^{4}}+{CB^{2} \over 16A^{3}}-{BD \over 4A^{2}}+{E \over A}\right)=0\ .} Now rename the coefficients of u. Let a = − 3 B 2 8 A 2 + C A , b = B 3 8 A 3 − B C 2 A 2 + D A , c = − 3 B 4 256 A 4 + C B 2 16 A 3 − B D 4 A 2 + E A . {\displaystyle {\begin{aligned}a&={-3B^{2} \over 8A^{2}}+{C \over A}\ ,\\b&={B^{3} \over 8A^{3}}-{BC \over 2A^{2}}+{D \over A}\ ,\\c&={-3B^{4} \over 256A^{4}}+{CB^{2} \over 16A^{3}}-{BD \over 4A^{2}}+{E \over A}\ .\end{aligned}}} The resulting equation is u 4 + a u 2 + b u + c = 0 , {\displaystyle \ u^{4}+au^{2}+bu+c=0\ ,} which is a depressed quartic equation. If b = 0 {\displaystyle \ b=0\ } then we have the special case of a biquadratic equation, which is easily solved, as explained above. Note that the general solution, given below, will not work for the special case b = 0 . {\displaystyle \ b=0\ .} The equation must be solved as a biquadratic. 
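The substitution x = u − B/(4A) can be checked numerically. The sketch below (numpy is assumed to be available; the sample coefficients are arbitrary) composes the polynomials and confirms that the u³ coefficient of the result vanishes:

```python
from numpy.polynomial import Polynomial

# a sample general quartic A x^4 + B x^3 + C x^2 + D x + E (coefficients arbitrary)
A, B, C, D, E = 2.0, 3.0, -1.0, 5.0, 7.0
quartic = Polynomial([E, D, C, B, A])     # numpy stores lowest degree first
u = Polynomial([-B / (4 * A), 1.0])       # the substitution x = u - B/(4A)
depressed = quartic(u) / A                # monic depressed quartic in u
print(depressed.coef)                     # the u^3 entry is (numerically) zero
```

The remaining entries of `depressed.coef` are c, b, a from the renaming above (lowest degree first), followed by the leading 1.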
In either case, once the depressed quartic is solved for u, substituting those values into x = u − B 4 A {\displaystyle \ x=u-{B \over 4A}\ } produces the values for x that solve the original quartic. === Solving a depressed quartic when b ≠ 0 === After converting to a depressed quartic equation u 4 + a u 2 + b u + c = 0 {\displaystyle u^{4}+au^{2}+bu+c=0} and excluding the special case b = 0, which is solved as a biquadratic, we assume from here on that b ≠ 0 . We will separate the terms left and right as u 4 = − a u 2 − b u − c {\displaystyle u^{4}=-au^{2}-bu-c} and add in terms to both sides which make them both into perfect squares. Let y be any solution of this cubic equation: 2 y 3 − a y 2 − 2 c y + ( a c − 1 4 b 2 ) = ( 2 y − a ) ( y 2 − c ) − 1 4 b 2 = 0 . {\displaystyle 2y^{3}-ay^{2}-2cy+(ac-{\tfrac {1}{4}}b^{2})=(2y-a)(y^{2}-c)-{\tfrac {1}{4}}b^{2}=0\ .} Then (since b ≠ 0) 2 y − a ≠ 0 {\displaystyle 2y-a\neq 0} so we may divide by it, giving y 2 − c = b 2 4 ( 2 y − a ) . {\displaystyle y^{2}-c={\frac {b^{2}}{4(2y-a)}}\ .} Then ( u 2 + y ) 2 = u 4 + 2 y u 2 + y 2 = ( 2 y − a ) u 2 − b u + ( y 2 − c ) = ( 2 y − a ) u 2 − b u + b 2 4 ( 2 y − a ) = ( 2 y − a u − b 2 2 y − a ) 2 . {\displaystyle (u^{2}+y)^{2}=u^{4}+2yu^{2}+y^{2}=(2y-a)u^{2}-bu+(y^{2}-c)=(2y-a)u^{2}-bu+{\frac {b^{2}}{\ 4(2y-a)\ }}=\left({\sqrt {2y-a\ }}\,u-{\frac {b}{2{\sqrt {2y-a\ }}}}\right)^{2}\ .} Subtracting, we get the difference of two squares which is the product of the sum and difference of their roots ( u 2 + y ) 2 − ( 2 y − a u − b 2 2 y − a ) 2 = ( u 2 + y + 2 y − a u − b 2 2 y − a ) ( u 2 + y − 2 y − a u + b 2 2 y − a ) = 0 {\displaystyle (u^{2}+y)^{2}-\left({\sqrt {2y-a\ }}\,u-{\frac {b}{2{\sqrt {2y-a\ }}}}\right)^{2}=\left(u^{2}+y+{\sqrt {2y-a\ }}\,u-{\frac {b}{2{\sqrt {2y-a\ }}}}\right)\left(u^{2}+y-{\sqrt {2y-a\ }}\,u+{\frac {b}{2{\sqrt {2y-a\ }}}}\right)=0} which can be solved by applying the quadratic formula to each of the two factors. 
So the possible values of u are: u = 1 2 ( − 2 y − a + − 2 y − a + 2 b 2 y − a ) , {\displaystyle u={\tfrac {1}{2}}\left(-{\sqrt {2y-a\ }}+{\sqrt {-2y-a+{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ ,} u = 1 2 ( − 2 y − a − − 2 y − a + 2 b 2 y − a ) , {\displaystyle u={\tfrac {1}{2}}\left(-{\sqrt {2y-a\ }}-{\sqrt {-2y-a+{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ ,} u = 1 2 ( 2 y − a + − 2 y − a − 2 b 2 y − a ) , {\displaystyle u={\tfrac {1}{2}}\left({\sqrt {2y-a\ }}+{\sqrt {-2y-a-{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ ,} or u = 1 2 ( 2 y − a − − 2 y − a − 2 b 2 y − a ) . {\displaystyle u={\tfrac {1}{2}}\left({\sqrt {2y-a\ }}-{\sqrt {-2y-a-{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ .} Using another y from among the three roots of the cubic simply causes these same four values of u to appear in a different order. The solutions of the cubic are: y = a 6 + w − p 3 w {\displaystyle \ y={\frac {a}{6}}+w-{\frac {p}{3w}}\ } w = − q 2 + q 2 4 + p 3 27 3 {\displaystyle \ w={\sqrt[{3}]{-{\frac {q}{2}}+{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}\ }}\ }}} using any one of the three possible cube roots. A wise strategy is to choose the sign of the square-root that makes the absolute value of w as large as possible. p = − a 2 12 − c , {\displaystyle \ p=-{\frac {a^{2}}{12}}-c\ ,} q = − a 3 108 + a c 3 − b 2 8 . {\displaystyle \ q=-{\frac {a^{3}}{108}}+{\frac {ac}{3}}-{\frac {b^{2}}{8}}\ .} === Ferrari's solution === Otherwise, the depressed quartic can be solved by means of a method discovered by Lodovico Ferrari. Once the depressed quartic has been obtained, the next step is to add the valid identity ( u 2 + a ) 2 − u 4 − 2 a u 2 = a 2 {\displaystyle \left(u^{2}+a\right)^{2}-u^{4}-2au^{2}=a^{2}} to equation (1), yielding The effect has been to fold up the u4 term into a perfect square: (u2 + a)2. The second term, au2 did not disappear, but its sign has changed and it has been moved to the right side. 
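The root formulas for the depressed quartic given above can be checked numerically on the equation u⁴ + 6u² − 60u + 36 = 0 (the example Ferrari himself solved, discussed below). In this sketch (Python assumed), w and y are built exactly as described, and each of the four u values is substituted back into the depressed quartic:

```python
import cmath

a, b, c = 6.0, -60.0, 36.0   # depressed quartic u^4 + 6u^2 - 60u + 36 = 0
p = -a**2 / 12 - c
q = -a**3 / 108 + a * c / 3 - b**2 / 8
s = cmath.sqrt(q**2 / 4 + p**3 / 27)
w = max((-q/2 + s) ** (1/3), (-q/2 - s) ** (1/3), key=abs)  # keep |w| large
y = a/6 + w - p / (3*w)      # a root of 2y^3 - ay^2 - 2cy + (ac - b^2/4) = 0
g = cmath.sqrt(2*y - a)      # nonzero because b != 0
roots = [(e*g + t * cmath.sqrt(-2*y - a - e * 2*b/g)) / 2
         for e in (-1, 1) for t in (1, -1)]
for u in roots:              # each u satisfies the depressed quartic
    assert abs(u**4 + a*u**2 + b*u + c) < 1e-6
print(roots)                 # two real roots and a complex-conjugate pair
```

The sign factor e selects between the two quadratic factors, exactly as in the four formulas above.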
The next step is to insert a variable y into the perfect square on the left side of equation (2), and a corresponding 2y into the coefficient of u2 in the right side. To accomplish these insertions, the following valid formulas will be added to equation (2), ( u 2 + a + y ) 2 − ( u 2 + a ) 2 = 2 y ( u 2 + a ) + y 2 = 2 y u 2 + 2 y a + y 2 , {\displaystyle {\begin{aligned}(u^{2}+a+y)^{2}-(u^{2}+a)^{2}&=2y(u^{2}+a)+y^{2}\ \ \\&=2yu^{2}+2ya+y^{2},\end{aligned}}} and 0 = ( a + 2 y ) u 2 − 2 y u 2 − a u 2 {\displaystyle 0=(a+2y)u^{2}-2yu^{2}-au^{2}\,} These two formulas, added together, produce ( u 2 + a + y ) 2 − ( u 2 + a ) 2 = ( a + 2 y ) u 2 − a u 2 + 2 y a + y 2 ( y -insertion ) {\displaystyle \left(u^{2}+a+y\right)^{2}-\left(u^{2}+a\right)^{2}=\left(a+2y\right)u^{2}-au^{2}+2ya+y^{2}\qquad \qquad (y{\hbox{-insertion}})\,} which added to equation (2) produces ( u 2 + a + y ) 2 + b u + c = ( a + 2 y ) u 2 + ( 2 y a + y 2 + a 2 ) . {\displaystyle \left(u^{2}+a+y\right)^{2}+bu+c=\left(a+2y\right)u^{2}+\left(2ya+y^{2}+a^{2}\right).\,} This is equivalent to The objective now is to choose a value for y such that the right side of equation (3) becomes a perfect square. This can be done by letting the discriminant of the quadratic function become zero. To explain this, first expand a perfect square so that it equals a quadratic function: ( s u + t ) 2 = ( s 2 ) u 2 + ( 2 s t ) u + ( t 2 ) . {\displaystyle \left(su+t\right)^{2}=\left(s^{2}\right)u^{2}+\left(2st\right)u+\left(t^{2}\right).\,} The quadratic function on the right side has three coefficients. It can be verified that squaring the second coefficient and then subtracting four times the product of the first and third coefficients yields zero: ( 2 s t ) 2 − 4 ( s 2 ) ( t 2 ) = 0. 
{\displaystyle \left(2st\right)^{2}-4\left(s^{2}\right)\left(t^{2}\right)=0.\,} Therefore to make the right side of equation (3) into a perfect square, the following equation must be solved: ( − b ) 2 − 4 ( 2 y + a ) ( y 2 + 2 y a + a 2 − c ) = 0. {\displaystyle (-b)^{2}-4\left(2y+a\right)\left(y^{2}+2ya+a^{2}-c\right)=0.\,} Multiply the binomial with the polynomial, b 2 − 4 ( 2 y 3 + 5 a y 2 + ( 4 a 2 − 2 c ) y + ( a 3 − a c ) ) = 0 {\displaystyle b^{2}-4\left(2y^{3}+5ay^{2}+\left(4a^{2}-2c\right)y+\left(a^{3}-ac\right)\right)=0\,} Divide both sides by −4, and move the −b2/4 to the right, 2 y 3 + 5 a y 2 + ( 4 a 2 − 2 c ) y + ( a 3 − a c − b 2 4 ) = 0 {\displaystyle 2y^{3}+5ay^{2}+\left(4a^{2}-2c\right)y+\left(a^{3}-ac-{\frac {b^{2}}{4}}\right)=0} Divide both sides by 2, This is a cubic equation in y. Solve for y using any method for solving such equations (e.g. conversion to a reduced cubic and application of Cardano's formula). Any of the three possible roots will do. ==== Folding the second perfect square ==== With the value for y so selected, it is now known that the right side of equation (3) is a perfect square of the form ( s 2 ) u 2 + ( 2 s t ) u + ( t 2 ) = ( ( s 2 ) u + ( 2 s t ) 2 s 2 ) 2 {\displaystyle \left(s^{2}\right)u^{2}+(2st)u+\left(t^{2}\right)=\left(\left({\sqrt {s^{2}}}\right)u+{(2st) \over 2{\sqrt {s^{2}}}}\right)^{2}} (This is correct for both signs of square root, as long as the same sign is taken for both square roots. A ± is redundant, as it would be absorbed by another ± a few equations further down this page.) so that it can be folded: ( a + 2 y ) u 2 + ( − b ) u + ( y 2 + 2 y a + a 2 − c ) = ( ( a + 2 y ) u + ( − b ) 2 a + 2 y ) 2 . {\displaystyle (a+2y)u^{2}+(-b)u+\left(y^{2}+2ya+a^{2}-c\right)=\left(\left({\sqrt {a+2y}}\right)u+{(-b) \over 2{\sqrt {a+2y}}}\right)^{2}.} Note: If b ≠ 0 then a + 2y ≠ 0. If b = 0 then this would be a biquadratic equation, which we solved earlier. 
Therefore equation (3) becomes Equation (5) has a pair of folded perfect squares, one on each side of the equation. The two perfect squares balance each other. If two squares are equal, then the sides of the two squares are also equal, as shown by: Collecting like powers of u produces Note: The subscript s of ± s {\displaystyle \pm _{s}} and ∓ s {\displaystyle \mp _{s}} is to note that they are dependent. Equation (6) is a quadratic equation for u. Its solution is u = ± s a + 2 y ± t ( a + 2 y ) − 4 ( a + y ± s b 2 a + 2 y ) 2 . {\displaystyle u={\frac {\pm _{s}{\sqrt {a+2y}}\pm _{t}{\sqrt {(a+2y)-4\left(a+y\pm _{s}{b \over 2{\sqrt {a+2y}}}\right)}}}{2}}.} Simplifying, one gets u = ± s a + 2 y ± t − ( 3 a + 2 y ± s 2 b a + 2 y ) 2 . {\displaystyle u={\pm _{s}{\sqrt {a+2y}}\pm _{t}{\sqrt {-\left(3a+2y\pm _{s}{2b \over {\sqrt {a+2y}}}\right)}} \over 2}.} This is the solution of the depressed quartic, therefore the solutions of the original quartic equation are Remember: The two ± s {\displaystyle \pm _{s}} come from the same place in equation (5'), and should both have the same sign, while the sign of ± t {\displaystyle \pm _{t}} is independent. ==== Summary of Ferrari's method ==== Given the quartic equation A x 4 + B x 3 + C x 2 + D x + E = 0 , {\displaystyle Ax^{4}+Bx^{3}+Cx^{2}+Dx+E=0,\,} its solution can be found by means of the following calculations: a = − 3 B 2 8 A 2 + C A , {\displaystyle a=-{3B^{2} \over 8A^{2}}+{C \over A},} b = B 3 8 A 3 − B C 2 A 2 + D A , {\displaystyle b={B^{3} \over 8A^{3}}-{BC \over 2A^{2}}+{D \over A},} c = − 3 B 4 256 A 4 + C B 2 16 A 3 − B D 4 A 2 + E A . {\displaystyle c=-{3B^{4} \over 256A^{4}}+{CB^{2} \over 16A^{3}}-{BD \over 4A^{2}}+{E \over A}.} If b = 0 , {\displaystyle \,b=0,} then x = − B 4 A ± s − a ± t a 2 − 4 c 2 (for b = 0 only) . 
{\displaystyle x=-{B \over 4A}\pm _{s}{\sqrt {-a\pm _{t}{\sqrt {a^{2}-4c}} \over 2}}\qquad {\mbox{(for }}b=0{\mbox{ only)}}.} Otherwise, continue with P = − a 2 12 − c , {\displaystyle P=-{a^{2} \over 12}-c,} Q = − a 3 108 + a c 3 − b 2 8 , {\displaystyle Q=-{a^{3} \over 108}+{ac \over 3}-{b^{2} \over 8},} R = − Q 2 ± Q 2 4 + P 3 27 , {\displaystyle R=-{Q \over 2}\pm {\sqrt {{Q^{2} \over 4}+{P^{3} \over 27}}},} (either sign of the square root will do) U = R 3 , {\displaystyle U={\sqrt[{3}]{R}},} (there are 3 complex roots, any one of them will do) y = − 5 6 a + { U = 0 → − Q 3 U ≠ 0 , → U − P 3 U , {\displaystyle y=-{5 \over 6}a+{\begin{cases}U=0&\to -{\sqrt[{3}]{Q}}\\U\neq 0,&\to U-{P \over 3U},\end{cases}}\quad \quad \quad } W = a + 2 y {\displaystyle W={\sqrt {a+2y}}} x = − B 4 A + ± s W ± t − ( 3 a + 2 y ± s 2 b W ) 2 . {\displaystyle x=-{B \over 4A}+{\pm _{s}W\pm _{t}{\sqrt {-\left(3a+2y\pm _{s}{2b \over W}\right)}} \over 2}.} The two ±s must have the same sign, the ±t is independent. To get all roots, compute x for (±s,±t) = (+,+); (+,−); (−,+); (−,−). This formula handles repeated roots without problem. Ferrari was the first to discover one of these labyrinthine solutions. The equation which he solved was x 4 + 6 x 2 − 60 x + 36 = 0 {\displaystyle x^{4}+6x^{2}-60x+36=0} which was already in depressed form. It has a pair of solutions which can be found with the set of formulas shown above. === Ferrari's solution in the special case of real coefficients === If the coefficients of the quartic equation are real then the nested depressed cubic equation (5) also has real coefficients, thus it has at least one real root. Furthermore the cubic function C ( v ) = v 3 + P v + Q , {\displaystyle C(v)=v^{3}+Pv+Q,} where P and Q are given by (5) has the properties that C ( a 3 ) = − b 2 8 < 0 {\displaystyle C\left({a \over 3}\right)={-b^{2} \over 8}<0} and lim v → ∞ C ( v ) = ∞ , {\displaystyle \lim _{v\to \infty }C(v)=\infty ,} where a and b are given by (1). 
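The summary of Ferrari's method translates almost line by line into code. The following sketch (Python assumed; the test polynomial is illustrative) implements both the b = 0 branch and the general branch, and recovers the roots of (x − 1)(x − 2)(x − 3)(x − 5):

```python
import cmath

def quartic_roots(A, B, C, D, E):
    """All four roots of A x^4 + B x^3 + C x^2 + D x + E = 0, following
    the summarized calculation of Ferrari's method step by step."""
    a = -3*B**2/(8*A**2) + C/A
    b = B**3/(8*A**3) - B*C/(2*A**2) + D/A
    c = -3*B**4/(256*A**4) + C*B**2/(16*A**3) - B*D/(4*A**2) + E/A
    shift = -B / (4*A)
    if b == 0:                                    # biquadratic special case
        roots = []
        for t in (1, -1):
            z = (-a + t*cmath.sqrt(a*a - 4*c)) / 2
            roots += [shift + cmath.sqrt(z), shift - cmath.sqrt(z)]
        return roots
    P = -a*a/12 - c
    Q = -a**3/108 + a*c/3 - b*b/8
    R = -Q/2 + cmath.sqrt(Q*Q/4 + P**3/27)        # either square-root sign works
    U = R ** (1/3)                                # any cube root works
    y = -5*a/6 + (U - P/(3*U) if U != 0 else -Q ** (1/3))
    W = cmath.sqrt(a + 2*y)
    return [shift + (s*W + t*cmath.sqrt(-(3*a + 2*y + s*2*b/W))) / 2
            for s in (1, -1) for t in (1, -1)]    # the four sign patterns

rts = sorted(r.real for r in quartic_roots(1, -11, 41, -61, 30))
print(rts)  # close to the exact roots 1, 2, 3, 5 of (x-1)(x-2)(x-3)(x-5)
```

Note how the two occurrences of the sign s are tied together in each root while t varies independently, matching the ±s/±t convention above.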
This means that (5) has a real root greater than a 3 {\displaystyle a \over 3} , and therefore that (4) has a real root greater than − a 2 {\displaystyle -a \over 2} . Using this root the term a + 2 y {\displaystyle {\sqrt {a+2y}}} in (6) is always real, which ensures that the two quadratic equations (6) have real coefficients. === Obtaining alternative solutions the hard way === It could happen that one only obtained one solution through the formulae above, because not all four sign patterns are tried for four solutions, and the solution obtained is complex. It may also be the case that one is only looking for a real solution. Let x1 denote the complex solution. If all the original coefficients A, B, C, D and E are real—which should be the case when one desires only real solutions – then there is another complex solution x2 which is the complex conjugate of x1. If the other two roots are denoted as x3 and x4 then the quartic equation can be expressed as ( x − x 1 ) ( x − x 2 ) ( x − x 3 ) ( x − x 4 ) = 0 , {\displaystyle (x-x_{1})(x-x_{2})(x-x_{3})(x-x_{4})=0,\,} but this quartic equation is equivalent to the product of two quadratic equations: and Since x 2 = x 1 ⋆ {\displaystyle x_{2}=x_{1}^{\star }} then ( x − x 1 ) ( x − x 2 ) = x 2 − ( x 1 + x 1 ⋆ ) x + x 1 x 1 ⋆ = x 2 − 2 Re ( x 1 ) x + [ Re ( x 1 ) ] 2 + [ Im ( x 1 ) ] 2 . 
{\displaystyle {\begin{aligned}(x-x_{1})(x-x_{2})&=x^{2}-(x_{1}+x_{1}^{\star })x+x_{1}x_{1}^{\star }\\&=x^{2}-2\operatorname {Re} (x_{1})x+[\operatorname {Re} (x_{1})]^{2}+[\operatorname {Im} (x_{1})]^{2}.\end{aligned}}} Let a = − 2 Re ( x 1 ) , {\displaystyle a=-2\operatorname {Re} (x_{1}),} b = [ Re ( x 1 ) ] 2 + [ Im ( x 1 ) ] 2 {\displaystyle b=\left[\operatorname {Re} (x_{1})\right]^{2}+\left[\operatorname {Im} (x_{1})\right]^{2}} so that equation (9) becomes Also let there be (unknown) variables w and v such that equation (10) becomes Multiplying equations (11) and (12) produces Comparing equation (13) to the original quartic equation, it can be seen that a + w = B A , {\displaystyle a+w={B \over A},} b + w a + v = C A , {\displaystyle b+wa+v={C \over A},} w b + v a = D A , {\displaystyle wb+va={D \over A},} and v b = E A . {\displaystyle vb={E \over A}.} Therefore w = B A − a = B A + 2 Re ( x 1 ) , {\displaystyle w={B \over A}-a={B \over A}+2\operatorname {Re} (x_{1}),} v = E A b = E A ( [ Re ( x 1 ) ] 2 + [ Im ( x 1 ) ] 2 ) . {\displaystyle v={E \over Ab}={\frac {E}{A\left(\left[\operatorname {Re} (x_{1})\right]^{2}+\left[\operatorname {Im} (x_{1})\right]^{2}\right)}}.} Equation (12) can be solved for x yielding x 3 = − w + w 2 − 4 v 2 , {\displaystyle x_{3}={-w+{\sqrt {w^{2}-4v}} \over 2},} x 4 = − w − w 2 − 4 v 2 . {\displaystyle x_{4}={-w-{\sqrt {w^{2}-4v}} \over 2}.} One of these two solutions should be the desired real solution. == Alternative methods == === Quick and memorable solution from first principles === Most textbook solutions of the quartic equation require a substitution that is hard to memorize. Here is an approach that makes it easy to understand. The job is done if we can factor the quartic equation into a product of two quadratics. 
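The conjugate-pair deflation described above can be sketched as follows (Python assumed; the quartic and the "known" complex root x1 = 1 + 2i are illustrative stand-ins for a root obtained from the formulas earlier in this section):

```python
import cmath

# real quartic x^4 - 9x^3 + 31x^2 - 59x + 60 with known complex root x1 = 1 + 2i
A, B, C, D, E = 1.0, -9.0, 31.0, -59.0, 60.0
x1 = 1 + 2j
a = -2 * x1.real                    # (x - x1)(x - x1*) = x^2 + a x + b
b = x1.real**2 + x1.imag**2
w = B / A - a                       # remaining factor x^2 + w x + v
v = E / (A * b)
x3 = (-w + cmath.sqrt(w*w - 4*v)) / 2
x4 = (-w - cmath.sqrt(w*w - 4*v)) / 2
print(x3, x4)                       # the two remaining (here real) roots: 4 and 3
```

Only two of the four relations between {a, b, w, v} and the original coefficients are needed to pin down w and v, as the text notes.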
Let 0 = x 4 + b x 3 + c x 2 + d x + e = ( x 2 + p x + q ) ( x 2 + r x + s ) = x 4 + ( p + r ) x 3 + ( q + s + p r ) x 2 + ( p s + q r ) x + q s {\displaystyle {\begin{aligned}0&=x^{4}+bx^{3}+cx^{2}+dx+e\\&=\left(x^{2}+px+q\right)\left(x^{2}+rx+s\right)\\&=x^{4}+(p+r)x^{3}+(q+s+pr)x^{2}+(ps+qr)x+qs\end{aligned}}} By equating coefficients, this results in the following set of simultaneous equations: b = p + r c = q + s + p r d = p s + q r e = q s {\displaystyle {\begin{aligned}b&=p+r\\c&=q+s+pr\\d&=ps+qr\\e&=qs\end{aligned}}} This is harder to solve than it looks, but if we start again with a depressed quartic where b = 0 {\displaystyle b=0} , which can be obtained by substituting ( x − b / 4 ) {\displaystyle (x-b/4)} for x {\displaystyle x} , then r = − p {\displaystyle r=-p} , and: c + p 2 = s + q d / p = s − q e = s q {\displaystyle {\begin{aligned}c+p^{2}&=s+q\\d/p&=s-q\\e&=sq\end{aligned}}} It's now easy to eliminate both s {\displaystyle s} and q {\displaystyle q} by doing the following: ( c + p 2 ) 2 − ( d / p ) 2 = ( s + q ) 2 − ( s − q ) 2 = 4 s q = 4 e {\displaystyle {\begin{aligned}\left(c+p^{2}\right)^{2}-(d/p)^{2}&=(s+q)^{2}-(s-q)^{2}\\&=4sq\\&=4e\end{aligned}}} If we set P = p 2 {\displaystyle P=p^{2}} , then this equation turns into the cubic equation: P 3 + 2 c P 2 + ( c 2 − 4 e ) P − d 2 = 0 {\displaystyle P^{3}+2cP^{2}+\left(c^{2}-4e\right)P-d^{2}=0} which is solved elsewhere. Once you have p {\displaystyle p} , then: r = − p 2 s = c + p 2 + d / p 2 q = c + p 2 − d / p {\displaystyle {\begin{aligned}r&=-p\\2s&=c+p^{2}+d/p\\2q&=c+p^{2}-d/p\end{aligned}}} The symmetries in this solution are easy to see. There are three roots of the cubic, corresponding to the three ways that a quartic can be factored into two quadratics, and choosing positive or negative values of p {\displaystyle p} for the square root of P {\displaystyle P} merely exchanges the two quadratics with one another. 
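A sketch of this factorization method (Python with numpy assumed; the depressed quartic is illustrative): solve the cubic in P = p², pick any root, and read off the two quadratic factors.

```python
import numpy as np

c, d, e = -25.0, 60.0, -36.0                  # depressed quartic x^4 - 25x^2 + 60x - 36
Ps = np.roots([1.0, 2*c, c*c - 4*e, -d*d])    # the cubic in P = p^2
P = min(Ps, key=lambda z: abs(z.imag)).real   # any of the three roots works
p = np.sqrt(complex(P))
s = (c + p*p + d/p) / 2
q = (c + p*p - d/p) / 2
# (x^2 + p x + q)(x^2 - p x + s) reproduces the quartic:
roots = np.concatenate([np.roots([1, p, q]), np.roots([1, -p, s])])
print(sorted(roots.real))                     # the roots 1, 2, 3 and -6
```

Trying a different root of the cubic regroups the four roots into a different pair of quadratics but yields the same root set, illustrating the symmetry noted above.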
=== Möbius transformation method === A suitably chosen Möbius transformation can transform a quartic equation into a quadratic equation in the new variable squared. This is a known method. Finding such a Möbius transformation involves solving a cubic equation and so simplifies the problem. For example, start with the depressed quartic equation with unity leading coefficient and with neither a 1 {\displaystyle a_{1}} nor a 0 {\displaystyle a_{0}} equal to zero: x 4 + a 2 x 2 + a 1 x + a 0 = 0 {\displaystyle x^{4}+a_{2}x^{2}+a_{1}x+a_{0}=0} and do the Möbius transformation: x = A + B y 1 + y {\displaystyle x={\frac {A+By}{1+y}}} Set the first and third order coefficients of the resulting quartic equation in y {\displaystyle y} to zero. After some algebra, one finds A + B {\displaystyle A+B} is to be obtained from the cubic equation a 1 ( A + B ) 3 + ( 4 a 0 − a 2 2 ) ( A + B ) 2 − 2 a 1 a 2 ( A + B ) − a 1 2 = 0 {\displaystyle a_{1}(A+B)^{3}+(4a_{0}-{a_{2}}^{2})(A+B)^{2}-2a_{1}a_{2}(A+B)-{a_{1}}^{2}=0} and, regarding A + B {\displaystyle A+B} as known, A {\displaystyle A} is to be obtained from the quadratic equation 2 ( A + B ) A 2 − 2 ( A + B ) 2 A − a 2 ( A + B ) − a 1 = 0 {\displaystyle 2(A+B)A^{2}-2(A+B)^{2}A-a_{2}(A+B)-a_{1}=0} Solving the resulting quadratic equation for y 2 {\displaystyle y^{2}} gives two values for y 2 {\displaystyle y^{2}} and each square root of y 2 {\displaystyle y^{2}} has two values, giving a total of four solutions, as expected. 
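A numerical sanity check of this method (Python with numpy assumed; the depressed quartic x⁴ − 25x² + 60x − 36, with roots 1, 2, 3, −6, is illustrative). The cubic for S = A + B is taken with S²-coefficient 4a₀ − a₂², the form in which every term has the same weight when x is rescaled; the check confirms that, after the substitution, the odd-order coefficients in y vanish:

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

a2, a1, a0 = -25.0, 60.0, -36.0   # x^4 - 25x^2 + 60x - 36, roots 1, 2, 3, -6
# cubic for S = A + B
S_roots = np.roots([a1, 4*a0 - a2**2, -2*a1*a2, -a1**2])
S = min(S_roots, key=lambda z: abs(z.imag)).real   # any root of the cubic works
A = np.roots([2*S, -2*S**2, -(a2*S + a1)])[0]      # quadratic for A
B = S - A
# substitute x = (A + By)/(1 + y) and clear the denominator (1 + y)^4
u, v = [A, B], [1.0, 1.0]
b = (npoly.polypow(u, 4)
     + a2 * npoly.polymul(npoly.polypow(u, 2), npoly.polypow(v, 2))
     + a1 * npoly.polymul(u, npoly.polypow(v, 3))
     + a0 * npoly.polypow(v, 4))
print(abs(b[1]), abs(b[3]))   # the odd-order coefficients in y vanish
```

With b₁ = b₃ = 0 the cleared equation is a quadratic in y², as claimed.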
The cubic equation in A + B {\displaystyle {\textbf {A}}+{\textbf {B}}} given earlier is the same as P 2 − Q ( A + B ) 2 = 0 {\displaystyle P^{2}-Q(A+B)^{2}=0} , where P ≡ b 1 − b 3 2 ( A − B ) = 2 A B ( A + B ) + a 2 ( A + B ) + a 1 {\displaystyle P\equiv {\frac {b_{1}-b_{3}}{2(A-B)}}=2\,A\,B\,(A+B)+a_{2}(A+B)+a_{1}} Q ≡ B b 1 − A b 3 A − B = 4 A 2 B 2 − a 1 ( A + B ) − 4 a 0 = 0 {\displaystyle Q\equiv {\frac {B\,b_{1}-A\,b_{3}}{A-B}}=4A^{2}B^{2}-a_{1}(A+B)-4a_{0}=0} Here bi are the coefficients of the quartic polynomial in y. This shows how this equation was obtained. === Galois theory and factorization === The symmetric group S4 on four elements has the Klein four-group as a normal subgroup. This suggests using a resolvent whose roots may be variously described as a discrete Fourier transform or a Hadamard matrix transform of the roots. Suppose ri for i from 0 to 3 are roots of x 4 + b x 3 + c x 2 + d x + e = 0 ( 1 ) {\displaystyle x^{4}+bx^{3}+cx^{2}+dx+e=0\qquad (1)} If we now set s 0 = 1 2 ( r 0 + r 1 + r 2 + r 3 ) , s 1 = 1 2 ( r 0 − r 1 + r 2 − r 3 ) , s 2 = 1 2 ( r 0 + r 1 − r 2 − r 3 ) , s 3 = 1 2 ( r 0 − r 1 − r 2 + r 3 ) , {\displaystyle {\begin{aligned}s_{0}&={\tfrac {1}{2}}(r_{0}+r_{1}+r_{2}+r_{3}),\\s_{1}&={\tfrac {1}{2}}(r_{0}-r_{1}+r_{2}-r_{3}),\\s_{2}&={\tfrac {1}{2}}(r_{0}+r_{1}-r_{2}-r_{3}),\\s_{3}&={\tfrac {1}{2}}(r_{0}-r_{1}-r_{2}+r_{3}),\end{aligned}}} then since the transformation is an involution, we may express the roots in terms of the four si in exactly the same way. Since we know the value s0 = −b/2, we really only need the values for s1, s2 and s3. 
These we may find by expanding the polynomial ( z 2 − s 1 2 ) ( z 2 − s 2 2 ) ( z 2 − s 3 2 ) ( 2 ) {\displaystyle \left(z^{2}-s_{1}^{2}\right)\left(z^{2}-s_{2}^{2}\right)\left(z^{2}-s_{3}^{2}\right)\qquad (2)} which if we make the simplifying assumption that b = 0, is equal to z 6 + 2 c z 4 + ( c 2 − 4 e ) z 2 − d 2 ( 3 ) {\displaystyle z^{6}+2cz^{4}+\left(c^{2}-4e\right)z^{2}-d^{2}\qquad (3)} This polynomial is of degree six, but only of degree three in z2, and so the corresponding equation is solvable. By trial we can determine which three roots are the correct ones, and hence find the solutions of the quartic. We can remove any requirement for trial by using a root of the same resolvent polynomial for factoring; if w is any root of (3), and if F 1 = x 2 + w x + 1 2 w 2 + 1 2 c − 1 2 ⋅ c 2 w d − 1 2 ⋅ w 5 d − c w 3 d + 2 e w d {\displaystyle F_{1}=x^{2}+wx+{\frac {1}{2}}w^{2}+{\frac {1}{2}}c-{\frac {1}{2}}\cdot {\frac {c^{2}w}{d}}-{\frac {1}{2}}\cdot {\frac {w^{5}}{d}}-{\frac {cw^{3}}{d}}+2{\frac {ew}{d}}} F 2 = x 2 − w x + 1 2 w 2 + 1 2 c + 1 2 ⋅ w 5 d + c w 3 d − 2 e w d + 1 2 ⋅ c 2 w d {\displaystyle F_{2}=x^{2}-wx+{\frac {1}{2}}w^{2}+{\frac {1}{2}}c+{\frac {1}{2}}\cdot {\frac {w^{5}}{d}}+{\frac {cw^{3}}{d}}-2{\frac {ew}{d}}+{\frac {1}{2}}\cdot {\frac {c^{2}w}{d}}} then F 1 F 2 = x 4 + c x 2 + d x + e ( 4 ) {\displaystyle F_{1}F_{2}=x^{4}+cx^{2}+dx+e\qquad \qquad (4)} We therefore can solve the quartic by solving for w and then solving for the roots of the two factors using the quadratic formula. === Approximate methods === The methods described above are, in principle, exact root-finding methods. It is also possible to use successive approximation methods which iteratively converge towards the roots, such as the Durand–Kerner method. Iterative methods are the only ones available for quintic and higher-order equations, beyond trivial or special cases. 
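The Durand–Kerner iteration mentioned above is short enough to sketch directly (Python assumed; the starting points and iteration count are conventional choices, not prescribed by the method):

```python
def durand_kerner(coeffs, iterations=100):
    """Simultaneous root iteration for a monic polynomial
    (coefficients listed highest degree first, leading coefficient 1)."""
    def f(x):
        v = 0j
        for c in coeffs:
            v = v * x + c          # Horner evaluation
        return v
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # distinct non-real-axis starts
    for _ in range(iterations):
        new = []
        for i, r in enumerate(roots):
            d = 1 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    d *= r - s     # product of differences to the other iterates
            new.append(r - f(r) / d)
        roots = new
    return roots

rts = sorted(r.real for r in durand_kerner([1, -11, 41, -61, 30]))
print(rts)  # approximations to the roots 1, 2, 3, 5
```

Each update is a Newton-like step whose denominator treats the other current iterates as the remaining roots, so all roots are refined simultaneously.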
== See also == Linear equation Quadratic equation Cubic equation Quintic equation Polynomial Newton's method Principal equation form == References == Ferrari's achievement Quartic formula as four single equations at PlanetMath. == Notes == == External links == Calculator for solving Quartics
|
Wikipedia:Quasi-Lie algebra#0
|
In mathematics, a quasi-Lie algebra in abstract algebra is just like a Lie algebra, but with the usual axiom [ x , x ] = 0 {\displaystyle [x,x]=0} replaced by [ x , y ] = − [ y , x ] {\displaystyle [x,y]=-[y,x]} (anti-symmetry). In characteristic other than 2, these are equivalent (in the presence of bilinearity), so this distinction doesn't arise when considering real or complex Lie algebras. It can however become important, when considering Lie algebras over the integers. In a quasi-Lie algebra, 2 [ x , x ] = 0. {\displaystyle 2[x,x]=0.} Therefore, the bracket of any element with itself is 2-torsion, if it does not actually vanish. == See also == Whitehead product == References == Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups. 1964 lectures given at Harvard University. Lecture Notes in Mathematics. Vol. 1500 (Corrected 5th printing of the 2nd (1992) ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-540-70634-2. ISBN 3-540-55008-9. MR 2179691.
|
Wikipedia:Quasi-exact solvability#0
|
A linear differential operator L is called quasi-exactly-solvable (QES) if it has a finite-dimensional invariant subspace of functions { V } n {\displaystyle \{{\mathcal {V}}\}_{n}} such that L : { V } n → { V } n , {\displaystyle L:\{{\mathcal {V}}\}_{n}\rightarrow \{{\mathcal {V}}\}_{n},} where n is a dimension of { V } n {\displaystyle \{{\mathcal {V}}\}_{n}} . There are two important cases: { V } n {\displaystyle \{{\mathcal {V}}\}_{n}} is the space of multivariate polynomials of degree not higher than some integer number; and { V } n {\displaystyle \{{\mathcal {V}}\}_{n}} is a subspace of a Hilbert space. Sometimes, the functional space { V } n {\displaystyle \{{\mathcal {V}}\}_{n}} is isomorphic to the finite-dimensional representation space of a Lie algebra g of first-order differential operators. In this case, the operator L is called a g-Lie-algebraic Quasi-Exactly-Solvable operator. Usually, one can indicate basis where L has block-triangular form. If the operator L is of the second order and has the form of the Schrödinger operator, it is called a Quasi-Exactly-Solvable Schrödinger operator. The most studied cases are one-dimensional s l ( 2 ) {\displaystyle sl(2)} -Lie-algebraic quasi-exactly-solvable (Schrödinger) operators. The best known example is the sextic QES anharmonic oscillator with the Hamiltonian { H } = − d 2 d x 2 + a 2 x 6 + 2 a b x 4 + [ b 2 − ( 4 n + 3 + 2 p ) a ] x 2 , a ≥ 0 , n ∈ N , p = { 0 , 1 } , {\displaystyle \{{\mathcal {H}}\}=-{\frac {d^{2}}{dx^{2}}}+a^{2}x^{6}+2abx^{4}+[b^{2}-(4n+3+2p)a]x^{2},\ a\geq 0\ ,\ n\in \mathbb {N} \ ,\ p=\{0,1\},} where (n+1) eigenstates of positive (negative) parity can be found algebraically. 
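For the lowest member of this family (n = 0, p = 0, so the x² coefficient of the potential is b² − 3a), direct differentiation of Ψ = exp(−ax⁴/4 − bx²/2) shows it is an eigenfunction with energy E = b; that eigenvalue is derived here by hand rather than quoted from the references. A finite-difference sketch (Python assumed; parameter values illustrative):

```python
import math

a, b = 1.0, 2.0                      # illustrative parameters, a > 0; n = 0, p = 0
def V(x):                            # the sextic QES potential for n = p = 0
    return a*a*x**6 + 2*a*b*x**4 + (b*b - 3*a)*x**2
def psi(x):                          # candidate eigenfunction with P_0 = const
    return math.exp(-a*x**4/4 - b*x**2/2)

h = 1e-4
for x in (0.3, 0.7, 1.1):
    d2 = (psi(x + h) - 2*psi(x) + psi(x - h)) / h**2
    E = (-d2 + V(x) * psi(x)) / psi(x)   # local eigenvalue estimate at x
    assert abs(E - b) < 1e-3             # H psi = b psi, uniformly in x
print("lowest algebraic eigenvalue E =", b)
```

That the local estimate of E is the same at every sample point is what certifies Ψ as a genuine eigenfunction rather than an approximate one.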
Their eigenfunctions are of the form Ψ ( x ) = x p P n ( x 2 ) e − a x 4 4 − b x 2 2 , {\displaystyle \Psi (x)\ =\ x^{p}P_{n}(x^{2})e^{-{\frac {ax^{4}}{4}}-{\frac {bx^{2}}{2}}}\ ,} where P n ( x 2 ) {\displaystyle P_{n}(x^{2})} is a polynomial of degree n and (energies) eigenvalues are roots of an algebraic equation of degree (n+1). In general, twelve families of one-dimensional QES problems are known, two of them characterized by elliptic potentials. == References == Turbiner, A.V.; Ushveridze, A.G. (1987). "Spectral singularities and quasi-exactly solvable quantal problem". Physics Letters A. 126 (3). Elsevier BV: 181–183. Bibcode:1987PhLA..126..181T. doi:10.1016/0375-9601(87)90456-7. ISSN 0375-9601. Turbiner, A. V. (1988). "Quasi-exactly-solvable problems and s l ( 2 , R ) {\displaystyle sl(2,R)} algebra". Communications in Mathematical Physics. 118 (3). Springer Science and Business Media LLC: 467–474. doi:10.1007/bf01466727. ISSN 0010-3616. S2CID 121442012. González-López, Artemio; Kamran, Niky; Olver, Peter J. (1994), "Quasi-exact solvability", Lie algebras, cohomology, and new applications to quantum mechanics (Springfield, MO, 1992), Contemp. Math., vol. 160, Providence, RI: Amer. Math. Soc., pp. 113–140 Turbiner, A.V. (1996), "Quasi-exactly-solvable differential equations", in Ibragimov, N.H. (ed.), CRC Handbook of Lie Group Analysis of Differential Equations, vol. 3, Boca Raton, Fl.: CRC Press, pp. 329–364, ISBN 978-0849394195 Ushveridze, Alexander G. (1994), Quasi-exactly solvable models in quantum mechanics, Bristol: Institute of Physics Publishing, ISBN 0-7503-0266-6, MR 1329549 == External links == Olver, Peter, A Quasi-Exactly Solvable Travel Guide (PDF)
|
Wikipedia:Quasi-free algebra#0
|
In abstract algebra, a quasi-free algebra is an associative algebra that satisfies a lifting property similar to that of a formally smooth algebra in commutative algebra. The notion was introduced by Cuntz and Quillen for applications to cyclic homology. A quasi-free algebra generalizes a free algebra, as well as the coordinate ring of a smooth affine complex curve. Because of the latter generalization, a quasi-free algebra can be thought of as signifying smoothness on a noncommutative space. == Definition == Let A be an associative algebra over the complex numbers. Then A is said to be quasi-free if the following equivalent conditions are met: Given a square-zero extension R → R / I {\displaystyle R\to R/I} , each homomorphism A → R / I {\displaystyle A\to R/I} lifts to A → R {\displaystyle A\to R} . The cohomological dimension of A with respect to Hochschild cohomology is at most one. Let ( Ω A , d ) {\displaystyle (\Omega A,d)} denote the differential envelope of A; i.e., the universal differential-graded algebra generated by A. Then A is quasi-free if and only if Ω 1 A {\displaystyle \Omega ^{1}A} is projective as a bimodule over A. There is also a characterization in terms of a connection. Given an A-bimodule E, a right connection on E is a linear map ∇ r : E → E ⊗ A Ω 1 A {\displaystyle \nabla _{r}:E\to E\otimes _{A}\Omega ^{1}A} that satisfies ∇ r ( a s ) = a ∇ r ( s ) {\displaystyle \nabla _{r}(as)=a\nabla _{r}(s)} and ∇ r ( s a ) = ∇ r ( s ) a + s ⊗ d a {\displaystyle \nabla _{r}(sa)=\nabla _{r}(s)a+s\otimes da} . A left connection is defined in a similar way. Then A is quasi-free if and only if Ω 1 A {\displaystyle \Omega ^{1}A} admits a right connection. == Properties and examples == One of the basic properties of a quasi-free algebra is that the algebra is left and right hereditary (i.e., a submodule of a projective left or right module is projective, or equivalently the left or right global dimension is at most one). 
This puts a strong restriction on which algebras can be quasi-free. For example, a hereditary (commutative) integral domain is precisely a Dedekind domain. In particular, a polynomial ring over a field is quasi-free if and only if the number of variables is at most one. An analog of the tubular neighborhood theorem, called the formal tubular neighborhood theorem, holds for quasi-free algebras. == References == === Bibliography === Cuntz, Joachim (June 2013). "Quillen's work on the foundations of cyclic cohomology". Journal of K-Theory. 11 (3): 559–574. arXiv:1202.5958. doi:10.1017/is012011006jkt201. ISSN 1865-2433. Cuntz, Joachim; Quillen, Daniel (1995). "Algebra Extensions and Nonsingularity". Journal of the American Mathematical Society. 8 (2): 251–289. doi:10.2307/2152819. ISSN 0894-0347. Kontsevich, Maxim; Rosenberg, Alexander L. (2000). "Noncommutative Smooth Spaces". The Gelfand Mathematical Seminars, 1996–1999. Birkhäuser: 85–108. arXiv:math/9812158. doi:10.1007/978-1-4612-1340-6_5. Maxim Kontsevich, Alexander Rosenberg, Noncommutative spaces, preprint MPI-2004-35 Vale, R. (2009). "notes on quasi-free algebras" (PDF). == Further reading == https://ncatlab.org/nlab/show/quasi-free+algebra
|
Wikipedia:Quasi-identity#0
|
In universal algebra, a quasi-identity is an implication of the form s1 = t1 ∧ … ∧ sn = tn → s = t where s1, ..., sn, t1, ..., tn, s, and t are terms built up from variables using the operation symbols of the specified signature. A quasi-identity amounts to a conditional equation for which the conditions themselves are equations. Alternatively, it can be seen as a disjunction of inequations and one equation s1 ≠ t1 ∨ ... ∨ sn ≠ tn ∨ s = t—that is, as a definite Horn clause. A quasi-identity with n = 0 is an ordinary identity or equation, so quasi-identities are a generalization of identities. == See also == Quasivariety == References == Burris, Stanley N.; H.P. Sankappanavar (1981). A Course in Universal Algebra. Springer. ISBN 3-540-90578-2. Free online edition.
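A standard example of a quasi-identity that is not an identity is the left-cancellation law x·y = x·z → y = z. As a hedged illustration (the helper names below are ours, not from the literature), such a quasi-identity can be checked in a finite algebra by brute force over all variable assignments:

```python
from itertools import product

def satisfies_cancellation(op, elems):
    # Checks the quasi-identity  x*y = x*z  ->  y = z  over all assignments.
    return all(op(x, y) != op(x, z) or y == z
               for x, y, z in product(elems, repeat=3))

Z3 = range(3)
add_mod3 = lambda a, b: (a + b) % 3   # addition mod 3: cancellative
const = lambda a, b: 0                # constant operation: not cancellative

print(satisfies_cancellation(add_mod3, Z3))  # True
print(satisfies_cancellation(const, Z3))     # False
```

The class of all cancellative algebras is a quasivariety: it is closed under subalgebras and products, but, unlike a variety, not under homomorphic images.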
|
Wikipedia:Quasi-polynomial growth#0
|
In theoretical computer science, a function f ( n ) {\displaystyle f(n)} is said to exhibit quasi-polynomial growth when it has an upper bound of the form f ( n ) = 2 O ( ( log n ) c ) {\displaystyle f(n)=2^{O{\bigl (}(\log n)^{c}{\bigr )}}} for some constant c {\displaystyle c} , as expressed using big O notation. That is, it is bounded by an exponential function of a polylogarithmic function. This generalizes the polynomials and the functions of polynomial growth, for which one can take c = 1 {\displaystyle c=1} . A function with quasi-polynomial growth is also said to be quasi-polynomially bounded. Quasi-polynomial growth has been used in the analysis of algorithms to describe certain algorithms whose computational complexity is not polynomial, but is substantially smaller than exponential. In particular, algorithms whose worst-case running times exhibit quasi-polynomial growth are said to take quasi-polynomial time. As well as time complexity, some algorithms require quasi-polynomial space complexity, use a quasi-polynomial number of parallel processors, can be expressed as algebraic formulas of quasi-polynomial size or have a quasi-polynomial competitive ratio. In some other cases, quasi-polynomial growth is used to model restrictions on the inputs to a problem that, when present, lead to good performance from algorithms on those inputs. It can also bound the size of the output for some problems; for instance, for the shortest path problem with linearly varying edge weights, the number of distinct solutions can be quasipolynomial. Beyond theoretical computer science, quasi-polynomial growth bounds have also been used in mathematics, for instance in partial results on the Hirsch conjecture for the diameter of polytopes in polyhedral combinatorics, or relating the sizes of cliques and independent sets in certain classes of graphs. 
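As an illustrative sketch (not from the article; the function name is ours), the bound 2^((log n)^c) can be computed directly to see how it sits between polynomial and exponential growth. With c = 1 it reduces to n itself, while c = 2 already outgrows every fixed polynomial yet stays far below 2^n:

```python
import math

def quasi_poly(n, c):
    """Quasi-polynomial bound 2^((log2 n)^c); choosing base-2 logs only
    rescales the constant hidden in the O(.) of the exponent."""
    return 2.0 ** (math.log2(n) ** c)

n = 2 ** 20
print(quasi_poly(n, 1) == n)        # True: c = 1 recovers n itself
print(math.log2(quasi_poly(n, 2)))  # 400.0: exponent (log2 n)^2 = 400, far below n
print(quasi_poly(2 ** 11, 2) > (2 ** 11) ** 10)  # True: beats n^10 once log2 n > 10
```

The last comparison shows the super-polynomial side: 2^((log2 n)^2) exceeds n^k as soon as log2 n > k, while the exponent (log2 n)^2 remains negligible compared with n, so the growth is sub-exponential.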
However, in polyhedral combinatorics and enumerative combinatorics, a different meaning of the same word also is used, for the quasi-polynomials, functions that generalize polynomials by having periodic coefficients. == References ==
|
Wikipedia:Quasicircle#0
|
In mathematics, a quasicircle is a Jordan curve in the complex plane that is the image of a circle under a quasiconformal mapping of the plane onto itself. Originally introduced independently by Pfluger (1961) and Tienari (1962), in the older literature (in German) they were referred to as quasiconformal curves, a terminology which also applied to arcs. In complex analysis and geometric function theory, quasicircles play a fundamental role in the description of the universal Teichmüller space, through quasisymmetric homeomorphisms of the circle. Quasicircles also play an important role in complex dynamical systems. == Definitions == A quasicircle is defined as the image of a circle under a quasiconformal mapping of the extended complex plane. It is called a K-quasicircle if the quasiconformal mapping has dilatation K. The definition of quasicircle generalizes the characterization of a Jordan curve as the image of a circle under a homeomorphism of the plane. In particular a quasicircle is a Jordan curve. The interior of a quasicircle is called a quasidisk. As shown in Lehto & Virtanen (1973), where the older term "quasiconformal curve" is used, if a Jordan curve is the image of a circle under a quasiconformal map in a neighbourhood of the curve, then it is also the image of a circle under a quasiconformal mapping of the extended plane and thus a quasicircle. The same is true for "quasiconformal arcs" which can be defined as quasiconformal images of a circular arc either in an open set or equivalently in the extended plane. == Geometric characterizations == Ahlfors (1963) gave a geometric characterization of quasicircles as those Jordan curves for which the absolute value of the cross-ratio of any four points, taken in cyclic order, is bounded below by a positive constant. 
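Bounds of this kind can be probed numerically. In the hedged sketch below (an illustration, with helper names of our choosing), the three-point quantity (|z1 − z3| + |z2 − z3|)/|z1 − z2|, for z3 between z1 and z2 on the curve, stays bounded on a circle but blows up at a cusp, so a curve with a cusp cannot be a quasicircle:

```python
import cmath
import math

def turning_ratio(z1, z2, z3):
    # Three-point quantity: short chords must not be joined by long detours.
    return (abs(z1 - z3) + abs(z2 - z3)) / abs(z1 - z2)

def circle(t):
    return cmath.exp(1j * t)

def cusp(t):
    # Curve with a cusp at t = 0: traces y = x^2 into and back out of the origin.
    return abs(t) + 1j * math.copysign(t * t, t)

for s in (0.1, 0.01, 0.001):
    print(round(turning_ratio(circle(-s), circle(s), circle(0.0)), 3),
          round(turning_ratio(cusp(-s), cusp(s), cusp(0.0)), 1))
# On the circle the ratio stays near 1; at the cusp it grows like 1/s,
# so no single constant works for all triples of points.
```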
Ahlfors also proved that quasicircles can be characterized in terms of a reverse triangle inequality for three points: there should be a constant C such that if two points z1 and z2 are chosen on the curve and z3 lies on the shorter of the resulting arcs, then | z 1 − z 3 | + | z 2 − z 3 | ≤ C | z 1 − z 2 | . {\displaystyle |z_{1}-z_{3}|+|z_{2}-z_{3}|\leq C|z_{1}-z_{2}|.} This property is also called bounded turning or the arc condition. For Jordan curves in the extended plane passing through ∞, Ahlfors (1966) gave a simpler necessary and sufficient condition to be a quasicircle. There is a constant C > 0 such that if z1, z2 are any points on the curve and z3 lies on the segment between them, then | z 3 − z 1 + z 2 2 | ≤ C | z 1 − z 2 | . {\displaystyle \displaystyle {\left|z_{3}-{z_{1}+z_{2} \over 2}\right|\leq C|z_{1}-z_{2}|.}} These metric characterizations imply that an arc or closed curve is quasiconformal whenever it arises as the image of an interval or the circle under a bi-Lipschitz map f, i.e. satisfying C 1 | s − t | ≤ | f ( s ) − f ( t ) | ≤ C 2 | s − t | {\displaystyle C_{1}|s-t|\leq |f(s)-f(t)|\leq C_{2}|s-t|} for positive constants Ci. == Quasicircles and quasisymmetric homeomorphisms == If φ is a quasisymmetric homeomorphism of the circle, then there are conformal maps f of |z| < 1 and g of |z| > 1 into disjoint regions such that the complement of the images of f and g is a Jordan curve. The maps f and g extend continuously to the circle |z| = 1 and the sewing equation φ = g − 1 ∘ f {\displaystyle \varphi =g^{-1}\circ f} holds. The image of the circle is a quasicircle. Conversely, using the Riemann mapping theorem, the conformal maps f and g uniformizing the outside of a quasicircle give rise to a quasisymmetric homeomorphism through the above equation. The quotient space of the group of quasisymmetric homeomorphisms by the subgroup of Möbius transformations provides a model of universal Teichmüller space. 
The above correspondence shows that the space of quasicircles can also be taken as a model. == Quasiconformal reflection == A quasiconformal reflection in a Jordan curve is an orientation-reversing quasiconformal map of period 2 which switches the inside and the outside of the curve fixing points on the curve. Since the map R 0 ( z ) = 1 z ¯ {\displaystyle \displaystyle {R_{0}(z)={1 \over {\overline {z}}}}} provides such a reflection for the unit circle, any quasicircle admits a quasiconformal reflection. Ahlfors (1963) proved that this property characterizes quasicircles. Ahlfors noted that this result can be applied to uniformly bounded holomorphic univalent functions f(z) on the unit disk D. Let Ω = f(D). As Carathéodory had proved using his theory of prime ends, f extends continuously to the unit circle if and only if ∂Ω is locally connected, i.e. admits a covering by finitely many compact connected sets of arbitrarily small diameter. The extension to the circle is 1-1 if and only if ∂Ω has no cut points, i.e. points which when removed from ∂Ω yield a disconnected set. Carathéodory's theorem shows that a locally connected set without cut points is just a Jordan curve and that precisely in this case the extension of f to the closed unit disk is a homeomorphism. If f extends to a quasiconformal mapping of the extended complex plane then ∂Ω is by definition a quasicircle. Conversely Ahlfors (1963) observed that if ∂Ω is a quasicircle and R1 denotes the quasiconformal reflection in ∂Ω then the assignment f ( z ) = R 1 f R 0 ( z ) {\displaystyle \displaystyle {f(z)=R_{1}fR_{0}(z)}} for |z| > 1 defines a quasiconformal extension of f to the extended complex plane. == Complex dynamical systems == Quasicircles were known to arise as the Julia sets of rational maps R(z). Sullivan (1985) proved that if the Fatou set of R has two components and the action of R on the Julia set is "hyperbolic", i.e. 
there are constants c > 0 and A > 1 such that | ∂ z R n ( z ) | ≥ c A n {\displaystyle |\partial _{z}R^{n}(z)|\geq cA^{n}} on the Julia set, then the Julia set is a quasicircle. There are many examples: quadratic polynomials R(z) = z2 + c with an attracting fixed point; the Douady rabbit (c = –0.122561 + 0.744862i, where c3 + 2 c2 + c + 1 = 0); quadratic polynomials z2 + λz with |λ| < 1; and the Koch snowflake. == Quasi-Fuchsian groups == Quasi-Fuchsian groups are obtained as quasiconformal deformations of Fuchsian groups. By definition their limit sets are quasicircles. Let Γ be a Fuchsian group of the first kind: a discrete subgroup of the Möbius group preserving the unit circle, acting properly discontinuously on the unit disk D and with limit set the unit circle. Let μ(z) be a measurable function on D with ‖ μ ‖ ∞ < 1 {\displaystyle \|\mu \|_{\infty }<1} such that μ is Γ-invariant, i.e. μ ( g ( z ) ) ∂ z g ( z ) ¯ ∂ z g ( z ) = μ ( z ) {\displaystyle \mu (g(z)){{\overline {\partial _{z}g(z)}} \over \partial _{z}g(z)}=\mu (z)} for every g in Γ. (μ is thus a "Beltrami differential" on the Riemann surface D / Γ.) Extend μ to a function on C by setting μ(z) = 0 off D. The Beltrami equation ∂ z ¯ f ( z ) = μ ( z ) ∂ z f ( z ) {\displaystyle \partial _{\overline {z}}f(z)=\mu (z)\partial _{z}f(z)} admits a solution unique up to composition with a Möbius transformation. It is a quasiconformal homeomorphism of the extended complex plane. If g is an element of Γ, then f(g(z)) gives another solution of the Beltrami equation, so that α ( g ) = f ∘ g ∘ f − 1 {\displaystyle \alpha (g)=f\circ g\circ f^{-1}} is a Möbius transformation. The group α(Γ) is a quasi-Fuchsian group with limit set the quasicircle given by the image of the unit circle under f. == Hausdorff dimension == It is known that there are quasicircles for which no segment has finite length. 
The Hausdorff dimension of quasicircles was first investigated by Gehring & Väisälä (1973), who proved that it can take all values in the interval [1,2). Astala (1993), using the new technique of "holomorphic motions", was able to estimate the change in the Hausdorff dimension of any planar set under a quasiconformal map with dilatation K. For quasicircles C, there was a crude estimate for the Hausdorff dimension d H ( C ) ≤ 1 + k {\displaystyle d_{H}(C)\leq 1+k} where k = K − 1 K + 1 . {\displaystyle k={K-1 \over K+1}.} On the other hand, the Hausdorff dimension for the Julia sets Jc of the iterates of the rational maps R ( z ) = z 2 + c {\displaystyle R(z)=z^{2}+c} had been estimated as a result of the work of Rufus Bowen and David Ruelle, who showed that 1 < d H ( J c ) < 1 + | c | 2 4 log 2 + o ( | c | 2 ) . {\displaystyle 1<d_{H}(J_{c})<1+{|c|^{2} \over 4\log 2}+o(|c|^{2}).} Since these are quasicircles corresponding to a dilatation K = 1 + t 1 − t {\displaystyle K={\sqrt {1+t \over 1-t}}} where t = | 1 − 1 − 4 c | , {\displaystyle t=|1-{\sqrt {1-4c}}|,} this led Becker & Pommerenke (1987) to show that for k small 1 + 0.36 k 2 ≤ d H ( C ) ≤ 1 + 37 k 2 . {\displaystyle 1+0.36k^{2}\leq d_{H}(C)\leq 1+37k^{2}.} Having improved the lower bound following calculations for the Koch snowflake with Steffen Rohde and Oded Schramm, Astala (1994) conjectured that d H ( C ) ≤ 1 + k 2 . {\displaystyle d_{H}(C)\leq 1+k^{2}.} This conjecture was proved by Smirnov (2010); a complete account of his proof, prior to publication, was already given in Astala, Iwaniec & Martin (2009). For a quasi-Fuchsian group Bowen (1979) and Sullivan (1982) showed that the Hausdorff dimension d of the limit set is always greater than 1. When d < 2, the quantity λ = d ( 2 − d ) ∈ ( 0 , 1 ) {\displaystyle \lambda =d(2-d)\,\in (0,1)} is the lowest eigenvalue of the Laplacian of the corresponding hyperbolic 3-manifold. == Notes == == References == Ahlfors, Lars V. 
(1966), Lectures on quasiconformal mappings, Van Nostrand Ahlfors, L. (1963), "Quasiconformal reflections", Acta Mathematica, 109: 291–301, doi:10.1007/bf02391816, Zbl 0121.06403 Astala, K. (1993), "Distortion of area and dimension under quasiconformal mappings in the plane", Proc. Natl. Acad. Sci. U.S.A., 90 (24): 11958–11959, Bibcode:1993PNAS...9011958A, doi:10.1073/pnas.90.24.11958, PMC 48104, PMID 11607447 Astala, K.; Zinsmeister, M. (1994), "Holomorphic families of quasi-Fuchsian groups", Ergodic Theory Dynam. Systems, 14 (2): 207–212, doi:10.1017/s0143385700007847, S2CID 121209816 Astala, K. (1994), "Area distortion of quasiconformal mappings", Acta Math., 173: 37–60, doi:10.1007/bf02392568 Astala, Kari; Iwaniec, Tadeusz; Martin, Gaven (2009), Elliptic partial differential equations and quasiconformal mappings in the plane, Princeton mathematical series, vol. 48, Princeton University Press, pp. 332–342, ISBN 978-0-691-13777-3, Section 13.2, Dimension of quasicircles. Becker, J.; Pommerenke, C. (1987), "On the Hausdorff dimension of quasicircles", Ann. Acad. Sci. Fenn. Ser. A I Math., 12: 329–333, doi:10.5186/aasfm.1987.1206 Bers, Lipman (August 1961), "Uniformization by Beltrami equations", Communications on Pure and Applied Mathematics, 14 (3): 215–228, doi:10.1002/cpa.3160140304 Bowen, R. (1979), "Hausdorff dimension of quasicircles", Inst. Hautes Études Sci. Publ. Math., 50: 11–25, doi:10.1007/BF02684767, S2CID 55631433 Carleson, L.; Gamelin, T. D. W. (1993), Complex dynamics, Universitext: Tracts in Mathematics, Springer-Verlag, ISBN 978-0-387-97942-7 Gehring, F. W.; Väisälä, J. (1973), "Hausdorff dimension and quasiconformal mappings", Journal of the London Mathematical Society, 6 (3): 504–512, CiteSeerX 10.1.1.125.2374, doi:10.1112/jlms/s2-6.3.504 Gehring, F. W. (1982), Characteristic properties of quasidisks, Séminaire de Mathématiques Supérieures, vol. 84, Presses de l'Université de Montréal, ISBN 978-2-7606-0601-2 Imayoshi, Y.; Taniguchi, M. 
(1992), An Introduction to Teichmüller spaces, Springer-Verlag, ISBN 978-0-387-70088-5 Lehto, O. (1987), Univalent functions and Teichmüller spaces, Springer-Verlag, pp. 50–59, 111–118, 196–205, ISBN 978-0-387-96310-5 Krzyz, J. (1983), Quasiconformal Mappings in the Plane: Parametrical Methods, Berlin, Heidelberg: Springer Berlin / Heidelberg, ISBN 978-3540119890 Lehto, O.; Virtanen, K. I. (1973), Quasiconformal mappings in the plane, Die Grundlehren der mathematischen Wissenschaften, vol. 126 (Second ed.), Springer-Verlag Marden, A. (2007), Outer circles. An introduction to hyperbolic 3-manifolds, Cambridge University Press, ISBN 978-0-521-83974-7 Mumford, D.; Series, C.; Wright, David (2002), Indra's pearls. The vision of Felix Klein, Cambridge University Press, ISBN 978-0-521-35253-6 Pfluger, A. (1961), "Ueber die Konstruktion Riemannscher Flächen durch Verheftung", J. Indian Math. Soc., 24: 401–412 Pommerenke, C. (1975), Univalent functions, with a chapter on quadratic differentials by Gerd Jensen, Studia Mathematica/Mathematische Lehrbücher, vol. 15, Vandenhoeck & Ruprecht Rohde, S. (1991), "On conformal welding and quasicircles", Michigan Math. J., 38: 111–116, doi:10.1307/mmj/1029004266 Sullivan, D. (1982), "Discrete conformal groups and measurable dynamics", Bull. Amer. Math. Soc., 6: 57–73, doi:10.1090/s0273-0979-1982-14966-7 Sullivan, D. (1985), "Quasiconformal homeomorphisms and dynamics, I, Solution of the Fatou-Julia problem on wandering domains", Annals of Mathematics, 122 (2): 401–418, doi:10.2307/1971308, JSTOR 1971308 Tienari, M. (1962), "Fortsetzung einer quasikonformen Abbildung über einen Jordanbogen", Ann. Acad. Sci. Fenn. Ser. A, 321 Smirnov, S. (2010), "Dimension of quasicircles", Acta Mathematica, 205: 189–197, arXiv:0904.1237, doi:10.1007/s11511-010-0053-8, MR 2736155, S2CID 17945998
|
Wikipedia:Quasinorm#0
|
In linear algebra, functional analysis and related areas of mathematics, a quasinorm is similar to a norm in that it satisfies the norm axioms, except that the triangle inequality is replaced by ‖ x + y ‖ ≤ K ( ‖ x ‖ + ‖ y ‖ ) {\displaystyle \|x+y\|\leq K(\|x\|+\|y\|)} for some K > 1. {\displaystyle K>1.} == Definition == A quasi-seminorm on a vector space X {\displaystyle X} is a real-valued map p {\displaystyle p} on X {\displaystyle X} that satisfies the following conditions: Non-negativity: p ≥ 0 ; {\displaystyle p\geq 0;} Absolute homogeneity: p ( s x ) = | s | p ( x ) {\displaystyle p(sx)=|s|p(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ; {\displaystyle s;} there exists a real k ≥ 1 {\displaystyle k\geq 1} such that p ( x + y ) ≤ k [ p ( x ) + p ( y ) ] {\displaystyle p(x+y)\leq k[p(x)+p(y)]} for all x , y ∈ X . {\displaystyle x,y\in X.} If k = 1 {\displaystyle k=1} then this inequality reduces to the triangle inequality. It is in this sense that this condition generalizes the usual triangle inequality. A quasinorm is a quasi-seminorm that also satisfies: Positive definite/Point-separating: if x ∈ X {\displaystyle x\in X} satisfies p ( x ) = 0 , {\displaystyle p(x)=0,} then x = 0. {\displaystyle x=0.} A pair ( X , p ) {\displaystyle (X,p)} consisting of a vector space X {\displaystyle X} and an associated quasi-seminorm p {\displaystyle p} is called a quasi-seminormed vector space. If the quasi-seminorm is a quasinorm then it is also called a quasinormed vector space. Multiplier The infimum of all values of k {\displaystyle k} that satisfy condition (3) is called the multiplier of p . {\displaystyle p.} The multiplier itself will also satisfy condition (3) and so it is the unique smallest real number that satisfies this condition. The term k {\displaystyle k} -quasi-seminorm is sometimes used to describe a quasi-seminorm whose multiplier is equal to k . 
{\displaystyle k.} A norm (respectively, a seminorm) is just a quasinorm (respectively, a quasi-seminorm) whose multiplier is 1. {\displaystyle 1.} Thus every seminorm is a quasi-seminorm and every norm is a quasinorm (and a quasi-seminorm). === Topology === If p {\displaystyle p} is a quasinorm on X {\displaystyle X} then p {\displaystyle p} induces a vector topology on X {\displaystyle X} whose neighborhood basis at the origin is given by the sets: { x ∈ X : p ( x ) < 1 / n } {\displaystyle \{x\in X:p(x)<1/n\}} as n {\displaystyle n} ranges over the positive integers. A topological vector space with such a topology is called a quasinormed topological vector space or just a quasinormed space. Every quasinormed topological vector space is pseudometrizable. A complete quasinormed space is called a quasi-Banach space. Every Banach space is a quasi-Banach space, although not conversely. === Related definitions === A quasinormed space ( A , ‖ ⋅ ‖ ) {\displaystyle (A,\|\,\cdot \,\|)} is called a quasinormed algebra if the vector space A {\displaystyle A} is an algebra and there is a constant K > 0 {\displaystyle K>0} such that ‖ x y ‖ ≤ K ‖ x ‖ ⋅ ‖ y ‖ {\displaystyle \|xy\|\leq K\|x\|\cdot \|y\|} for all x , y ∈ A . {\displaystyle x,y\in A.} A complete quasinormed algebra is called a quasi-Banach algebra. == Characterizations == A topological vector space (TVS) is a quasinormed space if and only if it has a bounded neighborhood of the origin. == Examples == Since every norm is a quasinorm, every normed space is also a quasinormed space. L p {\displaystyle L^{p}} spaces with 0 < p < 1 {\displaystyle 0<p<1} The L p {\displaystyle L^{p}} spaces for 0 < p < 1 {\displaystyle 0<p<1} are quasinormed spaces (indeed, they are even F-spaces) but they are not, in general, normable (meaning that there might not exist any norm that defines their topology). 
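The failure of the triangle inequality for 0 < p < 1 can already be seen in two dimensions. The sketch below (an illustration on ℓ^p over R², with function names of our choosing) shows ‖x + y‖ exceeding ‖x‖ + ‖y‖ while the quasi-triangle inequality still holds with the standard multiplier 2^(1/p − 1):

```python
def p_quasinorm(v, p):
    # l^p "norm" of a finite vector; for 0 < p < 1 it is only a quasinorm.
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

p = 0.5
x, y = (1.0, 0.0), (0.0, 1.0)
s = tuple(a + b for a, b in zip(x, y))

lhs = p_quasinorm(s, p)                      # ||x + y|| = (1 + 1)^2 = 4
rhs = p_quasinorm(x, p) + p_quasinorm(y, p)  # ||x|| + ||y|| = 2
K = 2.0 ** (1.0 / p - 1.0)                   # standard multiplier 2^(1/p - 1) = 2

print(lhs > rhs)       # True: the triangle inequality fails
print(lhs <= K * rhs)  # True: the quasi-triangle inequality holds (with equality here)
```

For p = 1/2 the multiplier is exactly 2, and this example attains it, so the constant 2^(1/p − 1) cannot be improved for ℓ^p.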
For 0 < p < 1 , {\displaystyle 0<p<1,} the Lebesgue space L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} is a complete metrizable TVS (an F-space) that is not locally convex (in fact, its only convex open subsets are itself L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} and the empty set) and the only continuous linear functional on L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} is the constant 0 {\displaystyle 0} function (Rudin 1991, §1.47). In particular, the Hahn-Banach theorem does not hold for L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} when 0 < p < 1. {\displaystyle 0<p<1.} == See also == Metrizable topological vector space – A topological vector space whose topology can be defined by a metric Norm (mathematics) – Length in a vector space Seminorm – Mathematical function Topological vector space – Vector space with a notion of nearness == References == Aull, Charles E.; Robert Lowen (2001). Handbook of the History of General Topology. Springer. ISBN 0-7923-6970-X. Conway, John B. (1990). A Course in Functional Analysis. Springer. ISBN 0-387-97245-5. Kalton, N. (1986). "Plurisubharmonic functions on quasi-Banach spaces" (PDF). Studia Mathematica. 84 (3). Institute of Mathematics, Polish Academy of Sciences: 297–324. doi:10.4064/sm-84-3-297-324. ISSN 0039-3223. Nikolʹskiĭ, Nikolaĭ Kapitonovich (1992). Functional Analysis I: Linear Functional Analysis. Encyclopaedia of Mathematical Sciences. Vol. 19. Springer. ISBN 3-540-50584-9. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Swartz, Charles (1992). An Introduction to Functional Analysis. CRC Press. ISBN 0-8247-8643-2. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
|
Wikipedia:Quasiregular map#0
|
In the mathematical field of analysis, quasiregular maps are a class of continuous maps between Euclidean spaces Rn of the same dimension or, more generally, between Riemannian manifolds of the same dimension, which share some of the basic properties with holomorphic functions of one complex variable. == Motivation == The theory of holomorphic (=analytic) functions of one complex variable is one of the most beautiful and most useful parts of the whole of mathematics. One drawback of this theory is that it deals only with maps between two-dimensional spaces (Riemann surfaces). The theory of functions of several complex variables has a different character, mainly because analytic functions of several variables are not conformal. Conformal maps can be defined between Euclidean spaces of arbitrary dimension, but when the dimension is greater than 2, this class of maps is very small: it consists of Möbius transformations only. This is a theorem of Joseph Liouville; relaxing the smoothness assumptions does not help, as proved by Yurii Reshetnyak. This suggests the search for a generalization of the property of conformality which would give a rich and interesting class of maps in higher dimensions. == Definition == A differentiable map f of a region D in Rn to Rn is called K-quasiregular if the following inequality holds at all points in D: ‖ D f ( x ) ‖ n ≤ K | J f ( x ) | {\displaystyle \|Df(x)\|^{n}\leq K|J_{f}(x)|} . Here K ≥ 1 is a constant, Jf is the Jacobian determinant, Df is the derivative, that is the linear map defined by the Jacobi matrix, and ||·|| is the usual (Euclidean) norm of the matrix. 
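For a linear map f(x) = Ax the derivative is A at every point, so the smallest admissible K is simply ‖A‖ⁿ/|det A|. The sketch below (a numerical illustration under the definition above, with the operator norm estimated by sampling unit vectors; helper names are ours) computes this for a plane stretch and for a rotation:

```python
import math

def operator_norm_2x2(a, b, c, d, samples=4000):
    # Estimate ||A|| = max_{|v|=1} |Av| by sampling the unit circle.
    best = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        vx, vy = math.cos(t), math.sin(t)
        best = max(best, math.hypot(a * vx + b * vy, c * vx + d * vy))
    return best

def min_K(a, b, c, d, n=2):
    # Smallest K with ||Df||^n <= K |J_f| for the linear map with matrix A.
    det = a * d - b * c
    return operator_norm_2x2(a, b, c, d) ** n / abs(det)

print(round(min_K(2, 0, 0, 1), 3))   # stretch (x, y) -> (2x, y): K = 4/2 = 2
print(round(min_K(0, -1, 1, 0), 3))  # rotation: K = 1, i.e. conformal
```

A rotation is conformal (K = 1), while the stretch distorts infinitesimal circles into ellipses and is 2-quasiregular under this definition.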
The development of the theory of such maps showed that it is unreasonable to restrict oneself to differentiable maps in the classical sense, and that the "correct" class of maps consists of continuous maps in the Sobolev space W1,nloc whose partial derivatives in the sense of distributions have locally summable n-th power, and such that the above inequality is satisfied almost everywhere. This is a formal definition of a K-quasiregular map. A map is called quasiregular if it is K-quasiregular with some K. Constant maps are excluded from the class of quasiregular maps. == Properties == The fundamental theorem about quasiregular maps was proved by Reshetnyak: Quasiregular maps are open and discrete. This means that the images of open sets are open and that preimages of points consist of isolated points. In dimension 2, these two properties give a topological characterization of the class of non-constant analytic functions: every continuous open and discrete map of a plane domain to the plane can be pre-composed with a homeomorphism, so that the result is an analytic function. This is a theorem of Simion Stoilov. Reshetnyak's theorem implies that all purely topological results about analytic functions (such as the Maximum Modulus Principle, Rouché's theorem, etc.) extend to quasiregular maps. Injective quasiregular maps are called quasiconformal. A simple example of a non-injective quasiregular map is given in cylindrical coordinates in 3-space by the formula ( r , θ , z ) ↦ ( r , 2 θ , z ) . {\displaystyle (r,\theta ,z)\mapsto (r,2\theta ,z).} This map is 2-quasiregular. It is smooth everywhere except the z-axis. A remarkable fact is that all smooth quasiregular maps are local homeomorphisms. Even more remarkable is that every quasiregular local homeomorphism Rn → Rn, where n ≥ 3, is a homeomorphism (this is a theorem of Vladimir Zorich). 
This explains why in the definition of quasiregular maps it is not reasonable to restrict oneself to smooth maps: all smooth quasiregular maps of Rn to itself are quasiconformal. == Rickman's theorem == Many theorems about geometric properties of holomorphic functions of one complex variable have been extended to quasiregular maps. These extensions are usually highly non-trivial. Perhaps the most famous result of this sort is the extension of Picard's theorem which is due to Seppo Rickman: A K-quasiregular map Rn → Rn can omit at most a finite set. When n = 2, this omitted set can contain at most one point (this is a simple extension of Picard's theorem). But when n > 2, the omitted set can contain more than one point, and its cardinality can be estimated from above in terms of n and K. In fact, any finite set can be omitted, as shown by David Drasin and Pekka Pankka. == Connection with potential theory == If f is an analytic function, then log |f| is subharmonic, and harmonic away from the zeros of f. The corresponding fact for quasiregular maps is that log |f| satisfies a certain non-linear partial differential equation of elliptic type. This discovery of Reshetnyak stimulated the development of non-linear potential theory, which treats this kind of equations as the usual potential theory treats harmonic and subharmonic functions. == See also == Yurii Reshetnyak Vladimir Zorich == References ==
|
Wikipedia:Quasisymmetric function#0
|
In algebra and in particular in algebraic combinatorics, a quasisymmetric function is any element in the ring of quasisymmetric functions, which is in turn a subring of the formal power series ring with a countable number of variables. This ring generalizes the ring of symmetric functions. This ring can be realized as a specific limit of the rings of quasisymmetric polynomials in n variables, as n goes to infinity. This ring serves as a universal structure in which relations between quasisymmetric polynomials can be expressed in a way independent of the number n of variables (but its elements are neither polynomials nor functions). == Definitions == The ring of quasisymmetric functions, denoted QSym, can be defined over any commutative ring R such as the integers. Quasisymmetric functions are power series of bounded degree in variables x 1 , x 2 , x 3 , … {\displaystyle x_{1},x_{2},x_{3},\dots } with coefficients in R, which are shift invariant in the sense that the coefficient of the monomial x 1 α 1 x 2 α 2 ⋯ x k α k {\displaystyle x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\cdots x_{k}^{\alpha _{k}}} is equal to the coefficient of the monomial x i 1 α 1 x i 2 α 2 ⋯ x i k α k {\displaystyle x_{i_{1}}^{\alpha _{1}}x_{i_{2}}^{\alpha _{2}}\cdots x_{i_{k}}^{\alpha _{k}}} for any strictly increasing sequence of positive integers i 1 < i 2 < ⋯ < i k {\displaystyle i_{1}<i_{2}<\cdots <i_{k}} indexing the variables and any positive integer sequence ( α 1 , α 2 , … , α k ) {\displaystyle (\alpha _{1},\alpha _{2},\ldots ,\alpha _{k})} of exponents. Much of the study of quasisymmetric functions is based on that of symmetric functions. A quasisymmetric function in finitely many variables is a quasisymmetric polynomial. Both symmetric and quasisymmetric polynomials may be characterized in terms of actions of the symmetric group S n {\displaystyle S_{n}} on a polynomial ring in n {\displaystyle n} variables x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} . 
One such action of S n {\displaystyle S_{n}} permutes variables, changing a polynomial p ( x 1 , … , x n ) {\displaystyle p(x_{1},\dots ,x_{n})} by iteratively swapping pairs ( x i , x i + 1 ) {\displaystyle (x_{i},x_{i+1})} of variables having consecutive indices. Those polynomials unchanged by all such swaps form the subring of symmetric polynomials. A second action of S n {\displaystyle S_{n}} conditionally permutes variables, changing a polynomial p ( x 1 , … , x n ) {\displaystyle p(x_{1},\ldots ,x_{n})} by swapping pairs ( x i , x i + 1 ) {\displaystyle (x_{i},x_{i+1})} of variables except in monomials containing both variables. Those polynomials unchanged by all such conditional swaps form the subring of quasisymmetric polynomials. One quasisymmetric polynomial in four variables x 1 , x 2 , x 3 , x 4 {\displaystyle x_{1},x_{2},x_{3},x_{4}} is the polynomial x 1 2 x 2 x 3 + x 1 2 x 2 x 4 + x 1 2 x 3 x 4 + x 2 2 x 3 x 4 . {\displaystyle x_{1}^{2}x_{2}x_{3}+x_{1}^{2}x_{2}x_{4}+x_{1}^{2}x_{3}x_{4}+x_{2}^{2}x_{3}x_{4}.\,} The simplest symmetric polynomial containing these monomials is x 1 2 x 2 x 3 + x 1 2 x 2 x 4 + x 1 2 x 3 x 4 + x 2 2 x 3 x 4 + x 1 x 2 2 x 3 + x 1 x 2 2 x 4 + x 1 x 3 2 x 4 + x 2 x 3 2 x 4 + x 1 x 2 x 3 2 + x 1 x 2 x 4 2 + x 1 x 3 x 4 2 + x 2 x 3 x 4 2 . {\displaystyle {\begin{aligned}x_{1}^{2}x_{2}x_{3}+x_{1}^{2}x_{2}x_{4}+x_{1}^{2}x_{3}x_{4}+x_{2}^{2}x_{3}x_{4}+x_{1}x_{2}^{2}x_{3}+x_{1}x_{2}^{2}x_{4}+x_{1}x_{3}^{2}x_{4}+x_{2}x_{3}^{2}x_{4}\\{}+x_{1}x_{2}x_{3}^{2}+x_{1}x_{2}x_{4}^{2}+x_{1}x_{3}x_{4}^{2}+x_{2}x_{3}x_{4}^{2}.\,\end{aligned}}} == Important bases == QSym is a graded R-algebra, decomposing as QSym = ⨁ n ≥ 0 QSym n , {\displaystyle \operatorname {QSym} =\bigoplus _{n\geq 0}\operatorname {QSym} _{n},\,} where QSym n {\displaystyle \operatorname {QSym} _{n}} is the R {\displaystyle R} -span of all quasisymmetric functions that are homogeneous of degree n {\displaystyle n} . 
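The shift-invariance in the definition above can be checked mechanically on small examples. The sketch below is our own illustration (the dictionary-of-exponent-tuples encoding and all helper names are not from the article): it tests whether a polynomial in n variables, stored as a map from exponent tuples to coefficients, is quasisymmetric, and separately whether it is symmetric.

```python
from itertools import combinations, permutations

def is_quasisymmetric(coeffs, n):
    """coeffs maps length-n exponent tuples to coefficients.
    Quasisymmetric: the coefficient depends only on the composition of
    nonzero exponents, and every increasing placement of it occurs."""
    by_comp = {}
    for expts, c in coeffs.items():
        alpha = tuple(e for e in expts if e != 0)
        by_comp.setdefault(alpha, set()).add(c)
    for alpha, cs in by_comp.items():
        if len(cs) != 1:            # same composition, different coefficients
            return False
        for pos in combinations(range(n), len(alpha)):
            e = [0] * n
            for p, a in zip(pos, alpha):
                e[p] = a
            if tuple(e) not in coeffs:   # a placement of alpha is missing
                return False
    return True

def is_symmetric(coeffs, n):
    """Symmetric: the coefficient is unchanged by every permutation of exponents."""
    return all(coeffs.get(tuple(perm), 0) == c
               for expts, c in coeffs.items()
               for perm in permutations(expts))

# The quasisymmetric example from the text:
# x1^2 x2 x3 + x1^2 x2 x4 + x1^2 x3 x4 + x2^2 x3 x4
P = {(2, 1, 1, 0): 1, (2, 1, 0, 1): 1, (2, 0, 1, 1): 1, (0, 2, 1, 1): 1}
# Its twelve-term symmetric completion
S = {e: 1 for e in set(permutations((2, 1, 1, 0)))}
```

On the four-variable example from the text, P is quasisymmetric but not symmetric, while its twelve-term completion S is both.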
Two natural bases for QSym n {\displaystyle \operatorname {QSym} _{n}} are the monomial basis { M α } {\displaystyle \{M_{\alpha }\}} and the fundamental basis { F α } {\displaystyle \{F_{\alpha }\}} indexed by compositions α = ( α 1 , α 2 , … , α k ) {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\ldots ,\alpha _{k})} of n {\displaystyle n} , denoted α ⊨ n {\displaystyle \alpha \vDash n} . The monomial basis consists of M 0 = 1 {\displaystyle M_{0}=1} and all formal power series M α = ∑ i 1 < i 2 < ⋯ < i k x i 1 α 1 x i 2 α 2 ⋯ x i k α k . {\displaystyle M_{\alpha }=\sum _{i_{1}<i_{2}<\cdots <i_{k}}x_{i_{1}}^{\alpha _{1}}x_{i_{2}}^{\alpha _{2}}\cdots x_{i_{k}}^{\alpha _{k}}.\,} The fundamental basis consists of F 0 = 1 {\displaystyle F_{0}=1} and all formal power series F α = ∑ α ⪰ β M β , {\displaystyle F_{\alpha }=\sum _{\alpha \succeq \beta }M_{\beta },\,} where α ⪰ β {\displaystyle \alpha \succeq \beta } means we can obtain α {\displaystyle \alpha } by adding together adjacent parts of β {\displaystyle \beta } , for example, (3,2,4,2) ⪰ {\displaystyle \succeq } (3,1,1,1,2,1,2). Thus, when the ring R {\displaystyle R} is the ring of rational numbers, one has QSym n = span Q { M α ∣ α ⊨ n } = span Q { F α ∣ α ⊨ n } . {\displaystyle \operatorname {QSym} _{n}=\operatorname {span} _{\mathbb {Q} }\{M_{\alpha }\mid \alpha \vDash n\}=\operatorname {span} _{\mathbb {Q} }\{F_{\alpha }\mid \alpha \vDash n\}.\,} Then one can define the algebra of symmetric functions Λ = Λ 0 ⊕ Λ 1 ⊕ ⋯ {\displaystyle \Lambda =\Lambda _{0}\oplus \Lambda _{1}\oplus \cdots } as the subalgebra of QSym spanned by the monomial symmetric functions m 0 = 1 {\displaystyle m_{0}=1} and all formal power series m λ = ∑ M α , {\displaystyle m_{\lambda }=\sum M_{\alpha },} where the sum is over all compositions α {\displaystyle \alpha } which rearrange to the integer partition λ {\displaystyle \lambda } . 
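The coarsening order α ⪰ β is easy to enumerate: β refines α exactly when β is obtained by concatenating one composition of each part of α. The small sketch below (helper names are our own) expands F_α into its monomial terms M_β, and likewise expands m_λ over the distinct rearrangements of λ, matching the examples in the text.

```python
from itertools import permutations

def compositions(n):
    """All compositions (ordered tuples of positive parts) of n >= 0."""
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(1, n + 1)
            for rest in compositions(n - first)]

def refinements(alpha):
    """All compositions beta with alpha >= beta in the coarsening order,
    i.e. adding together adjacent parts of beta gives back alpha."""
    if not alpha:
        return [()]
    return [head + tail
            for head in compositions(alpha[0])
            for tail in refinements(alpha[1:])]

def F_in_M(alpha):
    """F_alpha as the sorted list of its monomial terms M_beta."""
    return sorted(refinements(alpha))

def m_in_M(lam):
    """m_lambda as the sorted list of distinct rearrangements alpha of lambda."""
    return sorted(set(permutations(lam)))
```

For instance, F_(1,2) expands as M_(1,2) + M_(1,1,1), and (3,1,1,1,2,1,2) appears among the refinements of (3,2,4,2).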
Moreover, we have Λ n = Λ ∩ QSym n {\displaystyle \Lambda _{n}=\Lambda \cap \operatorname {QSym} _{n}} . For example, F ( 1 , 2 ) = M ( 1 , 2 ) + M ( 1 , 1 , 1 ) {\displaystyle F_{(1,2)}=M_{(1,2)}+M_{(1,1,1)}} and m ( 2 , 1 ) = M ( 2 , 1 ) + M ( 1 , 2 ) . {\displaystyle m_{(2,1)}=M_{(2,1)}+M_{(1,2)}.} Other important bases for quasisymmetric functions include the basis of quasisymmetric Schur functions, the "type I" and "type II" quasisymmetric power sums, and bases related to enumeration in matroids. == Applications == Quasisymmetric functions have been applied in enumerative combinatorics, symmetric function theory, representation theory, and number theory. Applications of quasisymmetric functions include enumeration of P-partitions, permutations, tableaux, chains of posets, reduced decompositions in finite Coxeter groups (via Stanley symmetric functions), and parking functions. In symmetric function theory and representation theory, applications include the study of Schubert polynomials, Macdonald polynomials, Hecke algebras, and Kazhdan–Lusztig polynomials. Often quasisymmetric functions provide a powerful bridge between combinatorial structures and symmetric functions. == Related algebras == As a graded Hopf algebra, the dual of the ring of quasisymmetric functions is the ring of noncommutative symmetric functions. Every symmetric function is also a quasisymmetric function, and hence the ring of symmetric functions is a subalgebra of the ring of quasisymmetric functions. The ring of quasisymmetric functions is the terminal object in the category of graded Hopf algebras with a single character. Hence any such Hopf algebra has a morphism to the ring of quasisymmetric functions. One example of this is the peak algebra. 
=== Other related algebras === The Malvenuto–Reutenauer algebra is a Hopf algebra based on permutations that relates the rings of symmetric functions, quasisymmetric functions, and noncommutative symmetric functions (denoted Sym, QSym, and NSym, respectively); the three fit into a commutative diagram, and the duality between QSym and NSym mentioned above is reflected in its main diagonal. Many related Hopf algebras were constructed from Hopf monoids in the category of species by Aguiar and Mahajan. One can also construct the ring of quasisymmetric functions in noncommuting variables. == References == == External links == BIRS Workshop on Quasisymmetric Functions
|
Wikipedia:Quasisymmetric map#0
|
In mathematics, a quasisymmetric homeomorphism between metric spaces is a map that generalizes bi-Lipschitz maps. While bi-Lipschitz maps shrink or expand the diameter of a set by no more than a multiplicative factor, quasisymmetric maps satisfy the weaker geometric property that they preserve the relative sizes of sets: if two sets A and B have diameters t and are no more than distance t apart, then the ratio of their sizes changes by no more than a multiplicative constant. These maps are also related to quasiconformal maps, since in many circumstances they are in fact equivalent. == Definition == Let (X, dX) and (Y, dY) be two metric spaces. A homeomorphism f:X → Y is said to be η-quasisymmetric if there is an increasing function η : [0, ∞) → [0, ∞) such that for any triple x, y, z of distinct points in X, we have d Y ( f ( x ) , f ( y ) ) d Y ( f ( x ) , f ( z ) ) ≤ η ( d X ( x , y ) d X ( x , z ) ) . {\displaystyle {\frac {d_{Y}(f(x),f(y))}{d_{Y}(f(x),f(z))}}\leq \eta \left({\frac {d_{X}(x,y)}{d_{X}(x,z)}}\right).} == Basic properties == Inverses are quasisymmetric If f : X → Y is an invertible η-quasisymmetric map as above, then its inverse map is η ′ {\displaystyle \eta '} -quasisymmetric, where η ′ ( t ) = 1 / η − 1 ( 1 / t ) . {\textstyle \eta '(t)=1/\eta ^{-1}(1/t).} Quasisymmetric maps preserve relative sizes of sets If A {\displaystyle A} and B {\displaystyle B} are subsets of X {\displaystyle X} and A {\displaystyle A} is a subset of B {\displaystyle B} , then η − 1 ( diam B diam A ) 2 ≤ diam f ( B ) diam f ( A ) ≤ 2 η ( diam B diam A ) . 
{\displaystyle {\frac {\eta ^{-1}({\frac {\operatorname {diam} B}{\operatorname {diam} A}})}{2}}\leq {\frac {\operatorname {diam} f(B)}{\operatorname {diam} f(A)}}\leq 2\eta \left({\frac {\operatorname {diam} B}{\operatorname {diam} A}}\right).} == Examples == === Weakly quasisymmetric maps === A map f:X→Y is said to be H-weakly-quasisymmetric for some H > 0 {\displaystyle H>0} if for all triples of distinct points x , y , z {\displaystyle x,y,z} in X {\displaystyle X} , we have | f ( x ) − f ( y ) | ≤ H | f ( x ) − f ( z ) | whenever | x − y | ≤ | x − z | {\displaystyle |f(x)-f(y)|\leq H|f(x)-f(z)|\;\;\;{\text{ whenever }}\;\;\;|x-y|\leq |x-z|} Not all weakly quasisymmetric maps are quasisymmetric. However, if X {\displaystyle X} is connected and X {\displaystyle X} and Y {\displaystyle Y} are doubling, then all weakly quasisymmetric maps are quasisymmetric. The appeal of this result is that proving weak-quasisymmetry is much easier than proving quasisymmetry directly, and in many natural settings the two notions are equivalent. === δ-monotone maps === A monotone map f:H → H on a Hilbert space H is δ-monotone if for all x and y in H, ⟨ f ( x ) − f ( y ) , x − y ⟩ ≥ δ | f ( x ) − f ( y ) | ⋅ | x − y | . {\displaystyle \langle f(x)-f(y),x-y\rangle \geq \delta |f(x)-f(y)|\cdot |x-y|.} To grasp what this condition means geometrically, suppose f(0) = 0 and consider the above estimate when y = 0. Then it implies that the angle between the vector x and its image f(x) stays between 0 and arccos δ < π/2. These maps are quasisymmetric, although they are a much narrower subclass of quasisymmetric maps. For example, while a general quasisymmetric map in the complex plane could map the real line to a set of Hausdorff dimension strictly greater than one, a δ-monotone map will always map the real line to a rotated graph of a Lipschitz function L:ℝ → ℝ. 
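Quasisymmetry of a concrete map can be spot-checked numerically against a candidate control function η. The map f(x) = 2x + sin x below is our own example (not from the article); its derivative 2 + cos x lies in [1, 3], so it is bi-Lipschitz and should satisfy the defining inequality with η(t) = 3t.

```python
import math
import random

def f(x):
    """Bi-Lipschitz on the real line: f'(x) = 2 + cos(x) stays in [1, 3]."""
    return 2.0 * x + math.sin(x)

def eta(t):
    """Candidate control: |f(x)-f(y)| <= 3|x-y| and |f(x)-f(z)| >= |x-z|
    give the ratio bound eta(t) = 3t."""
    return 3.0 * t

def check_quasisymmetry(f, eta, trials=2000, seed=0):
    """Test the ratio inequality on random triples of distinct points."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y, z = (rng.uniform(-10.0, 10.0) for _ in range(3))
        if x == y or x == z:
            continue
        lhs = abs(f(x) - f(y)) / abs(f(x) - f(z))
        if lhs > eta(abs(x - y) / abs(x - z)) + 1e-9:
            return False
    return True
```

A random spot-check is evidence rather than a proof; for this example the bound also follows by hand from the derivative bounds.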
== Doubling measures == === The real line === Quasisymmetric homeomorphisms of the real line to itself can be characterized in terms of their derivatives. An increasing homeomorphism f:ℝ → ℝ is quasisymmetric if and only if there is a constant C > 0 and a doubling measure μ on the real line such that f ( x ) = C + ∫ 0 x d μ ( t ) . {\displaystyle f(x)=C+\int _{0}^{x}\,d\mu (t).} === Euclidean space === An analogous result holds in Euclidean space. Suppose C = 0 and we rewrite the above equation for f as f ( x ) = 1 2 ∫ R ( x − t | x − t | + t | t | ) d μ ( t ) . {\displaystyle f(x)={\frac {1}{2}}\int _{\mathbb {R} }\left({\frac {x-t}{|x-t|}}+{\frac {t}{|t|}}\right)d\mu (t).} Writing it this way, we can attempt to define a map using this same integral, but instead integrate (what is now a vector valued integrand) over ℝn: if μ is a doubling measure on ℝn and ∫ | x | > 1 1 | x | d μ ( x ) < ∞ {\displaystyle \int _{|x|>1}{\frac {1}{|x|}}\,d\mu (x)<\infty } then the map f ( x ) = 1 2 ∫ R n ( x − y | x − y | + y | y | ) d μ ( y ) {\displaystyle f(x)={\frac {1}{2}}\int _{\mathbb {R} ^{n}}\left({\frac {x-y}{|x-y|}}+{\frac {y}{|y|}}\right)\,d\mu (y)} is quasisymmetric (in fact, it is δ-monotone for some δ depending on the measure μ). == Quasisymmetry and quasiconformality in Euclidean space == Let Ω {\displaystyle \Omega } and Ω ′ {\displaystyle \Omega '} be open subsets of ℝn. If f : Ω → Ω´ is η-quasisymmetric, then it is also K-quasiconformal, where K > 0 {\displaystyle K>0} is a constant depending on η {\displaystyle \eta } . Conversely, if f : Ω → Ω´ is K-quasiconformal and B ( x , 2 r ) {\displaystyle B(x,2r)} is contained in Ω {\displaystyle \Omega } , then f {\displaystyle f} is η-quasisymmetric on B ( x , 2 r ) {\displaystyle B(x,2r)} , where η {\displaystyle \eta } depends only on K {\displaystyle K} . 
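The integral characterization above can be made concrete with one doubling measure chosen for illustration (our own assumption; any density |t|^a with a > −1 would do): for dμ = |t|^(−1/2) dt the homeomorphism is f(x) = 2·sign(x)·√|x|, and a midpoint quadrature reproduces the closed form for x > 0. Helper names below are ours.

```python
def f_from_density(x, w, steps=100000):
    """Midpoint-rule approximation of f(x) = integral_0^x w(t) dt, for x > 0."""
    h = x / steps
    return sum(w((k + 0.5) * h) for k in range(steps)) * h

def w(t):
    """Density of the doubling measure d(mu) = |t|^(-1/2) dt."""
    return abs(t) ** -0.5

def f_closed(x):
    """Closed form of the integral for x >= 0: 2*sqrt(x)."""
    return 2.0 * x ** 0.5
```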
== Quasi-Möbius maps == A related but weaker condition is the notion of quasi-Möbius maps where instead of the ratio only the cross-ratio is considered: === Definition === Let (X, dX) and (Y, dY) be two metric spaces and let η : [0, ∞) → [0, ∞) be an increasing function. An η-quasi-Möbius homeomorphism f:X → Y is a homeomorphism for which for every quadruple x, y, z, t of distinct points in X, we have d Y ( f ( x ) , f ( z ) ) d Y ( f ( y ) , f ( t ) ) d Y ( f ( x ) , f ( y ) ) d Y ( f ( z ) , f ( t ) ) ≤ η ( d X ( x , z ) d X ( y , t ) d X ( x , y ) d X ( z , t ) ) . {\displaystyle {\frac {d_{Y}(f(x),f(z))d_{Y}(f(y),f(t))}{d_{Y}(f(x),f(y))d_{Y}(f(z),f(t))}}\leq \eta \left({\frac {d_{X}(x,z)d_{X}(y,t)}{d_{X}(x,y)d_{X}(z,t)}}\right).} == See also == Douady–Earle extension == References ==
|
Wikipedia:Quaternionic analysis#0
|
In mathematics, quaternionic analysis is the study of functions with quaternions as the domain and/or range. Such functions can be called functions of a quaternion variable just as functions of a real variable or a complex variable are called. As with complex and real analysis, it is possible to study the concepts of analyticity, holomorphy, harmonicity and conformality in the context of quaternions. Unlike the complex numbers and like the reals, the four notions do not coincide. == Properties == The projections of a quaternion onto its scalar part or onto its vector part, as well as the modulus and versor functions, are examples that are basic to understanding quaternion structure. An important example of a function of a quaternion variable is f 1 ( q ) = u q u − 1 {\displaystyle f_{1}(q)=uqu^{-1}} which rotates the vector part of q by twice the angle represented by the versor u. The quaternion multiplicative inverse f 2 ( q ) = q − 1 {\displaystyle f_{2}(q)=q^{-1}} is another fundamental function, but as with other number systems, f 2 ( 0 ) {\displaystyle f_{2}(0)} and related problems are generally excluded due to the nature of dividing by zero. Affine transformations of quaternions have the form f 3 ( q ) = a q + b , a , b , q ∈ H . {\displaystyle f_{3}(q)=aq+b,\quad a,b,q\in \mathbb {H} .} Linear fractional transformations of quaternions can be represented by elements of the matrix ring M 2 ( H ) {\displaystyle M_{2}(\mathbb {H} )} operating on the projective line over H {\displaystyle \mathbb {H} } . For instance, the mappings q ↦ u q v , {\displaystyle q\mapsto uqv,} where u {\displaystyle u} and v {\displaystyle v} are fixed versors serve to produce the motions of elliptic space. Quaternion variable theory differs in some respects from complex variable theory. For example: The complex conjugate mapping of the complex plane is a central tool but requires the introduction of a non-arithmetic, non-analytic operation. 
Indeed, conjugation changes the orientation of plane figures, something that arithmetic functions do not change. In contrast to the complex conjugate, the quaternion conjugation can be expressed arithmetically, as f 4 ( q ) = − 1 2 ( q + i q i + j q j + k q k ) {\displaystyle f_{4}(q)=-{\tfrac {1}{2}}(q+iqi+jqj+kqk)} This equation can be proven, starting with the basis {1, i, j, k}: f 4 ( 1 ) = − 1 2 ( 1 − 1 − 1 − 1 ) = 1 , f 4 ( i ) = − 1 2 ( i − i + i + i ) = − i , f 4 ( j ) = − j , f 4 ( k ) = − k {\displaystyle f_{4}(1)=-{\tfrac {1}{2}}(1-1-1-1)=1,\quad f_{4}(i)=-{\tfrac {1}{2}}(i-i+i+i)=-i,\quad f_{4}(j)=-j,\quad f_{4}(k)=-k} . Consequently, since f 4 {\displaystyle f_{4}} is linear, f 4 ( q ) = f 4 ( w + x i + y j + z k ) = w f 4 ( 1 ) + x f 4 ( i ) + y f 4 ( j ) + z f 4 ( k ) = w − x i − y j − z k = q ∗ . {\displaystyle f_{4}(q)=f_{4}(w+xi+yj+zk)=wf_{4}(1)+xf_{4}(i)+yf_{4}(j)+zf_{4}(k)=w-xi-yj-zk=q^{*}.} The success of complex analysis in providing a rich family of holomorphic functions for scientific work has engaged some workers in efforts to extend the planar theory, based on complex numbers, to a 4-space study with functions of a quaternion variable. These efforts were summarized in Deavours (1973). Though H {\displaystyle \mathbb {H} } appears as a union of complex planes, the following proposition shows that extending complex functions requires special care: Let f 5 ( z ) = u ( x , y ) + i v ( x , y ) {\displaystyle f_{5}(z)=u(x,y)+iv(x,y)} be a function of a complex variable, z = x + i y {\displaystyle z=x+iy} . Suppose also that u {\displaystyle u} is an even function of y {\displaystyle y} and that v {\displaystyle v} is an odd function of y {\displaystyle y} . Then f 5 ( q ) = u ( x , y ) + r v ( x , y ) {\displaystyle f_{5}(q)=u(x,y)+rv(x,y)} is an extension of f 5 {\displaystyle f_{5}} to a quaternion variable q = x + y r {\displaystyle q=x+yr} where r 2 = − 1 {\displaystyle r^{2}=-1} and r ∈ H {\displaystyle r\in \mathbb {H} } . 
Then, let r ∗ {\displaystyle r^{*}} represent the conjugate of r {\displaystyle r} , so that q = x − y r ∗ {\displaystyle q=x-yr^{*}} . The extension to H {\displaystyle \mathbb {H} } will be complete when it is shown that f 5 ( q ) = f 5 ( x − y r ∗ ) {\displaystyle f_{5}(q)=f_{5}(x-yr^{*})} . Indeed, by hypothesis u ( x , y ) = u ( x , − y ) , v ( x , y ) = − v ( x , − y ) {\displaystyle u(x,y)=u(x,-y),\quad v(x,y)=-v(x,-y)\quad } one obtains f 5 ( x − y r ∗ ) = u ( x , − y ) + r ∗ v ( x , − y ) = u ( x , y ) + r v ( x , y ) = f 5 ( q ) . {\displaystyle f_{5}(x-yr^{*})=u(x,-y)+r^{*}v(x,-y)=u(x,y)+rv(x,y)=f_{5}(q).} == Homographies == In the following, colons and square brackets are used to denote homogeneous vectors. The rotation about axis r is a classical application of quaternions to space mapping. In terms of a homography, the rotation is expressed [ q : 1 ] ( u 0 0 u ) = [ q u : u ] ∼ [ u − 1 q u : 1 ] , {\displaystyle [q:1]{\begin{pmatrix}u&0\\0&u\end{pmatrix}}=[qu:u]\thicksim [u^{-1}qu:1],} where u = exp ( θ r ) = cos θ + r sin θ {\displaystyle u=\exp(\theta r)=\cos \theta +r\sin \theta } is a versor. If p * = −p, then the translation q ↦ q + p {\displaystyle q\mapsto q+p} is expressed by [ q : 1 ] ( 1 0 p 1 ) = [ q + p : 1 ] . {\displaystyle [q:1]{\begin{pmatrix}1&0\\p&1\end{pmatrix}}=[q+p:1].} Rotation and translation xr along the axis of rotation is given by [ q : 1 ] ( u 0 u x r u ) = [ q u + u x r : u ] ∼ [ u − 1 q u + x r : 1 ] . {\displaystyle [q:1]{\begin{pmatrix}u&0\\uxr&u\end{pmatrix}}=[qu+uxr:u]\thicksim [u^{-1}qu+xr:1].} Such a mapping is called a screw displacement. In classical kinematics, Chasles' theorem states that any rigid body motion can be displayed as a screw displacement. 
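Both the arithmetic conjugation formula from the Properties section and the screw displacement just described can be exercised numerically. The tuple-based helpers below are our own sketch; note that with the u⁻¹qu convention used in the homography, the versor u = cos θ + r sin θ sends i to −j for r = k and θ = π/4.

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z) tuples."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(p):
    a, b, c, d = p
    return (a, -b, -c, -d)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

def f4(q):
    """The arithmetic conjugation -(q + i q i + j q j + k q k)/2."""
    terms = [q, qmul(qmul(i, q), i), qmul(qmul(j, q), j), qmul(qmul(k, q), k)]
    return tuple(-0.5 * sum(t[n] for t in terms) for n in range(4))

def screw(q, r, theta, x):
    """q |-> u^{-1} q u + x r with the versor u = cos(theta) + r sin(theta):
    rotation by 2*theta about the unit axis r, then translation x along it."""
    c, s = math.cos(theta), math.sin(theta)
    u = (c, s * r[1], s * r[2], s * r[3])
    rotated = qmul(qmul(qconj(u), q), u)   # u^{-1} = conj(u) for a unit quaternion
    return tuple(rc + x * rr for rc, rr in zip(rotated, r))
```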
Just as the representation of a Euclidean plane isometry as a rotation is a matter of complex number arithmetic, so Chasles' theorem, and the screw axis required, is a matter of quaternion arithmetic with homographies: Let s be a right versor, or square root of minus one, perpendicular to r, with t = rs. Consider the axis passing through s and parallel to r. Rotation about it is expressed by the homography composition ( 1 0 − s 1 ) ( u 0 0 u ) ( 1 0 s 1 ) = ( u 0 z u ) , {\displaystyle {\begin{pmatrix}1&0\\-s&1\end{pmatrix}}{\begin{pmatrix}u&0\\0&u\end{pmatrix}}{\begin{pmatrix}1&0\\s&1\end{pmatrix}}={\begin{pmatrix}u&0\\z&u\end{pmatrix}},} where z = u s − s u = sin θ ( r s − s r ) = 2 t sin θ . {\displaystyle z=us-su=\sin \theta (rs-sr)=2t\sin \theta .} Now in the (s,t)-plane the parameter θ traces out a circle u − 1 z = u − 1 ( 2 t sin θ ) = 2 sin θ ( t cos θ − s sin θ ) {\displaystyle u^{-1}z=u^{-1}(2t\sin \theta )=2\sin \theta (t\cos \theta -s\sin \theta )} in the half-plane { w t + x s : x > 0 } . {\displaystyle \lbrace wt+xs:x>0\rbrace .} Any p in this half-plane lies on a ray from the origin through the circle { u − 1 z : 0 < θ < π } {\displaystyle \lbrace u^{-1}z:0<\theta <\pi \rbrace } and can be written p = a u − 1 z , a > 0. {\displaystyle p=au^{-1}z,\ \ a>0.} Then up = az, with ( u 0 a z u ) {\displaystyle {\begin{pmatrix}u&0\\az&u\end{pmatrix}}} as the homography expressing conjugation of a rotation by a translation p. == The derivative for quaternions == Since the time of Hamilton, it has been realized that requiring the independence of the derivative from the path that a differential follows toward zero is too restrictive: it excludes even f ( q ) = q 2 {\displaystyle \ f(q)=q^{2}\ } from differentiation. Therefore, a direction-dependent derivative is necessary for functions of a quaternion variable. Considering the increment of polynomial function of quaternionic argument shows that the increment is a linear map of increment of the argument. 
From this, a definition can be made: A continuous function f : H → H {\displaystyle \ f:\mathbb {H} \rightarrow \mathbb {H} \ } is called differentiable on the set U ⊂ H , {\displaystyle \ U\subset \mathbb {H} \ ,} if at every point x ∈ U , {\displaystyle \ x\in U\ ,} an increment of the function f {\displaystyle \ f\ } corresponding to a quaternion increment h {\displaystyle \ h\ } of its argument, can be represented as f ( x + h ) − f ( x ) = d f ( x ) d x ∘ h + o ( h ) {\displaystyle f(x+h)-f(x)={\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h+o(h)} where d f ( x ) d x : H → H {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}:\mathbb {H} \rightarrow \mathbb {H} } is linear map of quaternion algebra H , {\displaystyle \ \mathbb {H} \ ,} and o : H → H {\displaystyle \ o:\mathbb {H} \rightarrow \mathbb {H} \ } represents some continuous map such that lim a → 0 | o ( a ) | | a | = 0 , {\displaystyle \lim _{a\rightarrow 0}{\frac {\ \left|\ o(a)\ \right|\ }{\left|\ a\ \right|}}=0\ ,} and the notation ∘ h {\displaystyle \ \circ h\ } denotes ... The linear map d f ( x ) d x {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}} is called the derivative of the map f . {\displaystyle \ f~.} On the quaternions, the derivative may be expressed as d f ( x ) d x = ∑ s d s 0 f ( x ) d x ⊗ d s 1 f ( x ) d x {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}=\sum _{s}{\frac {\operatorname {d} _{s0}f(x)}{\operatorname {d} x}}\otimes {\frac {\operatorname {d} _{s1}f(x)}{\operatorname {d} x}}} Therefore, the differential of the map f {\displaystyle \ f\ } may be expressed as follows, with brackets on either side. 
d f ( x ) d x ∘ d x = ( ∑ s d s 0 f ( x ) d x ⊗ d s 1 f ( x ) d x ) ∘ d x = ∑ s d s 0 f ( x ) d x ( d x ) d s 1 f ( x ) d x {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ \operatorname {d} x=\left(\sum _{s}{\frac {\operatorname {d} _{s0}f(x)}{\operatorname {d} x}}\otimes {\frac {\operatorname {d} _{s1}f(x)}{\operatorname {d} x}}\right)\circ \operatorname {d} x=\sum _{s}{\frac {\operatorname {d} _{s0}f(x)}{\operatorname {d} x}}\left(\operatorname {d} x\right){\frac {\operatorname {d} _{s1}f(x)}{\operatorname {d} x}}} The number of terms in the sum will depend on the function f . {\displaystyle \ f~.} The expressions d s p f ( x ) d x for p = 0 , 1 {\displaystyle ~~{\frac {\operatorname {d} _{sp}f(x)}{\operatorname {d} x}}~~{\mathsf {\ for\ }}~~p=0,1~~} are called the components of the derivative. The derivative of a quaternionic function is defined by the expression d f ( x ) d x ∘ h = lim t → 0 ( f ( x + t h ) − f ( x ) t ) {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h=\lim _{t\to 0}\left(\ {\frac {\ f(x+t\ h)-f(x)\ }{t}}\ \right)} where the variable t {\displaystyle \ t\ } is a real scalar. 
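The limit definition can be approximated numerically. For f(q) = q² the increment is (x + th)² − x² = t(xh + hx) + t²h², so the directional derivative should equal xh + hx. The tuple-based helpers below are our own sketch.

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z) tuples."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def directional_derivative(f, x, h, t=1e-6):
    """Finite-t version of lim_{t -> 0} (f(x + t h) - f(x)) / t, t a real scalar."""
    xt = tuple(xc + t * hc for xc, hc in zip(x, h))
    return tuple((yc - fc) / t for yc, fc in zip(f(xt), f(x)))

def square(q):
    return qmul(q, q)

x = (1.0, 2.0, -1.0, 0.5)
h = (0.0, 1.0, 3.0, -2.0)
expected = tuple(a + b for a, b in zip(qmul(x, h), qmul(h, x)))  # x h + h x
got = directional_derivative(square, x, h)
```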
The following equations then hold: d ( f ( x ) + g ( x ) ) d x = d f ( x ) d x + d g ( x ) d x {\displaystyle {\frac {\operatorname {d} \left(f(x)+g(x)\right)}{\operatorname {d} x}}={\frac {\operatorname {d} f(x)}{\operatorname {d} x}}+{\frac {\operatorname {d} g(x)}{\operatorname {d} x}}} d ( f ( x ) g ( x ) ) d x = d f ( x ) d x g ( x ) + f ( x ) d g ( x ) d x {\displaystyle {\frac {\operatorname {d} \left(f(x)\ g(x)\right)}{\operatorname {d} x}}={\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\ g(x)+f(x)\ {\frac {\operatorname {d} g(x)}{\operatorname {d} x}}} d ( f ( x ) g ( x ) ) d x ∘ h = ( d f ( x ) d x ∘ h ) g ( x ) + f ( x ) ( d g ( x ) d x ∘ h ) {\displaystyle {\frac {\operatorname {d} \left(f(x)\ g(x)\right)}{\operatorname {d} x}}\circ h=\left({\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h\right)\ g(x)+f(x)\left({\frac {\operatorname {d} g(x)}{\operatorname {d} x}}\circ h\right)} d ( a f ( x ) b ) d x = a d f ( x ) d x b {\displaystyle {\frac {\operatorname {d} \left(a\ f(x)\ b\right)}{\operatorname {d} x}}=a\ {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\ b} d ( a f ( x ) b ) d x ∘ h = a ( d f ( x ) d x ∘ h ) b {\displaystyle {\frac {\operatorname {d} \left(a\ f(x)\ b\right)}{\operatorname {d} x}}\circ h=a\left({\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h\right)b} For the function f ( x ) = a x b , {\displaystyle \ f(x)=a\ x\ b\ ,} where a {\displaystyle \ a\ } and b {\displaystyle \ b\ } are constant quaternions, the derivative is d f ( x ) d x ∘ h = a h b {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h=a\ h\ b} and so the components are d 00 f ( x ) d x = a , d 01 f ( x ) d x = b . {\displaystyle {\frac {\operatorname {d} _{00}f(x)}{\operatorname {d} x}}=a,\quad {\frac {\operatorname {d} _{01}f(x)}{\operatorname {d} x}}=b.} Similarly, for the function f ( x ) = x 2 , {\displaystyle \ f(x)=x^{2}\ ,} the derivative is d f ( x ) d x ∘ h = x h + h x {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h=x\ h+h\ x} and the components are d 00 f ( x ) d x = x , d 01 f ( x ) d x = 1 , d 10 f ( x ) d x = 1 , d 11 f ( x ) d x = x . {\displaystyle {\frac {\operatorname {d} _{00}f(x)}{\operatorname {d} x}}=x,\quad {\frac {\operatorname {d} _{01}f(x)}{\operatorname {d} x}}=1,\quad {\frac {\operatorname {d} _{10}f(x)}{\operatorname {d} x}}=1,\quad {\frac {\operatorname {d} _{11}f(x)}{\operatorname {d} x}}=x.} Finally, for the function f ( x ) = x − 1 , {\displaystyle \ f(x)=x^{-1}\ ,} the derivative is d f ( x ) d x ∘ h = − x − 1 h x − 1 {\displaystyle {\frac {\operatorname {d} f(x)}{\operatorname {d} x}}\circ h=-x^{-1}\ h\ x^{-1}} and the components are d 00 f ( x ) d x = − x − 1 , d 01 f ( x ) d x = x − 1 . {\displaystyle {\frac {\operatorname {d} _{00}f(x)}{\operatorname {d} x}}=-x^{-1},\quad {\frac {\operatorname {d} _{01}f(x)}{\operatorname {d} x}}=x^{-1}.} == See also == Cayley transform Quaternionic manifold == Notes == == Citations == == References == Arnold, Vladimir (1995), "The geometry of spherical curves and the algebra of quaternions", Russian Mathematical Surveys, 50 
(1), translated by Porteous, Ian R.: 1–68, doi:10.1070/RM1995v050n01ABEH001662, S2CID 250897899, Zbl 0848.58005 Cayley, Arthur (1848), "On the application of quaternions to the theory of rotation", London and Edinburgh Philosophical Magazine, Series 3, 33 (221): 196–200, doi:10.1080/14786444808645844 Deavours, C.A. (1973), "The quaternion calculus", American Mathematical Monthly, 80 (9), Washington, DC: Mathematical Association of America: 995–1008, doi:10.2307/2318774, ISSN 0002-9890, JSTOR 2318774, Zbl 0282.30040 Du Val, Patrick (1964), Homographies, Quaternions and Rotations, Oxford Mathematical Monographs, Oxford: Clarendon Press, MR 0169108, Zbl 0128.15403 Fueter, Rudolf (1936), "Über die analytische Darstellung der regulären Funktionen einer Quaternionenvariablen", Commentarii Mathematici Helvetici (in German), 8: 371–378, doi:10.1007/BF01199562, S2CID 121227604, Zbl 0014.16702 Gentili, Graziano; Stoppato, Caterina; Struppa, Daniele C. (2013), Regular Functions of a Quaternionic Variable, Berlin: Springer, doi:10.1007/978-3-642-33871-7, ISBN 978-3-642-33870-0, S2CID 118710284, Zbl 1269.30001 Gormley, P.G. (1947), "Stereographic projection and the linear fractional group of transformations of quaternions", Proceedings of the Royal Irish Academy, Section A, 51: 67–85, JSTOR 20488472 Gürlebeck, Klaus; Sprößig, Wolfgang (1990), Quaternionic analysis and elliptic boundary value problems, Basel: Birkhäuser, ISBN 978-3-7643-2382-0, Zbl 0850.35001 John C.Holladay (1957), "The Stone–Weierstrass theorem for quaternions" (PDF), Proc. Amer. Math. Soc., 8: 656, doi:10.1090/S0002-9939-1957-0087047-7. 
Hamilton, William Rowan (1853), Lectures on Quaternions, Dublin: Hodges and Smith, OL 23416635M Hamilton, William Rowan (1866), Hamilton, William Edwin (ed.), Elements of Quaternions, London: Longmans, Green, & Company, Zbl 1204.01046 Joly, Charles Jasper (1903), "Quaternions and projective geometry", Philosophical Transactions of the Royal Society of London, 201 (331–345): 223–327, Bibcode:1903RSPTA.201..223J, doi:10.1098/rsta.1903.0018, JFM 34.0092.01, JSTOR 90902 Laisant, Charles-Ange (1881), Introduction à la Méthode des Quaternions (in French), Paris: Gauthier-Villars, JFM 13.0524.02 Porter, R. Michael (1998), "Möbius invariant quaternion geometry" (PDF), Conformal Geometry and Dynamics, 2 (6): 89–196, doi:10.1090/S1088-4173-98-00032-0, Zbl 0910.53005 Sudbery, A. (1979), "Quaternionic analysis", Mathematical Proceedings of the Cambridge Philosophical Society, 85 (2): 199–225, Bibcode:1979MPCPS..85..199S, doi:10.1017/S0305004100055638, hdl:10338.dmlcz/101933, S2CID 7606387, Zbl 0399.30038
|
Wikipedia:Quaternionic matrix#0
|
A quaternionic matrix is a matrix whose elements are quaternions. == Matrix operations == The quaternions form a noncommutative ring, and therefore addition and multiplication can be defined for quaternionic matrices as for matrices over any ring. Addition. The sum of two quaternionic matrices A and B is defined in the usual way by element-wise addition: ( A + B ) i j = A i j + B i j . {\displaystyle (A+B)_{ij}=A_{ij}+B_{ij}.\,} Multiplication. The product of two quaternionic matrices A and B also follows the usual definition for matrix multiplication. For it to be defined, the number of columns of A must equal the number of rows of B. Then the entry in the ith row and jth column of the product is the dot product of the ith row of the first matrix with the jth column of the second matrix. Specifically: ( A B ) i j = ∑ s A i s B s j . {\displaystyle (AB)_{ij}=\sum _{s}A_{is}B_{sj}.\,} For example, for U = ( u 11 u 12 u 21 u 22 ) , V = ( v 11 v 12 v 21 v 22 ) , {\displaystyle U={\begin{pmatrix}u_{11}&u_{12}\\u_{21}&u_{22}\\\end{pmatrix}},\quad V={\begin{pmatrix}v_{11}&v_{12}\\v_{21}&v_{22}\\\end{pmatrix}},} the product is U V = ( u 11 v 11 + u 12 v 21 u 11 v 12 + u 12 v 22 u 21 v 11 + u 22 v 21 u 21 v 12 + u 22 v 22 ) . {\displaystyle UV={\begin{pmatrix}u_{11}v_{11}+u_{12}v_{21}&u_{11}v_{12}+u_{12}v_{22}\\u_{21}v_{11}+u_{22}v_{21}&u_{21}v_{12}+u_{22}v_{22}\\\end{pmatrix}}.} Since quaternionic multiplication is noncommutative, care must be taken to preserve the order of the factors when computing the product of matrices. The identity for this multiplication is, as expected, the diagonal matrix I = diag(1, 1, ... , 1). Multiplication follows the usual laws of associativity and distributivity. The trace of a matrix is defined as the sum of the diagonal elements, but in general trace ( A B ) ≠ trace ( B A ) . 
{\displaystyle \operatorname {trace} (AB)\neq \operatorname {trace} (BA).} Left scalar multiplication, and right scalar multiplication are defined by ( c A ) i j = c A i j , ( A c ) i j = A i j c . {\displaystyle (cA)_{ij}=cA_{ij},\qquad (Ac)_{ij}=A_{ij}c.\,} Again, since multiplication is not commutative some care must be taken in the order of the factors. == Determinants == There is no natural way to define a determinant for (square) quaternionic matrices so that the values of the determinant are quaternions. Complex valued determinants can be defined however. The quaternion a + bi + cj + dk can be represented as the 2×2 complex matrix [ a + b i c + d i − c + d i a − b i ] . {\displaystyle {\begin{bmatrix}~~a+bi&c+di\\-c+di&a-bi\end{bmatrix}}.} This defines a map Ψmn from the m by n quaternionic matrices to the 2m by 2n complex matrices by replacing each entry in the quaternionic matrix by its 2 by 2 complex representation. The complex valued determinant of a square quaternionic matrix A is then defined as det(Ψ(A)). Many of the usual laws for determinants hold; in particular, an n by n matrix is invertible if and only if its determinant is nonzero. == Applications == Quaternionic matrices are used in quantum mechanics and in the treatment of multibody problems. == References ==
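The complex determinant det(Ψ(A)) described above can be computed directly from the 2×2 block representation. The sketch below is our own (helper names included; the Laplace expansion is only sensible for tiny matrices). It also illustrates two standard facts consistent with the text: for a single quaternion q, det Ψ([q]) equals the squared norm |q|², and a quaternionic matrix with two equal rows maps to a singular complex matrix.

```python
def psi(q):
    """Psi on one quaternion (a, b, c, d) = a + b i + c j + d k: its 2x2 complex block."""
    a, b, c, d = q
    return [[complex(a, b), complex(c, d)],
            [complex(-c, d), complex(a, -b)]]

def psi_matrix(M):
    """Apply psi entrywise: an m x n quaternionic matrix becomes 2m x 2n complex."""
    out = []
    for row in M:
        blocks = [psi(q) for q in row]
        for r in range(2):
            out.append([blk[r][s] for blk in blocks for s in range(2)])
    return out

def det(A):
    """Laplace expansion along the first row (fine for small matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** jj * A[0][jj] * det([row[:jj] + row[jj + 1:] for row in A[1:]])
               for jj in range(len(A)))
```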
|
Wikipedia:Quaternionic vector space#0
|
In the mathematical field of representation theory, a quaternionic representation is a representation on a complex vector space V with an invariant quaternionic structure, i.e., an antilinear equivariant map j : V → V {\displaystyle j\colon V\to V} which satisfies j 2 = − 1. {\displaystyle j^{2}=-1.} Together with the imaginary unit i and the antilinear map k := ij, j equips V with the structure of a quaternionic vector space (i.e., V becomes a module over the division algebra of quaternions). From this point of view, a quaternionic representation of a group G is a group homomorphism φ: G → GL(V, H), the group of invertible quaternion-linear transformations of V. In particular, a quaternionic matrix representation of G assigns a square matrix of quaternions ρ(g) to each element g of G such that ρ(e) is the identity matrix and ρ ( g h ) = ρ ( g ) ρ ( h ) for all g , h ∈ G . {\displaystyle \rho (gh)=\rho (g)\rho (h){\text{ for all }}g,h\in G.} Quaternionic representations of associative and Lie algebras can be defined in a similar way. == Properties and related concepts == If V is a unitary representation and the quaternionic structure j is a unitary operator, then V admits an invariant complex symplectic form ω, and hence is a symplectic representation. This always holds if V is a representation of a compact group (e.g. a finite group) and in this case quaternionic representations are also known as symplectic representations. Such representations, amongst irreducible representations, can be picked out by the Frobenius-Schur indicator. Quaternionic representations are similar to real representations in that they are isomorphic to their complex conjugate representation. Here a real representation is taken to be a complex representation with an invariant real structure, i.e., an antilinear equivariant map j : V → V {\displaystyle j\colon V\to V} which satisfies j 2 = + 1. 
{\displaystyle j^{2}=+1.} A representation which is isomorphic to its complex conjugate, but which is not a real representation, is sometimes called a pseudoreal representation. Real and pseudoreal representations of a group G can be understood by viewing them as representations of the real group algebra R[G]. Such a representation will be a direct sum of central simple R-algebras, which, by the Artin–Wedderburn theorem, must be matrix algebras over the real numbers or the quaternions. Thus a real or pseudoreal representation is a direct sum of irreducible real representations and irreducible quaternionic representations. It is real if no quaternionic representations occur in the decomposition. == Examples == A common example involves the quaternionic representation of rotations in three dimensions. Each (proper) rotation is represented by a quaternion with unit norm. There is an obvious one-dimensional quaternionic vector space, namely the space H of quaternions themselves under left multiplication. By restricting this to the unit quaternions, we obtain a quaternionic representation of the spinor group Spin(3). This representation ρ: Spin(3) → GL(1,H) also happens to be a unitary quaternionic representation because ρ ( g ) † ρ ( g ) = 1 {\displaystyle \rho (g)^{\dagger }\rho (g)=\mathbf {1} } for all g in Spin(3). Another unitary example is the spin representation of Spin(5). An example of a non-unitary quaternionic representation would be the two-dimensional irreducible representation of Spin(5,1). More generally, the spin representations of Spin(d) are quaternionic when d equals 3 + 8k, 4 + 8k, or 5 + 8k, where k is an integer. In physics, one often encounters the spinors of Spin(d, 1). These representations have the same type of real or quaternionic structure as the spinors of Spin(d − 1). 
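The unitarity property ρ(g)†ρ(g) = 1 for the Spin(3) example can be checked directly with the Hamilton product. The sketch below is an informal illustration (the component ordering (a, b, c, d) for a + bi + cj + dk is a convention assumed here, and for a 1×1 quaternionic matrix the dagger is just quaternionic conjugation):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d) for a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qconj(q):
    """Quaternionic conjugate a - bi - cj - dk."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

# A unit quaternion g, i.e. a point of Spin(3), acting by left multiplication:
g = np.array([1.0, 2.0, 2.0, 4.0])
g /= np.linalg.norm(g)

# rho(g)† rho(g) = conj(g) g = |g|^2 = 1, the identity quaternion (1, 0, 0, 0).
assert np.allclose(qmul(qconj(g), g), [1.0, 0.0, 0.0, 0.0])
```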
Among the compact real forms of the simple Lie groups, irreducible quaternionic representations only exist for the Lie groups of type A4k+1, B4k+1, B4k+2, Ck, D4k+2, and E7. == References == Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Serre, Jean-Pierre (1977), Linear Representations of Finite Groups, Springer-Verlag, ISBN 978-0-387-90190-9. == See also == Symplectic vector space
|
Wikipedia:Quillen spectral sequence#0
|
In the area of mathematics known as K-theory, the Quillen spectral sequence, also called the Brown–Gersten–Quillen or BGQ spectral sequence (named after Kenneth Brown, Stephen Gersten, and Daniel Quillen), is a spectral sequence converging to the sheaf cohomology of a type of topological space that occurs in algebraic geometry. It is used in calculating the homotopy properties of a simplicial group. == References == Quillen, Daniel (1973). "Higher algebraic K-theory: I". Algebraic K-Theory I. Proceedings of the Conference Held at the Seattle Research Center of Battelle Memorial Institute, August 28 - September 8, 1972. Springer-Verlag. pp. 85–147. Brown, Kenneth S.; Gersten, Stephen M. (1973). "Algebraic K-theory as generalized sheaf cohomology". Algebraic K-theory, I: Higher K-theories (Proc. Conf., Battelle Memorial Inst., Seattle, Wash., 1972). Lecture Notes in Math. Vol. 341. Berlin: Springer. pp. 266–292. MR 0347943. == External links == A spectral sequence of Quillen at the Stacks Project
|
Wikipedia:Quillen's lemma#0
|
In algebra, Quillen's lemma states that an endomorphism of a simple module over the enveloping algebra of a finite-dimensional Lie algebra over a field k is algebraic over k. In contrast to a version of Schur's lemma due to Dixmier, it does not require k to be uncountable. Quillen's original short proof uses generic flatness. == References == Quillen, D. (1969). "On the endomorphism ring of a simple module over an enveloping algebra". Proceedings of the American Mathematical Society. 21: 171–172. doi:10.1090/S0002-9939-1969-0238892-4.
|
Wikipedia:Quintuple product identity#0
|
In mathematics, the Watson quintuple product identity is an infinite product identity introduced by Watson (1929) and rediscovered by Bailey (1951) and Gordon (1961). It is analogous to the Jacobi triple product identity, and is the Macdonald identity for a certain non-reduced affine root system. It is related to Euler's pentagonal number theorem. == Statement == ∏ n ≥ 1 ( 1 − s n ) ( 1 − s n t ) ( 1 − s n − 1 t − 1 ) ( 1 − s 2 n − 1 t 2 ) ( 1 − s 2 n − 1 t − 2 ) = ∑ n ∈ Z s ( 3 n 2 + n ) / 2 ( t 3 n − t − 3 n − 1 ) {\displaystyle \prod _{n\geq 1}(1-s^{n})(1-s^{n}t)(1-s^{n-1}t^{-1})(1-s^{2n-1}t^{2})(1-s^{2n-1}t^{-2})=\sum _{n\in \mathbf {Z} }s^{(3n^{2}+n)/2}(t^{3n}-t^{-3n-1})} == References == Bailey, W. N. (1951), "On the simplification of some identities of the Rogers-Ramanujan type", Proceedings of the London Mathematical Society, Third Series, 1: 217–221, doi:10.1112/plms/s3-1.1.217, ISSN 0024-6115, MR 0043839 Carlitz, L.; Subbarao, M. V. (1972), "A simple proof of the quintuple product identity", Proceedings of the American Mathematical Society, 32 (1): 42–44, doi:10.2307/2038301, ISSN 0002-9939, JSTOR 2038301, MR 0289316 Gordon, Basil (1961), "Some identities in combinatorial analysis", The Quarterly Journal of Mathematics, Second Series, 12: 285–290, doi:10.1093/qmath/12.1.285, ISSN 0033-5606, MR 0136551 Watson, G. N. (1929), "Theorems stated by Ramanujan. VII: Theorems on continued fractions.", Journal of the London Mathematical Society, 4 (1): 39–48, doi:10.1112/jlms/s1-4.1.39, ISSN 0024-6107, JFM 55.0273.01 Foata, D., & Han, G. N. (2001). The triple, quintuple and septuple product identities revisited. In The Andrews Festschrift (pp. 323–334). Springer, Berlin, Heidelberg. Cooper, S. (2006). The quintuple product identity. International Journal of Number Theory, 2(01), 115-161. == See also == Hirschhorn–Farkas–Kra septagonal numbers identity == Further reading == Subbarao, M. V., & Vidyasagar, M. (1970). On Watson’s quintuple product identity. 
Proceedings of the American Mathematical Society, 26(1), 23-27. Hirschhorn, M. D. (1988). A generalisation of the quintuple product identity. Journal of the Australian Mathematical Society, 44(1), 42-45. Alladi, K. (1996). The quintuple product identity and shifted partition functions. Journal of Computational and Applied Mathematics, 68(1-2), 3-13. Farkas, H., & Kra, I. (1999). On the quintuple product identity. Proceedings of the American Mathematical Society, 127(3), 771-778. Chen, W. Y., Chu, W., & Gu, N. S. (2005). Finite form of the quintuple product identity. arXiv preprint math/0504277.
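The identity stated above can be checked numerically by truncating both sides. The sketch below is an informal illustration; the sample point s = 0.3, t = 0.7 is an arbitrary choice with |s| < 1 so that both sides converge.

```python
# Truncated numerical check of the quintuple product identity at a sample point.
s, t = 0.3, 0.7

lhs = 1.0
for n in range(1, 80):
    lhs *= (1 - s**n) * (1 - s**n * t) * (1 - s**(n - 1) / t)
    lhs *= (1 - s**(2*n - 1) * t**2) * (1 - s**(2*n - 1) / t**2)

# n(3n + 1) is always even, so the exponent (3n^2 + n)/2 is an integer.
rhs = sum(s**((3*n*n + n) // 2) * (t**(3*n) - t**(-3*n - 1))
          for n in range(-40, 41))

assert abs(lhs - rhs) < 1e-12
```

At this sample point the two truncated sides agree to roughly machine precision.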
|
Wikipedia:Quotient rule#0
|
In calculus, the quotient rule is a method of finding the derivative of a function that is the ratio of two differentiable functions. Let h ( x ) = f ( x ) g ( x ) {\displaystyle h(x)={\frac {f(x)}{g(x)}}} , where both f and g are differentiable and g ( x ) ≠ 0. {\displaystyle g(x)\neq 0.} The quotient rule states that the derivative of h(x) is h ′ ( x ) = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) ( g ( x ) ) 2 . {\displaystyle h'(x)={\frac {f'(x)g(x)-f(x)g'(x)}{(g(x))^{2}}}.} It is provable in many ways by using other derivative rules. == Examples == === Example 1: Basic example === Given h ( x ) = e x x 2 {\displaystyle h(x)={\frac {e^{x}}{x^{2}}}} , let f ( x ) = e x , g ( x ) = x 2 {\displaystyle f(x)=e^{x},g(x)=x^{2}} , then using the quotient rule: d d x ( e x x 2 ) = ( d d x e x ) ( x 2 ) − ( e x ) ( d d x x 2 ) ( x 2 ) 2 = ( e x ) ( x 2 ) − ( e x ) ( 2 x ) x 4 = x 2 e x − 2 x e x x 4 = x e x − 2 e x x 3 = e x ( x − 2 ) x 3 . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\left({\frac {e^{x}}{x^{2}}}\right)&={\frac {\left({\frac {d}{dx}}e^{x}\right)(x^{2})-(e^{x})\left({\frac {d}{dx}}x^{2}\right)}{(x^{2})^{2}}}\\&={\frac {(e^{x})(x^{2})-(e^{x})(2x)}{x^{4}}}\\&={\frac {x^{2}e^{x}-2xe^{x}}{x^{4}}}\\&={\frac {xe^{x}-2e^{x}}{x^{3}}}\\&={\frac {e^{x}(x-2)}{x^{3}}}.\end{aligned}}} === Example 2: Derivative of tangent function === The quotient rule can be used to find the derivative of tan x = sin x cos x {\displaystyle \tan x={\frac {\sin x}{\cos x}}} as follows: d d x tan x = d d x ( sin x cos x ) = ( d d x sin x ) ( cos x ) − ( sin x ) ( d d x cos x ) cos 2 x = ( cos x ) ( cos x ) − ( sin x ) ( − sin x ) cos 2 x = cos 2 x + sin 2 x cos 2 x = 1 cos 2 x = sec 2 x . 
{\displaystyle {\begin{aligned}{\frac {d}{dx}}\tan x&={\frac {d}{dx}}\left({\frac {\sin x}{\cos x}}\right)\\&={\frac {\left({\frac {d}{dx}}\sin x\right)(\cos x)-(\sin x)\left({\frac {d}{dx}}\cos x\right)}{\cos ^{2}x}}\\&={\frac {(\cos x)(\cos x)-(\sin x)(-\sin x)}{\cos ^{2}x}}\\&={\frac {\cos ^{2}x+\sin ^{2}x}{\cos ^{2}x}}\\&={\frac {1}{\cos ^{2}x}}=\sec ^{2}x.\end{aligned}}} == Reciprocal rule == The reciprocal rule is a special case of the quotient rule in which the numerator f ( x ) = 1 {\displaystyle f(x)=1} . Applying the quotient rule gives h ′ ( x ) = d d x [ 1 g ( x ) ] = 0 ⋅ g ( x ) − 1 ⋅ g ′ ( x ) g ( x ) 2 = − g ′ ( x ) g ( x ) 2 . {\displaystyle h'(x)={\frac {d}{dx}}\left[{\frac {1}{g(x)}}\right]={\frac {0\cdot g(x)-1\cdot g'(x)}{g(x)^{2}}}={\frac {-g'(x)}{g(x)^{2}}}.} Utilizing the chain rule yields the same result. == Proofs == === Proof from derivative definition and limit properties === Let h ( x ) = f ( x ) g ( x ) . {\displaystyle h(x)={\frac {f(x)}{g(x)}}.} Applying the definition of the derivative and properties of limits gives the following proof, with the term f ( x ) g ( x ) {\displaystyle f(x)g(x)} added and subtracted to allow splitting and factoring in subsequent steps without affecting the value: h ′ ( x ) = lim k → 0 h ( x + k ) − h ( x ) k = lim k → 0 f ( x + k ) g ( x + k ) − f ( x ) g ( x ) k = lim k → 0 f ( x + k ) g ( x ) − f ( x ) g ( x + k ) k ⋅ g ( x ) g ( x + k ) = lim k → 0 f ( x + k ) g ( x ) − f ( x ) g ( x + k ) k ⋅ lim k → 0 1 g ( x ) g ( x + k ) = lim k → 0 [ f ( x + k ) g ( x ) − f ( x ) g ( x ) + f ( x ) g ( x ) − f ( x ) g ( x + k ) k ] ⋅ 1 [ g ( x ) ] 2 = [ lim k → 0 f ( x + k ) g ( x ) − f ( x ) g ( x ) k − lim k → 0 f ( x ) g ( x + k ) − f ( x ) g ( x ) k ] ⋅ 1 [ g ( x ) ] 2 = [ lim k → 0 f ( x + k ) − f ( x ) k ⋅ g ( x ) − f ( x ) ⋅ lim k → 0 g ( x + k ) − g ( x ) k ] ⋅ 1 [ g ( x ) ] 2 = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 . 
{\displaystyle {\begin{aligned}h'(x)&=\lim _{k\to 0}{\frac {h(x+k)-h(x)}{k}}\\&=\lim _{k\to 0}{\frac {{\frac {f(x+k)}{g(x+k)}}-{\frac {f(x)}{g(x)}}}{k}}\\&=\lim _{k\to 0}{\frac {f(x+k)g(x)-f(x)g(x+k)}{k\cdot g(x)g(x+k)}}\\&=\lim _{k\to 0}{\frac {f(x+k)g(x)-f(x)g(x+k)}{k}}\cdot \lim _{k\to 0}{\frac {1}{g(x)g(x+k)}}\\&=\lim _{k\to 0}\left[{\frac {f(x+k)g(x)-f(x)g(x)+f(x)g(x)-f(x)g(x+k)}{k}}\right]\cdot {\frac {1}{[g(x)]^{2}}}\\&=\left[\lim _{k\to 0}{\frac {f(x+k)g(x)-f(x)g(x)}{k}}-\lim _{k\to 0}{\frac {f(x)g(x+k)-f(x)g(x)}{k}}\right]\cdot {\frac {1}{[g(x)]^{2}}}\\&=\left[\lim _{k\to 0}{\frac {f(x+k)-f(x)}{k}}\cdot g(x)-f(x)\cdot \lim _{k\to 0}{\frac {g(x+k)-g(x)}{k}}\right]\cdot {\frac {1}{[g(x)]^{2}}}\\&={\frac {f'(x)g(x)-f(x)g'(x)}{[g(x)]^{2}}}.\end{aligned}}} The limit evaluation lim k → 0 1 g ( x + k ) g ( x ) = 1 [ g ( x ) ] 2 {\displaystyle \lim _{k\to 0}{\frac {1}{g(x+k)g(x)}}={\frac {1}{[g(x)]^{2}}}} is justified by the differentiability of g ( x ) {\displaystyle g(x)} , implying continuity, which can be expressed as lim k → 0 g ( x + k ) = g ( x ) {\displaystyle \lim _{k\to 0}g(x+k)=g(x)} . === Proof using implicit differentiation === Let h ( x ) = f ( x ) g ( x ) , {\displaystyle h(x)={\frac {f(x)}{g(x)}},} so that f ( x ) = g ( x ) h ( x ) . {\displaystyle f(x)=g(x)h(x).} The product rule then gives f ′ ( x ) = g ′ ( x ) h ( x ) + g ( x ) h ′ ( x ) . {\displaystyle f'(x)=g'(x)h(x)+g(x)h'(x).} Solving for h ′ ( x ) {\displaystyle h'(x)} and substituting back for h ( x ) {\displaystyle h(x)} gives: h ′ ( x ) = f ′ ( x ) − g ′ ( x ) h ( x ) g ( x ) = f ′ ( x ) − g ′ ( x ) ⋅ f ( x ) g ( x ) g ( x ) = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 . {\displaystyle {\begin{aligned}h'(x)&={\frac {f'(x)-g'(x)h(x)}{g(x)}}\\&={\frac {f'(x)-g'(x)\cdot {\frac {f(x)}{g(x)}}}{g(x)}}\\&={\frac {f'(x)g(x)-f(x)g'(x)}{[g(x)]^{2}}}.\end{aligned}}} === Proof using the reciprocal rule or chain rule === Let h ( x ) = f ( x ) g ( x ) = f ( x ) ⋅ 1 g ( x ) . 
{\displaystyle h(x)={\frac {f(x)}{g(x)}}=f(x)\cdot {\frac {1}{g(x)}}.} Then the product rule gives h ′ ( x ) = f ′ ( x ) ⋅ 1 g ( x ) + f ( x ) ⋅ d d x [ 1 g ( x ) ] . {\displaystyle h'(x)=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot {\frac {d}{dx}}\left[{\frac {1}{g(x)}}\right].} To evaluate the derivative in the second term, apply the reciprocal rule, or the power rule along with the chain rule: d d x [ 1 g ( x ) ] = − 1 g ( x ) 2 ⋅ g ′ ( x ) = − g ′ ( x ) g ( x ) 2 . {\displaystyle {\frac {d}{dx}}\left[{\frac {1}{g(x)}}\right]=-{\frac {1}{g(x)^{2}}}\cdot g'(x)={\frac {-g'(x)}{g(x)^{2}}}.} Substituting the result into the expression gives h ′ ( x ) = f ′ ( x ) ⋅ 1 g ( x ) + f ( x ) ⋅ [ − g ′ ( x ) g ( x ) 2 ] = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) g ( x ) 2 = g ( x ) g ( x ) ⋅ f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) g ( x ) 2 = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) g ( x ) 2 . {\displaystyle {\begin{aligned}h'(x)&=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot \left[{\frac {-g'(x)}{g(x)^{2}}}\right]\\&={\frac {f'(x)}{g(x)}}-{\frac {f(x)g'(x)}{g(x)^{2}}}\\&={\frac {g(x)}{g(x)}}\cdot {\frac {f'(x)}{g(x)}}-{\frac {f(x)g'(x)}{g(x)^{2}}}\\&={\frac {f'(x)g(x)-f(x)g'(x)}{g(x)^{2}}}.\end{aligned}}} === Proof by logarithmic differentiation === Let h ( x ) = f ( x ) g ( x ) . 
{\displaystyle h(x)={\frac {f(x)}{g(x)}}.} Taking the absolute value and natural logarithm of both sides of the equation gives ln | h ( x ) | = ln | f ( x ) g ( x ) | {\displaystyle \ln |h(x)|=\ln \left|{\frac {f(x)}{g(x)}}\right|} Applying properties of the absolute value and logarithms, ln | h ( x ) | = ln | f ( x ) | − ln | g ( x ) | {\displaystyle \ln |h(x)|=\ln |f(x)|-\ln |g(x)|} Taking the logarithmic derivative of both sides, h ′ ( x ) h ( x ) = f ′ ( x ) f ( x ) − g ′ ( x ) g ( x ) {\displaystyle {\frac {h'(x)}{h(x)}}={\frac {f'(x)}{f(x)}}-{\frac {g'(x)}{g(x)}}} Solving for h ′ ( x ) {\displaystyle h'(x)} and substituting back f ( x ) g ( x ) {\displaystyle {\tfrac {f(x)}{g(x)}}} for h ( x ) {\displaystyle h(x)} gives: h ′ ( x ) = h ( x ) [ f ′ ( x ) f ( x ) − g ′ ( x ) g ( x ) ] = f ( x ) g ( x ) [ f ′ ( x ) f ( x ) − g ′ ( x ) g ( x ) ] = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) g ( x ) 2 = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) g ( x ) 2 . {\displaystyle {\begin{aligned}h'(x)&=h(x)\left[{\frac {f'(x)}{f(x)}}-{\frac {g'(x)}{g(x)}}\right]\\&={\frac {f(x)}{g(x)}}\left[{\frac {f'(x)}{f(x)}}-{\frac {g'(x)}{g(x)}}\right]\\&={\frac {f'(x)}{g(x)}}-{\frac {f(x)g'(x)}{g(x)^{2}}}\\&={\frac {f'(x)g(x)-f(x)g'(x)}{g(x)^{2}}}.\end{aligned}}} Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because d d x ( ln | u | ) = u ′ u {\displaystyle {\tfrac {d}{dx}}(\ln |u|)={\tfrac {u'}{u}}} , which justifies taking the absolute value of the functions for logarithmic differentiation. == Higher order derivatives == Implicit differentiation can be used to compute the nth derivative of a quotient (partially in terms of its first n − 1 derivatives). 
For example, differentiating f = g h {\displaystyle f=gh} twice (resulting in f ″ = g ″ h + 2 g ′ h ′ + g h ″ {\displaystyle f''=g''h+2g'h'+gh''} ) and then solving for h ″ {\displaystyle h''} yields h ″ = ( f g ) ″ = f ″ − g ″ h − 2 g ′ h ′ g . {\displaystyle h''=\left({\frac {f}{g}}\right)''={\frac {f''-g''h-2g'h'}{g}}.} == See also == Chain rule – For derivatives of composed functions Differentiation of integrals – Problem in mathematics Differentiation rules – Rules for computing derivatives of functions General Leibniz rule – Generalization of the product rule in calculus Inverse functions and differentiation – Formula for the derivative of an inverse function Linearity of differentiation – Calculus property Product rule – Formula for the derivative of a product Reciprocal rule – Derivative method in calculus mathematics Table of derivatives – Rules for computing derivatives of functions Vector calculus identities – Mathematical identities == References ==
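The quotient rule and the closed form of Example 1 can be sanity-checked numerically. The following sketch is an informal illustration (the test point x0 = 1.5 and the step size are arbitrary choices): it compares the quotient-rule formula with a central finite difference.

```python
import math

def derivative(func, x, h=1e-6):
    """Central finite-difference approximation of func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

f, fp = math.exp, math.exp                     # f = e^x, so f' = e^x
g, gp = (lambda x: x * x), (lambda x: 2 * x)   # g = x^2, so g' = 2x

def quotient_rule(x):
    """h'(x) = (f'(x) g(x) - f(x) g'(x)) / g(x)^2."""
    return (fp(x) * g(x) - f(x) * gp(x)) / g(x) ** 2

x0 = 1.5
assert abs(quotient_rule(x0) - derivative(lambda x: f(x) / g(x), x0)) < 1e-6
# Closed form obtained in Example 1: h'(x) = e^x (x - 2) / x^3.
assert abs(quotient_rule(x0) - math.exp(x0) * (x0 - 2) / x0**3) < 1e-12
```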
|
Wikipedia:Quotient space (linear algebra)#0
|
In linear algebra, the quotient of a vector space V {\displaystyle V} by a subspace N {\displaystyle N} is a vector space obtained by "collapsing" N {\displaystyle N} to zero. The space obtained is called a quotient space and is denoted V / N {\displaystyle V/N} (read " V {\displaystyle V} mod N {\displaystyle N} " or " V {\displaystyle V} by N {\displaystyle N} "). == Definition == Formally, the construction is as follows. Let V {\displaystyle V} be a vector space over a field K {\displaystyle \mathbb {K} } , and let N {\displaystyle N} be a subspace of V {\displaystyle V} . We define an equivalence relation ∼ {\displaystyle \sim } on V {\displaystyle V} by stating that x ∼ y {\displaystyle x\sim y} iff x − y ∈ N {\displaystyle x-y\in N} . That is, x {\displaystyle x} is related to y {\displaystyle y} if and only if one can be obtained from the other by adding an element of N {\displaystyle N} . This definition implies that any element of N {\displaystyle N} is related to the zero vector; more precisely, all the vectors in N {\displaystyle N} get mapped into the equivalence class of the zero vector. The equivalence class – or, in this case, the coset – of x {\displaystyle x} is defined as [ x ] := { x + n : n ∈ N } {\displaystyle [x]:=\{x+n:n\in N\}} and is often denoted using the shorthand [ x ] = x + N {\displaystyle [x]=x+N} . The quotient space V / N {\displaystyle V/N} is then defined as V / ∼ {\displaystyle V/_{\sim }} , the set of all equivalence classes induced by ∼ {\displaystyle \sim } on V {\displaystyle V} . Scalar multiplication and addition are defined on the equivalence classes by α [ x ] = [ α x ] {\displaystyle \alpha [x]=[\alpha x]} for all α ∈ K {\displaystyle \alpha \in \mathbb {K} } , and [ x ] + [ y ] = [ x + y ] {\displaystyle [x]+[y]=[x+y]} . It is not hard to check that these operations are well-defined (i.e. do not depend on the choice of representatives). 
These operations turn the quotient space V / N {\displaystyle V/N} into a vector space over K {\displaystyle \mathbb {K} } with N {\displaystyle N} being the zero class, [ 0 ] {\displaystyle [0]} . The mapping that associates to v ∈ V {\displaystyle v\in V} the equivalence class [ v ] {\displaystyle [v]} is known as the quotient map. Alternatively phrased, the quotient space V / N {\displaystyle V/N} is the set of all affine subsets of V {\displaystyle V} which are parallel to N {\displaystyle N} . == Examples == === Lines in Cartesian Plane === Let X = R2 be the standard Cartesian plane, and let Y be a line through the origin in X. Then the quotient space X/Y can be identified with the space of all lines in X which are parallel to Y. That is to say, the elements of the set X/Y are lines in X parallel to Y. Note that the points along any one such line will satisfy the equivalence relation because their difference vectors belong to Y. This gives a way to visualize quotient spaces geometrically. (By re-parameterising these lines, the quotient space can more conventionally be represented as the space of all points along a line through the origin that is not parallel to Y. Similarly, the quotient space for R3 by a line through the origin can again be represented as the set of all co-parallel lines, or alternatively be represented as the vector space consisting of a plane which only intersects the line at the origin.) === Subspaces of Cartesian Space === Another example is the quotient of Rn by the subspace spanned by the first m standard basis vectors. The space Rn consists of all n-tuples of real numbers (x1, ..., xn). The subspace, identified with Rm, consists of all n-tuples such that the last n − m entries are zero: (x1, ..., xm, 0, 0, ..., 0). Two vectors of Rn are in the same equivalence class modulo the subspace if and only if they are identical in the last n − m coordinates. The quotient space Rn/Rm is isomorphic to Rn−m in an obvious manner. 
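The Rn/Rm example can be made concrete by picking a canonical representative in each coset. The following sketch is an informal illustration (n = 4 and m = 2 are sample choices): it zeroes out the components lying in the subspace and checks that two vectors differing by a subspace element land in the same class.

```python
import numpy as np

# Quotient of R^4 by U = span(e1, e2): a coset x + U is determined by the
# last two coordinates of x, so use them as a canonical representative.
def coset_rep(x):
    """Canonical representative of x + U: zero out the components lying in U."""
    rep = x.copy()
    rep[:2] = 0.0
    return rep

x = np.array([3.0, -1.0, 5.0, 2.0])
u = np.array([7.0, 4.0, 0.0, 0.0])   # an element of U

assert np.allclose(coset_rep(x), coset_rep(x + u))   # same equivalence class
assert np.allclose(coset_rep(x), [0.0, 0.0, 5.0, 2.0])
# dim(R^4 / U) = 4 - 2 = 2: exactly two coordinates survive, matching R^{n-m}.
```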
=== Polynomial Vector Space === Let P 3 ( R ) {\displaystyle {\mathcal {P}}_{3}(\mathbb {R} )} be the vector space of all cubic polynomials over the real numbers. Then P 3 ( R ) / ⟨ x 2 ⟩ {\displaystyle {\mathcal {P}}_{3}(\mathbb {R} )/\langle x^{2}\rangle } is a quotient space, where each element is the set corresponding to polynomials that differ by a quadratic term only. For example, one element of the quotient space is { x 3 + a x 2 − 2 x + 3 : a ∈ R } {\displaystyle \{x^{3}+ax^{2}-2x+3:a\in \mathbb {R} \}} , while another element of the quotient space is { a x 2 + 2.7 x : a ∈ R } {\displaystyle \{ax^{2}+2.7x:a\in \mathbb {R} \}} . === General Subspaces === More generally, if V is an (internal) direct sum of subspaces U and W, V = U ⊕ W {\displaystyle V=U\oplus W} then the quotient space V/U is naturally isomorphic to W. === Lebesgue Integrals === An important example of a functional quotient space is an Lp space. == Properties == There is a natural epimorphism from V to the quotient space V/U given by sending x to its equivalence class [x]. The kernel (or nullspace) of this epimorphism is the subspace U. This relationship is neatly summarized by the short exact sequence 0 → U → V → V / U → 0. {\displaystyle 0\to U\to V\to V/U\to 0.\,} If U is a subspace of V, the dimension of V/U is called the codimension of U in V. Since a basis of V may be constructed from a basis A of U and a basis B of V/U by adding a representative of each element of B to A, the dimension of V is the sum of the dimensions of U and V/U. If V is finite-dimensional, it follows that the codimension of U in V is the difference between the dimensions of V and U: c o d i m ( U ) = dim ( V / U ) = dim ( V ) − dim ( U ) . {\displaystyle \mathrm {codim} (U)=\dim(V/U)=\dim(V)-\dim(U).} Let T : V → W be a linear operator. The kernel of T, denoted ker(T), is the set of all x in V such that Tx = 0. The kernel is a subspace of V. 
The first isomorphism theorem for vector spaces says that the quotient space V/ker(T) is isomorphic to the image of V in W. An immediate corollary, for finite-dimensional spaces, is the rank–nullity theorem: the dimension of V is equal to the dimension of the kernel (the nullity of T) plus the dimension of the image (the rank of T). The cokernel of a linear operator T : V → W is defined to be the quotient space W/im(T). == Quotient of a Banach space by a subspace == If X is a Banach space and M is a closed subspace of X, then the quotient X/M is again a Banach space. The quotient space is already endowed with a vector space structure by the construction of the previous section. We define a norm on X/M by ‖ [ x ] ‖ X / M = inf m ∈ M ‖ x − m ‖ X = inf m ∈ M ‖ x + m ‖ X = inf y ∈ [ x ] ‖ y ‖ X . {\displaystyle \|[x]\|_{X/M}=\inf _{m\in M}\|x-m\|_{X}=\inf _{m\in M}\|x+m\|_{X}=\inf _{y\in [x]}\|y\|_{X}.} === Examples === Let C[0,1] denote the Banach space of continuous real-valued functions on the interval [0,1] with the sup norm. Denote the subspace of all functions f ∈ C[0,1] with f(0) = 0 by M. Then the equivalence class of some function g is determined by its value at 0, and the quotient space C[0,1]/M is isomorphic to R. If X is a Hilbert space, then the quotient space X/M is isomorphic to the orthogonal complement of M. === Generalization to locally convex spaces === The quotient of a locally convex space by a closed subspace is again locally convex. Indeed, suppose that X is locally convex so that the topology on X is generated by a family of seminorms {pα | α ∈ A} where A is an index set. Let M be a closed subspace, and define seminorms qα on X/M by q α ( [ x ] ) = inf v ∈ [ x ] p α ( v ) . {\displaystyle q_{\alpha }([x])=\inf _{v\in [x]}p_{\alpha }(v).} Then X/M is a locally convex space, and the topology on it is the quotient topology. If, furthermore, X is metrizable, then so is X/M. If X is a Fréchet space, then so is X/M. 
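In the Hilbert space case just mentioned, the quotient norm of [x] equals the distance from x to M, which is attained at the orthogonal projection of x onto M. An informal sketch in R3 with M = span(e1) (a sample choice):

```python
import numpy as np

M_basis = np.array([[1.0], [0.0], [0.0]])   # M = span(e1), a closed subspace
x = np.array([3.0, 4.0, 0.0])

# ||[x]|| = inf_{m in M} ||x - m|| is attained at the orthogonal projection:
P = M_basis @ np.linalg.pinv(M_basis)       # orthogonal projector onto M
quotient_norm = np.linalg.norm(x - P @ x)

# Only the component of x orthogonal to M survives: here (0, 4, 0), of norm 4.
assert np.isclose(quotient_norm, 4.0)
```

Every other representative of the same coset gives the same value, since adding an element of M does not change the orthogonal component.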
== See also == Quotient group Quotient module Quotient set Quotient space (topology) == References == == Sources == Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0. Dieudonné, Jean (1976), Treatise on Analysis, vol. 2, Academic Press, ISBN 978-0122155024 Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90093-4. Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9. Roman, Steven (2005). Advanced Linear Algebra. Graduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-24766-1.
|
Wikipedia:Qāḍī Zāda al-Rūmī#0
|
al-Rumi (Arabic: الرومي, also transcribed as ar-Rumi), or its Persian variant of simply Rumi, is a nisba denoting a person from or related to the historical region(s) specified by the name Rûm. It may refer to: Jalāl ad-Dīn Muhammad Rūmī, Persian poet, Islamic jurist, theologian, and mystic commonly referred to by the moniker Rumi Suhayb ar-Rumi, a companion of Muhammad Qāḍī Zāda al-Rūmī, 14th-century mathematician Ibn al-Rumi, 9th-century Arabic poet Dhuka al-Rumi, 10th-century Abbasid governor of Egypt Al-Adli ar-Rumi, 9th-century Arab chess player and theoretician Mustafa Rumi, 16th-century Ottoman general Sarjun ibn Mansur al-Rumi, Umayyad official Yāqūt Shihāb al-Dīn ibn-'Abdullāh al-Rūmī al-Hamawī, 13th-century scholar Ahmet Câmî-i Rûmî, 16th-century Ottoman official Shah Sultan Rumi, 11th-century Sufi saint of Bengal
|
Wikipedia:R. E. Siday#0
|
Raymond Eldred Siday (1912–1956) was an English mathematician specialising in quantum mechanics. He obtained his BSc in Special Physics and later worked at the University of Edinburgh. He began collaborating with Werner Ehrenberg in 1933. Raymond Siday is known for the Ehrenberg–Siday effect. == Family == He was the brother of Eric Siday, a pioneer of electronic music and an amateur racing driver. == Ehrenberg–Siday–Aharonov–Bohm effect == The Ehrenberg–Siday effect, later known as the Aharonov–Bohm effect, is a quantum mechanical phenomenon by which a charged particle is affected by electromagnetic fields in regions from which the particle is excluded. The earliest form of this effect was predicted by Ehrenberg and Siday in 1949, and similar effects were later rediscovered by Aharonov and Bohm in 1959. Such effects are predicted to arise from both magnetic fields and electric fields, but the magnetic version has been easier to observe. In general, the consequence of Aharonov–Bohm effects is that knowledge of the classical electromagnetic field acting locally on a particle is not sufficient to predict its quantum-mechanical behavior. == Selected papers == Siday, R.E. (1942). "The determination of the optical properties of thick magnetic lenses, and the application of these lenses to beta-ray spectrometry". Proc. Phys. Soc. 54 (3): 266–277. Bibcode:1942PPS....54..266S. doi:10.1088/0959-5309/54/3/305. Siday, R.E. (1947). "The optical properties of axially symmetric magnetic prisms part 1: The study of rays in a plane of symmetry, and its application to the design of prism β-spectroscopes". Proc. Phys. Soc. 59 (6): 905–917. Bibcode:1947PPS....59..905S. doi:10.1088/0959-5309/59/6/301. Siday, R.E. (1947). "The Optical Properties of Axially Symmetric Magnetic Prisms". Proc. Phys. Soc. 59 (6): 1036–1036. Bibcode:1947PPS....59Q1036.. doi:10.1088/0959-5309/59/6/415. Ehrenberg, W.; Siday, R. E. (1949). "The Refractive Index in Electron Optics and the Principles of Dynamics". 
Proc. Phys. Soc. B62 (1): 8–21. Bibcode:1949PPSB...62....8E. CiteSeerX 10.1.1.205.6343. doi:10.1088/0370-1301/62/1/303. == References ==
|
Wikipedia:Racah polynomials#0
|
In mathematics, Racah polynomials are orthogonal polynomials named after Giulio Racah, as their orthogonality relations are equivalent to his orthogonality relations for Racah coefficients. The Racah polynomials were first defined by Wilson (1978) and are given by p n ( x ( x + γ + δ + 1 ) ) = 4 F 3 [ − n n + α + β + 1 − x x + γ + δ + 1 α + 1 γ + 1 β + δ + 1 ; 1 ] . {\displaystyle p_{n}(x(x+\gamma +\delta +1))={}_{4}F_{3}\left[{\begin{matrix}-n&n+\alpha +\beta +1&-x&x+\gamma +\delta +1\\\alpha +1&\gamma +1&\beta +\delta +1\\\end{matrix}};1\right].} == Orthogonality == ∑ y = 0 N R n ( x ; α , β , γ , δ ) R m ( x ; α , β , γ , δ ) γ + δ + 1 + 2 y γ + δ + 1 + y ω y = h n δ n , m , {\displaystyle \sum _{y=0}^{N}\operatorname {R} _{n}(x;\alpha ,\beta ,\gamma ,\delta )\operatorname {R} _{m}(x;\alpha ,\beta ,\gamma ,\delta ){\frac {\gamma +\delta +1+2y}{\gamma +\delta +1+y}}\omega _{y}=h_{n}\operatorname {\delta } _{n,m},} when α + 1 = − N {\displaystyle \alpha +1=-N} , where R {\displaystyle \operatorname {R} } is the Racah polynomial, x = y ( y + γ + δ + 1 ) , {\displaystyle x=y(y+\gamma +\delta +1),} δ n , m {\displaystyle \operatorname {\delta } _{n,m}} is the Kronecker delta function and the weight functions are ω y = ( α + 1 ) y ( β + δ + 1 ) y ( γ + 1 ) y ( γ + δ + 2 ) y ( − α + γ + δ + 1 ) y ( − β + γ + 1 ) y ( δ + 1 ) y y ! , {\displaystyle \omega _{y}={\frac {(\alpha +1)_{y}(\beta +\delta +1)_{y}(\gamma +1)_{y}(\gamma +\delta +2)_{y}}{(-\alpha +\gamma +\delta +1)_{y}(-\beta +\gamma +1)_{y}(\delta +1)_{y}y!}},} and h n = ( − β ) N ( γ + δ + 2 ) N ( − β + γ + 1 ) N ( δ + 1 ) N ( n + α + β + 1 ) n n !
( α + β + 2 ) 2 n ( α + δ − γ + 1 ) n ( α − δ + 1 ) n ( β + 1 ) n ( α + 1 ) n ( β + δ + 1 ) n ( γ + 1 ) n , {\displaystyle h_{n}={\frac {(-\beta )_{N}(\gamma +\delta +2)_{N}}{(-\beta +\gamma +1)_{N}(\delta +1)_{N}}}{\frac {(n+\alpha +\beta +1)_{n}n!}{(\alpha +\beta +2)_{2n}}}{\frac {(\alpha +\delta -\gamma +1)_{n}(\alpha -\delta +1)_{n}(\beta +1)_{n}}{(\alpha +1)_{n}(\beta +\delta +1)_{n}(\gamma +1)_{n}}},} ( ⋅ ) n {\displaystyle (\cdot )_{n}} is the Pochhammer symbol. == Rodrigues-type formula == ω ( x ; α , β , γ , δ ) R n ( λ ( x ) ; α , β , γ , δ ) = ( γ + δ + 1 ) n ∇ n ∇ λ ( x ) n ω ( x ; α + n , β + n , γ + n , δ ) , {\displaystyle \omega (x;\alpha ,\beta ,\gamma ,\delta )\operatorname {R} _{n}(\lambda (x);\alpha ,\beta ,\gamma ,\delta )=(\gamma +\delta +1)_{n}{\frac {\nabla ^{n}}{\nabla \lambda (x)^{n}}}\omega (x;\alpha +n,\beta +n,\gamma +n,\delta ),} where ∇ {\displaystyle \nabla } is the backward difference operator, λ ( x ) = x ( x + γ + δ + 1 ) . {\displaystyle \lambda (x)=x(x+\gamma +\delta +1).} == Generating functions == There are three generating functions for x ∈ { 0 , 1 , 2 , . . . , N } {\displaystyle x\in \{0,1,2,...,N\}} when β + δ + 1 = − N {\displaystyle \beta +\delta +1=-N\quad } or γ + 1 = − N , {\displaystyle \quad \gamma +1=-N,} 2 F 1 ( − x , − x + α − γ − δ ; α + 1 ; t ) 2 F 1 ( x + β + δ + 1 , x + γ + 1 ; β + 1 ; t ) {\displaystyle {}_{2}F_{1}(-x,-x+\alpha -\gamma -\delta ;\alpha +1;t){}_{2}F_{1}(x+\beta +\delta +1,x+\gamma +1;\beta +1;t)} = ∑ n = 0 N ( β + δ + 1 ) n ( γ + 1 ) n ( β + 1 ) n n !
R n ( λ ( x ) ; α , β , γ , δ ) t n , {\displaystyle \quad =\sum _{n=0}^{N}{\frac {(\beta +\delta +1)_{n}(\gamma +1)_{n}}{(\beta +1)_{n}n!}}\operatorname {R} _{n}(\lambda (x);\alpha ,\beta ,\gamma ,\delta )t^{n},} when α + 1 = − N {\displaystyle \alpha +1=-N\quad } or γ + 1 = − N , {\displaystyle \quad \gamma +1=-N,} 2 F 1 ( − x , − x + β − γ ; β + δ + 1 ; t ) 2 F 1 ( x + α + 1 , x + γ + 1 ; α − δ + 1 ; t ) {\displaystyle {}_{2}F_{1}(-x,-x+\beta -\gamma ;\beta +\delta +1;t){}_{2}F_{1}(x+\alpha +1,x+\gamma +1;\alpha -\delta +1;t)} = ∑ n = 0 N ( α + 1 ) n ( γ + 1 ) n ( α − δ + 1 ) n n ! R n ( λ ( x ) ; α , β , γ , δ ) t n , {\displaystyle \quad =\sum _{n=0}^{N}{\frac {(\alpha +1)_{n}(\gamma +1)_{n}}{(\alpha -\delta +1)_{n}n!}}\operatorname {R} _{n}(\lambda (x);\alpha ,\beta ,\gamma ,\delta )t^{n},} when α + 1 = − N {\displaystyle \alpha +1=-N\quad } or β + δ + 1 = − N , {\displaystyle \quad \beta +\delta +1=-N,} 2 F 1 ( − x , − x − δ ; γ + 1 ; t ) 2 F 1 ( x + α + 1 ; x + β + γ + 1 ; α + β − γ + 1 ; t ) {\displaystyle {}_{2}F_{1}(-x,-x-\delta ;\gamma +1;t){}_{2}F_{1}(x+\alpha +1;x+\beta +\gamma +1;\alpha +\beta -\gamma +1;t)} = ∑ n = 0 N ( α + 1 ) n ( β + δ + 1 ) n ( α + β − γ + 1 ) n n ! R n ( λ ( x ) ; α , β , γ , δ ) t n . 
{\displaystyle \quad =\sum _{n=0}^{N}{\frac {(\alpha +1)_{n}(\beta +\delta +1)_{n}}{(\alpha +\beta -\gamma +1)_{n}n!}}\operatorname {R} _{n}(\lambda (x);\alpha ,\beta ,\gamma ,\delta )t^{n}.} == Connection formula for Wilson polynomials == When α = a + b − 1 , β = c + d − 1 , γ = a + d − 1 , δ = a − d , x → − a + i x , {\displaystyle \alpha =a+b-1,\beta =c+d-1,\gamma =a+d-1,\delta =a-d,x\rightarrow -a+ix,} R n ( λ ( − a + i x ) ; a + b − 1 , c + d − 1 , a + d − 1 , a − d ) = W n ( x 2 ; a , b , c , d ) ( a + b ) n ( a + c ) n ( a + d ) n , {\displaystyle \operatorname {R} _{n}(\lambda (-a+ix);a+b-1,c+d-1,a+d-1,a-d)={\frac {\operatorname {W} _{n}(x^{2};a,b,c,d)}{(a+b)_{n}(a+c)_{n}(a+d)_{n}}},} where W {\displaystyle \operatorname {W} } are Wilson polynomials. == q-analog == Askey & Wilson (1979) introduced the q-Racah polynomials defined in terms of basic hypergeometric functions by p n ( q − x + q x + 1 c d ; a , b , c , d ; q ) = 4 ϕ 3 [ q − n a b q n + 1 q − x q x + 1 c d a q b d q c q ; q ; q ] . {\displaystyle p_{n}(q^{-x}+q^{x+1}cd;a,b,c,d;q)={}_{4}\phi _{3}\left[{\begin{matrix}q^{-n}&abq^{n+1}&q^{-x}&q^{x+1}cd\\aq&bdq&cq\\\end{matrix}};q;q\right].} They are sometimes given with changes of variables as W n ( x ; a , b , c , N ; q ) = 4 ϕ 3 [ q − n a b q n + 1 q − x c q x − n a q b c q q − N ; q ; q ] . {\displaystyle W_{n}(x;a,b,c,N;q)={}_{4}\phi _{3}\left[{\begin{matrix}q^{-n}&abq^{n+1}&q^{-x}&cq^{x-n}\\aq&bcq&q^{-N}\\\end{matrix}};q;q\right].} == References == Askey, Richard; Wilson, James (1979), "A set of orthogonal polynomials that generalize the Racah coefficients or 6-j symbols" (PDF), SIAM Journal on Mathematical Analysis, 10 (5): 1008–1016, doi:10.1137/0510092, ISSN 0036-1410, MR 0541097, archived from the original on September 25, 2017 Wilson, J. (1978), Hypergeometric series recurrence relations and some new orthogonal functions, Ph.D. thesis, Univ. Wisconsin, Madison
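The terminating 4φ3 sum defining the q-Racah polynomials can be evaluated directly with exact rational arithmetic. The sketch below (the parameter values are arbitrary illustrative choices, not from the source) also checks one immediate consequence of the definition: at x = 0 the numerator parameter q⁻ˣ equals 1, so (1; q)ₖ vanishes for every k ≥ 1 and pₙ reduces to 1 for all n.

```python
from fractions import Fraction

def qpoch(a, q, k):
    """q-Pochhammer symbol (a; q)_k = prod_{j=0}^{k-1} (1 - a q^j)."""
    out = Fraction(1)
    for j in range(k):
        out *= 1 - a * q ** j
    return out

def q_racah(n, x, a, b, c, d, q):
    """p_n(q^-x + q^(x+1) c d; a, b, c, d; q) as the terminating 4phi3 sum."""
    upper = [q ** -n, a * b * q ** (n + 1), q ** -x, q ** (x + 1) * c * d]
    lower = [a * q, b * d * q, c * q]
    total = Fraction(0)
    for k in range(n + 1):  # the factor (q^-n; q)_k terminates the series at k = n
        term = q ** k
        for u in upper:
            term *= qpoch(u, q, k)
        for l in lower:
            term /= qpoch(l, q, k)
        term /= qpoch(q, q, k)
        total += term
    return total

q = Fraction(1, 2)
params = (Fraction(1, 3), Fraction(1, 5), Fraction(1, 7), Fraction(1, 11))
# At x = 0 the parameter q^-x equals 1, so (1; q)_k = 0 for k >= 1 and p_n = 1.
vals = [q_racah(n, 0, *params, q) for n in range(4)]
```

Exact rationals avoid the cancellation that makes floating-point evaluation of such alternating sums unreliable.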
|
Wikipedia:Rachel Kuske#0
|
Rachel Ann Kuske (born 1965) is an American-Canadian applied mathematician and Professor and Chair of Mathematics at the Georgia Institute of Technology. == Professional career == Kuske received her PhD in Applied Mathematics from Northwestern University in 1992. Her dissertation, Asymptotic Analysis of Random Wave Equations, was supervised by Bernard J. Matkowsky. From 1997 to 2002, she was an assistant professor and then associate professor at the University of Minnesota. She is an expert on stochastic and nonlinear dynamics, mathematical modeling, asymptotic methods, and industrial mathematics. She served on the Scientific Advisory Board for the Institute for Computational and Experimental Research in Mathematics (ICERM), and as of 2021 she serves on ICERM's board of trustees. == Awards and honours == Kuske was awarded a Sloan Fellowship in 1992 and was made a Canada Research Chair in 2002. In 2011 Kuske was a recipient of the Canadian Mathematical Society Krieger–Nelson Prize, given to an outstanding woman in mathematics in Canada. In 2015 she became a fellow of the Society for Industrial and Applied Mathematics "for contributions to the theory of stochastic and nonlinear dynamics and its application, and for promoting equity and diversity in mathematics." == References == == External links == Home page Citation for Krieger-Nelson Prize Letter from the Chair, Professor Rachel Kuske
|
Wikipedia:Rachid Deriche#0
|
Rachid Deriche is a research director at Inria Sophia Antipolis, France, where he leads the research project Athena, which aims to explore the central nervous system using computational imaging. He has published more than 60 journal papers and more than 180 conference papers, with a Google Scholar h-index of 67. He is known for the development of the Deriche edge detection algorithm, which is named after him. == Background == Deriche was born in 1954 in Thenia, Algeria. He studied electronics at the Ecole Nationale Polytechnique of Algiers. In 1977, he moved to France to continue his studies at the Ecole Nationale Superieure des Telecommunications of Paris, from which he graduated in 1979. Three years later, he received a Ph.D. degree in mathematics from the University of Paris IX (Dauphine). He obtained his HDR degree from the University of Nice Sophia Antipolis in 1991. == Contributions == Deriche has made major contributions to image processing, computer vision and neuro-imaging. In 1987, he developed the Deriche edge detector, a low-level, recursively implemented, optimal edge detector based on Canny's criteria for optimal edge detection. In 1998, building on his work in computational image processing, early vision, 3D reconstruction, panoramic photography, image-based modeling, and motion analysis, he co-founded with his Inria colleagues (among them O. Faugeras, T. Papadopoulo and L. Robert) Realviz, a start-up specializing in image-based content creation solutions for the film, broadcast, gaming, digital imaging and architecture industries. The startup was acquired by Autodesk in 2008. Deriche currently devotes his research to exploring the central nervous system by developing mathematical models and tools to reconstruct the functional and anatomical connection network of the brain. == Awards and honors == ERC Advanced Grant: In 2016, R. 
Deriche received a grant from the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation program, to support his research project “Computational Brain Connectivity Mapping”. Doctorate Honoris Causa: R. Deriche received the title of Honorary Doctor (honoris causa) from the University of Sherbrooke in 2014, honoring his scientific contributions and his approach to educating new generations of scientists. French Academy of Sciences Grand Prize of the EADS: In 2013, R. Deriche was awarded the Grand Prize of the EADS Corporate Foundation in Computer Science by the French Academy of Sciences for his outstanding scientific contributions in computer science. == References == == External links == Inria Athena research project
|
Wikipedia:Rademacher–Menchov theorem#0
|
In mathematical analysis, the Rademacher–Menchov theorem, introduced by Rademacher (1922) and Menchoff (1923), gives a sufficient condition for a series of orthogonal functions on an interval to converge almost everywhere. == Statement == If the coefficients cν of a series of bounded orthogonal functions on an interval satisfy ∑ | c ν | 2 log ( ν ) 2 < ∞ {\displaystyle \sum |c_{\nu }|^{2}\log(\nu )^{2}<\infty } then the series converges almost everywhere. == References == Menchoff, D. (1923), "Sur les séries de fonctions orthogonales. (Première Partie. La convergence.).", Fundamenta Mathematicae (in French), 4: 82–105, doi:10.4064/fm-4-1-82-105, ISSN 0016-2736 Rademacher, Hans (1922), "Einige Sätze über Reihen von allgemeinen Orthogonalfunktionen", Mathematische Annalen, 87, Springer Berlin / Heidelberg: 112–138, doi:10.1007/BF01458040, ISSN 0025-5831, S2CID 120708120 Zygmund, A. (2002) [1935], Trigonometric Series. Vol. I, II, Cambridge Mathematical Library (3rd ed.), Cambridge University Press, ISBN 978-0-521-89053-3, MR 1963498
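The coefficient hypothesis can be illustrated numerically (this illustrates only the convergence of the weighted coefficient sum, not the almost-everywhere convergence itself): for c_ν = 1/ν the weighted series behaves like Σ (log ν)²/ν², which converges, while for c_ν = 1/(√ν log ν) it collapses to the divergent harmonic series.

```python
import math

def weighted_tail(coef, start, stop):
    """Partial sum of |c_v|^2 * (log v)^2 over v in [start, stop)."""
    return sum(coef(v) ** 2 * math.log(v) ** 2 for v in range(start, stop))

# c_v = 1/v: the weighted series is sum (log v)^2 / v^2, which converges,
# so the theorem applies and the orthogonal series converges a.e.
conv_tail = weighted_tail(lambda v: 1.0 / v, 10_000, 20_000)

# c_v = 1/(sqrt(v) * log v): the weighted series reduces exactly to sum 1/v,
# the divergent harmonic series, so the hypothesis of the theorem fails.
div_tail = weighted_tail(lambda v: 1.0 / (math.sqrt(v) * math.log(v)), 10_000, 20_000)
```

The tail over [10000, 20000) is tiny in the first case and of order log 2 in the second, reflecting convergence versus divergence of the two weighted sums.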
|
Wikipedia:Radha Kessar#0
|
Radha Kessar is an Indian mathematician known for her research in the representation theory of finite groups. She holds the Fielden Chair in Pure Mathematics at the University of Manchester, and in 2009 won the Berwick Prize of the London Mathematical Society. == Education and career == Kessar graduated from Panjab University in 1991. She completed her Ph.D. in 1995 from Ohio State University; her dissertation, Blocks And Source Algebras For The Double Covers Of The Symmetric Groups, was supervised by Ronald Solomon. After taking visiting assistant professor positions at Yale University and the University of Minnesota, and working as a Weir Junior Research Fellow at University College, Oxford, she returned to Ohio State as an assistant professor in 2002. She moved to the University of Aberdeen in 2005, to City, University of London in 2012, and then to the University of Manchester in 2022. == Book == With Michael Aschbacher and Bob Oliver, she is an author of the book Fusion Systems in Algebra and Topology (Cambridge University Press, 2011). == Recognition == Her 2009 Berwick award was joint with her future City colleague Joseph Chuang, for the research reported in their paper Symmetric Groups, Wreath Products, Morita Equivalences and Broué's Abelian Defect Conjecture. She was named MSRI Simons Professor for 2017-2018. == References == == External links == Home page
|
Wikipedia:Radhika Kulkarni#0
|
Radhika Vidyadhar Kulkarni (born 1956) is a retired Indian and American operations researcher, and the 2022 president of INFORMS. The Bechhofer–Kulkarni selection procedure or Bechhofer–Kulkarni stopping rule, a stopping rule for maximization in Bernoulli processes, is named after her work with her doctoral advisor, Robert E. Bechhofer. == Education and career == After earning a master's degree in mathematics at IIT Delhi, Kulkarni went to Cornell University intending to do doctoral research in pure mathematics, but switched to operations research after taking a mathematical programming course in her first year. She earned a second master's degree in 1979 and completed her Ph.D. in 1981. Her doctoral supervisor was Robert E. Bechhofer. She worked for 35 years at the SAS Institute, including ten years as Vice President of Advanced Analytics R&D. She is the 2022 president of the Institute for Operations Research and the Management Sciences (INFORMS). == Recognition == Kulkarni was the 2006 winner of the WORMS Award for the Advancement of Women in OR/MS of INFORMS. In 2014 she was named a Fellow of INFORMS. == Personal life == Kulkarni married Vidyadhar Kulkarni, another student of operations research at Cornell and later the chair of Statistics and Operations Research at the University of North Carolina. == References ==
|
Wikipedia:Radial set#0
|
In mathematics, a subset A ⊆ X {\displaystyle A\subseteq X} of a linear space X {\displaystyle X} is radial at a given point a 0 ∈ A {\displaystyle a_{0}\in A} if for every x ∈ X {\displaystyle x\in X} there exists a real t x > 0 {\displaystyle t_{x}>0} such that for every t ∈ [ 0 , t x ] , {\displaystyle t\in [0,t_{x}],} a 0 + t x ∈ A . {\displaystyle a_{0}+tx\in A.} Geometrically, this means A {\displaystyle A} is radial at a 0 {\displaystyle a_{0}} if for every x ∈ X , {\displaystyle x\in X,} there is some (non-degenerate) line segment (depending on x {\displaystyle x} ) emanating from a 0 {\displaystyle a_{0}} in the direction of x {\displaystyle x} that lies entirely in A . {\displaystyle A.} Every radial set is a star domain, although not conversely. == Relation to the algebraic interior == The points at which a set is radial are called internal points. The set of all points at which A ⊆ X {\displaystyle A\subseteq X} is radial is equal to the algebraic interior. == Relation to absorbing sets == Every absorbing subset is radial at the origin a 0 = 0 , {\displaystyle a_{0}=0,} and if the vector space is real then the converse also holds. That is, a subset of a real vector space is absorbing if and only if it is radial at the origin. Some authors use the term radial as a synonym for absorbing. == See also == Absorbing set – Set that can be "inflated" to reach any point Algebraic interior – Generalization of topological interior Minkowski functional – Function made from a set Star domain – Property of point sets in Euclidean spaces == References == Aliprantis, Charalambos D.; Border, Kim C. (2006). Infinite Dimensional Analysis: A Hitchhiker's Guide (Third ed.). Berlin: Springer Science & Business Media. ISBN 978-3-540-29587-7. OCLC 262692874. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (1996). 
Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
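The definition above can be probed numerically in the plane. The sketch below samples points of the segment a₀ + tx for t ∈ [0, t_x]; sampling can refute radiality at a point but only suggest it, so this is an illustration of the definition, not a test of it. The two example sets (the closed unit ball, radial at the origin, and the closed upper half-plane, which is not) are illustrative choices.

```python
import math

def in_unit_ball(p):
    # closed unit ball, with a small tolerance for floating-point round-off
    return math.hypot(p[0], p[1]) <= 1.0 + 1e-12

def in_upper_half_plane(p):
    return p[1] >= 0.0

def segment_inside(member, x, t_x, samples=100):
    """Sample the segment {t * x : t in [0, t_x]} and test set membership."""
    return all(member((t * x[0], t * x[1]))
               for t in (t_x * k / samples for k in range(samples + 1)))

# The closed unit ball is radial at the origin: for any direction x,
# t_x = 1 / ||x|| keeps the whole segment inside the ball.
x = (3.0, 4.0)
ball_ok = segment_inside(in_unit_ball, x, 1.0 / math.hypot(*x))

# The closed upper half-plane is not radial at the origin: in the direction
# (0, -1), every t > 0 leaves the set immediately, so no t_x > 0 works.
half_plane_ok = segment_inside(in_upper_half_plane, (0.0, -1.0), 0.01)
```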
|
Wikipedia:Radical polynomial#0
|
In mathematics, in the realm of abstract algebra, a radical polynomial is a multivariate polynomial over a field that can be expressed as a polynomial in the sum of squares of the variables. That is, if k [ x 1 , x 2 , … , x n ] {\displaystyle k[x_{1},x_{2},\ldots ,x_{n}]} is a polynomial ring, the ring of radical polynomials is the subring generated by the polynomial ∑ i = 1 n x i 2 . {\displaystyle \sum _{i=1}^{n}x_{i}^{2}.} Radical polynomials are characterized as precisely those polynomials that are invariant under the action of the orthogonal group. The ring of radical polynomials is a graded subalgebra of the ring of all polynomials. The standard separation of variables theorem asserts that every polynomial can be expressed as a finite sum of terms, each term being a product of a radical polynomial and a harmonic polynomial. This is equivalent to the statement that the ring of all polynomials is a free module over the ring of radical polynomials. == References ==
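The invariance characterization is easy to check numerically on examples: a polynomial in x² + y² takes the same value at a point and at any rotation of that point, while a typical non-radical polynomial does not. A minimal sketch (the sample point, angle, and the two polynomials are arbitrary illustrations):

```python
import math

def rotate(p, theta):
    """Apply a rotation (an element of the orthogonal group O(2)) to the point p."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

def p_radical(x, y):
    # a polynomial in x^2 + y^2, hence a radical polynomial
    r2 = x * x + y * y
    return r2 ** 2 + 3 * r2 - 1

def p_generic(x, y):
    # not expressible as a polynomial in x^2 + y^2
    return x ** 3 + y

pt, theta = (2.0, 0.7), 1.0
# invariant up to floating-point round-off
same = abs(p_radical(*rotate(pt, theta)) - p_radical(*pt))
# visibly not invariant
different = abs(p_generic(*rotate(pt, theta)) - p_generic(*pt))
```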
|
Wikipedia:Radivoj Kašanin#0
|
Radivoj Kašanin or Radivoje Kašanin (21 May 1892 – 30 October 1989) was a Serbian mathematician, university professor, and member of the Serbian Academy of Arts and Sciences. Radivoje Kašanin is regarded as a talented mathematician and scholar of the natural sciences with a wide scientific culture. Owing to his profound and diversified knowledge in many areas of mathematics, mechanics, and astronomy, he could be considered Serbia's last encyclopedist. He achieved success in many fields of his profession: the theory of differential equations, the theory of complex functions, analysis, geometry, interpolation and approximation, mechanics, astronomy and geophysics, and in each of these fields he published papers that were widely acknowledged. == Biography == Radivoje Kašanin was born in Beli Manastir, then part of the Habsburg monarchy, and attended the Serbian elementary school in his native town from 1892 to 1902. He completed the first three grades of the classical gymnasium in Osijek, and then moved to Novi Sad, where he finished the fourth grade and passed the final examination. In 1910 he began his studies in mathematics and astronomy at the University of Vienna, then in 1911 at the University of Zagreb, where he stayed for two years before enrolling at the University of Budapest in 1913. The Great War cut short his studies in Budapest when he was mobilized by the Austro-Hungarian Army in 1914. He was immediately sent to the Russian front, where he survived the hostilities. After the end of the war, he went to Paris in 1921 to pursue higher studies at the Sorbonne. In 1924 he successfully defended his dissertation and received his Ph.D. in mathematics. His mentor was the famed Balkan mathematician Mihailo Petrović Alas. He moved back to Yugoslavia and was appointed assistant at the Technical Faculty of the University of Belgrade in 1922, an assistant professor in 1926, associate professor in 1930, and full professor in 1939. 
He was elected Rector of the Technical High School for two terms of office, 1950 and 1951. He was also elected a corresponding member of the Serbian Academy of Sciences and Arts on 2 March 1946, and a full member on 10 June 1955. He served as director of the Institute of Mathematics from 1951 to 1958 and was president of its Council from 1958 to 1961. In 1950 the Proceedings of the Institute of Mathematics began publication, and during the next ten years Radivoje Kašanin was its editor-in-chief. From 1 October 1957 to 12 January 1959 he served as deputy vice-president of the Serbian Academy of Sciences. He devoted his last years to the mathematical interpretation of the cosmogonical theory of Pavle Savić. Radivoje Kašanin died in Belgrade on 30 October 1989 and was buried there. == See also == Milan Kašanin, brother == Bibliography == Only mathematical textbooks are included here; the rest can be found in a five-page bibliography in Serbian and English. "Viša matematika I", Grafički zavod "Slavija", sv. 1, Beograd, 1932, str. 80 "Viša matematika I, sv. 1", Izdavačka knjižarnica Gece Kona, Beograd, 1933, str. 160 "Viša matematika I", Beograd, 1934, str. 627 "Viša matematika I", Centralno udruženje studenata tehnike, Beograd, 1946, str. 791 (2. prer. i dop. izdanje) "Viša matematika I", Beograd, 1949, str. 847 (3. izdanje) "Viša matematika II", knj. 1, Beograd, 1949, str. 624 "Viša matematika II", knj. 2, Beograd, 1950, str. 679 "Zbirka rešenih zadataka više matematike I, knj. 2", Geografski institut JNA, Beograd, 1952, str. 526 "Zbirka rešenih zadataka više matematike I, knj. 1", Geografski institut JNA, Beograd, 1956, str. 588+(4) "Zbirka rešenih zadataka više matematike I, knj. 3", Geografski institut JNA, Beograd, 1959, str. 164+(4) "Viša matematika I", Sarajevo, 1969, str. 836 (4. izdanje) == References ==
|
Wikipedia:Radoslav Harman#0
|
Radoslav Harman is a Slovak mathematician working in the area of optimal design of statistical experiments. He is currently a docent at Comenius University. == Biography == In 2004, Harman obtained a PhD in statistics from Comenius University under the supervision of Andrej Pazman. He has published 30 research papers in the field of optimal design. == Bibliography == Radoslav Harman; Luc Pronzato (2007). "Improvements on removing non-optimal support points in D-optimum design algorithms". Statistics & Probability Letters. 77 (1): 90–94. arXiv:0706.4394. doi:10.1016/j.spl.2006.05.014. S2CID 5894164. == References ==
|
Wikipedia:Rafael Artzy#0
|
Rafael Artzy (Hebrew: רפאל ארצי; 23 July 1912 – 22 August 2006) was an Israeli mathematician specializing in geometry. == Education and emigration == Artzy was born July 23, 1912, in Königsberg, Germany. His father was Edward I. Deutschlander and his mother Ida Freudenheim. Rafael studied at Königsberg University from 1930 to 1933. He transferred to Hebrew University and obtained a master's degree in 1934. He married Elly Iwiansky on October 12, 1934. Rafael continued his studies at Hebrew University under Theodore Motzkin, obtaining a Ph.D. in 1945. Elly and Rafael raised three children: Ehud, Michal, and Barak. Ehud and Barak died before their father. Michal Artzy is emeritus professor in Marine Civilization at the University of Haifa. Rafael served as both teacher and principal of Israel High School from 1934 to 1951. He was an instructor and assistant professor at the Israel Institute of Technology from 1951 to 1956. == American tour == Rafael Artzy took up a position as research associate and lecturer at the University of Wisconsin, Madison in 1956. That year he also made his first of many contributions to Mathematical Reviews. Artzy became an associate professor at the University of North Carolina, Chapel Hill in 1960. The following year Rutgers University made him a full professor. In 1964 he was a visitor at the Institute for Advanced Study. He wrote Linear Geometry (1965), which was favorably reviewed by H. S. M. Coxeter. In 1965 Artzy was at the State University of New York in Buffalo. In 1967 he joined Temple University, where he stayed for five years. == Return == In 1972 Rafael Artzy returned to Israel and participated in mathematics at Technion in Haifa. He helped organize a quadrennial conference on geometry at Haifa. For instance, in March 1979 such a conference was held and the proceedings Geometry and Differential Geometry were edited by Artzy and I. Vaisman and published as Lecture Notes in Mathematics #792. In 1992 he published Geometry. 
An Algebraic Approach. Artzy had made 224 contributions to Mathematical Reviews by his last submission in 1995. == References == Allen G. Debus, ed. (1968) Who’s Who in Science, Marquis Who's Who. Walter Benz (2010) Rafael Artzy (1912–2006), Mitteilungen der Mathematischen Gesellschaft in Hamburg 29:5–7. == External links == Joseph Zaks (2006) Rafael Artzy from University of Haifa. Rafael Artzy at the Mathematics Genealogy Project
|
Wikipedia:Rafael Bombelli#0
|
Rafael Bombelli (baptised on 20 January 1526; died 1572) was an Italian mathematician. Born in Bologna, he is the author of a treatise on algebra and is a central figure in the understanding of imaginary numbers. He was the one who finally managed to address the problem of computing with imaginary numbers. In his 1572 book, L'Algebra, Bombelli solved equations using the method of del Ferro and Tartaglia. He introduced a rhetorical notation that preceded the symbols +i and -i and described how both worked. == Life == Rafael Bombelli was baptised on 20 January 1526 in Bologna, Papal States. He was born to Antonio Mazzoli, a wool merchant, and Diamante Scudieri, a tailor's daughter. The Mazzoli family was once quite powerful in Bologna. When Pope Julius II came to power in 1506, he exiled the ruling family, the Bentivoglios. The Bentivoglio family attempted to retake Bologna in 1508, but failed. Rafael's grandfather participated in the coup attempt, and was captured and executed. Later, Antonio was able to return to Bologna, having changed his surname to Bombelli to escape the reputation of the Mazzoli family. Rafael was the oldest of six children. Rafael received no college education, but was instead taught by an engineer-architect by the name of Pier Francesco Clementi. Bombelli felt that none of the works on algebra by the leading mathematicians of his day provided a careful and thorough exposition of the subject. Instead of another convoluted treatise that only mathematicians could comprehend, Rafael decided to write a book on algebra that could be understood by anyone. His text would be self-contained and easily read by those without higher education. Bombelli died in 1572 in Rome. == Bombelli's Algebra == In his book Algebra, published in 1572, Bombelli gave a comprehensive account of the algebra known at the time. He was the first European to write down the rules for performing computations with negative numbers. 
The following is an excerpt from the text: "Plus times plus makes plus Minus times minus makes plus Plus times minus makes minus Minus times plus makes minus Plus 8 times plus 8 makes plus 64 Minus 5 times minus 6 makes plus 30 Minus 4 times plus 5 makes minus 20 Plus 5 times minus 4 makes minus 20" As was intended, Bombelli used simple language as can be seen above so that anybody could understand it. But at the same time, he was thorough. === Notation === Bombelli introduced, for the first time in a printed text (in Book II of his Algebra), a form of index notation in which the equation x 3 = 6 x + 40 {\displaystyle x^{3}=6x+40} appeared as 1U3 a. 6U1 p. 40. in which he wrote the U3 as a raised bowl-shape (like the curved part of the capital letter U) with the number 3 above it. Full symbolic notation was developed shortly thereafter by the French mathematician François Viète. === Complex numbers === Perhaps more importantly than his work with algebra, however, the book also includes Bombelli's monumental contributions to complex number theory. Before he writes about complex numbers, he points out that they occur in solutions of equations of the form x 3 = a x + b , {\displaystyle x^{3}=ax+b,} given that ( a / 3 ) 3 > ( b / 2 ) 2 , {\displaystyle (a/3)^{3}>(b/2)^{2},} which is another way of stating that the discriminant of the cubic is negative. The solution of this kind of equation requires taking the cube root of the sum of one number and the square root of some negative number. Before Bombelli delves into using imaginary numbers practically, he goes into a detailed explanation of the properties of complex numbers. Right away, he makes it clear that the rules of arithmetic for imaginary numbers are not the same as for real numbers. This was a big accomplishment, as even numerous subsequent mathematicians were extremely confused on the topic. 
Bombelli avoided confusion by giving a special name to square roots of negative numbers, instead of just trying to deal with them as regular radicals like other mathematicians did. This made it clear that these numbers were neither positive nor negative. This kind of system avoids the confusion that Euler encountered. Bombelli called the imaginary number i "plus of minus" and used "minus of minus" for -i. Bombelli had the foresight to see that imaginary numbers were crucial and necessary to solving quartic and cubic equations. At the time, people cared about complex numbers only as tools to solve practical equations. As such, Bombelli was able to get solutions using Scipione del Ferro's rule, even in casus irreducibilis, where other mathematicians such as Cardano had given up. In his book, Bombelli explains complex arithmetic as follows: "Plus by plus of minus, makes plus of minus. Minus by plus of minus, makes minus of minus. Plus by minus of minus, makes minus of minus. Minus by minus of minus, makes plus of minus. Plus of minus by plus of minus, makes minus. Plus of minus by minus of minus, makes plus. Minus of minus by plus of minus, makes plus. Minus of minus by minus of minus makes minus." After dealing with the multiplication of real and imaginary numbers, Bombelli goes on to talk about the rules of addition and subtraction. He is careful to point out that real parts add to real parts, and imaginary parts add to imaginary parts. == Reputation == Bombelli is generally regarded as the inventor of complex numbers, as no one before him had made rules for dealing with such numbers, and no one believed that working with imaginary numbers would have useful results. Upon reading Bombelli's Algebra, Leibniz praised Bombelli as an ". . . outstanding master of the analytical art." 
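In modern notation, "plus of minus" is i and "minus of minus" is -i, and Bombelli's multiplication table is exactly the sign behaviour of ±i. A short check using complex arithmetic:

```python
# Bombelli's "plus of minus" is the modern i; his "minus of minus" is -i.
pom, mom = 1j, -1j

assert (+1) * pom == pom   # "Plus by plus of minus, makes plus of minus."
assert (-1) * pom == mom   # "Minus by plus of minus, makes minus of minus."
assert (+1) * mom == mom   # "Plus by minus of minus, makes minus of minus."
assert (-1) * mom == pom   # "Minus by minus of minus, makes plus of minus."
assert pom * pom == -1     # "Plus of minus by plus of minus, makes minus."
assert pom * mom == +1     # "Plus of minus by minus of minus, makes plus."
assert mom * pom == +1     # "Minus of minus by plus of minus, makes plus."
assert mom * mom == -1     # "Minus of minus by minus of minus makes minus."
```

Each line pairs one of Bombelli's verbal rules with the corresponding modern identity, all of which hold.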
Crossley writes in his book, "Thus we have an engineer, Bombelli, making practical use of complex numbers perhaps because they gave him useful results, while Cardan found the square roots of negative numbers useless. Bombelli is the first to give a treatment of any complex numbers. . . It is remarkable how thorough he is in his presentation of the laws of calculation of complex numbers. . ." In honor of his accomplishments, a Moon crater was named Bombelli. == Bombelli's method of calculating square roots == Bombelli used a method related to simple continued fractions to calculate square roots. He did not yet have the concept of a continued fraction, and below is the algorithm of a later version given by Pietro Cataldi (1613). The method for finding n {\displaystyle {\sqrt {n}}} begins with n = ( a ± r ) 2 = a 2 ± 2 a r + r 2 {\displaystyle n=(a\pm r)^{2}=a^{2}\pm 2ar+r^{2}\ } with 0 < r < 1 {\displaystyle 0<r<1\ } , from which it can be shown that r = | n − a 2 | 2 a ± r {\displaystyle r={\frac {|n-a^{2}|}{2a\pm r}}} . Repeated substitution of the expression on the right hand side for r {\displaystyle r} into itself yields a continued fraction a ± | n − a 2 | 2 a ± | n − a 2 | 2 a ± | n − a 2 | 2 a ± ⋯ {\displaystyle a\pm {\frac {|n-a^{2}|}{2a\pm {\frac {|n-a^{2}|}{2a\pm {\frac {|n-a^{2}|}{2a\pm \cdots }}}}}}} for the root but Bombelli is more concerned with better approximations for r {\displaystyle r} . The value chosen for a {\displaystyle a} is either of the whole numbers whose squares n {\displaystyle n} lies between. The method gives the following convergents for 13 {\displaystyle {\sqrt {13}}\ } while the actual value is 3.605551275... : 3 2 3 , 3 3 5 , 3 20 33 , 3 66 109 , 3 109 180 , 3 720 1189 , ⋯ {\displaystyle 3{\frac {2}{3}},\ 3{\frac {3}{5}},\ 3{\frac {20}{33}},\ 3{\frac {66}{109}},\ 3{\frac {109}{180}},\ 3{\frac {720}{1189}},\ \cdots } The last convergent equals 3.605550883... . 
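The square-root iteration above amounts to the recurrence r ← |n − a²| / (2a + r) (taking the "+" branch, with a the integer part of √n), and carrying it out with exact rationals reproduces the convergents for √13 listed above. A minimal sketch:

```python
import math
from fractions import Fraction

def bombelli_convergents(n, steps):
    """Convergents a + r of sqrt(n) from r <- (n - a^2) / (2a + r),
    the '+' branch of the rule, with a = floor(sqrt(n)).
    Assumes n is not a perfect square, so n - a^2 > 0."""
    a = math.isqrt(n)
    excess = n - a * a
    r = Fraction(excess, 2 * a)       # first correction term |n - a^2| / (2a)
    out = [a + r]
    for _ in range(steps - 1):
        r = excess / (2 * a + r)      # int / Fraction yields an exact Fraction
        out.append(a + r)
    return out

# Reproduces 3 2/3, 3 3/5, 3 20/33, 3 66/109, 3 109/180, 3 720/1189 for sqrt(13).
conv = bombelli_convergents(13, 6)
```

The last convergent, 4287/1189 = 3.605550883..., agrees with √13 = 3.605551275... to about six decimal places, matching the text.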
Bombelli's method should be compared with formulas and results used by Hero of Alexandria and Archimedes. The result 265 153 < 3 < 1351 780 {\displaystyle {\frac {265}{153}}<{\sqrt {3}}<{\frac {1351}{780}}} used by Archimedes in his determination of the value of π {\displaystyle \pi } can be found by using 1 and 0 for the initial values of r {\displaystyle r} . == References == === Footnotes === === Citations === === Sources === Morris Kline, Mathematical Thought from Ancient to Modern Times, 1972, Oxford University Press, New York, ISBN 0-19-501496-0 David Eugene Smith, A Source Book in Mathematics, 1959, Dover Publications, New York, ISBN 0-486-64690-4 Crossley, John N. (1987). The emergence of number. Singapore: World Scientific. doi:10.1142/0462. ISBN 978-9971-5-0413-7. Daniel J. Curtin, et al., Rafael Bombelli's L'Algebra, 1996, https://www.people.iup.edu/gsstoudt/history/bombelli/bombelli.pdf == External links == L'Algebra, Libri I, II, III, IV e V, original Italian texts. O'Connor, John J.; Robertson, Edmund F., "Rafael Bombelli", MacTutor History of Mathematics Archive, University of St Andrews Background
|
Wikipedia:Rafael E. Núñez#0
|
Rafael E. Núñez is a professor of cognitive science at the University of California, San Diego and a proponent of embodied cognition. He co-authored Where Mathematics Comes From with George Lakoff. == External links == Academic home page Rafael E. Núñez, Eve Sweetser (2006). "With the Future Behind Them: Convergent Evidence From Aymara Language and Gesture in the Crosslinguistic Comparison of Spatial Construals of Time". (An analysis of the temporal vision in the Aymara culture.)
|
Wikipedia:Ragnar Winther#0
|
Ragnar Winther (born 4 January 1949) is a Norwegian mathematician. He took his PhD in 1977, and was appointed professor at the University of Oslo in 1991. In 2002 he became the leader of the Centre of Mathematics for Applications there. He is a member of the Norwegian Academy of Science and Letters. In 2012 he became a fellow of the American Mathematical Society. == References ==
|
Wikipedia:Raimo Hämäläinen#0
|
Raimo P. Hämäläinen (born 7 July 1948 in Helsinki, Finland): 1 is a professor emeritus at the Aalto University School of Science (Aalto SCI), Finland. Hämäläinen founded Systems Analysis laboratory at Aalto SCI in 1984. His research interests include systems intelligence, multiple-criteria decision analysis, sequential games, simulation, and energy modeling.: 4 Hämäläinen received his Doctor of Technology degree in 1977 from Helsinki University of Technology, advised by Aarne Halme and Olli Lokki. In 2004, The International Society for Multiple Criteria Decision Making awarded Professor Hämäläinen for his work on MCDM research.: 3 Hämäläinen retired on 1 August 2016. == References ==
|
Wikipedia:Rainer Burkard#0
|
Rainer Ernst Burkard (born 28 January 1943, Graz, Austria) is an Austrian mathematician. His research interests include discrete optimization, graph theory, applied discrete mathematics, and applied number theory. He earned his Ph.D. from the University of Vienna in 1967 and received his habilitation from the University of Graz in 1971. From 1973 to 1981 Rainer Burkard was a full professor of Applied Mathematics at the University of Cologne (Germany). Since 1981 he has been a full professor at the Graz University of Technology. == Positions held == 1984-1986 Vice President of GMÖOR 1986-1988 President of the Austrian Society of Operations Research 1995-1997 EURO Vice President of IFORS 1993-1996 Dean of the Faculty of Science, Graz University of Technology 1994-1998 Member of the Council of the European Consortium of Mathematics in Industry 1991-2000 Member of the Senate of the Christian Doppler Research Society 2001-2002 Vice President of EURO == Awards == Prize of the Austrian Mathematical Society in 1972 The Scientific Prize of the Society of Mathematics, Economics and Operations Research in 1991 The EURO Gold Medal 1997 Since 1998 Honorary Member of the Hungarian Academy of Sciences Since 2011 Honorary Member of the Austrian Society of Operations Research == Books == Methoden der ganzzahligen Optimierung, Springer Wien, 1972 with Ulrich Derigs: Assignment and Matching Problems: Solution Methods with FORTRAN-Programs. Lecture Notes in Economics and Mathematical Systems, Band 184, Berlin-New York: Springer 1980. Graph Algorithms in Computer Science. HyperCOSTOC Computer Science, Vol. 36, Hofbauer Publ., Wiener Neustadt, 1989. With Mauro Dell' Amico and Silvano Martello: Assignment Problems, SIAM, Philadelphia, 2009. ISBN 978-0-89871-663-4 == References ==
|
Wikipedia:Rajan Hoole#0
|
Michael Richard Ratnarajan Hoole (commonly known as Rajan Hoole) is a Sri Lankan Tamil mathematician, academic and human rights activist. He was one of the founders of University Teachers for Human Rights (UTHR), which documented human rights abuses during the Sri Lankan Civil War. == Early life and family == Hoole is the eldest son of Rev. Richard Herbert Ratnathurai Hoole and Jeevamany Somasundaram. He was educated at Chundikuli Girls' College, St. John's College, Jaffna and S. Thomas' College, Mount Lavinia. After school he joined the University of Ceylon. In 1982 he received a PhD in mathematical logic from the University of Oxford. Hoole is married to Kirupa (Kirubai) Selvadurai, a fellow academic from the University of Jaffna. He is the brother of Ratnajeevan Hoole. == Career == Hoole worked as a lecturer in the Department of Mathematics at the National University of Singapore. Hoole was amongst three hundred academics who, in 1988, formed the University Teachers for Human Rights (Jaffna) to document and report the increasing number of human rights violations in Sri Lanka's civil war. Following the assassination of Rajini Thiranagama, one of the founders of UTHR(J), in 1989, many members of UTHR(J) left the organisation. Hoole, along with fellow UTHR(J) member Kopalasingham Sritharan, left Jaffna and went into hiding, but they continued with their UTHR(J) work, documenting atrocities committed by all sides of the civil war. Hoole and Sritharan were finalists for the 2005 Civil Courage Prize but ultimately won "Certificates of Distinction in Civil Courage" and a $1,000 cash prize. In 2007 Hoole and Sritharan received the Martin Ennals Award for Human Rights Defenders. Hoole is currently a senior lecturer at the University of Jaffna's Department of Mathematics and Statistics. He was trained as a classical pianist.
== Works == The Broken Palmyra: The Tamil Crisis in Sri Lanka - an Inside Account (1988, Harvey Mudd College, California) (co-author) Sri Lanka: the Arrogance of Power : Myths, Decadence and Murder (2001, University Teachers for Human Rights (Jaffna)) == References ==
|
Wikipedia:Ralf Seppelt#0
|
Ralf Seppelt is a German mathematician, academic and author. He is a professor of Landscape Ecology and Renewable Resource Economics at Martin Luther University Halle-Wittenberg, head of the Research Unit Ecosystem of the Future and the co-head of the Department of Computational Landscape Ecology at the Helmholtz Centre for Environmental Research. He is the Founding Director of the Luxembourg Center for Socio-Environmental Systems, a research center of the University of Luxembourg. Seppelt's research has focused on optimizing resource use and land management strategies by modeling human-environment interactions, synthesizing regional studies for global insights, and developing theories for managing multifunctional landscapes. His authored works include articles published in academic journals, as well as contributions to books such as 3 Degrees More: The Impending Hot Season and How Nature Can Help Us Prevent It. == Education == Seppelt earned a diploma in Applied Mathematics from the University of Clausthal, Germany, in 1994. He then pursued a doctoral degree in Agroecology and System Analysis at the Technical University of Braunschweig, completing it in 1997. In 2004, he achieved his habilitation, and in 2011, he graduated from the Helmholtz Academy on Science Management. == Career == Seppelt began his academic career in 1994 as a researcher for the Collaborative Research Center at the Technical University of Braunschweig. Between 1997 and 2004, he served as a post-doctoral researcher and lecturer at the Institute for Geoecology at the same institution. From 2004 to 2021, he was a professor of Applied Landscape Ecology at Martin Luther University Halle-Wittenberg. Since 2022, he has been a professor of Landscape Ecology and Renewable Resource Economics at Martin Luther University Halle-Wittenberg. He is a fellow of the Stellenbosch Centre of Advanced Studies.
Seppelt served as the head of the Department of Computational Landscape Ecology at the Helmholtz Centre for Environmental Research from 2004 to 2022. Since 2022, he has led the research unit 'Ecosystem of the Future' and co-led the Department of Computational Landscape Ecology at the Helmholtz Centre for Environmental Research. He has also been a member of scientific advisory bodies, including the Leopoldina and the Intergovernmental Platform for Biodiversity and Ecosystem Services (IPBES). In March 2025, he was appointed as the Founding Director of the Luxembourg Center for Socio-Environmental Systems, a research center of the University of Luxembourg. In December 2024, Seppelt participated in the IPBES 11 negotiations on the Nexus Assessment in Windhoek, Namibia. He was a member of the University Council of Universität Hohenheim from 2012 to 2021, a member of the Commission of the Senate on Agroecosystem Research of the German Research Foundation (DFG) from 2012 to 2018, and a member of the German National Academy of Sciences Leopoldina working group for the special synthesis report on biodiversity decline in agricultural landscapes. == Research == As part of his research, Seppelt has contributed to publications, including books and articles in academic journals. === Regional studies on optimizing land use and ecosystem services === Much of Seppelt's research has focused on regional studies on optimizing land use and ecosystem services. His early research in this regard highlighted the need for consistent, integrated approaches to ecosystem services and land-use conflict research, advocating for methodological rigor, stakeholder involvement, and the use of optimization algorithms for sustainable resource management across scales. Through his research, he demonstrated that increased crop diversity enhances the stability of agricultural production, with benefits varying by region, landscape, and crop type, and highlighted the importance of spatial and temporal diversity in stabilizing food systems.
=== Global land use science === Seppelt made contributions to global change research by coordinating projects that focused on Sustainable Land Management, specifically assessing global land use dynamics and their impact on greenhouse gas emissions and ecosystem services. The project mapped global land system archetypes, offering information on land-use intensification and discussing region-specific strategies for sustainable land management in the face of environmental change. Notably, the spatial analysis of global pollination benefits identified key hotspots for biodiversity protection. As an alternative to the planetary boundary concept, he proposed investigating global limits on renewable resource production by identifying the synchronized peak-rate years of 27 global resources, demonstrating that most renewable resources had surpassed their appropriation peak, posing challenges for sustainable resource management in the Anthropocene. In his 2019 study examining the trade-offs between cropland expansion and intensification to meet rising biomass demand, he found that both strategies reduced global crop prices but harmed biodiversity, particularly in tropical regions, while economically benefiting Europe and North America. === Human societies' dependency on biodiversity === Seppelt has conducted research on the relationships between biodiversity, intact ecosystems, and the provisioning of renewable resources to assess humanity's dependence on biodiversity and intact ecosystems. In a 2016 study, he explored reconciling biodiversity conservation and agricultural production by proposing a conceptual framework that linked land use, biodiversity, and production, suggesting nonlinear relationships and offering solutions to harmonize these conflicting objectives.
His 2019 study examined how conventional land-use intensification impacts biodiversity and yield, and through a meta-analysis, it found that while intensification increased yield, it generally reduced species richness, with effects varying by system type and intensity level. === Principles of high-quality science === Seppelt has also contributed to discussions on the principles of high-quality science. He has advocated for a shift in science from a growth-oriented focus to one emphasizing quality, curiosity, discovery, and societal relevance, addressing concerns about the impact of misinformation in a 'postfactual' era. == Media coverage == Seppelt's work has garnered media attention, with mentions in Die Zeit, The Independent, and interviews with German news outlets. He also co-authored a season of Wissen-vor-8 Natur on sustainability and climate change for the German broadcaster ARD. == Bibliography == === Books === Computer-Based Environmental Management (2003) ISBN 9783527307326 3 Degrees More : The Impending Hot Season and How Nature Can Help Us Prevent It (2024) ISBN 9783031581434 Atlas of Ecosystem Services: Drivers, Risks and Societal Responses (2019) ISBN 9783319962283 === Selected articles === Seppelt, R., Dormann, C. F., Eppink, F. V., Lautenbach, S., & Schmidt, S. (2011). A quantitative review of ecosystem service studies: Approaches, shortcomings, and the road ahead. Journal of Applied Ecology, 48(3), 630–636. Lautenbach, S., Seppelt, R., Liebscher, J., & Dormann, C. F. (2012). Spatial and temporal trends of global pollination benefit. PLOS ONE, 7(4), e35954. Václavík, T., Lautenbach, S., Kuemmerle, T., & Seppelt, R. (2013). Mapping global land system archetypes. Global Environmental Change, 23(6), 1637–1647. Seppelt, R., Manceur, A. M., Liu, J., Fenichel, E. P., & Klotz, S. (2014). Synchronized peak-rate years of global resource use. Ecology and Society, 19(4), Article 50. Seppelt, R., Beckmann, M., Ceauşu, S., Cord, A.
F., Gerstner, K., Gurevitch, J., & Newbold, T. (2016). Harmonizing biodiversity conservation and productivity in the context of increasing demands on landscapes. BioScience, 66(10), 890–896. Seppelt, R., Beckmann, M., Václavík, T., & Volk, M. (2018). The art of scientific performance. Trends in Ecology & Evolution, 33(11), 805–809. Zabel, F., Delzeit, R., Schneider, J. M., Seppelt, R., Mauser, W., & Václavík, T. (2019). Global impacts of future cropland expansion and intensification on agricultural markets and biodiversity. Nature Communications, 10(1), Article 2844. Beckmann, M., Gerstner, K., Akin‐Fajiye, M., Ceaușu, S., Kambach, S., Kinlock, N. L., & Seppelt, R. (2019). Conventional land‐use intensification reduces species richness and increases production: A global meta‐analysis. Global Change Biology, 25(6), 1941–1956. Egli, L., Mehrabi, Z., & Seppelt, R. (2021). More farms, less specialized landscapes, and higher crop diversity stabilize food supplies. Environmental Research Letters, 16(5), 055015. Mehrabi, Z., Delzeit, R., Ignaciuk, A., Levers, C., Braich, G., Bajaj, K., & You, L. (2022). Research priorities for global food security under extreme events. One Earth, 5(7), 756–766. == References ==
|
Wikipedia:Ralph Gordon Stanton#0
|
Ralph Gordon Stanton (21 October 1923 – 21 April 2010) was a Canadian mathematician, teacher, scholar, and pioneer in mathematics and computing education. As a researcher, he made important contributions in the area of discrete mathematics; as an educator and administrator, he was instrumental in founding the Faculty of Mathematics at the University of Waterloo and in establishing its unofficial mascot, the pink tie. == Life and education == Stanton was born in Lambeth, Ontario, Canada on 21 October 1923. He was the eldest of four children. Stanton received his BA in Mathematics and Physics in 1944 from the University of Western Ontario. He went on to receive his MA in 1945 and PhD in 1948, both from the University of Toronto. His PhD dissertation, "On the Mathieu Group M24", was written under advisor Richard Dagobert Brauer. He received honorary Doctor of Science degrees from the University of Queensland in 1989, and from the University of Natal in 1997. He also received an honorary D. Math from the University of Waterloo in 1997. == Career == === Faculty positions === From 1946 to 1957 Stanton taught at the University of Toronto. In 1957, he moved to Kitchener-Waterloo to work at what was then Waterloo College, which was undergoing expansion, and became what is currently the University of Waterloo. At the time of his arrival he constituted the entirety of the Mathematics Department. Stanton became the university's first Dean of Graduate Studies in 1960. He turned the Department of Mathematics into the Faculty of Mathematics, which when it opened on January 1, 1967 was the first of its kind in North America. In 1967 he moved to York University to found their graduate program in Mathematics. In 1970 he moved to the University of Manitoba's Department of Computer Science, serving successively as Head, Professor, and Distinguished Professor.
=== Research === Stanton's main areas of research were in statistics and applied statistics; algebra; mathematical biology; combinatorial design theory, including pair-wise balanced designs, difference sets, covering and packing designs, and room squares; graph theory, including graph models of networks; and algorithms. == Teaching and other influences == Stanton's influence on the young University of Waterloo extended to many areas. He hired Wes Graham, who Stanton had taught as an undergraduate. Graham became one of the first professors of computer science at the university, and the first Director of its Computing Centre in 1962. Stanton was one of five members of the Academic Advisory Committee that, in 1958, urged the board of governors to buy the Schweitzer farm on the outskirts of Waterloo that today houses the main campus. He introduced computers to classroom teaching in 1960, and introduced co-op programs in applied mathematics and computer science. His interest in teaching extended to the secondary school level. He encouraged teaching of computing science and mathematics in high schools, serving as editor of two high school mathematical journals, member of Ontario provincial curriculum committees, and was actively involved in developing the Canadian Junior Mathematics Contest and the Descartes Senior Mathematics Competition, now administered by the University of Waterloo's Centre for Education in Mathematics and Computing. Stanton's gaudy neckties were the inspiration for the University of Waterloo's Faculty of Mathematics mascot, a giant pink tie that was hung by students over the Math and Computer Building when it opened in 1968. The pink tie remains the unofficial symbol of the Mathematics Faculty, with the Mathematics Society distributing more than 1000 pink ties to new students in the first week of the school year. Stanton founded and administered three not-for-profit corporations dedicated to mathematical research and communication. 
"Utilitas Mathematica Publishing" started in the 1970s and published conference proceedings in mathematics and scientific computing. He founded the Charles Babbage Research Centre (CBRC), a registered charitable organization, to promote conferences and encourage the publication of research. The CBRC has published the Canadian combinatorics journal Ars Combinatoria since its inception in 1976, and continues to publish six volumes per year. In 1990 he began his final project, the Institute of Combinatorics and its Applications (ICA). Although the institute was minimally active after Stanton's death in 2010, in March 2016 it resumed its full activities. Stanton also helped organize the first Southeastern Conference on Combinatorics, Graph Theory, and Computing. He continued as one of the organizers until at least 1991, at which point it was the largest combinatorial meeting in the world. == Awards == In 1985 he was awarded the Killam Prize in Mathematics for Natural Sciences from the Canada Council for the Arts. (As of 2017, five prizes of $100,000 Cdn are awarded annually.) == Literature collection == Stanton's collection of French and Portuguese literature was described as one of the world's largest private collections of classical Portuguese literature. He donated his collection to the Fisher Rare Book Library at the University of Toronto. His donations spanned from 1987 to his death in 2010, with the bulk being donated in the 1990s. Thanks to his generosity, the Fisher Rare Book Library now "boasts comprehensive collections of most of the significant French playwrights of the Classical period". An example of his donations is a two-volume set of the 1587 edition of Holinshed's Chronicles. == References ==
|
Wikipedia:Ralph Henstock#0
|
Ralph Henstock (2 June 1923 – 17 January 2007) was an English mathematician and author. As an integration theorist, he is notable for the Henstock–Kurzweil integral. Henstock brought the theory to a highly developed stage without ever having encountered Jaroslav Kurzweil's 1957 paper on the subject. == Early life == Henstock was born in the coal-mining village of Newstead, Nottinghamshire, the only child of mineworker and former coalminer William Henstock and Mary Ellen Henstock (née Bancroft). On the Henstock side he was descended from 17th-century Flemish immigrants called Hemstok. Because of his early academic promise it was expected that Henstock would attend the University of Nottingham, where his father and uncle had received technical education, but as it turned out he won scholarships which enabled him to study mathematics at St John's College, Cambridge from October 1941 until November 1943, when he was sent for war service to the Ministry of Supply's department of Statistical Method and Quality Control in London. This work did not satisfy him, so he enrolled at Birkbeck College, London, where he joined the weekly seminar of Professor Paul Dienes, which was then a focus for mathematical activity in London. Henstock wanted to study divergent series but Dienes prevailed upon him to get involved in the theory of integration, thereby setting him on course for his life's work. A devoted Methodist, the lasting impression he made was one of gentle sincerity and amiability. Henstock married Marjorie Jardine in 1949. Their son John was born 10 July 1952. Ralph Henstock died on 17 January 2007 after a short illness. == Work == Henstock was awarded the Cambridge B.A. in 1944 and began research for the PhD in Birkbeck College, London, under the supervision of Paul Dienes. His PhD thesis, entitled Interval Functions and their Integrals, was submitted in December 1948. His PhD examiners were Burkill and H. Kestelman.
In 1947 he returned briefly to Cambridge to complete the undergraduate mathematical studies which had been truncated by his Ministry of Supply work. Most of Henstock's work was concerned with integration. From initial studies of the Burkill and Ward integrals he formulated an integration process whereby the domain of integration is suitably partitioned for Riemann sums to approximate the integral of a function. His methods led to an integral on the real line that was very similar in construction and simplicity to the Riemann integral but which included the Lebesgue integral and, in addition, allowed non-absolute convergence. These ideas were developed from the late 1950s. Independently, Jaroslav Kurzweil developed a similar Riemann-type integral on the real line. The resulting integral is now known as the Henstock-Kurzweil integral. On the real line it is equivalent to the Denjoy-Perron integral, but has a simpler definition. In the following decades, Henstock developed extensively the distinctive features of his theory, inventing the concepts of division spaces or integration bases to demonstrate in general settings the properties and characteristics of mathematical integration. His theory provides a unified approach to non-absolute integral, as different kinds of Henstock integral, choosing an appropriate integration basis (division space, in Henstock's own terminology). It has been used in differential and integral equations, harmonic analysis, probability theory and Feynman integration. Numerous monographs and texts have appeared since 1980 and there have been several conferences devoted to the theory. It has been taught in standard courses in mathematical analysis. Henstock was author of 46 journal papers in the period 1946 to 2006. He published four books on analysis (Theory of Integration, 1963; Linear Analysis, 1967; Lectures on the Theory of Integration, 1988; and The General Theory of Integration, 1991). He wrote 171 reviews for MathSciNet. 
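The Riemann-type construction described above can be stated compactly. The following is a sketch in modern standard notation (not Henstock's own "division space" terminology): a function f on [a, b] is gauge-integrable with value A when

```latex
% Gauge (Henstock--Kurzweil) integrability of f on [a,b] with value A:
% for every eps > 0 there is a gauge, i.e. a function delta : [a,b] -> (0, infinity),
% such that every delta-fine tagged partition
%   a = x_0 < x_1 < ... < x_n = b, with tags t_i in [x_{i-1}, x_i]
%   satisfying [x_{i-1}, x_i] \subset (t_i - \delta(t_i),\, t_i + \delta(t_i)),
% has its Riemann sum within eps of A:
\Bigl|\, \sum_{i=1}^{n} f(t_i)\,(x_i - x_{i-1}) \;-\; A \,\Bigr| \;<\; \varepsilon .
```

Replacing the constant mesh of the Riemann integral by a tag-dependent gauge δ(t) is the single change that yields an integral including the Lebesgue integral and admitting non-absolute convergence, as described above.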
In 1994 he was awarded the Andy Prize of the XVIII Summer Symposium in Real Analysis. His academic career began as Assistant Lecturer, Bedford College for Women, 1947–48; then Assistant Lecturer at Birkbeck, 1948–51; Lecturer, Queen's University Belfast, 1951–56; Lecturer, Bristol University, 1956–60; Senior Lecturer and Reader, Queen's University Belfast, 1960–64; Reader, Lancaster University, 1964–70; Chair of Pure Mathematics, New University of Ulster, 1970–88; and Leverhulme Fellow 1988–91. === List of publications of Ralph Henstock === Much of Henstock's earliest work was published by the Journal of the London Mathematical Society. These were "On interval functions and their integrals" I (21, 1946) and II (23, 1948); "The efficiency of matrices for Taylor series" (22, 1947); "The efficiency of matrices for bounded sequences" (25, 1950); "The efficiency of convergence factors for functions of a continuous real variable" (30, 1955); "A new description of the Ward integral" (35 1960); and "The integrability of functions of interval functions" (39 1964). His works, published in Proceedings of the London Mathematical Society, were "Density integration" (53, 1951); "On the measure of sum sets (I) The theorems of Brunn, Minkowski, and Lusternik, (with A.M. 
Macbeath)" ([3] 3, 1953); "Linear functions with domain a real countably infinite dimensional space" ([3] 5, 1955); "Linear and bilinear functions with domain contained in a real countably infinite dimensional space" ([3] 6, 1956); "The use of convergence factors in Ward integration" ([3] 10, 1960); "The equivalence of generalized forms of the Ward, variational, Denjoy-Stieltjes, and Perron-Stieltjes integrals" ([3] 10, 1960); "N-variation and N-variational integrals of set functions" ([3] 11, 1961); "Definitions of Riemann type of the variational integrals" ([3] 11, 1961); "Difference-sets and the Banach–Steinhaus theorem" ([3] 13, 1963); "Generalized integrals of vector-valued functions" ([3] 19, 1969) Additional publications: Sets of uniqueness for trigonometric series and integrals, Proceedings of the Cambridge Philosophical Society 46 (1950) 538–548. On Ward's Perron-Stieltjes integral, Canadian Journal of Mathematics 9 (1957) 96–109. The summation by convergence factors of Laplace-Stieltjes integrals outside their half plane of convergence, Mathematische Zeitschrift 67 (1957) 10–31. Theory of Integration, Butterworths, London, 1962. Tauberian theorems for integrals, Canadian Journal of Mathematics 15 (1963) 433–439. Majorants in variational integration, Canadian Journal of Mathematics 18 (1966) 49–74. A Riemann-type integral of Lebesgue power, Canadian Journal of Mathematics 20 (1968) 79–87. Linear Analysis, Butterworths, London, 1967. Integration by parts, Aequationes Mathematicae 9 (1973) 1–18. The N-variational integral and the Schwartz distributions III, Journal of the London Mathematical Society (2) 6 (1973) 693–700. Integration in product spaces, including Wiener and Feynman integration, Proceedings of the London Mathematical Society (3) 27 (1973) 317–344. Additivity and the Lebesgue limit theorems, The Greek Mathematical Society C. Carathéodory Symposium, 1973, 223–241 (Proceedings published 1974).
Integration, variation and differentiation in division spaces, Proceedings of the Royal Irish Academy, Series A (10) 78 (1978) 69–85. The variation on the real line, Proceedings of the Royal Irish Academy, Series A (1) 79 (1979) 1–10. Generalized Riemann integration and an intrinsic topology, Canadian Journal of Mathematics 32 (1980) 395–413. Division spaces, vector-valued functions and backwards martingales, Proceedings of the Royal Irish Academy, Series A (2) 80 (1980) 217–232. Density integration and Walsh functions, Bulletin of the Malaysian Mathematical Society (2) 5 (1982) 1–19. A problem in two-dimensional integration, Journal of the Australian Mathematical Society, (Series A) 35 (1983) 386–404. The Lebesgue syndrome, Real Analysis Exchange 9 (1983–84) 96–110. The reversal of power and integration, Bulletin of the Institute of Mathematics and its Applications 22 (1986) 60–61. Lectures on the Theory of Integration, World Scientific, Singapore, 1988. A short history of integration theory, South East Asian Bulletin of Mathematics 12 (1988) 75–95. Introduction to the new integrals, New integrals (Coleraine, 1988), 7–9, Lecture Notes in Mathematics, 1419, Springer-Verlag, Berlin, 1990. Integration in infinite-dimensional spaces, New integrals (Coleraine, 1988), 54–65, Lecture Notes in Mathematics, 1419, Springer-Verlag, Berlin, 1990. Stochastic and other functional integrals, Real Analysis Exchange 16 (1990/91) 460–470. The General Theory of Integration, Oxford Mathematical Monographs, Clarendon Press, Oxford, 1991. The integral over product spaces and Wiener's formula, Real Analysis Exchange 17 (1991/92) 737–744. Infinite decimals, Mathematica Japonica 38 (1993) 203–209. Measure spaces and division spaces, Real Analysis Exchange 19 (1993/94) 121–128. The construction of path integrals, Mathematica Japonica 39 (1994) 15–18. Gauge or Kurzweil-Henstock integration. Proceedings of the Prague Mathematical Conference 1996, 117–122, Icaris, Prague, 1997.
De La Vallée Poussin's contributions to integration theory, Charles-Jean de La Vallée Poussin Oeuvres Scientifiques, Volume II, Académie Royale de Belgique, Circolo Matematico di Palermo, 2001, 3–16. Partitioning infinite-dimensional spaces for generalized Riemann integration, (with P. Muldowney and V.A. Skvortsov) Bulletin of the London Mathematical Society, 38 (2006) 795–803. === Review of Henstock's work === The journal Scientiae Mathematicae Japonicae published a special commemorative issue in Henstock’s honor, January 2008. The above article is copied, with permission, from Real Analysis Exchange and from Scientiae Mathematicae Japonicae. The latter contains the following review of Henstock's work: 1. Ralph Henstock, an obituary, by P. Bullen. 2. Ralph Henstock: research summary, by E. Talvila. 3. The integral à la Henstock, by Peng Yee Lee. 4. The natural integral on the real line, by B. Thomson. 5. Ralph Henstock's influence on integration theory, by W.F. Pfeffer. 6. Henstock on random variation, by P. Muldowney. 7. Henstock integral in harmonic analysis, by V.A. Skvortsov. 8. Convergences on the Henstock-Kurzweil integral, by S. Nakanishi. == See also == Partition of an interval Integrable function == External links == The Calculus and Gauge Integrals, by Ralph Henstock Lectures on Integration, by Ralph Henstock Autobiographical notes, by Ralph Henstock == References == Muldowney, P. (1990). "About Ralph Henstock". In Bullen, P. S. (ed.). New Integrals: Proceedings of the Henstock Conference held in Coleraine, Northern Ireland, August 9–12, 1988. Lecture Notes in Mathematics. Vol. 1419. Springer-Verlag. pp. 1–6. doi:10.1007/BFb0083093. ISBN 0-387-52322-7. Muldowney, Patrick (2007). "Ralph Henstock, 1923-2007" (PDF). Real Analysis Exchange. 32 (2): v–vii. Archived from the original (PDF) on 28 September 2011. "Ralph Henstock". Scientiae Mathematicae Japonicae. 67 (1). 2008. Whole Number 247 Muldowney, Pat (2010). "Ralph Henstock, 1923–2007". Bull. 
London Math. Soc. 42 (4): 753–758. doi:10.1112/blms/bdq012.
|
Wikipedia:Ralph Lent Jeffery#0
|
Ralph Lent Jeffery (3 October 1889, Overton, Yarmouth County, Nova Scotia, Canada – 1975, Wolfville, Nova Scotia) was a Canadian mathematician working on analysis. He taught at several institutions including Acadia University, the University of Saskatchewan and Queen's University. Jeffery Hall at Queen's was named for him. In 1937 he was elected a Fellow of the Royal Society of Canada. In 1925 Jeffery exhibited a bounded function of two real variables, continuous in each variable separately, that nevertheless fails to have a double integral. In 1951 Jeffery published Theory of Functions of a Real Variable, which was noted for its coverage of integration theory. == Selected papers == 1925: "Definite integrals containing a parameter", Annals of Mathematics 26(3): 173–180 doi:10.2307/1967895 1926: "Functions of two variables for which the double integral does not exist", American Mathematical Monthly 33(3): 142–143 1931: "The uniform approximation of a sequence of integrals", American Journal of Mathematics 53(1): 61–71 1933: "Sets of k-extent in n-dimensional space", Transactions of the American Mathematical Society 35(3): 629–647 doi:10.2307/1989852 == References == O'Connor, John J.; Robertson, Edmund F., "Ralph Lent Jeffery", MacTutor History of Mathematics Archive, University of St Andrews Ralph Lent Jeffery at the Mathematics Genealogy Project
|