Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics).

Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or—in modern mathematics—purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a proof consisting of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and—in case of abstraction from nature—some basic properties that are considered true starting points of the theory under consideration.

Mathematics is essential in the natural sciences, engineering, medicine, finance, computer science, and the social sciences. Although mathematics is extensively used for modeling phenomena, the fundamental truths of mathematics are independent of any scientific experimentation. Some areas of mathematics, such as statistics and game theory, are developed in close correlation with their applications and are often grouped under applied mathematics. Other areas are developed independently from any application (and are therefore called pure mathematics) but often later find practical applications.

Historically, the concept of a proof and its associated mathematical rigour first appeared in Greek mathematics, most notably in Euclid's Elements. Since its beginning, mathematics was primarily divided into geometry and arithmetic (the manipulation of natural numbers and fractions), until the 16th and 17th centuries, when algebra and infinitesimal calculus were introduced as new fields. Since then, the interaction between mathematical innovations and scientific discoveries has led to a correlated increase in the development of both. At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method, which heralded a dramatic increase in the number of mathematical areas and their fields of application. The contemporary Mathematics Subject Classification lists more than sixty first-level areas of mathematics.

== Areas of mathematics ==

Before the Renaissance, mathematics was divided into two main areas: arithmetic, regarding the manipulation of numbers, and geometry, regarding the study of shapes. Some types of pseudoscience, such as numerology and astrology, were not then clearly distinguished from mathematics. During the Renaissance, two more areas appeared. Mathematical notation led to algebra which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, consisting of the two subfields differential calculus and integral calculus, is the study of continuous functions, which model the typically nonlinear relationships between varying quantities, as represented by variables. This division into four main areas—arithmetic, geometry, algebra, and calculus—endured until the end of the 19th century. Areas such as celestial mechanics and solid mechanics were then studied by mathematicians, but now are considered as belonging to physics.
The subject of combinatorics has been studied for much of recorded history, yet did not become a separate branch of mathematics until the seventeenth century. At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion of new areas of mathematics. The 2020 Mathematics Subject Classification contains no fewer than sixty-three first-level areas. Some of these areas correspond to the older division, as is true regarding number theory (the modern name for higher arithmetic) and geometry. Several other first-level areas have "geometry" in their names or are otherwise commonly considered part of geometry. Algebra and calculus do not appear as first-level areas but are respectively split into several first-level areas. Other first-level areas emerged during the 20th century or had not previously been considered as mathematics, such as mathematical logic and foundations.

=== Number theory ===

Number theory began with the manipulation of numbers, that is, natural numbers $(\mathbb{N})$, and later expanded to integers $(\mathbb{Z})$ and rational numbers $(\mathbb{Q})$. Number theory was once called arithmetic, but nowadays this term is mostly used for numerical calculations. Number theory dates back to ancient Babylon and probably China. Two prominent early number theorists were Euclid of ancient Greece and Diophantus of Alexandria. The modern study of number theory in its abstract form is largely attributed to Pierre de Fermat and Leonhard Euler. The field came to full fruition with the contributions of Adrien-Marie Legendre and Carl Friedrich Gauss.

Many easily stated number problems have solutions that require sophisticated methods, often from across mathematics. A prominent example is Fermat's Last Theorem. This conjecture was stated in 1637 by Pierre de Fermat, but it was proved only in 1994 by Andrew Wiles, who used tools including scheme theory from algebraic geometry, category theory, and homological algebra. Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort. Number theory includes several subareas, including analytic number theory, algebraic number theory, geometry of numbers (method oriented), Diophantine equations, and transcendence theory (problem oriented).

=== Geometry ===

Geometry is one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the needs of surveying and architecture, but has since blossomed out into many other subfields. A fundamental innovation was the ancient Greeks' introduction of the concept of proofs, which require that every assertion must be proved. For example, it is not sufficient to verify by measurement that, say, two lengths are equal; their equality must be proven via reasoning from previously accepted results (theorems) and a few basic statements. The basic statements are not subject to proof because they are self-evident (postulates), or are part of the definition of the subject of study (axioms). This principle, foundational for all mathematics, was first elaborated for geometry, and was systematized by Euclid around 300 BC in his book Elements.
The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and the three-dimensional Euclidean space. Euclidean geometry was developed without change of methods or scope until the 17th century, when René Descartes introduced what is now called Cartesian coordinates. This constituted a major change of paradigm: Instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points using their coordinates, which are numbers. Algebra (and later, calculus) can thus be used to solve geometrical problems. Geometry was split into two new subfields: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically.

Analytic geometry allows the study of curves unrelated to circles and lines. Such curves can be defined as the graphs of functions, the study of which led to differential geometry. They can also be defined as implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider Euclidean spaces of higher than three dimensions.

In the 19th century, mathematicians discovered non-Euclidean geometries, which do not follow the parallel postulate. By questioning that postulate's truth, this discovery has been viewed as joining Russell's paradox in revealing the foundational crisis of mathematics. This aspect of the crisis was solved by systematizing the axiomatic method, and adopting that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that do not change under specific transformations of the space. Today's subareas of geometry include:
* Projective geometry, introduced in the 17th century by Girard Desargues, extends Euclidean geometry by adding points at infinity at which parallel lines intersect. This simplifies many aspects of classical geometry by unifying the treatments for intersecting and parallel lines.
* Affine geometry, the study of properties relative to parallelism and independent from the concept of length.
* Differential geometry, the study of curves, surfaces, and their generalizations, which are defined using differentiable functions.
* Manifold theory, the study of shapes that are not necessarily embedded in a larger space.
* Riemannian geometry, the study of distance properties in curved spaces.
* Algebraic geometry, the study of curves, surfaces, and their generalizations, which are defined using polynomials.
* Topology, the study of properties that are kept under continuous deformations.
* Algebraic topology, the use in topology of algebraic methods, mainly homological algebra.
* Discrete geometry, the study of finite configurations in geometry.
* Convex geometry, the study of convex sets, which takes its importance from its applications in optimization.
* Complex geometry, the geometry obtained by replacing real numbers with complex numbers.

=== Algebra ===

Algebra is the art of manipulating equations and formulas. Diophantus (3rd century) and al-Khwarizmi (9th century) were the two main precursors of algebra. Diophantus solved some equations involving unknown natural numbers by deducing new relations until he obtained the solution. Al-Khwarizmi introduced systematic methods for transforming equations, such as moving a term from one side of an equation into the other side.
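For instance (an illustrative equation of our own, not one drawn from his treatise), applying this transformation to $x - 5 = 3$ restores the subtracted term to the other side, yielding $x = 3 + 5 = 8$; al-jabr in this sense is exactly the "restoring" step still used when solving equations today.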
The term algebra is derived from the Arabic word al-jabr meaning 'the reunion of broken parts' that he used for naming one of these methods in the title of his main treatise. Algebra became an area in its own right only with François Viète (1540–1603), who introduced the use of variables for representing unknown or unspecified numbers. Variables allow mathematicians to describe the operations that have to be done on the numbers represented using mathematical formulas.

Until the 19th century, algebra consisted mainly of the study of linear equations (presently linear algebra), and polynomial equations in a single unknown, which were called algebraic equations (a term still in use, although it may be ambiguous). During the 19th century, mathematicians began to use variables to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which generalizations of arithmetic operations are often valid. The concept of algebraic structure addresses this, consisting of a set whose elements are unspecified, of operations acting on the elements of the set, and rules that these operations must follow. The scope of algebra thus grew to include the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, as established by the influence and works of Emmy Noether, and popularized by van der Waerden's book Moderne Algebra. Some types of algebraic structures have useful and often fundamental properties, in many areas of mathematics. Their study became autonomous parts of algebra, and include:
* group theory
* field theory
* vector spaces, whose study is essentially the same as linear algebra
* ring theory
* commutative algebra, which is the study of commutative rings, includes the study of polynomials, and is a foundational part of algebraic geometry
* homological algebra
* Lie algebra and Lie group theory
* Boolean algebra, which is widely used for the study of the logical structure of computers

The study of types of algebraic structures as mathematical objects is the purpose of universal algebra and category theory. The latter applies to every mathematical structure (not only algebraic ones). At its origin, it was introduced, together with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.

=== Calculus and analysis ===

Calculus, formerly called infinitesimal calculus, was introduced independently and simultaneously by 17th-century mathematicians Newton and Leibniz. It is fundamentally the study of the relationship of variables that depend on each other. Calculus was expanded in the 18th century by Euler with the introduction of the concept of a function and many other results. Presently, "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts. Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers.
Analysis includes many subareas shared by other areas of mathematics. They include:
* Multivariable calculus
* Functional analysis, where variables represent varying functions
* Integration, measure theory and potential theory, all strongly related with probability theory on a continuum
* Ordinary differential equations
* Partial differential equations
* Numerical analysis, mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications

=== Discrete mathematics ===

Discrete mathematics, broadly speaking, is the study of individual, countable mathematical objects. An example is the set of all integers. Because the objects of study here are discrete, the methods of calculus and mathematical analysis do not directly apply. Algorithms—especially their implementation and computational complexity—play a major role in discrete mathematics. The four color theorem and optimal sphere packing were two major problems of discrete mathematics solved in the second half of the 20th century. The P versus NP problem, which remains open to this day, is also important for discrete mathematics, since its solution would potentially impact a large number of computationally difficult problems. Discrete mathematics includes:
* Combinatorics, the art of enumerating mathematical objects that satisfy some given constraints. Originally, these objects were elements or subsets of a given set; this has been extended to various objects, which establishes a strong link between combinatorics and other parts of discrete mathematics. For example, discrete geometry includes counting configurations of geometric shapes.
* Graph theory and hypergraphs
* Coding theory, including error-correcting codes and a part of cryptography
* Matroid theory
* Discrete geometry
* Discrete probability distributions
* Game theory (although continuous games are also studied, most common games, such as chess and poker, are discrete)
* Discrete optimization, including combinatorial optimization, integer programming, constraint programming

=== Mathematical logic and set theory ===

The two subjects of mathematical logic and set theory have belonged to mathematics since the end of the 19th century. Before this period, sets were not considered to be mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. Before Cantor's study of infinite sets, mathematicians were reluctant to consider actually infinite collections, and considered infinity to be the result of endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets but by showing that this implies different sizes of infinity, per Cantor's diagonal argument. This led to the controversy over Cantor's set theory.

In the same period, various areas of mathematics concluded that the former intuitive definitions of the basic mathematical objects were insufficient for ensuring mathematical rigour. This became the foundational crisis of mathematics. It was eventually solved in mainstream mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have. For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning.
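As a hedged illustration of how little the Peano-style definition presupposes, the following sketch (our own construction, not part of the article) builds natural numbers purely from a zero object and a successor operation, then recovers addition from them:

```python
# A minimal sketch of a Peano-style construction: numbers are built only from
# Zero and Succ (successor); addition is defined by recursion on the second
# argument. All names here are illustrative, not a standard library API.
from __future__ import annotations
from dataclasses import dataclass


class Nat:
    """Base class for Peano naturals."""


@dataclass(frozen=True)
class Zero(Nat):
    pass


@dataclass(frozen=True)
class Succ(Nat):
    pred: Nat  # each number but zero has a unique predecessor


def add(m: Nat, n: Nat) -> Nat:
    # m + 0 = m;  m + Succ(k) = Succ(m + k)
    if isinstance(n, Zero):
        return m
    assert isinstance(n, Succ)
    return Succ(add(m, n.pred))


two = Succ(Succ(Zero()))
three = Succ(two)
assert add(two, three) == Succ(Succ(three))  # 2 + 3 = 5
```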
This mathematical abstraction from reality is embodied in the modern philosophy of formalism, as founded by David Hilbert around 1910. The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature, and use their opinion—sometimes called "intuition"—to guide their study and proofs. The approach allows considering "logics" (that is, sets of allowed deducing rules), theorems, proofs, etc. as mathematical objects, and to prove theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking, that in every consistent formal system that contains the natural numbers, there are theorems that are true (that is, provable in a stronger system), but not provable inside the system.

This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by Brouwer, who promoted intuitionistic logic, which explicitly lacks the law of excluded middle. These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside other theories), proof theory, type theory, computability theory and computational complexity theory. Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, formal verification, program analysis, proof assistants and other aspects of computer science, contributed in turn to the expansion of these logical theories.

=== Statistics and other decision sciences ===

The field of statistics is a mathematical application that is employed for the collection and processing of data samples, using procedures based on mathematical methods, especially probability theory. Statisticians generate data with random sampling or randomized experiments. Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints. For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence. Because of its use of optimization, the mathematical theory of statistics overlaps with other decision sciences, such as operations research, control theory, and mathematical economics.

=== Computational mathematics ===

Computational mathematics is the study of mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretization with special focus on rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.

== History ==

=== Etymology ===

The word mathematics comes from the Ancient Greek word máthēma (μάθημα), meaning 'something learned, knowledge, mathematics', and the derived expression mathēmatikḗ tékhnē (μαθηματικὴ τέχνη), meaning 'mathematical science'.
It entered the English language during the Late Middle English period through French and Latin. Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί)—which at the time meant "learners" rather than "mathematicians" in the modern sense. The Pythagoreans were likely the first to constrain the use of the word to just the study of arithmetic and geometry. By the time of Aristotle (384–322 BC) this meaning was fully established.

In Latin and English, until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This change has resulted in several mistranslations: For example, Saint Augustine's warning that Christians should beware of mathematici, meaning "astrologers", is sometimes mistranslated as a condemnation of mathematicians.

The apparent plural form in English goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural ta mathēmatiká (τὰ μαθηματικά) and means roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, inherited from Greek. In English, the noun mathematics takes a singular verb. It is often shortened to maths or, in North America, math.

=== Ancient ===

In addition to recognizing how to count physical objects, prehistoric peoples may have also known how to count abstract quantities, like time—days, seasons, or years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy. The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appears in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time.

In the 6th century BC, Greek mathematics began to emerge as a distinct discipline, and some Ancient Greeks such as the Pythagoreans appear to have considered it a subject in its own right. Around 300 BC, Euclid organized mathematical knowledge by way of postulates and first principles, which evolved into the axiomatic method that is used in mathematics today, consisting of definition, axiom, theorem, and proof. His book, Elements, is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes (c. 287 – c. 212 BC) of Syracuse. He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus.
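Archimedes' argument amounts to summing the geometric series $1 + \tfrac{1}{4} + \tfrac{1}{16} + \cdots = \tfrac{4}{3}$: the area of a parabolic segment is $4/3$ that of a certain inscribed triangle. A quick numerical check of that limit (an illustration of the sum, not of Archimedes' method of proof):

```python
# Partial sums of 1 + 1/4 + 1/16 + ... rapidly approach the exact value 4/3
# that Archimedes established by the method of exhaustion.
partial = sum((1 / 4) ** k for k in range(30))
print(partial, 4 / 3)  # both print as 1.3333333333333333
```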
Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC), trigonometry (Hipparchus of Nicaea, 2nd century BC), and the beginnings of algebra (Diophantus, 3rd century AD). The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series.

=== Medieval and later ===

During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. The Greek and Arabic mathematical texts were in turn translated into Latin during the Middle Ages and made available in Europe.

During the early modern period, mathematics began to develop at an accelerating pace in Western Europe, with innovations that revolutionized mathematics, such as the introduction of variables and symbolic notation by François Viète (1540–1603), the introduction of logarithms by John Napier in 1614, which greatly simplified numerical calculations, especially for astronomy and marine navigation, the introduction of coordinates by René Descartes (1596–1650) for reducing geometry to algebra, and the development of calculus by Isaac Newton (1643–1727) and Gottfried Leibniz (1646–1716). Leonhard Euler (1707–1783), the most notable mathematician of the 18th century, unified these innovations into a single corpus with a standardized terminology, and completed them with the discovery and the proof of numerous theorems.

Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system—if powerful enough to describe arithmetic—will contain true propositions that cannot be proved.

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews (MR) database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."

== Symbolic notation and terminology ==

Mathematical notation is widely used in science and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way.
This notation consists of symbols used for representing operations, unspecified numbers, relations and any other mathematical objects, and then assembling them into expressions and formulas. More precisely, numbers and other mathematical objects are represented by symbols called variables, which are generally Latin or Greek letters, and often include subscripts. Operations and relations are generally represented by specific symbols or glyphs, such as + (plus), × (multiplication), ∫ (integral), = (equal), and < (less than). All these symbols are generally grouped according to specific rules to form expressions and formulas. Normally, expressions and formulas do not appear alone, but are included in sentences of ordinary language, where expressions play the role of noun phrases and formulas play the role of clauses.

Mathematics has developed a rich terminology covering a broad range of fields that study the properties of various abstract, idealized objects and how they interact. It is based on rigorous definitions that provide a standard foundation for communication. An axiom or postulate is a mathematical statement that is taken to be true without need of proof. If a mathematical statement has yet to be proven (or disproven), it is termed a conjecture. Through a series of rigorous arguments employing deductive reasoning, a statement that is proven to be true becomes a theorem. A specialized theorem that is mainly used to prove another theorem is called a lemma. A proven instance that forms part of a more general finding is termed a corollary.

Numerous technical terms used in mathematics are neologisms, such as polynomial and homeomorphism. Other technical terms are words of the common language that are used in an accurate meaning that may differ slightly from their common meaning. For example, in mathematics, "or" means "one, the other or both", while, in common language, it is either ambiguous or means "one or the other but not both" (in mathematics, the latter is called "exclusive or"). Finally, many mathematical terms are common words that are used with a completely different meaning. This may lead to sentences that are correct and true mathematical assertions, but appear to be nonsense to people who do not have the required background. For example, "every free module is flat" and "a field is always a ring".

== Relationship with sciences ==

Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model.

There is still a philosophical debate whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that, if a result or a theory is wrong, this can be proved by providing a counterexample. Similarly as in science, theories and results (theorems) are often obtained from experimentation.
In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.

=== Pure and applied mathematics ===

Until the 19th century, the development of mathematics in the West was mainly motivated by the needs of technology and science, and there was no clear distinction between pure and applied mathematics. For example, the natural numbers and arithmetic were introduced for the need of counting, and geometry was motivated by surveying, architecture and astronomy. Later, Isaac Newton introduced infinitesimal calculus for explaining the movement of the planets with his law of gravitation. Moreover, most mathematicians were also scientists, and many scientists were also mathematicians. However, a notable exception occurred with the tradition of pure mathematics in Ancient Greece. The problem of integer factorization, for example, which goes back to Euclid around 300 BC, had no practical application before its use in the RSA cryptosystem, now widely used for the security of computer networks; a toy factorization sketch appears at the end of this section.

In the 19th century, mathematicians such as Karl Weierstrass and Richard Dedekind increasingly focused their research on internal problems, that is, pure mathematics. This led to a split of mathematics into pure mathematics and applied mathematics, the latter often being considered by mathematical purists as having a lower value. However, the lines between the two are frequently blurred.

The aftermath of World War II led to a surge in the development of applied mathematics in the US and elsewhere. Many of the theories developed for applications were found interesting from the point of view of pure mathematics, and many results of pure mathematics were shown to have applications outside mathematics; in turn, the study of these applications may give new insights on the "pure theory". An example of the first case is the theory of distributions, introduced by Laurent Schwartz for validating computations done in quantum mechanics, which immediately became an important tool of (pure) mathematical analysis. An example of the second case is the decidability of the first-order theory of the real numbers, a problem of pure mathematics that Alfred Tarski solved with an algorithm that is impossible to implement because of a computational complexity that is much too high. To obtain an algorithm that can be implemented and can solve systems of polynomial equations and inequalities, George Collins introduced the cylindrical algebraic decomposition that became a fundamental tool in real algebraic geometry.

In the present day, the distinction between pure and applied mathematics is more a question of personal research aim of mathematicians than a division of mathematics into broad areas. The Mathematics Subject Classification has a section for "general applied mathematics" but does not mention "pure mathematics". However, these terms are still used in names of some university departments, such as at the Faculty of Mathematics at the University of Cambridge.
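To make the integer-factorization example concrete, here is a hedged sketch (our own illustration, not an algorithm discussed in the article) of naive trial division; RSA's security rests precisely on the fact that no known classical algorithm scales this task to the hundreds-of-digits numbers used in practice:

```python
# Naive trial division: fine for small numbers, hopeless at RSA scale.
def factorize(n: int) -> list[int]:
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factorize(3233))  # [53, 61] - a toy RSA-style modulus and its secret factors
```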
=== Unreasonable effectiveness ===

The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner. It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. Examples of unexpected applications of mathematical theories can be found in many areas of mathematics.

A notable example is the prime factorization of natural numbers that was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem. A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). It was almost 2,000 years later that Johannes Kepler discovered that the trajectories of the planets are ellipses.

In the 19th century, the internal development of geometry (pure mathematics) led to definition and study of non-Euclidean geometries, spaces of dimension higher than three and manifolds. At this time, these concepts seemed totally disconnected from the physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity that uses fundamentally these concepts. In particular, spacetime of special relativity is a non-Euclidean space of dimension four, and spacetime of general relativity is a (curved) manifold of dimension four.

A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and the baryon $\Omega^{-}$. In both cases, the equations of the theories had unexplained solutions, which led to the conjecture of the existence of an unknown particle, and the search for these particles. In both cases, these particles were discovered a few years later by specific experiments.

=== Specific sciences ===

==== Physics ====

Mathematics and physics have influenced each other over their modern history. Modern physics uses mathematics abundantly, and is also considered to be the motivation of major mathematical developments.

==== Computing ====

Computing is closely related to mathematics in several ways. Theoretical computer science is considered to be mathematical in nature. Communication technologies apply branches of mathematics that may be very old (e.g., arithmetic), especially with respect to transmission security, in cryptography and coding theory. Discrete mathematics is useful in many areas of computer science, such as complexity theory, information theory, and graph theory. In 1998, a largely computer-assisted proof of the Kepler conjecture on sphere packing was announced.

==== Biology and chemistry ====

Biology uses probability extensively in fields such as ecology or neurobiology. Most discussion of probability centers on the concept of evolutionary fitness. Ecology heavily uses modeling to simulate population dynamics, study ecosystem interactions such as those captured by the predator-prey model, measure pollution diffusion, or to assess climate change. The dynamics of a population can be modeled by coupled differential equations, such as the Lotka–Volterra equations (a minimal numerical sketch is given below). Statistical hypothesis testing is run on data from clinical trials to determine whether a new treatment works.
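As a hedged illustration (the rates and initial populations below are invented for the example, not fitted to any data), the Lotka–Volterra system $\dot{x} = \alpha x - \beta xy$, $\dot{y} = \delta xy - \gamma y$ can be explored numerically with a few lines of explicit Euler integration:

```python
# Crude Euler integration of the Lotka-Volterra predator-prey equations.
alpha, beta, gamma, delta = 1.1, 0.4, 0.4, 0.1  # illustrative rate constants
x, y = 10.0, 5.0                                # prey and predator populations
dt, steps = 0.001, 50_000
for _ in range(steps):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy
print(f"after {steps * dt:.0f} time units: prey={x:.2f}, predators={y:.2f}")
```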
Since the start of the 20th century, chemistry has used computing to model molecules in three dimensions.

==== Earth sciences ====

Structural geology and climatology use probabilistic models to predict the risk of natural catastrophes. Similarly, meteorology, oceanography, and planetology also use mathematics due to their heavy use of models.

==== Social sciences ====

Areas of mathematics used in the social sciences include probability/statistics and differential equations. These are used in linguistics, economics, sociology, and psychology.

Often the fundamental postulate of mathematical economics is that of the rational individual actor – Homo economicus (lit. 'economic man'). In this model, the individual seeks to maximize their self-interest, and always makes optimal choices using perfect information. This atomistic view of economics allows it to relatively easily mathematize its thinking, because individual calculations are transposed into mathematical calculations. Such mathematical modeling allows one to probe economic mechanisms. Some reject or criticise the concept of Homo economicus. Economists note that real people have limited information, make poor choices, and care about fairness and altruism, not just personal gain. Without mathematical modeling, it is hard to go beyond statistical observations or untestable speculation. Mathematical modeling allows economists to create structured frameworks to test hypotheses and analyze complex interactions. Models provide clarity and precision, enabling the translation of theoretical concepts into quantifiable predictions that can be tested against real-world data.

At the start of the 20th century, attempts were made to express historical movements in formulas. In 1922, Nikolai Kondratiev discerned the ~50-year-long Kondratiev cycle, which explains phases of economic growth or crisis. Towards the end of the 19th century, mathematicians extended their analysis into geopolitics. Peter Turchin developed cliodynamics in the 1990s.

Mathematization of the social sciences is not without risk. In the controversial book Fashionable Nonsense (1997), Sokal and Bricmont denounced the unfounded or abusive use of scientific terminology, particularly from mathematics or physics, in the social sciences. The study of complex systems (evolution of unemployment, business capital, demographic evolution of a population, etc.) uses mathematical knowledge. However, the choice of counting criteria, particularly for unemployment, or of models, can be subject to controversy.

== Philosophy ==

=== Reality ===

The connection between mathematics and material reality has led to philosophical debates since at least the time of Pythagoras. The ancient philosopher Plato argued that abstractions that reflect material reality have themselves a reality that exists outside space and time. As a result, the philosophical view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism. Independently of their possible philosophical opinions, modern mathematicians may be generally considered as Platonists, since they think of and talk of their objects of study as real objects. Armand Borel summarized this view of mathematical reality as follows, and provided quotations of G. H. Hardy, Charles Hermite, Henri Poincaré and Albert Einstein that support his views.
Something becomes objective (as opposed to "subjective") as soon as we are convinced that it exists in the minds of others in the same form as it does in ours and that we can think about it and discuss it together. Because the language of mathematics is so precise, it is ideally suited to defining concepts for which such a consensus exists. In my opinion, that is sufficient to provide us with a feeling of an objective existence, of a reality of mathematics ...

Nevertheless, Platonism and the concurrent views on abstraction do not explain the unreasonable effectiveness of mathematics (as Platonism assumes mathematics exists independently, but does not explain why it matches reality).

=== Proposed definitions ===

There is no general consensus about the definition of mathematics or its epistemological status—that is, its place inside knowledge. A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable. There is not even consensus on whether mathematics is an art or a science. Some just say, "mathematics is what mathematicians do".

A common approach is to define mathematics by its object of study. Aristotle defined mathematics as "the science of quantity" and this definition prevailed until the 18th century. However, Aristotle also noted that a focus on quantity alone may not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property "separable in thought" from real instances set mathematics apart. In the 19th century, when mathematicians began to address topics—such as infinite sets—which have no clear-cut relation to physical reality, a variety of new definitions were given. With the large number of new areas of mathematics that have appeared since the beginning of the 20th century, defining mathematics by its object of study has become increasingly difficult. For example, in lieu of a definition, Saunders Mac Lane in Mathematics, form and function summarizes the basics of several areas of mathematics, emphasizing their inter-connectedness, and observes:

the development of Mathematics provides a tightly connected network of formal rules, concepts, and systems. Nodes of this network are closely bound to procedures useful in human activities and to questions arising in science. The transition from activities to the formal Mathematical systems is guided by a variety of general insights and ideas.

Another approach for defining mathematics is to use its methods. For example, an area of study is often qualified as mathematics as soon as one can prove theorems—assertions whose validity relies on a proof, that is, a purely logical deduction.

=== Rigor ===

Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of inference rules, without any use of empirical evidence and intuition. Rigorous reasoning is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere. Despite mathematics' concision, rigorous proofs can require hundreds of pages to express, such as the 255-page Feit–Thompson theorem. The emergence of computer-assisted proofs has allowed proof lengths to further expand. The result of this trend is a philosophy of the quasi-empiricist proof that cannot be considered infallible, but has a probability attached to it.
The concept of rigor in mathematics dates back to ancient Greece, whose society encouraged logical, deductive reasoning. However, this rigorous approach would tend to discourage exploration of new approaches, such as irrational numbers and concepts of infinity. The method of demonstrating rigorous proof was enhanced in the sixteenth century through the use of symbolic notation. In the 18th century, social transition led to mathematicians earning their keep through teaching, which led to more careful thinking about the underlying concepts of mathematics. This produced more rigorous approaches, while transitioning from geometric methods to algebraic and then arithmetic proofs.

At the end of the 19th century, it appeared that the definitions of the basic concepts of mathematics were not accurate enough for avoiding paradoxes (non-Euclidean geometries and Weierstrass function) and contradictions (Russell's paradox). This was solved by the inclusion of axioms with the apodictic inference rules of mathematical theories; that is, by the re-introduction of the axiomatic method pioneered by the ancient Greeks. It follows that "rigor" is no longer a relevant concept in mathematics, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor comes into play is in the socialized aspects of a proof, wherein it may be demonstrably refuted by other mathematicians. After a proof has been accepted for many years or even decades, it can then be considered as reliable. Nevertheless, the concept of "rigor" may remain useful for teaching beginners what a mathematical proof is.

== Training and practice ==

=== Education ===

Mathematics has a remarkable ability to cross cultural boundaries and time periods. As a human activity, the practice of mathematics has a social side, which includes education, careers, recognition, popularization, and so on. In education, mathematics is a core part of the curriculum and forms an important element of the STEM academic disciplines. Prominent careers for professional mathematicians include mathematics teacher or professor, statistician, actuary, financial analyst, economist, accountant, commodity trader, or computer consultant.

Archaeological evidence shows that instruction in mathematics occurred as early as the second millennium BCE in ancient Babylonia. Comparable evidence has been unearthed for scribal mathematics training in the ancient Near East and then for the Greco-Roman world starting around 300 BCE. The oldest known mathematics textbook is the Rhind papyrus, dated from c. 1650 BCE in Egypt. Due to a scarcity of books, mathematical teachings in ancient India were communicated using memorized oral tradition since the Vedic period (c. 1500 – c. 500 BCE). In Imperial China during the Tang dynasty (618–907 CE), a mathematics curriculum was adopted for the civil service exam to join the state bureaucracy. Following the Dark Ages, mathematics education in Europe was provided by religious schools as part of the Quadrivium. Formal instruction in pedagogy began with Jesuit schools in the 16th and 17th century. Most mathematical curricula remained at a basic and practical level until the nineteenth century, when mathematics education began to flourish in France and Germany. The oldest journal addressing instruction in mathematics was L'Enseignement Mathématique, which began publication in 1899.
The Western advancements in science and technology led to the establishment of centralized education systems in many nation-states, with mathematics as a core component—initially for its military applications. While the content of courses varies, in the present day nearly all countries teach mathematics to students for significant amounts of time. During school, mathematical capabilities and positive expectations have a strong association with career interest in the field. Extrinsic factors such as motivational feedback from teachers, parents, and peer groups can influence the level of interest in mathematics. Some students studying mathematics may develop an apprehension or fear about their performance in the subject. This is known as mathematical anxiety, and is considered the most prominent of the disorders impacting academic performance. Mathematical anxiety can develop due to various factors such as parental and teacher attitudes, social stereotypes, and personal traits. Help to counteract the anxiety can come from changes in instructional approaches, by interactions with parents and teachers, and by tailored treatments for the individual.

=== Psychology (aesthetic, creativity and intuition) ===

The validity of a mathematical theorem relies only on the rigor of its proof, which could theoretically be done automatically by a computer program. This does not mean that there is no place for creativity in a mathematical work. On the contrary, many important mathematical results (theorems) are solutions of problems that other mathematicians failed to solve, and inventing a way of solving them can be a fundamental part of the solving process. An extreme example is Apéry's theorem: Roger Apéry provided only the ideas for a proof, and the formal proof was given only several months later by three other mathematicians.

Creativity and rigor are not the only psychological aspects of the activity of mathematicians. Some mathematicians can see their activity as a game, more specifically as solving puzzles. This aspect of mathematical activity is emphasized in recreational mathematics. Mathematicians can find an aesthetic value to mathematics. Like beauty, it is hard to define; it is commonly related to elegance, which involves qualities like simplicity, symmetry, completeness, and generality. G. H. Hardy in A Mathematician's Apology expressed the belief that aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He also identified other criteria such as significance, unexpectedness, and inevitability, which contribute to mathematical aesthetics. Paul Erdős expressed this sentiment more ironically by speaking of "The Book", a supposed divine collection of the most beautiful proofs. The 1998 book Proofs from THE BOOK, inspired by Erdős, is a collection of particularly succinct and revelatory mathematical arguments. Some examples of particularly elegant results included are Euclid's proof that there are infinitely many prime numbers (sketched below) and the fast Fourier transform for harmonic analysis.

Some feel that to consider mathematics a science is to downplay its artistry and history in the seven traditional liberal arts. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematical results are created (as in art) or discovered (as in science). The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
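Euclid's proof mentioned above has an executable flavor: from any finite list of primes one can always manufacture a prime not on the list. A hedged sketch (our own toy rendering of the idea, not Euclid's wording):

```python
# From primes p1..pk, the number p1*...*pk + 1 leaves remainder 1 when divided
# by each pi, so its smallest prime factor is a prime missing from the list.
from math import prod

def prime_not_in(primes: list[int]) -> int:
    n = prod(primes) + 1
    p = 2
    while n % p:          # find the smallest prime factor of n
        p += 1
    return p

print(prime_not_in([2, 3, 5, 7]))  # 211, which is not among 2, 3, 5, 7
```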
== Cultural impact ==

=== Artistic expression ===

Notes that sound good together to a Western ear are sounds whose fundamental frequencies of vibration are in simple ratios. For example, an octave doubles the frequency and a perfect fifth multiplies it by $3/2$.

Humans, as well as some other animals, find symmetric patterns to be more beautiful. Mathematically, the symmetries of an object form a group known as the symmetry group. For example, the group underlying mirror symmetry is the cyclic group of two elements, $\mathbb{Z}/2\mathbb{Z}$. A Rorschach test is a figure invariant by this symmetry, as are butterfly and animal bodies more generally (at least on the surface). Waves on the sea surface possess translation symmetry: moving one's viewpoint by the distance between wave crests does not change one's view of the sea. Fractals possess self-similarity.

=== Popularization ===

Popular mathematics is the act of presenting mathematics without technical terms. Presenting mathematics may be hard since the general public suffers from mathematical anxiety and mathematical objects are highly abstract. However, popular mathematics writing can overcome this by using applications or cultural links. Despite this, mathematics is rarely the topic of popularization in printed or televised media.

=== Awards and prize problems ===

The most prestigious award in mathematics is the Fields Medal, established in 1936 and awarded every four years (except around World War II) to up to four individuals. It is considered the mathematical equivalent of the Nobel Prize. Other prestigious mathematics awards include:
* The Abel Prize, instituted in 2002 and first awarded in 2003
* The Chern Medal for lifetime achievement, introduced in 2009 and first awarded in 2010
* The AMS Leroy P. Steele Prize, awarded since 1970
* The Wolf Prize in Mathematics, also for lifetime achievement, instituted in 1978

A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list has achieved great celebrity among mathematicians, and at least thirteen of the problems (depending on how some are interpreted) have been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. Only one of them, the Riemann hypothesis, duplicates one of Hilbert's problems. A solution to any of these problems carries a 1 million dollar reward. To date, only one of these problems, the Poincaré conjecture, has been solved, by the Russian mathematician Grigori Perelman.
Algebra is a branch of mathematics that deals with abstract systems, known as algebraic structures, and the manipulation of expressions within those systems. It is a generalization of arithmetic that introduces variables and algebraic operations other than the standard arithmetic operations, such as addition and multiplication.

Elementary algebra is the main form of algebra taught in schools. It examines mathematical statements using variables for unspecified values and seeks to determine for which values the statements are true. To do so, it uses different methods of transforming equations to isolate variables. Linear algebra is a closely related field that investigates linear equations and combinations of them called systems of linear equations. It provides methods to find the values that solve all equations in the system at the same time, and to study the set of these solutions.

Abstract algebra studies algebraic structures, which consist of a set of mathematical objects together with one or several operations defined on that set. It is a generalization of elementary and linear algebra since it allows mathematical objects other than numbers and non-arithmetic operations. It distinguishes between different types of algebraic structures, such as groups, rings, and fields, based on the number of operations they use and the laws they follow, called axioms. Universal algebra and category theory provide general frameworks to investigate abstract patterns that characterize different classes of algebraic structures.

Algebraic methods were first studied in the ancient period to solve specific problems in fields like geometry. Subsequent mathematicians examined general techniques to solve equations independent of their specific applications. They described equations and their solutions using words and abbreviations until the 16th and 17th centuries, when a rigorous symbolic formalism was developed. In the mid-19th century, the scope of algebra broadened beyond a theory of equations to cover diverse types of algebraic operations and structures. Algebra is relevant to many branches of mathematics, such as geometry, topology, number theory, and calculus, and other fields of inquiry, like logic and the empirical sciences.

== Definition and etymology ==

Algebra is the branch of mathematics that studies algebraic structures and the operations they use. An algebraic structure is a non-empty set of mathematical objects, such as the integers, together with algebraic operations defined on that set, like addition and multiplication. Algebra explores the laws, general characteristics, and types of algebraic structures. Within certain algebraic structures, it examines the use of variables in equations and how to manipulate these equations.

Algebra is often understood as a generalization of arithmetic. Arithmetic studies operations like addition, subtraction, multiplication, and division, in a particular domain of numbers, such as the real numbers. Elementary algebra constitutes the first level of abstraction. Like arithmetic, it restricts itself to specific types of numbers and operations. It generalizes these operations by allowing indefinite quantities in the form of variables in addition to numbers. A higher level of abstraction is found in abstract algebra, which is not limited to a particular domain and examines algebraic structures such as groups and rings. It extends beyond typical arithmetic operations by also covering other types of operations.
Universal algebra is still more abstract in that it is not interested in specific algebraic structures but investigates the characteristics of algebraic structures in general. The term "algebra" is sometimes used in a more narrow sense to refer only to elementary algebra or only to abstract algebra. When used as a countable noun, an algebra is a specific type of algebraic structure that involves a vector space equipped with a certain type of binary operation. Depending on the context, "algebra" can also refer to other algebraic structures, like a Lie algebra or an associative algebra. The word algebra comes from the Arabic term الجبر (al-jabr), which originally referred to the surgical treatment of bonesetting. In the 9th century, the term received a mathematical meaning when the Persian mathematician Muhammad ibn Musa al-Khwarizmi employed it to describe a method of solving equations and used it in the title of a treatise on algebra, al-Kitāb al-Mukhtaṣar fī Ḥisāb al-Jabr wal-Muqābalah [The Compendious Book on Calculation by Completion and Balancing] which was translated into Latin as Liber Algebrae et Almucabola. The word entered the English language in the 16th century from Italian, Spanish, and medieval Latin. Initially, its meaning was restricted to the theory of equations, that is, to the art of manipulating polynomial equations in view of solving them. This changed in the 19th century when the scope of algebra broadened to cover the study of diverse types of algebraic operations and structures together with their underlying axioms, the laws they follow. == Major branches == === Elementary algebra === Elementary algebra, also called school algebra, college algebra, and classical algebra, is the oldest and most basic form of algebra. It is a generalization of arithmetic that relies on variables and examines how mathematical statements may be transformed. Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithm. For example, the operation of addition combines two numbers, called the addends, into a third number, called the sum, as in 2 + 5 = 7 {\displaystyle 2+5=7} . Elementary algebra relies on the same operations while allowing variables in addition to regular numbers. Variables are symbols for unspecified or unknown quantities. They make it possible to state relationships for which one does not know the exact values and to express general laws that are true, independent of which numbers are used. For example, the equation 2 × 3 = 3 × 2 {\displaystyle 2\times 3=3\times 2} belongs to arithmetic and expresses an equality only for these specific numbers. By replacing the numbers with variables, it is possible to express a general law that applies to any possible combination of numbers, like the commutative property of multiplication, which is expressed in the equation a × b = b × a {\displaystyle a\times b=b\times a} . Algebraic expressions are formed by using arithmetic operations to combine variables and numbers. By convention, the lowercase letters ⁠ x {\displaystyle x} ⁠, ⁠ y {\displaystyle y} ⁠, and z {\displaystyle z} represent variables. In some cases, subscripts are added to distinguish variables, as in ⁠ x 1 {\displaystyle x_{1}} ⁠, ⁠ x 2 {\displaystyle x_{2}} ⁠, and ⁠ x 3 {\displaystyle x_{3}} ⁠. 
The lowercase letters ⁠ a {\displaystyle a} ⁠, ⁠ b {\displaystyle b} ⁠, and c {\displaystyle c} are usually used for constants and coefficients. The expression 5 x + 3 {\displaystyle 5x+3} is an algebraic expression created by multiplying the number 5 with the variable x {\displaystyle x} and adding the number 3 to the result. Other examples of algebraic expressions are 32 x y z {\displaystyle 32xyz} and 64 x 1 2 + 7 x 2 − c {\displaystyle 64x_{1}{}^{2}+7x_{2}-c} . Some algebraic expressions take the form of statements that relate two expressions to one another. An equation is a statement formed by comparing two expressions, saying that they are equal. This can be expressed using the equals sign (⁠ = {\displaystyle =} ⁠), as in ⁠ 5 x 2 + 6 x = 3 y + 4 {\displaystyle 5x^{2}+6x=3y+4} ⁠. Inequations involve a different type of comparison, saying that the two sides are different. This can be expressed using symbols such as the less-than sign (⁠ < {\displaystyle <} ⁠), the greater-than sign (⁠ > {\displaystyle >} ⁠), and the inequality sign (⁠ ≠ {\displaystyle \neq } ⁠). Unlike other expressions, statements can be true or false, and their truth value usually depends on the values of the variables. For example, the statement x 2 = 4 {\displaystyle x^{2}=4} is true if x {\displaystyle x} is either 2 or −2 and false otherwise. Equations with variables can be divided into identity equations and conditional equations. Identity equations are true for all values that can be assigned to the variables, such as the equation ⁠ 2 x + 5 x = 7 x {\displaystyle 2x+5x=7x} ⁠. Conditional equations are only true for some values. For example, the equation x + 4 = 9 {\displaystyle x+4=9} is only true if x {\displaystyle x} is 5. The main goal of elementary algebra is to determine the values for which a statement is true. This can be achieved by transforming and manipulating statements according to certain rules. A key principle guiding this process is that whatever operation is applied to one side of an equation also needs to be done to the other side. For example, if one subtracts 5 from the left side of an equation one also needs to subtract 5 from the right side to balance both sides. The goal of these steps is usually to isolate the variable one is interested in on one side, a process known as solving the equation for that variable. For example, the equation x − 7 = 4 {\displaystyle x-7=4} can be solved for x {\displaystyle x} by adding 7 to both sides, which isolates x {\displaystyle x} on the left side and results in the equation ⁠ x = 11 {\displaystyle x=11} ⁠. There are many other techniques used to solve equations. Simplification is employed to replace a complicated expression with an equivalent simpler one. For example, the expression 7 x − 3 x {\displaystyle 7x-3x} can be replaced with the expression 4 x {\displaystyle 4x} since 7 x − 3 x = ( 7 − 3 ) x = 4 x {\displaystyle 7x-3x=(7-3)x=4x} by the distributive property. For statements with several variables, substitution is a common technique to replace one variable with an equivalent expression that does not use this variable. For example, if one knows that y = 3 x {\displaystyle y=3x} then one can simplify the expression 7 x y {\displaystyle 7xy} to arrive at ⁠ 21 x 2 {\displaystyle 21x^{2}} ⁠. In a similar way, if one knows the value of one variable one may be able to use it to determine the value of other variables. Algebraic equations can be interpreted geometrically to describe spatial figures in the form of a graph. 
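Before turning to the geometric view of equations, the solving, simplification, and substitution techniques just described can be illustrated mechanically. The following is a minimal sketch in Python using the third-party sympy library; the use of sympy is an assumption for illustration, as the text prescribes no software.

    # Illustrative sketch of the techniques above, assuming sympy is installed.
    from sympy import symbols, solve, simplify

    x, y = symbols("x y")

    # Solving x - 7 = 4 for x (isolating the variable by adding 7 to both sides):
    print(solve(x - 7 - 4, x))        # [11]

    # Simplification: 7x - 3x collapses to 4x by the distributive property.
    print(simplify(7*x - 3*x))        # 4*x

    # Substitution: with y = 3x, the expression 7xy becomes 21x**2.
    print((7*x*y).subs(y, 3*x))       # 21*x**2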
To interpret an equation geometrically, the different variables in the equation are understood as coordinates and the values that solve the equation are interpreted as points of a graph. For example, if x is set to zero in the equation y = 0.5x − 1, then y must be −1 for the equation to be true. This means that the (x, y)-pair (0, −1) is part of the graph of the equation. The (x, y)-pair (0, 7), by contrast, does not solve the equation and is therefore not part of the graph. The graph encompasses the totality of (x, y)-pairs that solve the equation.
==== Polynomials ====
A polynomial is an expression consisting of one or more terms that are added or subtracted from each other, like x⁴ + 3xy² + 5x³ − 1. Each term is either a constant, a variable, or a product of a constant and variables. Each variable can be raised to a positive integer power. A monomial is a polynomial with one term, while two- and three-term polynomials are called binomials and trinomials. The degree of a polynomial is the maximal value (among its terms) of the sum of the exponents of the variables (4 in the above example). Polynomials of degree one are called linear polynomials. Linear algebra studies systems of linear polynomials. A polynomial is said to be univariate or multivariate, depending on whether it uses one or more variables.
Factorization is a method used to simplify polynomials, making it easier to analyze them and determine the values for which they evaluate to zero. Factorization consists of rewriting a polynomial as a product of several factors. For example, the polynomial x² − 3x − 10 can be factorized as (x + 2)(x − 5). The polynomial as a whole is zero if and only if one of its factors is zero, i.e., if x is either −2 or 5.
Before the 19th century, much of algebra was devoted to polynomial equations, that is, equations obtained by equating a polynomial to zero. The first attempts at solving polynomial equations were to express the solutions in terms of nth roots. The solutions of a second-degree polynomial equation of the form ax² + bx + c = 0 are given by the quadratic formula x = (−b ± √(b² − 4ac)) / 2a. Solutions for the degrees 3 and 4 are given by the cubic and quartic formulas. There are no general solutions for higher degrees, as proven in the 19th century by the Abel–Ruffini theorem. Even when general solutions do not exist, approximate solutions can be found by numerical tools like the Newton–Raphson method.
The fundamental theorem of algebra asserts that every univariate polynomial equation of positive degree with real or complex coefficients has at least one complex solution. Consequently, every polynomial of positive degree can be factorized into linear polynomials. This theorem was proved at the beginning of the 19th century, but this does not close the problem, since the theorem does not provide any way of computing the solutions.
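The factorization and quadratic-formula machinery above can be checked mechanically. A minimal illustrative sketch, again assuming the sympy library, with the quadratic formula itself evaluated in plain floating-point arithmetic:

    from sympy import symbols, factor, solve

    x = symbols("x")

    # Factorization: x**2 - 3*x - 10 == (x + 2)*(x - 5), so its zeros are -2 and 5.
    print(factor(x**2 - 3*x - 10))     # (x - 5)*(x + 2)
    print(solve(x**2 - 3*x - 10, x))   # [-2, 5]

    # The quadratic formula for a*x**2 + b*x + c = 0, applied to the same polynomial:
    a, b, c = 1, -3, -10
    disc = b**2 - 4*a*c                # the discriminant b**2 - 4ac
    print((-b + disc**0.5) / (2*a),    # 5.0
          (-b - disc**0.5) / (2*a))    # -2.0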
=== Linear algebra ===
Linear algebra starts with the study of systems of linear equations. An equation is linear if it can be expressed in the form a₁x₁ + a₂x₂ + ... + aₙxₙ = b, where a₁, a₂, ..., aₙ and b are constants. Examples are x₁ − 7x₂ + 3x₃ = 0 and (1/4)x − y = 4. A system of linear equations is a set of linear equations for which one is interested in common solutions. Matrices are rectangular arrays of values that were originally introduced to provide a compact notation for systems of linear equations. For example, the system of equations
9x₁ + 3x₂ − 13x₃ = 0
2.3x₁ + 7x₃ = 9
−5x₁ − 17x₂ = −3
can be written as AX = B, where A, X, and B are the matrices
A = [[9, 3, −13], [2.3, 0, 7], [−5, −17, 0]], X = [x₁, x₂, x₃]ᵀ, B = [0, 9, −3]ᵀ,
with X and B understood as column vectors. Under some conditions on the number of rows and columns, matrices can be added, multiplied, and sometimes inverted. All methods for solving linear systems may be expressed as matrix manipulations using these operations. For example, solving the above system consists of computing an inverse matrix A⁻¹ such that A⁻¹A = I, where I is the identity matrix. Then, multiplying both members of the above matrix equation on the left by A⁻¹, one gets the solution of the system of linear equations as X = A⁻¹B.
Methods of solving systems of linear equations range from the introductory, like substitution and elimination, to more advanced techniques using matrices, such as Cramer's rule, Gaussian elimination, and LU decomposition. Some systems of equations are inconsistent, meaning that no solutions exist because the equations contradict each other. Consistent systems have either one unique solution or an infinite number of solutions.
The study of vector spaces and linear maps forms a large part of linear algebra. A vector space is an algebraic structure formed by a set with an addition that makes it an abelian group and a scalar multiplication that is compatible with addition (see vector space for details). A linear map is a function between vector spaces that is compatible with addition and scalar multiplication. In the case of finite-dimensional vector spaces, vectors and linear maps can be represented by matrices. It follows that the theories of matrices and finite-dimensional vector spaces are essentially the same. In particular, vector spaces provide a third way of expressing and manipulating systems of linear equations. From this perspective, a matrix is a representation of a linear map: if one chooses a particular basis to describe the vectors being transformed, then the entries in the matrix give the results of applying the linear map to the basis vectors.
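As a concrete illustration, the three-equation system given earlier can be solved numerically. The sketch below assumes the NumPy library (an assumption; the text prescribes no software); in practice such libraries solve AX = B by LU decomposition with pivoting rather than by forming A⁻¹ explicitly.

    import numpy as np

    # Coefficient matrix A and right-hand side B of the system given above.
    A = np.array([[9.0,   3.0, -13.0],
                  [2.3,   0.0,   7.0],
                  [-5.0, -17.0,  0.0]])
    B = np.array([0.0, 9.0, -3.0])

    # Solve A X = B; internally this uses an LU factorization, not A**-1.
    X = np.linalg.solve(A, B)
    print(X)                        # the values of x1, x2, x3
    print(np.allclose(A @ X, B))    # True: the solution satisfies all three equations

Systems of equations can be interpreted as geometric figures. For systems with two variables, each equation represents a line in two-dimensional space.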
The point where the two lines intersect is the solution of the full system because this is the only point that solves both the first and the second equation. For inconsistent systems, the two lines run parallel, meaning that there is no solution since they never intersect. If two equations are not independent then they describe the same line, meaning that every solution of one equation is also a solution of the other equation. These relations make it possible to seek solutions graphically by plotting the equations and determining where they intersect. The same principles also apply to systems of equations with more variables, with the difference being that the equations do not describe lines but higher dimensional figures. For instance, equations with three variables correspond to planes in three-dimensional space, and the points where all planes intersect solve the system of equations. === Abstract algebra === Abstract algebra, also called modern algebra, is the study of algebraic structures. An algebraic structure is a framework for understanding operations on mathematical objects, like the addition of numbers. While elementary algebra and linear algebra work within the confines of particular algebraic structures, abstract algebra takes a more general approach that compares how algebraic structures differ from each other and what types of algebraic structures there are, such as groups, rings, and fields. The key difference between these types of algebraic structures lies in the number of operations they use and the laws they obey. In mathematics education, abstract algebra refers to an advanced undergraduate course that mathematics majors take after completing courses in linear algebra. On a formal level, an algebraic structure is a set of mathematical objects, called the underlying set, together with one or several operations. Abstract algebra is primarily interested in binary operations, which take any two objects from the underlying set as inputs and map them to another object from this set as output. For example, the algebraic structure ⟨ N , + ⟩ {\displaystyle \langle \mathbb {N} ,+\rangle } has the natural numbers (⁠ N {\displaystyle \mathbb {N} } ⁠) as the underlying set and addition (⁠ + {\displaystyle +} ⁠) as its binary operation. The underlying set can contain mathematical objects other than numbers, and the operations are not restricted to regular arithmetic operations. For instance, the underlying set of the symmetry group of a geometric object is made up of geometric transformations, such as rotations, under which the object remains unchanged. Its binary operation is function composition, which takes two transformations as input and has the transformation resulting from applying the first transformation followed by the second as its output. ==== Group theory ==== Abstract algebra classifies algebraic structures based on the laws or axioms that its operations obey and the number of operations it uses. One of the most basic types is a group, which has one operation and requires that this operation is associative and has an identity element and inverse elements. An operation is associative if the order of several applications does not matter, i.e., if ( a ∘ b ) ∘ c {\displaystyle (a\circ b)\circ c} is the same as a ∘ ( b ∘ c ) {\displaystyle a\circ (b\circ c)} for all elements. An operation has an identity element or a neutral element if one element e exists that does not change the value of any other element, i.e., if ⁠ a ∘ e = e ∘ a = a {\displaystyle a\circ e=e\circ a=a} ⁠. 
An operation has inverse elements if for any element a there exists a reciprocal element a⁻¹ that undoes a. If an element is combined with its inverse, the result is the neutral element e, expressed formally as a ∘ a⁻¹ = a⁻¹ ∘ a = e. Every algebraic structure that fulfills these requirements is a group. For example, ⟨Z, +⟩ is a group formed by the set of integers together with the operation of addition. The neutral element is 0 and the inverse element of any number a is −a. The natural numbers with addition, by contrast, do not form a group: the additive inverse of a positive number is negative and is therefore not a natural number. Group theory examines the nature of groups, with basic theorems such as the fundamental theorem of finite abelian groups and the Feit–Thompson theorem. The latter was a key early step in one of the most important mathematical achievements of the 20th century: the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
==== Ring theory and field theory ====
A ring is an algebraic structure with two operations that work similarly to the addition and multiplication of numbers and are named and generally denoted similarly. A ring is a commutative group under addition: the addition of the ring is associative, commutative, and has an identity element and inverse elements. The multiplication is associative and distributive with respect to addition; that is, a(b + c) = ab + ac and (b + c)a = ba + ca. Moreover, multiplication has an identity element, generally denoted 1. Multiplication need not be commutative; if it is commutative, one has a commutative ring. The ring of integers (Z) is one of the simplest commutative rings.
A field is a commutative ring such that 1 ≠ 0 and each nonzero element has a multiplicative inverse. The ring of integers does not form a field because it lacks multiplicative inverses. For example, the multiplicative inverse of 7 is 1/7, which is not an integer. The rational numbers, the real numbers, and the complex numbers each form a field with the operations of addition and multiplication.
Ring theory is the study of rings, exploring concepts such as subrings, quotient rings, polynomial rings, and ideals, as well as theorems such as Hilbert's basis theorem. Field theory is concerned with fields, examining field extensions, algebraic closures, and finite fields. Galois theory explores the relation between field theory and group theory, relying on the fundamental theorem of Galois theory.
==== Theories of interrelations among structures ====
Besides groups, rings, and fields, there are many other algebraic structures studied by algebra. They include magmas, semigroups, monoids, abelian groups, commutative rings, modules, lattices, vector spaces, algebras over a field, and associative and non-associative algebras. They differ from each other regarding the types of objects they describe and the requirements that their operations fulfill.
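Requirements like the group axioms above can be verified mechanically on small finite examples. The following sketch is purely illustrative (the helper name is_group and the use of Python are assumptions, not part of the standard treatment); it tests the axioms by brute force for the integers modulo 5, a finite cousin of ⟨Z, +⟩.

    def is_group(elements, op):
        # Closure and associativity: (a∘b)∘c == a∘(b∘c) for all elements.
        for a in elements:
            for b in elements:
                if op(a, b) not in elements:
                    return False
                for c in elements:
                    if op(op(a, b), c) != op(a, op(b, c)):
                        return False
        # Identity element e with a∘e == e∘a == a for every a.
        identities = [e for e in elements
                      if all(op(a, e) == a and op(e, a) == a for a in elements)]
        if not identities:
            return False
        e = identities[0]
        # Inverse elements: every a needs a b with a∘b == b∘a == e.
        return all(any(op(a, b) == e and op(b, a) == e for b in elements)
                   for a in elements)

    Z5 = set(range(5))
    print(is_group(Z5, lambda a, b: (a + b) % 5))   # True: addition mod 5 is a group
    print(is_group(Z5, lambda a, b: (a * b) % 5))   # False: 0 has no multiplicative inverse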
Many are related to each other in that a basic structure can be turned into a more specialized structure by adding constraints. For example, a magma becomes a semigroup if its operation is associative. Homomorphisms are tools to examine structural features by comparing two algebraic structures. A homomorphism is a function from the underlying set of one algebraic structure to the underlying set of another algebraic structure that preserves certain structural characteristics. If the two algebraic structures use binary operations and have the form ⟨ A , ∘ ⟩ {\displaystyle \langle A,\circ \rangle } and ⟨ B , ⋆ ⟩ {\displaystyle \langle B,\star \rangle } then the function h : A → B {\displaystyle h:A\to B} is a homomorphism if it fulfills the following requirement: ⁠ h ( x ∘ y ) = h ( x ) ⋆ h ( y ) {\displaystyle h(x\circ y)=h(x)\star h(y)} ⁠. The existence of a homomorphism reveals that the operation ⋆ {\displaystyle \star } in the second algebraic structure plays the same role as the operation ∘ {\displaystyle \circ } does in the first algebraic structure. Isomorphisms are a special type of homomorphism that indicates a high degree of similarity between two algebraic structures. An isomorphism is a bijective homomorphism, meaning that it establishes a one-to-one relationship between the elements of the two algebraic structures. This implies that every element of the first algebraic structure is mapped to one unique element in the second structure without any unmapped elements in the second structure. Another tool of comparison is the relation between an algebraic structure and its subalgebra. The algebraic structure and its subalgebra use the same operations, which follow the same axioms. The only difference is that the underlying set of the subalgebra is a subset of the underlying set of the algebraic structure. All operations in the subalgebra are required to be closed in its underlying set, meaning that they only produce elements that belong to this set. For example, the set of even integers together with addition is a subalgebra of the full set of integers together with addition. This is the case because the sum of two even numbers is again an even number. But the set of odd integers together with addition is not a subalgebra because it is not closed: adding two odd numbers produces an even number, which is not part of the chosen subset. Universal algebra is the study of algebraic structures in general. As part of its general perspective, it is not concerned with the specific elements that make up the underlying sets and considers operations with more than two inputs, such as ternary operations. It provides a framework for investigating what structural features different algebraic structures have in common. One of those structural features concerns the identities that are true in different algebraic structures. In this context, an identity is a universal equation or an equation that is true for all elements of the underlying set. For example, commutativity is a universal equation that states that a ∘ b {\displaystyle a\circ b} is identical to b ∘ a {\displaystyle b\circ a} for all elements. A variety is a class of all algebraic structures that satisfy certain identities. For example, if two algebraic structures satisfy commutativity then they are both part of the corresponding variety. Category theory examines how mathematical objects are related to each other using the concept of categories. 
A category is a collection of objects together with a collection of morphisms or "arrows" between those objects. These two collections must satisfy certain conditions. For example, morphisms can be joined, or composed: if there exists a morphism from object a {\displaystyle a} to object ⁠ b {\displaystyle b} ⁠, and another morphism from object b {\displaystyle b} to object ⁠ c {\displaystyle c} ⁠, then there must also exist one from object a {\displaystyle a} to object ⁠ c {\displaystyle c} ⁠. Composition of morphisms is required to be associative, and there must be an "identity morphism" for every object. Categories are widely used in contemporary mathematics since they provide a unifying framework to describe and analyze many fundamental mathematical concepts. For example, sets can be described with the category of sets, and any group can be regarded as the morphisms of a category with just one object. == History == The origin of algebra lies in attempts to solve mathematical problems involving arithmetic calculations and unknown quantities. These developments happened in the ancient period in Babylonia, Egypt, Greece, China, and India. One of the earliest documents on algebraic problems is the Rhind Mathematical Papyrus from ancient Egypt, which was written around 1650 BCE. It discusses solutions to linear equations, as expressed in problems like "A quantity; its fourth is added to it. It becomes fifteen. What is the quantity?" Babylonian clay tablets from around the same time explain methods to solve linear and quadratic polynomial equations, such as the method of completing the square. Many of these insights found their way to the ancient Greeks. Starting in the 6th century BCE, their main interest was geometry rather than algebra, but they employed algebraic methods to solve geometric problems. For example, they studied geometric figures while taking their lengths and areas as unknown quantities to be determined, as exemplified in Pythagoras' formulation of the difference of two squares method and later in Euclid's Elements. In the 3rd century CE, Diophantus provided a detailed treatment of how to solve algebraic equations in a series of books called Arithmetica. He was the first to experiment with symbolic notation to express polynomials. Diophantus's work influenced Arab development of algebra with many of his methods reflected in the concepts and techniques used in medieval Arabic algebra. In ancient China, The Nine Chapters on the Mathematical Art, a book composed over the period spanning from the 10th century BCE to the 2nd century CE, explored various techniques for solving algebraic equations, including the use of matrix-like constructs. There is no unanimity of opinion as to whether these early developments are part of algebra or only precursors. They offered solutions to algebraic problems but did not conceive them in an abstract and general manner, focusing instead on specific cases and applications. This changed with the Persian mathematician al-Khwarizmi, who published his The Compendious Book on Calculation by Completion and Balancing in 825 CE. It presents the first detailed treatment of general methods that can be used to manipulate linear and quadratic equations by "reducing" and "balancing" both sides. Other influential contributions to algebra came from the Arab mathematician Thābit ibn Qurra also in the 9th century and the Persian mathematician Omar Khayyam in the 11th and 12th centuries. 
In India, Brahmagupta investigated how to solve quadratic equations and systems of equations with several variables in the 7th century CE. Among his innovations were the use of zero and negative numbers in algebraic equations. The Indian mathematicians Mahāvīra in the 9th century and Bhāskara II in the 12th century further refined Brahmagupta's methods and concepts. In 1247, the Chinese mathematician Qin Jiushao wrote the Mathematical Treatise in Nine Sections, which includes an algorithm for the numerical evaluation of polynomials, including polynomials of higher degrees. The Italian mathematician Fibonacci brought al-Khwarizmi's ideas and techniques to Europe in books including his Liber Abaci. In 1545, the Italian polymath Gerolamo Cardano published his book Ars Magna, which covered many topics in algebra, discussed imaginary numbers, and was the first to present general methods for solving cubic and quartic equations. In the 16th and 17th centuries, the French mathematicians François Viète and René Descartes introduced letters and symbols to denote variables and operations, making it possible to express equations in a concise and abstract manner. Their predecessors had relied on verbal descriptions of problems and solutions. Some historians see this development as a key turning point in the history of algebra and consider what came before it as the prehistory of algebra because it lacked the abstract nature based on symbolic manipulation. In the 17th and 18th centuries, many attempts were made to find general solutions to polynomials of degree five and higher. All of them failed. At the end of the 18th century, the German mathematician Carl Friedrich Gauss proved the fundamental theorem of algebra, which describes the existence of zeros of polynomials of any degree without providing a general solution. At the beginning of the 19th century, the Italian mathematician Paolo Ruffini and the Norwegian mathematician Niels Henrik Abel were able to show that no general solution exists for polynomials of degree five and higher. In response to and shortly after their findings, the French mathematician Évariste Galois developed what later came to be known as Galois theory, which offered a more in-depth analysis of the solutions of polynomials while also laying the foundation of group theory. Mathematicians soon realized the relevance of group theory to other fields and applied it to disciplines like geometry and number theory. Starting in the mid-19th century, interest in algebra shifted from the study of polynomials associated with elementary algebra towards a more general inquiry into algebraic structures, marking the emergence of abstract algebra. This approach explored the axiomatic basis of arbitrary algebraic operations. The invention of new algebraic systems based on different operations and elements accompanied this development, such as Boolean algebra, vector algebra, and matrix algebra. Influential early developments in abstract algebra were made by the German mathematicians David Hilbert, Ernst Steinitz, and Emmy Noether as well as the Austrian mathematician Emil Artin. They researched different forms of algebraic structures and categorized them based on their underlying axioms into types, like groups, rings, and fields. The idea of the even more general approach associated with universal algebra was conceived by the English mathematician Alfred North Whitehead in his 1898 book A Treatise on Universal Algebra.
Starting in the 1930s, the American mathematician Garrett Birkhoff expanded these ideas and developed many of the foundational concepts of this field. The invention of universal algebra led to the emergence of various new areas focused on the algebraization of mathematics—that is, the application of algebraic methods to other branches of mathematics. Topological algebra arose in the early 20th century, studying algebraic structures such as topological groups and Lie groups. In the 1940s and 50s, homological algebra emerged, employing algebraic techniques to study homology. Around the same time, category theory was developed and has since played a key role in the foundations of mathematics. Other developments were the formulation of model theory and the study of free algebras. == Applications == The influence of algebra is wide-reaching, both within mathematics and in its applications to other fields. The algebraization of mathematics is the process of applying algebraic methods and principles to other branches of mathematics, such as geometry, topology, number theory, and calculus. It happens by employing symbols in the form of variables to express mathematical insights on a more general level, allowing mathematicians to develop formal models describing how objects interact and relate to each other. One application, found in geometry, is the use of algebraic statements to describe geometric figures. For example, the equation y = 3 x − 7 {\displaystyle y=3x-7} describes a line in two-dimensional space while the equation x 2 + y 2 + z 2 = 1 {\displaystyle x^{2}+y^{2}+z^{2}=1} corresponds to a sphere in three-dimensional space. Of special interest to algebraic geometry are algebraic varieties, which are solutions to systems of polynomial equations that can be used to describe more complex geometric figures. Algebraic reasoning can also solve geometric problems. For example, one can determine whether and where the line described by y = x + 1 {\displaystyle y=x+1} intersects with the circle described by x 2 + y 2 = 25 {\displaystyle x^{2}+y^{2}=25} by solving the system of equations made up of these two equations. Topology studies the properties of geometric figures or topological spaces that are preserved under operations of continuous deformation. Algebraic topology relies on algebraic theories such as group theory to classify topological spaces. For example, homotopy groups classify topological spaces based on the existence of loops or holes in them. Number theory is concerned with the properties of and relations between integers. Algebraic number theory applies algebraic methods and principles to this field of inquiry. Examples are the use of algebraic expressions to describe general laws, like Fermat's Last Theorem, and of algebraic structures to analyze the behavior of numbers, such as the ring of integers. The related field of combinatorics uses algebraic techniques to solve problems related to counting, arrangement, and combination of discrete objects. An example in algebraic combinatorics is the application of group theory to analyze graphs and symmetries. The insights of algebra are also relevant to calculus, which uses mathematical expressions to examine rates of change and accumulation. It relies on algebra, for instance, to understand how these expressions can be transformed and what role variables play in them. 
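The line-and-circle example above can be carried out mechanically. A minimal illustrative sketch, assuming the Python library sympy (the text prescribes no particular software): solving the system of the two equations yields exactly the intersection points.

    from sympy import symbols, solve

    x, y = symbols("x y")

    # Where does the line y = x + 1 meet the circle x**2 + y**2 = 25?
    solutions = solve([y - (x + 1), x**2 + y**2 - 25], [x, y])
    print(solutions)   # [(-4, -3), (3, 4)]: the line crosses the circle twice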
Algebraic logic employs the methods of algebra to describe and analyze the structures and patterns that underlie logical reasoning, exploring both the relevant mathematical structures themselves and their application to concrete problems of logic. It includes the study of Boolean algebra to describe propositional logic as well as the formulation and analysis of algebraic structures corresponding to more complex systems of logic. Algebraic methods are also commonly employed in other areas, like the natural sciences. For example, they are used to express scientific laws and solve equations in physics, chemistry, and biology. Similar applications are found in fields like economics, geography, engineering (including electronics and robotics), and computer science to express relationships, solve problems, and model systems. Linear algebra plays a central role in artificial intelligence and machine learning, for instance, by enabling the efficient processing and analysis of large datasets. Various fields rely on algebraic structures investigated by abstract algebra. For example, physical sciences like crystallography and quantum mechanics make extensive use of group theory, which is also employed to study puzzles such as Sudoku and Rubik's cubes, and origami. Both coding theory and cryptology rely on abstract algebra to solve problems associated with data transmission, like avoiding the effects of noise and ensuring data security. == Education == Algebra education mostly focuses on elementary algebra, which is one of the reasons why elementary algebra is also called school algebra. It is usually not introduced until secondary education since it requires mastery of the fundamentals of arithmetic while posing new cognitive challenges associated with abstract reasoning and generalization. It aims to familiarize students with the formal side of mathematics by helping them understand mathematical symbolism, for example, how variables can be used to represent unknown quantities. An additional difficulty for students lies in the fact that, unlike arithmetic calculations, algebraic expressions are often difficult to solve directly. Instead, students need to learn how to transform them according to certain laws, often to determine an unknown quantity. Some tools to introduce students to the abstract side of algebra rely on concrete models and visualizations of equations, including geometric analogies, manipulatives including sticks or cups, and "function machines" representing equations as flow diagrams. One method uses balance scales as a pictorial approach to help students grasp basic problems of algebra. The mass of some objects on the scale is unknown and represents variables. Solving an equation corresponds to adding and removing objects on both sides in such a way that the sides stay in balance until the only object remaining on one side is the object of unknown mass. Word problems are another tool to show how algebra is applied to real-life situations. For example, students may be presented with a situation in which Naomi's brother has twice as many apples as Naomi. Given that both together have twelve apples, students are then asked to find an algebraic equation that describes this situation (⁠ 2 x + x = 12 {\displaystyle 2x+x=12} ⁠) and to determine how many apples Naomi has (⁠ x = 4 {\displaystyle x=4} ⁠). At the university level, mathematics students encounter advanced algebra topics from linear and abstract algebra. 
Initial undergraduate courses in linear algebra focus on matrices, vector spaces, and linear maps. Upon completing them, students are usually introduced to abstract algebra, where they learn about algebraic structures like groups, rings, and fields, as well as the relations between them. The curriculum typically also covers specific instances of algebraic structures, such as the systems of rational numbers, the real numbers, and the polynomials.
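As a small worked check of the word problem from the education discussion above (Naomi's apples), again assuming the sympy library for illustration:

    from sympy import symbols, solve

    x = symbols("x")                # the number of apples Naomi has

    # Naomi's brother has twice as many apples; together they have twelve.
    print(solve(2*x + x - 12, x))   # [4]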
Wikipedia/algebra
In mathematics, an algebraic expression is an expression built up from constants (usually, algebraic numbers), variables, and the basic algebraic operations: addition (+), subtraction (−), multiplication (×), division (÷), whole-number powers, and roots (fractional powers). For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: √((1 − x²)/(1 + x²)).
An algebraic equation is an equation involving polynomials, for which algebraic expressions may be solutions.
If the constants are restricted to numbers, any algebraic expression may be called an arithmetic expression. However, algebraic expressions can also be used on more abstract objects, as in abstract algebra. If the constants are restricted to integers, the set of numbers that can be described with an algebraic expression are called algebraic numbers. By contrast, transcendental numbers like π and e are not algebraic, since they are not derived from integer constants and algebraic operations. Usually, π is constructed as a geometric relationship, and the definition of e requires an infinite number of algebraic operations. More generally, expressions which are algebraically independent from their constants and/or variables are called transcendental.
== Conventions ==
=== Variables ===
By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y, and z) are used to represent variables. They are usually written in italics.
=== Exponents ===
By convention, terms with the highest power (exponent) are written on the left; for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise, when the exponent (power) is one (e.g. 3x¹ is written 3x), and, when the exponent is zero, the result is always 1 (e.g. 3x⁰ is written 3, since x⁰ is always 1).
== In roots of polynomials ==
The roots of a polynomial expression of degree n, or equivalently the solutions of a polynomial equation, can always be written as algebraic expressions if n < 5 (see quadratic formula, cubic function, and quartic equation). Such a solution of an equation is called an algebraic solution. But the Abel–Ruffini theorem states that algebraic solutions do not exist for all such equations (just for some of them) if n ≥ 5.
== Rational expressions ==
Given two polynomials P(x) and Q(x), their quotient is called a rational expression or simply a rational fraction. A rational expression P(x)/Q(x) is called proper if deg P(x) < deg Q(x), and improper otherwise.
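This degree comparison is easy to automate. A minimal illustrative sketch, assuming the Python library sympy (the helper name is_proper is an assumption introduced here, not standard terminology):

    from sympy import symbols, degree

    x = symbols("x")

    def is_proper(P, Q):
        # A rational expression P/Q is proper when deg P < deg Q.
        return degree(P, x) < degree(Q, x)

    print(is_proper(x + 1, x**2 + 3))   # True: degree 1 over degree 2
    print(is_proper(x**2, x + 1))       # False: an improper fraction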
For example, the fraction 2x/(x² − 1) is proper, and the fractions (x³ + x² + 1)/(x² − 5x + 6) and (x² − x + 1)/(5x² + 3) are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has
(x³ + x² + 1)/(x² − 5x + 6) = (x + 6) + (24x − 35)/(x² − 5x + 6),
where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example,
2x/(x² − 1) = 1/(x − 1) + 1/(x + 1).
Here, the two terms on the right are called partial fractions.
=== Irrational fraction ===
An irrational fraction is one that contains the variable under a fractional exponent. An example of an irrational fraction is
(x^(1/2) − (1/3)a) / (x^(1/3) − x^(1/2)).
The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple is 6, hence one can substitute x = z⁶ to obtain
(z³ − (1/3)a) / (z² − z³).
== Algebraic and other mathematical expressions ==
Algebraic expressions may be compared with several other types of mathematical expressions by the type of elements they may contain, according to common but not universal conventions. A rational algebraic expression (or rational expression) is an algebraic expression that can be written as a quotient of polynomials, such as x² + 4x + 4. An irrational algebraic expression is one that is not rational, such as √x + 4.
== See also ==
Algebraic function
Analytical expression
Closed-form expression
Expression (mathematics)
Precalculus
Term (logic)
== External links ==
Weisstein, Eric W. "Algebraic Expression". MathWorld.
Wikipedia/Algebraic_expression
A solution in radicals or algebraic solution is an expression of a solution of a polynomial equation that is algebraic, that is, relies only on addition, subtraction, multiplication, division, raising to integer powers, and extraction of nth roots (square roots, cube roots, etc.). A well-known example is the quadratic formula
x = (−b ± √(b² − 4ac)) / 2a,
which expresses the solutions of the quadratic equation ax² + bx + c = 0. There exist algebraic solutions for cubic equations and quartic equations, which are more complicated than the quadratic formula. The Abel–Ruffini theorem and, more generally, Galois theory state that some quintic equations, such as x⁵ − x + 1 = 0, do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation x¹⁰ = 2 can be solved as x = ±2^(1/10). The eight other solutions are nonreal complex numbers, which are also algebraic and have the form x = ±r·2^(1/10), where r is a fifth root of unity, which can be expressed with two nested square roots. See also Quintic function § Other solvable quintics for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result.
== See also ==
Radical symbol
Solvable quintics
Solvable sextics
Solvable septics
Wikipedia/Solution_in_radicals
Multilinear algebra is the study of functions with multiple vector-valued arguments, with the functions being linear maps with respect to each argument. It involves concepts such as matrices, tensors, multivectors, systems of linear equations, higher-dimensional spaces, determinants, inner and outer products, and dual spaces. It is a mathematical tool used in engineering, machine learning, physics, and mathematics.
== Origin ==
While many theoretical concepts and applications involve single vectors, mathematicians such as Hermann Grassmann considered structures involving pairs, triplets, and multivectors that generalize vectors. With multiple combinational possibilities, the space of multivectors expands to 2^n dimensions, where n is the dimension of the relevant vector space. The determinant can be formulated abstractly using the structures of multilinear algebra. Multilinear algebra appears in the study of the mechanical response of materials to stress and strain, involving various moduli of elasticity. The term "tensor" describes elements within the multilinear space due to its added structure. Despite Grassmann's early work in 1844 with his Ausdehnungslehre, which was also republished in 1862, the subject was initially not widely understood, as even ordinary linear algebra posed many challenges at the time. The concepts of multilinear algebra find applications in certain studies of multivariate calculus and manifolds, particularly concerning the Jacobian matrix. Infinitesimal differentials encountered in single-variable calculus are transformed into differential forms in multivariate calculus, and their manipulation is carried out using exterior algebra. Following Grassmann, developments in multilinear algebra were made by Victor Schlegel in 1872 with the publication of the first part of his System der Raumlehre and by Elwin Bruno Christoffel. Notably, significant advancements came through the work of Gregorio Ricci-Curbastro and Tullio Levi-Civita, particularly in the form of absolute differential calculus within multilinear algebra. Marcel Grossmann and Michele Besso introduced this form to Albert Einstein, and in 1915, Einstein's publication on general relativity, explaining the precession of Mercury's perihelion, established multilinear algebra and tensors as important mathematical tools in physics. In 1958, Nicolas Bourbaki included a chapter on multilinear algebra titled "Algèbre Multilinéaire" in his series Éléments de mathématique, specifically within the algebra book. The chapter covers topics such as bilinear functions, the tensor product of two modules, and the properties of tensor products.
== Applications ==
Multilinear algebra concepts find applications in various areas, including the engineering, machine-learning, and physics settings mentioned above.
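As a small illustration of the multilinearity at the heart of the subject, the following sketch builds an outer product of two vectors and checks linearity in one argument. It assumes the NumPy library, which the text does not mention; the variable names are illustrative.

    import numpy as np

    # Outer product of two vectors: the simplest bilinear construction.
    u = np.array([1.0, 2.0])
    v = np.array([3.0, 4.0, 5.0])
    T = np.einsum("i,j->ij", u, v)   # a rank-1 tensor with shape (2, 3)
    print(T)

    # Linearity in the first argument: (a*u + b*w) (x) v == a*(u (x) v) + b*(w (x) v).
    w = np.array([0.5, -1.0])
    a, b = 2.0, -3.0
    lhs = np.einsum("i,j->ij", a*u + b*w, v)
    rhs = a*T + b*np.einsum("i,j->ij", w, v)
    print(np.allclose(lhs, rhs))     # True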
Wikipedia/Multilinear_algebra
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system. Type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are: Typed λ-calculus of Alonzo Church Intuitionistic type theory of Per Martin-Löf Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions. == History == Type theory was created to avoid paradoxes in naive set theory and formal logic, such as Russell's paradox which demonstrates that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem. By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility, both of which appeared in Whitehead and Russell's Principia Mathematica published in 1910, 1912, and 1913. This system avoided contradictions suggested in Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type, thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo-Fraenkel set theory. Type theory is particularly popular in conjunction with Alonzo Church's lambda calculus. One notable early example of type theory is Church's simply typed lambda calculus. Church's theory of types helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics and it was referred to as a higher-order logic. In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Rocq (previously known as Coq), Lean, and other computer proof assistants. Type theory is an active area of research, one direction being the development of homotopy type theory. == Applications == === Mathematical foundations === The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf specifically developed intuitionistic type theory to encode all mathematics to serve as a new foundation for mathematics. There is ongoing research into mathematical foundations using homotopy type theory. Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS). Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy). === Proof assistants === Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers. 
Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages:
LF is used by Twelf, often to define other type theories;
many type theories which fall under higher-order logic are used by the HOL family of provers and PVS;
computational type theory is used by NuPRL;
the calculus of constructions and its derivatives are used by Rocq (previously known as Coq), Matita, and Lean;
UTT (Luo's Unified Theory of dependent Types) is used by Agda, which is both a programming language and a proof assistant.
Many type theories are supported by LEGO and Isabelle. Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory.
=== Programming languages ===
Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory. A prime example is Agda, a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system. The programming language ML was developed for manipulating type theories (see LCF) and its own type system was heavily influenced by them.
=== Linguistics ===
Type theory is also widely used in formal theories of semantics of natural languages, especially Montague grammar and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types (noun, verb, etc.) of words. The most common construction takes the basic types e and t for individuals and truth-values, respectively, and defines the set of types recursively as follows: if a and b are types, then so is ⟨a, b⟩; nothing except the basic types, and what can be constructed from them by means of the previous clause, are types. A complex type ⟨a, b⟩ is the type of functions from entities of type a to entities of type b. Thus one has types like ⟨e, t⟩, which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type ⟨⟨e, t⟩, t⟩ is a function from sets of entities to truth-values, i.e. a (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural-language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981). Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems.
=== Social sciences ===
Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types.
== Logic ==
A type theory is a mathematical logic, which is to say it is a collection of rules of inference that result in judgments. Most logics have judgments asserting "The proposition φ is true" or "The formula φ is a well-formed formula". A type theory has judgments that define types and assign them to a collection of formal objects, known as terms.
A term and its type are often written together as term : type.
=== Terms ===
A term in logic is recursively defined as a constant symbol, a variable, or a function application, where a term is applied to another term. Constant symbols could include the natural number 0, the Boolean value true, and functions such as the successor function S and the conditional operator if. Thus some terms could be 0, (S 0), (S (S 0)), and (if true 0 (S 0)).
=== Judgments ===
Most type theories have four judgments:
"T is a type"
"t is a term of type T"
"Type T₁ is equal to type T₂"
"Terms t₁ and t₂, both of type T, are equal"
Judgments may follow from assumptions. For example, one might say "assuming x is a term of type bool and y is a term of type nat, it follows that (if x y y) is a term of type nat". Such judgments are formally written with the turnstile symbol ⊢:
x : bool, y : nat ⊢ (if x y y) : nat
If there are no assumptions, there will be nothing to the left of the turnstile:
⊢ S : nat → nat
The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as Γ and Δ, are common choices to represent some or all of the assumptions. The four judgments are then usually written as Γ ⊢ T type, Γ ⊢ t : T, Γ ⊢ T₁ = T₂, and Γ ⊢ t₁ = t₂ : T. Some textbooks use a triple equal sign ≡ to stress that this is judgmental equality and thus an extrinsic notion of equality. The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term.
=== Rules of Inference ===
A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen-style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line. For example, the following inference rule states a substitution rule for judgmental equality: from Γ ⊢ t : T₁ and Δ ⊢ T₁ = T₂, one may conclude Γ, Δ ⊢ t : T₂. The rules are syntactic and work by rewriting. The metavariables Γ, Δ, t, T₁, and T₂ may actually consist of complex terms and types that contain many function applications, not just single symbols. To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on.
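These judgments can be made concrete in a toy implementation. The sketch below is illustrative only (the helper name typecheck and the encoding of terms as Python tuples are assumptions introduced here); it assigns types to the small term language above and reproduces the judgment x : bool, y : nat ⊢ (if x y y) : nat.

    def typecheck(term, context=()):
        ctx = dict(context)               # the context: assumptions "variable : type"
        if term == "0":
            return "nat"
        if term in ("true", "false"):
            return "bool"
        if term in ctx:                   # a variable typed by an assumption
            return ctx[term]
        if isinstance(term, tuple):
            head = term[0]
            if head == "S":               # S : nat -> nat
                assert typecheck(term[1], context) == "nat"
                return "nat"
            if head == "if":              # the two branches must share one type
                assert typecheck(term[1], context) == "bool"
                t = typecheck(term[2], context)
                assert typecheck(term[3], context) == t
                return t
        raise TypeError(f"no rule assigns a type to {term!r}")

    # x : bool, y : nat |- (if x y y) : nat
    print(typecheck(("if", "x", "y", "y"), (("x", "bool"), ("y", "nat"))))  # nat
    print(typecheck(("S", ("S", "0"))))                                     # nat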
The applied rules form a proof tree, where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term 0 {\displaystyle 0} of type n a t {\displaystyle {\mathsf {nat}}} , one would write the following. ⊢ 0 : n a t {\displaystyle {\begin{array}{c}\hline \vdash 0:{\mathsf {nat}}\\\end{array}}} ==== Type inhabitation ==== Generally, the desired conclusion of a proof in type theory is one of type inhabitation. The decision problem of type inhabitation (abbreviated by ∃ t . Γ ⊢ t : τ ? {\displaystyle \exists t.\Gamma \vdash t:\tau ?} ) is: Given a context Γ {\displaystyle \Gamma } and a type τ {\displaystyle \tau } , decide whether there exists a term t {\displaystyle t} that can be assigned the type τ {\displaystyle \tau } in the type environment Γ {\displaystyle \Gamma } . Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types. A type theory usually has several rules, including ones to: create a judgment (known as a context in this case) add an assumption to the context (context weakening) rearrange the assumptions use an assumption to create a variable define reflexivity, symmetry and transitivity for judgmental equality define substitution for application of lambda terms list all the interactions of equality, such as substitution define a hierarchy of type universes assert the existence of new types Also, for each type introduced "by rule", there are 4 different kinds of rules. "type formation" rules say how to create the type. "term introduction" rules define the canonical terms and constructor functions, like "pair" and "S". "term elimination" rules define the other functions like "first", "second", and "R". "computation" rules specify how computation is performed with the type-specific functions. For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book, or read Martin-Löf's Intuitionistic Type Theory. == Connections to foundations == The logical framework of a type theory bears a resemblance to intuitionistic, or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic. Additionally, connections can be made to category theory and computer programs. === Intuitionistic logic === When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic, which is to say it does not have the law of excluded middle or double negation elimination. Under this intuitionistic interpretation, there are common types that act as the logical operators: ⊤ {\displaystyle \top } plays the role of "true", ⊥ {\displaystyle \bot } of "false", the product type A × B {\displaystyle A\times B} of conjunction, the sum type A + B {\displaystyle A+B} of disjunction, the function type A → B {\displaystyle A\to B} of implication, and A → ⊥ {\displaystyle A\to \bot } of negation. Because the law of excluded middle does not hold, there is no term of type Π A . A + ( A → ⊥ ) {\displaystyle \Pi A.A+(A\to \bot )} . Likewise, double negation does not hold, so there is no term of type Π A . ( ( A → ⊥ ) → ⊥ ) → A {\displaystyle \Pi A.((A\to \bot )\to \bot )\to A} . It is possible to include the law of excluded middle and double negation into a type theory, by rule or assumption.
However, terms may then fail to compute down to canonical terms, and this interferes with the ability to determine whether two terms are judgmentally equal to each other. ==== Constructive mathematics ==== Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics. Constructive mathematics requires that, when proving "there exists an x {\displaystyle x} with property P ( x ) {\displaystyle P(x)} ", one must construct a particular x {\displaystyle x} and a proof that it has property P {\displaystyle P} . In type theory, existence is accomplished using the dependent sum type, and its proof requires a term of that type. An example of a non-constructive proof is proof by contradiction. The first step is assuming that x {\displaystyle x} does not exist and refuting it by contradiction. The conclusion from that step is "it is not the case that x {\displaystyle x} does not exist". The last step is, by double negation, concluding that x {\displaystyle x} exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that x {\displaystyle x} exists. Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants. It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity. === Curry–Howard correspondence === The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "A → {\displaystyle \to } B", resembles a function from type "A" to type "B". For a variety of logics, the rules are similar to expressions in a programming language's types. The similarity goes further, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs". The opposition of terms and types can also be viewed as one of implementation and specification. By program synthesis, (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information. ==== Type inference ==== Many programs that work with type theory (e.g., interactive theorem provers) also do type inference. It lets them select the rules that the user intends, with fewer actions by the user. === Research areas === ==== Category theory ==== Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way: cartesian closed categories correspond to the typed λ-calculus (Lambek, 1970); C-monoids (categories with products and exponentials and one non-terminal object) correspond to the untyped λ-calculus (observed independently by Lambek and Dana Scott around 1980); locally cartesian closed categories correspond to Martin-Löf type theories (Seely, 1984).
The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance. ==== Homotopy type theory ==== Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization. == Definitions == === Terms and types === ==== Atomic terms ==== The most basic types are called atoms, and a term whose type is an atom is known as an atomic term. Common atomic terms included in type theories are natural numbers, often notated with the type n a t {\displaystyle {\mathsf {nat}}} , Boolean logic values ( t r u e {\displaystyle \mathrm {true} } / f a l s e {\displaystyle \mathrm {false} } ), notated with the type b o o l {\displaystyle {\mathsf {bool}}} , and formal variables, whose type may vary. For example, the following may be atomic terms. 42 : n a t {\displaystyle 42:{\mathsf {nat}}} t r u e : b o o l {\displaystyle \mathrm {true} :{\mathsf {bool}}} x : n a t {\displaystyle x:{\mathsf {nat}}} y : b o o l {\displaystyle y:{\mathsf {bool}}} ==== Function terms ==== In addition to atomic terms, most modern type theories also allow for functions. Function types introduce an arrow symbol, and are defined inductively: If σ {\displaystyle \sigma } and τ {\displaystyle \tau } are types, then the notation σ → τ {\displaystyle \sigma \to \tau } is the type of a function which takes a parameter of type σ {\displaystyle \sigma } and returns a term of type τ {\displaystyle \tau } . Types of this form are known as simple types. Some terms may be declared directly as having a simple type, such as the following term, a d d {\displaystyle \mathrm {add} } , which takes in two natural numbers in sequence and returns one natural number. a d d : n a t → ( n a t → n a t ) {\displaystyle \mathrm {add} :{\mathsf {nat}}\to ({\mathsf {nat}}\to {\mathsf {nat}})} Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that a d d {\displaystyle \mathrm {add} } is a function which takes in a natural number and returns a function of the form n a t → n a t {\displaystyle {\mathsf {nat}}\to {\mathsf {nat}}} . The parentheses clarify that a d d {\displaystyle \mathrm {add} } does not have the type ( n a t → n a t ) → n a t {\displaystyle ({\mathsf {nat}}\to {\mathsf {nat}})\to {\mathsf {nat}}} , which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative, so the parentheses may be dropped from a d d {\displaystyle \mathrm {add} } 's type. ==== Lambda terms ==== New function terms may be constructed using lambda expressions, and are called lambda terms. These terms are also defined inductively: a lambda term has the form ( λ v . t ) {\displaystyle (\lambda v.t)} , where v {\displaystyle v} is a formal variable and t {\displaystyle t} is a term, and its type is notated σ → τ {\displaystyle \sigma \to \tau } , where σ {\displaystyle \sigma } is the type of v {\displaystyle v} , and τ {\displaystyle \tau } is the type of t {\displaystyle t} . The following lambda term represents a function which doubles an input natural number. ( λ x . 
a d d x x ) : n a t → n a t {\displaystyle (\lambda x.\mathrm {add} \,x\,x):{\mathsf {nat}}\to {\mathsf {nat}}} The variable is x {\displaystyle x} and (implicit from the lambda term's type) must have type n a t {\displaystyle {\mathsf {nat}}} . The term a d d x x {\displaystyle \mathrm {add} \,x\,x} has type n a t {\displaystyle {\mathsf {nat}}} , which is seen by applying the function application inference rule twice. Thus, the lambda term has type n a t → n a t {\displaystyle {\mathsf {nat}}\to {\mathsf {nat}}} , which means it is a function taking a natural number as an argument and returning a natural number. A lambda term is often referred to as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages. === Inference Rules === ==== Function application ==== The power of type theories is in specifying how terms may be combined by way of inference rules. Type theories which have functions also have the inference rule of function application: if t {\displaystyle t} is a term of type σ → τ {\displaystyle \sigma \to \tau } , and s {\displaystyle s} is a term of type σ {\displaystyle \sigma } , then the application of t {\displaystyle t} to s {\displaystyle s} , often written ( t s ) {\displaystyle (t\,s)} , has type τ {\displaystyle \tau } . For example, if one knows the type notations 0 : nat {\displaystyle 0:{\textsf {nat}}} , 1 : nat {\displaystyle 1:{\textsf {nat}}} , and 2 : nat {\displaystyle 2:{\textsf {nat}}} , then the following type notations can be deduced from function application. ( a d d 1 ) : nat → nat {\displaystyle (\mathrm {add} \,1):{\textsf {nat}}\to {\textsf {nat}}} ( ( a d d 2 ) 0 ) : nat {\displaystyle ((\mathrm {add} \,2)\,0):{\textsf {nat}}} ( ( a d d 1 ) ( ( a d d 2 ) 0 ) ) : nat {\displaystyle ((\mathrm {add} \,1)((\mathrm {add} \,2)\,0)):{\textsf {nat}}} Parentheses indicate the order of operations; however, by convention, function application is left associative, so parentheses can be dropped where appropriate. In the case of the three examples above, all parentheses could be omitted from the first two, and the third may be simplified to a d d 1 ( a d d 2 0 ) : nat {\displaystyle \mathrm {add} \,1\,(\mathrm {add} \,2\,0):{\textsf {nat}}} . ==== Reductions ==== Type theories that allow for lambda terms also include inference rules known as β {\displaystyle \beta } -reduction and η {\displaystyle \eta } -reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written ( λ v . t ) s → t [ v : = s ] {\displaystyle (\lambda v.t)\,s\rightarrow t[v\colon =s]} ( β {\displaystyle \beta } -reduction). ( λ v . t v ) → t {\displaystyle (\lambda v.t\,v)\rightarrow t} , if v {\displaystyle v} is not a free variable in t {\displaystyle t} ( η {\displaystyle \eta } -reduction). The first reduction describes how to evaluate a lambda term: if a lambda expression ( λ v . t ) {\displaystyle (\lambda v.t)} is applied to a term s {\displaystyle s} , one replaces every occurrence of v {\displaystyle v} in t {\displaystyle t} with s {\displaystyle s} . The second reduction makes explicit the relationship between lambda expressions and function types: if ( λ v . t v ) {\displaystyle (\lambda v.t\,v)} is a lambda term, then it must be that t {\displaystyle t} is a function term because it is being applied to v {\displaystyle v} . Therefore, the lambda expression is equivalent to just t {\displaystyle t} , as both take in one argument and apply t {\displaystyle t} to it.
For example, the following term may be β {\displaystyle \beta } -reduced. ( λ x . a d d x x ) 2 → a d d 2 2 {\displaystyle (\lambda x.\mathrm {add} \,x\,x)\,2\rightarrow \mathrm {add} \,2\,2} In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of β {\displaystyle \beta } -equality and η {\displaystyle \eta } -equality. === Common terms and types === ==== Empty type ==== The empty type has no terms. The type is usually written ⊥ {\displaystyle \bot } or 0 {\displaystyle \mathbb {0} } . One use for the empty type is proofs of type inhabitation. If for a type a {\displaystyle a} , it is consistent to derive a function of type a → ⊥ {\displaystyle a\to \bot } , then a {\displaystyle a} is uninhabited, which is to say it has no terms. ==== Unit type ==== The unit type has exactly 1 canonical term. The type is written ⊤ {\displaystyle \top } or 1 {\displaystyle \mathbb {1} } and the single canonical term is written ∗ {\displaystyle \ast } . The unit type is also used in proofs of type inhabitation. If for a type a {\displaystyle a} , it is consistent to derive a function of type ⊤ → a {\displaystyle \top \to a} , then a {\displaystyle a} is inhabited, which is to say it must have one or more terms. ==== Boolean type ==== The Boolean type has exactly 2 canonical terms. The type is usually written bool {\displaystyle {\textsf {bool}}} or B {\displaystyle \mathbb {B} } or 2 {\displaystyle \mathbb {2} } . The canonical terms are usually t r u e {\displaystyle \mathrm {true} } and f a l s e {\displaystyle \mathrm {false} } . ==== Natural numbers ==== Natural numbers are usually implemented in the style of Peano Arithmetic. There is a canonical term 0 : n a t {\displaystyle 0:{\mathsf {nat}}} for zero. Canonical values larger than zero use iterated applications of a successor function S : n a t → n a t {\displaystyle \mathrm {S} :{\mathsf {nat}}\to {\mathsf {nat}}} . === Type constructors === Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of their arguments; these are called type constructors. For example, a type theory could have the dependent type l i s t a {\displaystyle {\mathsf {list}}\,a} , which should correspond to lists of terms, where each term must have type a {\displaystyle a} . In this case, l i s t {\displaystyle {\mathsf {list}}} has the kind U → U {\displaystyle U\to U} , where U {\displaystyle U} denotes the universe of all types in the theory. ==== Product type ==== The product type, × {\displaystyle \times } , depends on two types, and its terms are commonly written as ordered pairs ( s , t ) {\displaystyle (s,t)} . The pair ( s , t ) {\displaystyle (s,t)} has the product type σ × τ {\displaystyle \sigma \times \tau } , where σ {\displaystyle \sigma } is the type of s {\displaystyle s} and τ {\displaystyle \tau } is the type of t {\displaystyle t} . Each product type is then usually defined with eliminator functions f i r s t : σ × τ → σ {\displaystyle \mathrm {first} :\sigma \times \tau \to \sigma } and s e c o n d : σ × τ → τ {\displaystyle \mathrm {second} :\sigma \times \tau \to \tau } . f i r s t ( s , t ) {\displaystyle \mathrm {first} \,(s,t)} returns s {\displaystyle s} , and s e c o n d ( s , t ) {\displaystyle \mathrm {second} \,(s,t)} returns t {\displaystyle t} . Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection.
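The product type's introduction and elimination rules can be mirrored directly in a typed programming language. The following is a minimal Haskell sketch, not taken from any particular type theory; the names Product, Pair, first, and second are chosen to match the prose above (Haskell's built-in equivalents are (,), fst, and snd).

```haskell
-- A product type: one "term introduction" rule (the constructor Pair)
-- and two "term elimination" rules (first and second).
data Product sigma tau = Pair sigma tau

first :: Product sigma tau -> sigma
first (Pair s _) = s

second :: Product sigma tau -> tau
second (Pair _ t) = t

-- Computation rules: first (Pair s t) reduces to s,
-- and second (Pair s t) reduces to t.
main :: IO ()
main = do
  print (first  (Pair (3 :: Int) True))  -- 3
  print (second (Pair (3 :: Int) True))  -- True
```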
==== Sum type ==== The sum type is written as either + {\displaystyle +} or ⊔ {\displaystyle \sqcup } . In programming languages, sum types may be referred to as tagged unions. Each type σ ⊔ τ {\displaystyle \sigma \sqcup \tau } is usually defined with constructors l e f t : σ → ( σ ⊔ τ ) {\displaystyle \mathrm {left} :\sigma \to (\sigma \sqcup \tau )} and r i g h t : τ → ( σ ⊔ τ ) {\displaystyle \mathrm {right} :\tau \to (\sigma \sqcup \tau )} , which are injective, and an eliminator function m a t c h : ( σ → ρ ) → ( τ → ρ ) → ( σ ⊔ τ ) → ρ {\displaystyle \mathrm {match} :(\sigma \to \rho )\to (\tau \to \rho )\to (\sigma \sqcup \tau )\to \rho } such that m a t c h f g ( l e f t x ) {\displaystyle \mathrm {match} \,f\,g\,(\mathrm {left} \,x)} returns f x {\displaystyle f\,x} , and m a t c h f g ( r i g h t y ) {\displaystyle \mathrm {match} \,f\,g\,(\mathrm {right} \,y)} returns g y {\displaystyle g\,y} . The sum type is used for the concepts of logical disjunction and union. === Polymorphic types === Some theories also allow terms to have their definitions depend on types. For instance, an identity function of any type could be written as λ x . x : ∀ α . α → α {\displaystyle \lambda x.x:\forall \alpha .\alpha \to \alpha } . The function is said to be polymorphic in α {\displaystyle \alpha } , or generic in x {\displaystyle x} . As another example, consider a function a p p e n d {\displaystyle \mathrm {append} } , which takes in a l i s t a {\displaystyle {\mathsf {list}}\,a} and a term of type a {\displaystyle a} , and returns the list with the element at the end. The type annotation of such a function would be a p p e n d : ∀ a . l i s t a → a → l i s t a {\displaystyle \mathrm {append} :\forall \,a.{\mathsf {list}}\,a\to a\to {\mathsf {list}}\,a} , which can be read as "for any type a {\displaystyle a} , pass in a l i s t a {\displaystyle {\mathsf {list}}\,a} and an a {\displaystyle a} , and return a l i s t a {\displaystyle {\mathsf {list}}\,a} ". Here a p p e n d {\displaystyle \mathrm {append} } is polymorphic in a {\displaystyle a} . ==== Products and sums ==== With polymorphism, the eliminator functions can be defined generically for all product types as f i r s t : ∀ σ τ . σ × τ → σ {\displaystyle \mathrm {first} :\forall \,\sigma \,\tau .\sigma \times \tau \to \sigma } and s e c o n d : ∀ σ τ . σ × τ → τ {\displaystyle \mathrm {second} :\forall \,\sigma \,\tau .\sigma \times \tau \to \tau } . f i r s t ( s , t ) {\displaystyle \mathrm {first} \,(s,t)} returns s {\displaystyle s} , and s e c o n d ( s , t ) {\displaystyle \mathrm {second} \,(s,t)} returns t {\displaystyle t} . Likewise, the sum type constructors can be defined for all valid types of sum members as l e f t : ∀ σ τ . σ → ( σ ⊔ τ ) {\displaystyle \mathrm {left} :\forall \,\sigma \,\tau .\sigma \to (\sigma \sqcup \tau )} and r i g h t : ∀ σ τ . τ → ( σ ⊔ τ ) {\displaystyle \mathrm {right} :\forall \,\sigma \,\tau .\tau \to (\sigma \sqcup \tau )} , which are injective, and the eliminator function can be given as m a t c h : ∀ σ τ ρ . ( σ → ρ ) → ( τ → ρ ) → ( σ ⊔ τ ) → ρ {\displaystyle \mathrm {match} :\forall \,\sigma \,\tau \,\rho .(\sigma \to \rho )\to (\tau \to \rho )\to (\sigma \sqcup \tau )\to \rho } such that m a t c h f g ( l e f t x ) {\displaystyle \mathrm {match} \,f\,g\,(\mathrm {left} \,x)} returns f x {\displaystyle f\,x} , and m a t c h f g ( r i g h t y ) {\displaystyle \mathrm {match} \,f\,g\,(\mathrm {right} \,y)} returns g y {\displaystyle g\,y} . 
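These polymorphic sum-type constructors and the match eliminator correspond to tagged unions and case analysis in programming languages. Below is a minimal Haskell sketch; the names Sum, Left', Right', and match are invented for this example (Haskell's standard library provides the same structure as Either, Left, Right, and either).

```haskell
-- A sum type, polymorphic in both member types.
data Sum sigma tau = Left' sigma | Right' tau

-- The eliminator: both branch functions must return the same type rho.
match :: (sigma -> rho) -> (tau -> rho) -> Sum sigma tau -> rho
match f _ (Left' x)  = f x   -- match f g (left x)  computes to f x
match _ g (Right' y) = g y   -- match f g (right y) computes to g y

main :: IO ()
main = do
  putStrLn (match show (\b -> if b then "yes" else "no")
                       (Left' 42 :: Sum Int Bool))
  putStrLn (match show (\b -> if b then "yes" else "no")
                       (Right' True :: Sum Int Bool))
```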
=== Dependent typing === Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type v e c t o r n {\displaystyle {\mathsf {vector}}\,n} , where n {\displaystyle n} is a term of type n a t {\displaystyle {\mathsf {nat}}} encoding the length of the vector. This allows for greater specificity and type safety: functions with vector length restrictions or length matching requirements, such as the dot product, can encode this requirement as part of the type. There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's paradox. The logician Henk Barendregt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing. ==== Dependent products and sums ==== Two common type dependencies, dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification; this is formalized by the Curry–Howard correspondence. As they also connect to products and sums in set theory, they are often written with the symbols Π {\displaystyle \Pi } and Σ {\displaystyle \Sigma } , respectively. Dependent sum types are seen in dependent pairs, where the second type depends on the value of the first term. This arises naturally in computer science where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function i f {\displaystyle \mathrm {if} } , which takes three arguments and behaves as follows. i f t r u e x y {\displaystyle \mathrm {if} \,\mathrm {true} \,x\,y} returns x {\displaystyle x} , and i f f a l s e x y {\displaystyle \mathrm {if} \,\mathrm {false} \,x\,y} returns y {\displaystyle y} . Ordinary definitions of i f {\displaystyle \mathrm {if} } require x {\displaystyle x} and y {\displaystyle y} to have the same type. If the type theory allows for dependent types, then it is possible to define a dependent type x : b o o l ⊢ T F x : U → U → U {\displaystyle x:{\mathsf {bool}}\,\vdash \,\mathrm {TF} \,x:U\to U\to U} such that T F t r u e σ τ {\displaystyle \mathrm {TF} \,\mathrm {true} \,\sigma \,\tau } returns σ {\displaystyle \sigma } , and T F f a l s e σ τ {\displaystyle \mathrm {TF} \,\mathrm {false} \,\sigma \,\tau } returns τ {\displaystyle \tau } . The type of i f {\displaystyle \mathrm {if} } may then be written as ∀ σ τ . Π x : b o o l . σ → τ → T F x σ τ {\displaystyle \forall \,\sigma \,\tau .\Pi _{x:{\mathsf {bool}}}.\sigma \to \tau \to \mathrm {TF} \,x\,\sigma \,\tau } . ==== Identity type ==== Following the notion of the Curry–Howard correspondence, the identity type is a type introduced to mirror propositional equivalence, as opposed to the judgmental (syntactic) equivalence that type theory already provides. An identity type requires two terms of the same type and is written with the symbol = {\displaystyle =} . For example, if x + 1 {\displaystyle x+1} and 1 + x {\displaystyle 1+x} are terms, then x + 1 = 1 + x {\displaystyle x+1=1+x} is a possible type. Canonical terms are created with a reflexivity function, r e f l {\displaystyle \mathrm {refl} } . For a term t {\displaystyle t} , the call r e f l t {\displaystyle \mathrm {refl} \,t} returns the canonical term inhabiting the type t = t {\displaystyle t=t} .
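Haskell's Data.Type.Equality module offers a close analogue of the identity type (between types rather than arbitrary terms): the type a :~: b has Refl as its only constructor, with type a :~: a, mirroring r e f l. A minimal sketch, showing that symmetry and transitivity of propositional equality are derivable by pattern matching on Refl:

```haskell
{-# LANGUAGE TypeOperators, GADTs #-}

import Data.Type.Equality ((:~:)(..))

-- Matching on Refl tells the type checker that the two types coincide.
symm :: a :~: b -> b :~: a
symm Refl = Refl

trans :: a :~: b -> b :~: c -> a :~: c
trans Refl Refl = Refl

main :: IO ()
main = case symm (Refl :: Int :~: Int) of
  Refl -> putStrLn "derived Int :~: Int by symmetry"
```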
The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory. ==== Inductive types ==== Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction. A method that only uses lambda terms is Scott encoding. Some proof assistants, such as Rocq (previously known as Coq) and Lean, are based on the calculus of inductive constructions, which is a calculus of constructions with inductive types. == Differences from set theory == The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice, abbreviated ZFC. Type theories having sufficient expressibility may also act as a foundation of mathematics. There are a number of differences between these two approaches. Set theory has both rules and axioms, while type theories only have rules: type theories, in general, do not have axioms and are defined by their rules of inference. Classical set theory and logic have the law of excluded middle. When a type theory encodes the concepts of "and" and "or" as types, it leads to intuitionistic logic, and does not necessarily have the law of excluded middle. In set theory, an element is not restricted to one set. The element can appear in subsets and unions with other sets. In type theory, terms (generally) belong to only one type. Where a subset would be used, type theory can use a predicate function or use a dependently-typed product type, where each element x {\displaystyle x} is paired with a proof that the subset's property holds for x {\displaystyle x} . Where a union would be used, type theory uses the sum type, which contains new canonical terms. Type theory has a built-in notion of computation. Thus, "1+1" and "2" are different terms in type theory, but they compute to the same value. Moreover, functions are defined computationally as lambda terms. In set theory, "1+1=2" means that "1+1" is just another way to refer to the value "2". Type theory's computation does require a complicated concept of equality. Set theory encodes numbers as sets. Type theory can encode numbers as functions using Church encoding, or more naturally as inductive types, and the construction closely resembles Peano's axioms. In type theory, proofs are types whereas in set theory, proofs are part of the underlying first-order logic. Proponents of type theory will also point out its connection to constructive mathematics through the BHK interpretation, its connection to logic by the Curry–Howard isomorphism, and its connections to category theory. === Properties of type theories === Terms usually belong to a single type. However, there are type theories that define "subtyping". Computation takes place by repeated application of rules. Many type theories are strongly normalizing, which means that any order of applying the rules terminates; in a confluent theory, every order also ends in the same result. However, some are not. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying the rules "reduces" the term. If a rule is not one-directional, it is called a "conversion rule". Some combinations of types are equivalent to other combinations of types.
When functions are considered "exponentiation", the combinations of types can be written similarly to algebraic identities. Thus, 0 + A ≅ A {\displaystyle {\mathbb {0} }+A\cong A} , 1 × A ≅ A {\displaystyle {\mathbb {1} }\times A\cong A} , 1 + 1 ≅ 2 {\displaystyle {\mathbb {1} }+{\mathbb {1} }\cong {\mathbb {2} }} , A B + C ≅ A B × A C {\displaystyle A^{B+C}\cong A^{B}\times A^{C}} , A B × C ≅ ( A B ) C {\displaystyle A^{B\times C}\cong (A^{B})^{C}} . === Axioms === Most type theories do not have axioms. This is because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with Set Theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic) and axioms about sets. Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference. They are often added to ensure properties that cannot be added cleanly through the rules. Axioms can cause problems if they introduce terms without a way to compute on those terms. That is, axioms can interfere with the normalizing property of the type theory. Some commonly encountered axioms are: "Axiom K" ensures "uniqueness of identity proofs". That is, that every term of an identity type is equal to reflexivity. "Univalence Axiom" holds that equivalence of types is equality of types. The research into this property led to cubical type theory, where the property holds without needing an axiom. "Law of Excluded Middle" is often added to satisfy users who want classical logic, instead of intuitionistic logic. The Axiom of Choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is because of the constructive nature of type theory, where proving that a value exists requires a method to compute the value. The Axiom of Choice is less powerful in type theory than most set theories, because type theory's functions must be computable and, being syntax-driven, the number of terms in a type must be countable. (See Axiom of choice § In constructive mathematics.) == List of type theories == === Major === Simply typed lambda calculus which is a higher-order logic Intuitionistic type theory System F LF is often used to define other type theories Calculus of constructions and its derivatives === Minor === Automath ST type theory UTT (Luo's Unified Theory of dependent Types) some forms of combinatory logic others defined in the lambda cube (also known as pure type systems) others under the name typed lambda calculus === Active research === Homotopy type theory explores equality of types Cubical Type Theory is an implementation of homotopy type theory == See also == Class (set theory) Type–token distinction == Further reading == == Notes == == References == == External links == === Introductory material === Type Theory at nLab, which has articles on many topics. Intuitionistic Type Theory article at the Stanford Encyclopedia of Philosophy Lambda Calculi with Types book by Henk Barendregt Calculus of Constructions / Typed Lambda Calculus textbook style paper by Helmut Brandl Intuitionistic Type Theory notes by Per Martin-Löf Programming in Martin-Löf's Type Theory book Homotopy Type Theory book, which proposed homotopy type theory as a mathematical foundation. === Advanced material === Robert L. Constable (ed.). "Computational type theory". Scholarpedia. 
The TYPES Forum — moderated e-mail forum focusing on type theory in computer science, operating since 1987. The Nuprl Book: "Introduction to Type Theory." Types Project lecture notes of summer schools 2005–2008 The 2005 summer school has introductory lectures Oregon Programming Languages Summer School, many lectures and some notes. Summer 2013 lectures including Robert Harper's talks on YouTube Summer 2015 Types, Logic, Semantics, and Verification Andrej Bauer's blog
In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate x is x2 − 4x + 7. An example with three indeterminates is x3 + 2xyz2 − yz + 1. Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry. == Etymology == The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. That is, it means a sum of many terms (many monomials). The word polynomial was first used in the 17th century. == Notation and terminology == The x occurring in a polynomial is commonly called a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the function defined by the polynomial, then x represents the argument of the function, and is therefore called a "variable". Many authors use these two words interchangeably. A polynomial P in the indeterminate x is commonly denoted either as P or as P(x). Formally, the name of the polynomial is P, not P(x), but the use of the functional notation P(x) dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "let P(x) be a polynomial" is a shorthand for "let P be a polynomial in the indeterminate x". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial. The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials. If a denotes a number, a variable, another polynomial, or, more generally, any expression, then P(a) denotes, by convention, the result of substituting a for x in P. Thus, the polynomial P defines the function a ↦ P ( a ) , {\displaystyle a\mapsto P(a),} which is the polynomial function associated to P. Frequently, when using this notation, one supposes that a is a number. However, one may use it over any domain where addition and multiplication are defined (that is, any ring). In particular, if a is a polynomial then P(a) is also a polynomial. More specifically, when a is the indeterminate x, then the image of x by this function is the polynomial P itself (substituting x for x does not change anything). 
In other words, P ( x ) = P , {\displaystyle P(x)=P,} which justifies formally the existence of two notations for the same polynomial. == Definition == A polynomial expression is an expression that can be built from constants and symbols called variables or indeterminates by means of addition, multiplication and exponentiation to a non-negative integer power. The constants are generally numbers, but may be any expression that does not involve the indeterminates, and represents mathematical objects that can be added and multiplied. Two polynomial expressions are considered as defining the same polynomial if they may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication. For example, ( x − 1 ) ( x − 2 ) {\displaystyle (x-1)(x-2)} and x 2 − 3 x + 2 {\displaystyle x^{2}-3x+2} are two polynomial expressions that represent the same polynomial; so, one has the equality ( x − 1 ) ( x − 2 ) = x 2 − 3 x + 2 {\displaystyle (x-1)(x-2)=x^{2}-3x+2} . A polynomial in a single indeterminate x can always be written (or rewritten) in the form a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 , {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\dotsb +a_{2}x^{2}+a_{1}x+a_{0},} where a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} are constants that are called the coefficients of the polynomial, and x {\displaystyle x} is the indeterminate. The word "indeterminate" means that x {\displaystyle x} represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function, called a polynomial function. This can be expressed more concisely by using summation notation: ∑ k = 0 n a k x k {\displaystyle \sum _{k=0}^{n}a_{k}x^{k}} That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. Each term consists of the product of a number – called the coefficient of the term – and a finite number of indeterminates, raised to non-negative integer powers. == Classification == The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient. Because x = x1, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below). For example: − 5 x 2 y {\displaystyle -5x^{2}y} is a term. The coefficient is −5, the indeterminates are x and y, the degree of x is two, while the degree of y is one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is 2 + 1 = 3. Forming a sum of several terms produces a polynomial. For example, the following is a polynomial: 3 x 2 ⏟ t e r m 1 − 5 x ⏟ t e r m 2 + 4 ⏟ t e r m 3 . 
{\displaystyle \underbrace {_{\,}3x^{2}} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {1} \end{smallmatrix}}\underbrace {-_{\,}5x} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {2} \end{smallmatrix}}\underbrace {+_{\,}4} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {3} \end{smallmatrix}}.} It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero. Polynomials of small degree have been given specific names. A polynomial of degree zero is a constant polynomial, or simply a constant. Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials. For higher degrees, the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, the term 2x in x2 + 2x + 1 is a linear term in a quadratic polynomial. The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞). The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of roots. The graph of the zero polynomial, f(x) = 0, is the x-axis. In the case of polynomials in more than one indeterminate, a polynomial is called homogeneous of degree n if all of its non-zero terms have degree n. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined. For example, x3y2 + 7x2y3 − 3x5 is homogeneous of degree 5. For more details, see Homogeneous polynomial. The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x". The polynomial 3x2 − 5x + 4 is written in descending powers of x. The first term has coefficient 3, indeterminate x, and exponent 2. In the second term, the coefficient is −5. The third term is a constant. Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two. Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0. Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and a three-term polynomial is called a trinomial. A real polynomial is a polynomial with real coefficients. When it is used to define a function, the domain is not so restricted. However, a real polynomial function is a function from the reals to the reals that is defined by a real polynomial. Similarly, an integer polynomial is a polynomial with integer coefficients, and a complex polynomial is a polynomial with complex coefficients. A polynomial in one indeterminate is called a univariate polynomial, a polynomial in more than one indeterminate is called a multivariate polynomial. 
A polynomial with two indeterminates is called a bivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials in x, y, and z", listing the indeterminates allowed. == Operations == === Addition and subtraction === Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the commutative law) and combining of like terms. For example, if P = 3 x 2 − 2 x + 5 x y − 2 {\displaystyle P=3x^{2}-2x+5xy-2} and Q = − 3 x 2 + 3 x + 4 y 2 + 8 {\displaystyle Q=-3x^{2}+3x+4y^{2}+8} then the sum P + Q = 3 x 2 − 2 x + 5 x y − 2 − 3 x 2 + 3 x + 4 y 2 + 8 {\displaystyle P+Q=3x^{2}-2x+5xy-2-3x^{2}+3x+4y^{2}+8} can be reordered and regrouped as P + Q = ( 3 x 2 − 3 x 2 ) + ( − 2 x + 3 x ) + 5 x y + 4 y 2 + ( 8 − 2 ) {\displaystyle P+Q=(3x^{2}-3x^{2})+(-2x+3x)+5xy+4y^{2}+(8-2)} and then simplified to P + Q = x + 5 x y + 4 y 2 + 6. {\displaystyle P+Q=x+5xy+4y^{2}+6.} When polynomials are added together, the result is another polynomial. Subtraction of polynomials is similar. === Multiplication === Polynomials can also be multiplied. To expand the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other. For example, if P = 2 x + 3 y + 5 Q = 2 x + 5 y + x y + 1 {\displaystyle {\begin{aligned}\color {Red}P&\color {Red}{=2x+3y+5}\\\color {Blue}Q&\color {Blue}{=2x+5y+xy+1}\end{aligned}}} then P Q = ( 2 x ⋅ 2 x ) + ( 2 x ⋅ 5 y ) + ( 2 x ⋅ x y ) + ( 2 x ⋅ 1 ) + ( 3 y ⋅ 2 x ) + ( 3 y ⋅ 5 y ) + ( 3 y ⋅ x y ) + ( 3 y ⋅ 1 ) + ( 5 ⋅ 2 x ) + ( 5 ⋅ 5 y ) + ( 5 ⋅ x y ) + ( 5 ⋅ 1 ) {\displaystyle {\begin{array}{rccrcrcrcr}{\color {Red}{P}}{\color {Blue}{Q}}&{=}&&({\color {Red}{2x}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{3y}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{5}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{5}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{5}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{5}}\cdot {\color {Blue}{1}})\end{array}}} Carrying out the multiplication in each term produces P Q = 4 x 2 + 10 x y + 2 x 2 y + 2 x + 6 x y + 15 y 2 + 3 x y 2 + 3 y + 10 x + 25 y + 5 x y + 5. 
{\displaystyle {\begin{array}{rccrcrcrcr}PQ&=&&4x^{2}&+&10xy&+&2x^{2}y&+&2x\\&&+&6xy&+&15y^{2}&+&3xy^{2}&+&3y\\&&+&10x&+&25y&+&5xy&+&5.\end{array}}} Combining similar terms yields P Q = 4 x 2 + ( 10 x y + 6 x y + 5 x y ) + 2 x 2 y + ( 2 x + 10 x ) + 15 y 2 + 3 x y 2 + ( 3 y + 25 y ) + 5 {\displaystyle {\begin{array}{rcccrcrcrcr}PQ&=&&4x^{2}&+&(10xy+6xy+5xy)&+&2x^{2}y&+&(2x+10x)\\&&+&15y^{2}&+&3xy^{2}&+&(3y+25y)&+&5\end{array}}} which can be simplified to P Q = 4 x 2 + 21 x y + 2 x 2 y + 12 x + 15 y 2 + 3 x y 2 + 28 y + 5. {\displaystyle PQ=4x^{2}+21xy+2x^{2}y+12x+15y^{2}+3xy^{2}+28y+5.} As in the example, the product of polynomials is always a polynomial. === Composition === Given a polynomial f {\displaystyle f} of a single variable and another polynomial g of any number of variables, the composition f ∘ g {\displaystyle f\circ g} is obtained by substituting each copy of the variable of the first polynomial by the second polynomial. For example, if f ( x ) = x 2 + 2 x {\displaystyle f(x)=x^{2}+2x} and g ( x ) = 3 x + 2 {\displaystyle g(x)=3x+2} then ( f ∘ g ) ( x ) = f ( g ( x ) ) = ( 3 x + 2 ) 2 + 2 ( 3 x + 2 ) . {\displaystyle (f\circ g)(x)=f(g(x))=(3x+2)^{2}+2(3x+2).} A composition may be expanded to a sum of terms using the rules for multiplication and division of polynomials. The composition of two polynomials is another polynomial. === Division === The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, called rational fractions, rational expressions, or rational functions, depending on context. This is analogous to the fact that the ratio of two integers is a rational number, not necessarily an integer. For example, the fraction 1/(x2 + 1) is not a polynomial, and it cannot be written as a finite sum of powers of the variable x. For polynomials in one variable, there is a notion of Euclidean division of polynomials, generalizing the Euclidean division of integers. This notion of the division a(x)/b(x) results in two polynomials, a quotient q(x) and a remainder r(x), such that a = b q + r and degree(r) < degree(b). The quotient and remainder may be computed by any of several algorithms, including polynomial long division and synthetic division. When the denominator b(x) is monic and linear, that is, b(x) = x − c for some constant c, then the polynomial remainder theorem asserts that the remainder of the division of a(x) by b(x) is the evaluation a(c). In this case, the quotient may be computed by Ruffini's rule, a special case of synthetic division. === Factoring === All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of complex numbers, the irreducible factors are linear. Over the real numbers, they have the degree either one or two. Over the integers and the rational numbers the irreducible factors may have any degree. For example, the factored form of 5 x 3 − 5 {\displaystyle 5x^{3}-5} is 5 ( x − 1 ) ( x 2 + x + 1 ) {\displaystyle 5(x-1)\left(x^{2}+x+1\right)} over the integers and the reals, and 5 ( x − 1 ) ( x + 1 + i 3 2 ) ( x + 1 − i 3 2 ) {\displaystyle 5(x-1)\left(x+{\frac {1+i{\sqrt {3}}}{2}}\right)\left(x+{\frac {1-i{\sqrt {3}}}{2}}\right)} over the complex numbers. 
The computation of the factored form, called factorization is, in general, too difficult to be done by hand-written computation. However, efficient polynomial factorization algorithms are available in most computer algebra systems. === Calculus === Calculating derivatives and integrals of polynomials is particularly simple, compared to other kinds of functions. The derivative of the polynomial P = a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 = ∑ i = 0 n a i x i {\displaystyle P=a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{2}x^{2}+a_{1}x+a_{0}=\sum _{i=0}^{n}a_{i}x^{i}} with respect to x is the polynomial n a n x n − 1 + ( n − 1 ) a n − 1 x n − 2 + ⋯ + 2 a 2 x + a 1 = ∑ i = 1 n i a i x i − 1 . {\displaystyle na_{n}x^{n-1}+(n-1)a_{n-1}x^{n-2}+\dots +2a_{2}x+a_{1}=\sum _{i=1}^{n}ia_{i}x^{i-1}.} Similarly, the general antiderivative (or indefinite integral) of P {\displaystyle P} is a n x n + 1 n + 1 + a n − 1 x n n + ⋯ + a 2 x 3 3 + a 1 x 2 2 + a 0 x + c = c + ∑ i = 0 n a i x i + 1 i + 1 {\displaystyle {\frac {a_{n}x^{n+1}}{n+1}}+{\frac {a_{n-1}x^{n}}{n}}+\dots +{\frac {a_{2}x^{3}}{3}}+{\frac {a_{1}x^{2}}{2}}+a_{0}x+c=c+\sum _{i=0}^{n}{\frac {a_{i}x^{i+1}}{i+1}}} where c is an arbitrary constant. For example, antiderivatives of x2 + 1 have the form ⁠1/3⁠x3 + x + c. For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers modulo some prime number p, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient kak understood to mean the sum of k copies of ak. For example, over the integers modulo p, the derivative of the polynomial xp + x is the polynomial 1. == Polynomial functions == A polynomial function is a function that can be defined by evaluating a polynomial. More precisely, a function f of one argument from a given domain is a polynomial function if there exists a polynomial a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{2}x^{2}+a_{1}x+a_{0}} that evaluates to f ( x ) {\displaystyle f(x)} for all x in the domain of f (here, n is a non-negative integer and a0, a1, a2, ..., an are constant coefficients). Generally, unless otherwise specified, polynomial functions have complex coefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the domain of this function is also restricted to the reals, the resulting function is a real function that maps reals to reals. For example, the function f, defined by f ( x ) = x 3 − x , {\displaystyle f(x)=x^{3}-x,} is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in f ( x , y ) = 2 x 3 + 4 x 2 y + x y 5 + y 2 − 7. {\displaystyle f(x,y)=2x^{3}+4x^{2}y+xy^{5}+y^{2}-7.} According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression ( 1 − x 2 ) 2 , {\displaystyle \left({\sqrt {1-x^{2}}}\right)^{2},} which takes the same values as the polynomial 1 − x 2 {\displaystyle 1-x^{2}} on the interval [ − 1 , 1 ] {\displaystyle [-1,1]} , and thus both expressions define the same polynomial function on this interval. Every polynomial function is continuous, smooth, and entire. 
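When a polynomial is represented concretely by its list of coefficients, both the polynomial function and the derivative formula from the calculus section translate into a few lines of code. A minimal Haskell sketch; the representation [a0, a1, ..., an] and the names evalPoly and derivePoly are assumptions of this example, not standard functions:

```haskell
-- a0 + a1 x + ... + an x^n, stored as the coefficient list [a0, a1, ..., an]
type Poly = [Double]

-- Naive term-by-term evaluation of the polynomial function.
evalPoly :: Poly -> Double -> Double
evalPoly as x = sum [a * x ^ i | (a, i) <- zip as [(0 :: Int) ..]]

-- The derivative: the coefficient of x^(i-1) is i * a_i.
derivePoly :: Poly -> Poly
derivePoly as = [fromIntegral i * a | (a, i) <- zip (drop 1 as) [(1 :: Int) ..]]

main :: IO ()
main = do
  let p = [7, -4, 1]      -- x^2 - 4x + 7
  print (evalPoly p 2)    -- 3.0
  print (derivePoly p)    -- [-4.0, 2.0], i.e. the derivative 2x - 4
```

The evaluation here multiplies out each power separately; the Horner rewriting described next needs fewer operations.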
The evaluation of a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value for each indeterminate and carrying out the indicated multiplications and additions. For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using Horner's method, which consists of rewriting the polynomial as ( ( ( ( ( a n x + a n − 1 ) x + a n − 2 ) x + ⋯ + a 3 ) x + a 2 ) x + a 1 ) x + a 0 . {\displaystyle (((((a_{n}x+a_{n-1})x+a_{n-2})x+\dotsb +a_{3})x+a_{2})x+a_{1})x+a_{0}.} === Graphs === A polynomial function in one real variable can be represented by a graph. The graph of the zero polynomial is the x-axis. The graph of a degree 0 polynomial is a horizontal line with y-intercept a0. The graph of a degree 1 polynomial (or linear function) is an oblique line with y-intercept a0 and slope a1. The graph of a degree 2 polynomial is a parabola. The graph of a degree 3 polynomial is a cubic curve. The graph of any polynomial with degree 2 or greater is a continuous non-linear curve. A non-constant polynomial function tends to infinity when the variable increases indefinitely (in absolute value). If the degree is higher than one, the graph does not have any asymptote. It has two parabolic branches with vertical direction (one branch for positive x and one for negative x). Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior. == Equations == A polynomial equation, also called an algebraic equation, is an equation of the form a n x n + a n − 1 x n − 1 + ⋯ + a 2 x 2 + a 1 x + a 0 = 0. {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\dotsb +a_{2}x^{2}+a_{1}x+a_{0}=0.} For example, 3 x 2 + 4 x − 5 = 0 {\displaystyle 3x^{2}+4x-5=0} is a polynomial equation. When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like (x + y)(x − y) = x2 − y2, where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality. In elementary algebra, methods such as the quadratic formula are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the cubic and quartic equations. For higher degrees, the Abel–Ruffini theorem asserts that there cannot exist a general formula in radicals. However, root-finding algorithms may be used to find numerical approximations of the roots of a polynomial expression of any degree. The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when the complex solutions are counted with their multiplicity. This fact is called the fundamental theorem of algebra. === Solving equations === A root of a nonzero univariate polynomial P is a value a of x such that P(a) = 0. In other words, a root of P is a solution of the polynomial equation P(x) = 0 or a zero of the polynomial function defined by P. In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered.
A number a is a root of a polynomial P if and only if the linear polynomial x − a divides P, that is if there is another polynomial Q such that P = (x − a) Q. It may happen that a power (greater than 1) of x − a divides P; in this case, a is a multiple root of P, and otherwise a is a simple root of P. If P is a nonzero polynomial, there is a highest power m such that (x − a)m divides P, which is called the multiplicity of a as a root of P. The number of roots of a nonzero polynomial P, counted with their respective multiplicities, cannot exceed the degree of P, and equals this degree if all complex roots are considered (this is a consequence of the fundamental theorem of algebra). The coefficients of a polynomial and its roots are related by Vieta's formulas. Some polynomials, such as x2 + 1, do not have any roots among the real numbers. If, however, the set of accepted solutions is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial. There may be several meanings of "solving an equation". One may want to express the solutions as explicit numbers; for example, the unique solution of 2x − 1 = 0 is 1/2. This is, in general, impossible for equations of degree greater than one, and, since ancient times, mathematicians have sought to express the solutions as algebraic expressions; for example, the golden ratio ( 1 + 5 ) / 2 {\displaystyle (1+{\sqrt {5}})/2} is the unique positive solution of x 2 − x − 1 = 0. {\displaystyle x^{2}-x-1=0.} In ancient times, they succeeded only for degrees one and two. For quadratic equations, the quadratic formula provides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, have been known for equations of degree three and four (see cubic equation and quartic equation). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824, Niels Henrik Abel proved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula, involving only arithmetic operations and radicals (see Abel–Ruffini theorem). In 1830, Évariste Galois proved that most equations of degree higher than four cannot be solved by radicals, and showed that for each equation, one may decide whether it is solvable by radicals, and, if it is, solve it. This result marked the start of Galois theory and group theory, two important branches of modern algebra. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation). When there is no algebraic expression for the roots, or when such an algebraic expression exists but is too complicated to be useful, the only way of solving the equation is to compute numerical approximations of the solutions. There are many methods for that; some are restricted to polynomials and others may apply to any continuous function.
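One classical numerical method of this kind is Newton's method, which applies readily to polynomials because the derivative is itself easy to compute. A minimal Haskell sketch, reusing the coefficient-list representation from the earlier example; the fixed iteration count and starting guess are arbitrary choices for illustration, standing in for a real convergence test:

```haskell
type Poly = [Double]

evalPoly :: Poly -> Double -> Double
evalPoly as x = sum [a * x ^ i | (a, i) <- zip as [(0 :: Int) ..]]

derivePoly :: Poly -> Poly
derivePoly as = [fromIntegral i * a | (a, i) <- zip (drop 1 as) [(1 :: Int) ..]]

-- Newton's method: repeatedly replace x by x - p(x)/p'(x).
newton :: Poly -> Double -> Double
newton p x0 = iterate step x0 !! 50
  where
    p'     = derivePoly p
    step x = x - evalPoly p x / evalPoly p' x

main :: IO ()
main = print (newton [-1, -1, 1] 1.0)
-- x^2 - x - 1 = 0: converges to the golden ratio, about 1.6180339887
```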
The most efficient algorithms make it easy to solve (on a computer) polynomial equations of degree higher than 1,000 (see Root-finding algorithm).

For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called zeros instead of "roots". The study of the sets of zeros of polynomials is the object of algebraic geometry. For a set of polynomial equations with several unknowns, there are algorithms to decide whether they have a finite number of complex solutions, and, if this number is finite, for computing the solutions. See System of polynomial equations. The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of different solution methods exist, including the classical Gaussian elimination.

A polynomial equation for which one is interested only in the solutions which are integers is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general algorithm for solving them, or even for deciding whether the set of solutions is empty (see Hilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem.

== Polynomial expressions ==

Polynomials in which the indeterminates are replaced by some other mathematical objects are often considered, and sometimes have a special name.

=== Trigonometric polynomials ===

A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using the multiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, with product-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why such linear combinations are called polynomials. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are also used in the discrete Fourier transform.

=== Matrix polynomials ===

A matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial {\displaystyle P(x)=\sum _{i=0}^{n}{a_{i}x^{i}}=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n},} this polynomial evaluated at a matrix A is {\displaystyle P(A)=\sum _{i=0}^{n}{a_{i}A^{i}}=a_{0}I+a_{1}A+a_{2}A^{2}+\cdots +a_{n}A^{n},} where I is the identity matrix. A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring Mn(R).

=== Exponential polynomials ===

A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for example P(x, ex), may be called an exponential polynomial.
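A minimal NumPy sketch of evaluating a scalar polynomial at a square matrix, as defined in the matrix-polynomial paragraph above (the function name is mine):

```python
import numpy as np

def matrix_poly(coeffs, A):
    """Evaluate a scalar polynomial at a square matrix A.

    coeffs lists a_0, a_1, ..., a_n (lowest degree first), so the
    result is a_0*I + a_1*A + ... + a_n*A^n, as in the definition above.
    """
    result = np.zeros_like(A, dtype=float)
    power = np.eye(A.shape[0])          # A^0 = I
    for a in coeffs:
        result = result + a * power
        power = power @ A               # next power of A
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
# P(x) = x^2 + 1 annihilates this matrix, since A^2 = -I:
print(matrix_poly([1, 0, 1], A))        # the zero matrix
```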
== Related concepts ==

=== Rational functions ===

A rational fraction is the quotient (algebraic fraction) of two polynomials. Any algebraic expression that can be rewritten as a rational fraction is a rational function. While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero. The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate.

=== Laurent polynomials ===

Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur.

=== Power series ===

Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just like irrational numbers cannot), but the rules for manipulating their terms are the same as for polynomials. Non-formal power series also generalize polynomials, but the multiplication of two power series may not converge.

== Polynomial ring ==

A polynomial f over a commutative ring R is a polynomial all of whose coefficients belong to R. It is straightforward to verify that the polynomials in a given set of indeterminates over R form a commutative ring, called the polynomial ring in these indeterminates, denoted {\displaystyle R[x]} in the univariate case and {\displaystyle R[x_{1},\ldots ,x_{n}]} in the multivariate case. One has {\displaystyle R[x_{1},\ldots ,x_{n}]=\left(R[x_{1},\ldots ,x_{n-1}]\right)[x_{n}].} So, most of the theory of the multivariate case can be reduced to an iterated univariate case.

The map from R to R[x] sending r to itself considered as a constant polynomial is an injective ring homomorphism, by which R is viewed as a subring of R[x]. In particular, R[x] is an algebra over R. One can think of the ring R[x] as arising from R by adding one new element x to R, and extending in a minimal way to a ring in which x satisfies no other relations than the obligatory ones, plus commutation with all elements of R (that is, xr = rx). To do this, one must add all powers of x and their linear combinations as well.

The formation of the polynomial ring, together with the formation of factor rings by factoring out ideals, is an important tool for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring R[x] over the real numbers by factoring out the ideal of multiples of the polynomial x2 + 1. Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic).

If R is commutative, then one can associate with every polynomial P in R[x] a polynomial function f with domain and range equal to R. (More generally, one can take domain and range to be any same unital associative algebra over R.) One obtains the value f(r) by substitution of the value r for the symbol x in P. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p).
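The phenomenon just mentioned can be observed directly; a small Python sketch (names are illustrative) tabulating two distinct polynomials over the integers modulo 5 as functions:

```python
def horner_eval(coeffs, x):       # as in the earlier sketch
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

p = 5  # a prime, so the coefficient ring is the field Z/pZ

def as_function(coeffs, p):
    """Tabulate the polynomial function on Z/pZ defined by coeffs
    (highest degree first), reducing every value modulo p."""
    return tuple(horner_eval(coeffs, x) % p for x in range(p))

# x^5 and x are different polynomials over Z/5Z, yet Fermat's little
# theorem makes them define the same polynomial function:
print(as_function([1, 0, 0, 0, 0, 0], p))   # (0, 1, 2, 3, 4)
print(as_function([1, 0], p))               # (0, 1, 2, 3, 4)
```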
This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis. An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for x.

=== Divisibility ===

If R is an integral domain and f and g are polynomials in R[x], it is said that f divides g or f is a divisor of g if there exists a polynomial q in R[x] such that f q = g. If {\displaystyle a\in R,} then a is a root of f if and only if {\displaystyle x-a} divides f. In this case, the quotient can be computed using polynomial long division.

If F is a field and f and g are polynomials in F[x] with g ≠ 0, then there exist unique polynomials q and r in F[x] with {\displaystyle f=q\,g+r} and such that the degree of r is smaller than the degree of g (using the convention that the polynomial 0 has a negative degree). The polynomials q and r are uniquely determined by f and g. This is called Euclidean division, division with remainder or polynomial long division, and shows that the ring F[x] is a Euclidean domain.

Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials. In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant and a product of irreducible polynomials. If the coefficients belong to a field or a unique factorization domain this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit). When the coefficients belong to the integers, the rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials). These algorithms are not practicable for hand-written computation, but are available in any computer algebra system. Eisenstein's criterion can also be used in some cases to determine irreducibility.

== Applications ==

=== Positional notation ===

In modern positional number systems, such as the decimal system, the digits and their positions in the representation of an integer, for example, 45, are a shorthand notation for a polynomial in the radix or base, in this case, 4 × 101 + 5 × 100. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number 1 × 52 + 3 × 51 + 2 × 50 = 42. This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form {\displaystyle a=r_{m}b^{m}+r_{m-1}b^{m-1}+\dotsb +r_{1}b+r_{0},} where m is a nonnegative integer and the r's are integers such that 0 < rm < b and 0 ≤ ri < b for i = 0, 1, . . . , m − 1.
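The base-b representation above is itself a polynomial evaluation at the radix; a short Python sketch of the correspondence in both directions (function names are illustrative):

```python
def digits_to_int(digits, b):
    """Read a digit sequence (most significant first) in radix b by
    evaluating the polynomial whose coefficients are the digits at b."""
    value = 0
    for d in digits:
        value = value * b + d        # Horner's method once more
    return value

def int_to_digits(a, b):
    """Recover the unique base-b digits r_m ... r_0 of a positive integer."""
    digits = []
    while a > 0:
        a, r = divmod(a, b)
        digits.append(r)
    return digits[::-1]

print(digits_to_int([1, 3, 2], 5))   # 1*25 + 3*5 + 2 = 42
print(int_to_digits(42, 5))          # [1, 3, 2]
```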
=== Interpolation and approximation ===

The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. Important examples in calculus are Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include polynomial interpolation and the use of splines.

=== Other applications ===

Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph. The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input.

== History ==

Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, c. 200 BCE, begins "Three sheaves of good crop, two sheaves of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29.

=== History of the notation ===

The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544. René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable. Descartes introduced the use of superscripts to denote exponents as well.

== See also ==

List of polynomial topics
Wikipedia/Polynomial_function
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".

The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras, where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication, since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers, since the vector cross product is nonassociative, satisfying the Jacobi identity instead.

An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space. Many authors use the term algebra to mean associative algebra, or unital associative algebra, or, in some subjects such as algebraic geometry, unital associative commutative algebra.

Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.

== Definition and motivation ==

=== Definition ===

Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K:

Right distributivity: (x + y) · z = x · z + y · z
Left distributivity: z · (x + y) = z · x + z · y
Compatibility with scalars: (ax) · (by) = (ab) (x · y)

These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra. When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
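The three axioms can be checked numerically for the cross-product example mentioned above; a minimal NumPy sketch (the random test vectors and scalars are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))
a, b = 2.0, -3.0

# R^3 with the cross product is an algebra over R: the product is bilinear.
assert np.allclose(np.cross(x + y, z), np.cross(x, z) + np.cross(y, z))   # right distributivity
assert np.allclose(np.cross(z, x + y), np.cross(z, x) + np.cross(z, y))   # left distributivity
assert np.allclose(np.cross(a * x, b * y), a * b * np.cross(x, y))        # compatibility with scalars

# Associativity fails in general; the Jacobi identity holds instead:
jacobi = (np.cross(x, np.cross(y, z)) + np.cross(y, np.cross(z, x))
          + np.cross(z, np.cross(x, y)))
assert np.allclose(jacobi, 0)
```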
== Basic concepts ==

=== Algebra homomorphisms ===

Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as {\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).} A K-algebra isomorphism is a bijective K-algebra homomorphism.

=== Subalgebras and ideals ===

A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L. In the example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.

A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements:

(1) x + y is in L (L is closed under addition),
(2) cx is in L (L is closed under scalar multiplication),
(3) z · x is in L (L is closed under left multiplication by arbitrary elements).

If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra. This definition is different from the definition of an ideal of a ring, in that here we require condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).

=== Extension of scalars ===

If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product {\displaystyle V_{F}:=V\otimes _{K}F}. So if A is an algebra over K, then {\displaystyle A_{F}} is an algebra over F.

== Kinds of algebras and examples ==

Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.

=== Unital algebra ===

An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
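Returning to the left-ideal conditions (1)-(3) above, here is a small NumPy check on a concrete example; the choice of M2(R) and of the particular subset is an illustrative assumption, not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# In the algebra M_2(R) of 2-by-2 real matrices, the matrices whose
# second column is zero form a left ideal L.
def in_L(m):
    return np.allclose(m[:, 1], 0)

x = np.array([[1.0, 0.0], [2.0, 0.0]])    # an element of L
y = np.array([[-3.0, 0.0], [0.5, 0.0]])   # another element of L
z = rng.standard_normal((2, 2))           # an arbitrary element of the algebra

assert in_L(x + y)      # condition (1): closed under addition
assert in_L(2.5 * x)    # condition (2): closed under scalar multiplication
assert in_L(z @ x)      # condition (3): closed under left multiplication
print(in_L(x @ z))      # generally False: L is not a right ideal
```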
=== Zero algebra ===

An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative. A unital zero algebra is the direct sum {\displaystyle K\oplus V} of a field {\displaystyle K} and a {\displaystyle K}-vector space {\displaystyle V}, equipped with the unique multiplication that is zero on the vector space (or module) and makes it a unital algebra. More precisely, every element of the algebra may be uniquely written as {\displaystyle k+v} with {\displaystyle k\in K} and {\displaystyle v\in V}, and the product is the only bilinear operation such that {\displaystyle vw=0} for every {\displaystyle v} and {\displaystyle w} in {\displaystyle V}. So, if {\displaystyle k_{1},k_{2}\in K} and {\displaystyle v_{1},v_{2}\in V}, one has {\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).} A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space.

This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module". Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module {\displaystyle V} correspond exactly to the ideals of {\displaystyle K\oplus V} that are contained in {\displaystyle V}. For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module allows extending this theory as a Gröbner basis theory for submodules of a free module. This extension makes it possible, for computing a Gröbner basis of a submodule, to use, without any modification, any algorithm and any software for computing Gröbner bases of ideals. Similarly, unital zero algebras make it possible to deduce straightforwardly the Lasker–Noether theorem for modules (over a commutative ring) from the original Lasker–Noether theorem for ideals.

=== Associative algebra ===

Examples of associative algebras include:

the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
incidence algebras, which are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
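Returning to the zero-algebra discussion above, here is a minimal Python sketch of the dual numbers, implementing exactly the displayed product rule; the class name and representation are my own:

```python
class Dual:
    """Dual numbers k + v*eps with eps^2 = 0: the unital zero R-algebra
    built from a one-dimensional real vector space, as described above."""

    def __init__(self, k, v):
        self.k, self.v = k, v

    def __mul__(self, other):
        # (k1 + v1)(k2 + v2) = k1*k2 + (k1*v2 + k2*v1): the product of
        # the two vector-space parts vanishes, exactly as in the text.
        return Dual(self.k * other.k, self.k * other.v + other.k * self.v)

    def __repr__(self):
        return f"{self.k} + {self.v}*eps"

eps = Dual(0, 1)
print(eps * eps)                  # 0 + 0*eps, so eps squares to zero
print(Dual(2, 3) * Dual(4, 5))    # 8 + 22*eps
```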
=== Non-associative algebra ===

A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map {\displaystyle A\times A\rightarrow A}. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited; that is, it means "not necessarily associative". Examples detailed in the main article include:

Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras

== Algebras and rings ==

The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism {\displaystyle \eta \colon K\to Z(A),} where Z(A) is the center of A. Since η is a ring homomorphism, one must have either that A is the zero ring, or that η is injective. This definition is equivalent to that above, with scalar multiplication {\displaystyle K\times A\to A} given by {\displaystyle (k,a)\mapsto \eta (k)a.}

Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as {\displaystyle f(ka)=kf(a)} for all {\displaystyle k\in K} and {\displaystyle a\in A}. In other words, the following diagram commutes: {\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}}

== Structure coefficients ==

For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A. Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so that the resulting multiplication satisfies the algebra laws. Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n3 structure coefficients ci,j,k, which are scalars. These structure coefficients determine the multiplication in A via the following rule: {\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}} where e1,...,en form a basis of A. Note however that several different sets of structure coefficients can give rise to isomorphic algebras.

In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as eiej = ci,jkek. Applying this to vectors written in index notation gives (xy)k = ci,jkxiyj.
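As a sketch of how structure coefficients determine a product, the following Python fragment encodes the cross product on R3 via its structure constants (the Levi-Civita signs); the array layout and function name are illustrative choices:

```python
import numpy as np

# Structure coefficients c[i, j, k] for the cross product on R^3:
# e_i e_j = sum_k c[i, j, k] e_k, with the Levi-Civita signs.
c = np.zeros((3, 3, 3))
for i, j, k, sign in [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
                      (1, 0, 2, -1), (2, 1, 0, -1), (0, 2, 1, -1)]:
    c[i, j, k] = sign

def multiply(x, y):
    """Bilinear product determined by the structure coefficients:
    (xy)_k = sum over i, j of c[i, j, k] * x_i * y_j."""
    return np.einsum('ijk,i,j->k', c, x, y)

x, y = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])
print(multiply(x, y))         # [-3. 0. 1.], the same as np.cross(x, y)
```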
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.

== Classification of low-dimensional unital associative algebras over the complex numbers ==

Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.

There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element, {\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.} It remains to specify

{\displaystyle \textstyle aa=1} for the first algebra,
{\displaystyle \textstyle aa=0} for the second algebra.

There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify

{\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0} for the first algebra,
{\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0} for the second algebra,
{\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0} for the third algebra,
{\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b} for the fourth algebra,
{\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0} for the fifth algebra.

The fourth of these algebras is non-commutative, and the others are commutative.

== Generalization: algebra over a ring ==

In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).

=== Associative algebras over rings ===

A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to {\displaystyle \mathbb {H} \times \mathbb {H} }, the direct product of two quaternion algebras. The center of that ring is {\displaystyle \mathbb {R} \times \mathbb {R} }, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional {\displaystyle \mathbb {R} }-algebra.

In commutative algebra, if A is a commutative ring, then any unital ring homomorphism {\displaystyle R\to A} defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural {\displaystyle \mathbb {Z} }-module structure, since one can take the unique homomorphism {\displaystyle \mathbb {Z} \to A}. On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field.
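The multiplication tables above can be verified mechanically; a sketch that encodes the fourth three-dimensional algebra by structure coefficients and brute-forces associativity on all basis triples (an illustrative numerical check, not part of Study's classification proof):

```python
import itertools
import numpy as np

# Structure coefficients for the fourth three-dimensional algebra above,
# on the basis (1, a, b): aa = 1, bb = 0, ab = -ba = b.
t = np.zeros((3, 3, 3))
t[0, 0, 0] = t[0, 1, 1] = t[1, 0, 1] = t[0, 2, 2] = t[2, 0, 2] = 1  # 1 is the identity
t[1, 1, 0] = 1    # aa = 1
t[1, 2, 2] = 1    # ab = b
t[2, 1, 2] = -1   # ba = -b (bb = 0 needs no entry)

def mul(x, y):
    return np.einsum('ijk,i,j->k', t, x, y)

e = np.eye(3)
# Brute-force associativity on all basis triples:
assert all(np.allclose(mul(mul(e[i], e[j]), e[k]), mul(e[i], mul(e[j], e[k])))
           for i, j, k in itertools.product(range(3), repeat=3))
print(mul(e[1], e[2]), mul(e[2], e[1]))   # ab = b but ba = -b: non-commutative
```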
== See also ==

Algebra over an operad
Alternative algebra
Clifford algebra
Composition algebra
Differential algebra
Free algebra
Geometric algebra
Max-plus algebra
Mutation (algebra)
Operator algebra
Zariski's lemma
Wikipedia/Algebra_over_a_field
In mathematics, a quartic equation is one which can be expressed as a quartic function equaling zero. The general form of a quartic equation is {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0\,} where a ≠ 0. The quartic is the highest order polynomial equation that can be solved by radicals in the general case.

== History ==

The discovery of the solution of the quartic in 1540 is attributed to Lodovico Ferrari, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna (1545). The proof that this was the highest order general polynomial equation for which such solutions could be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois before his death in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.

== Special case solutions ==

Consider a quartic equation expressed in the form {\displaystyle a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{3}x+a_{4}=0}. There exists a general formula for finding the roots of quartic equations, provided the coefficient of the leading term is non-zero. However, since the general method is quite complex and susceptible to errors in execution, it is better to apply one of the special cases listed below if possible.

=== Degenerate case ===

If the constant term a4 = 0, then one of the roots is x = 0, and the other roots can be found by dividing by x and solving the resulting cubic equation, {\displaystyle a_{0}x^{3}+a_{1}x^{2}+a_{2}x+a_{3}=0.\,}

=== Evident roots: 1 and −1 and −k ===

Call our quartic polynomial Q(x). Since 1 raised to any power is 1, {\displaystyle Q(1)=a_{0}+a_{1}+a_{2}+a_{3}+a_{4}\ .} Thus if {\displaystyle \ a_{0}+a_{1}+a_{2}+a_{3}+a_{4}=0\ ,} Q(1) = 0 and so x = 1 is a root of Q(x). It can similarly be shown that if {\displaystyle \ a_{0}+a_{2}+a_{4}=a_{1}+a_{3}\ ,} x = −1 is a root. In either case the full quartic can then be divided by the factor (x − 1) or (x + 1) respectively, yielding a new cubic polynomial, which can be solved to find the quartic's other roots.

If {\displaystyle \ a_{1}=a_{0}k\ ,} {\displaystyle \ a_{2}=0\ } and {\displaystyle \ a_{4}=a_{3}k\ ,} then {\displaystyle \ x=-k\ } is a root of the equation. The full quartic can then be factorized this way: {\displaystyle \ a_{0}x^{4}+a_{0}kx^{3}+a_{3}x+a_{3}k=a_{0}x^{3}(x+k)+a_{3}(x+k)=(a_{0}x^{3}+a_{3})(x+k)\ .} Alternatively, if {\displaystyle \ a_{1}=a_{0}k\ ,} {\displaystyle \ a_{3}=a_{2}k\ ,} and {\displaystyle \ a_{4}=0\ ,} then x = 0 and x = −k become two known roots. Q(x) divided by x(x + k) is a quadratic polynomial.
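A short Python sketch of the evident-root test and the subsequent deflation by (x − 1) via synthetic division; the function name, tolerance and example are illustrative:

```python
def divide_out_root(coeffs, r):
    """Synthetic division of a polynomial (coefficients highest first)
    by (x - r); returns the quotient, assuming r really is a root."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    assert abs(out.pop()) < 1e-9       # the remainder should vanish
    return out

# x^4 - 5x^2 + 4 has the evident root 1: its coefficients sum to zero.
a = [1, 0, -5, 0, 4]
print(sum(a) == 0)                     # True, so Q(1) = 0
print(divide_out_root(a, 1))           # the cubic factor [1, 1, -4, -4]
```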
=== Biquadratic equations ===

A quartic equation where a3 and a1 are equal to 0 takes the form {\displaystyle a_{0}x^{4}+a_{2}x^{2}+a_{4}=0\,\!} and thus is a biquadratic equation, which is easy to solve: let {\displaystyle z=x^{2}}, so our equation becomes {\displaystyle a_{0}z^{2}+a_{2}z+a_{4}=0\,\!} which is a simple quadratic equation, whose solutions are easily found using the quadratic formula: {\displaystyle z={\frac {-a_{2}\pm {\sqrt {a_{2}^{2}-4a_{0}a_{4}}}}{2a_{0}}}\,\!} Once these two values z+ and z− have been found, x can be extracted from them: {\displaystyle x_{1}=+{\sqrt {z_{+}}}\,\!}, {\displaystyle x_{2}=-{\sqrt {z_{+}}}\,\!}, {\displaystyle x_{3}=+{\sqrt {z_{-}}}\,\!}, {\displaystyle x_{4}=-{\sqrt {z_{-}}}\,\!}. If either of the z solutions is negative or complex, then some of the x solutions are complex numbers.

=== Quasi-symmetric equations ===

{\displaystyle a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{1}mx+a_{0}m^{2}=0\,} Steps: Divide by x2 and use the variable change z = x + m/x, so that z2 = x2 + (m/x)2 + 2m. This leads to: {\displaystyle a_{0}(x^{2}+m^{2}/x^{2})+a_{1}(x+m/x)+a_{2}=0}, {\displaystyle a_{0}(z^{2}-2m)+a_{1}(z)+a_{2}=0}, {\displaystyle z^{2}+(a_{1}/a_{0})z+(a_{2}/a_{0}-2m)=0} (a quadratic in z = x + m/x).

=== Multiple roots ===

If the quartic has a double root, it can be found by taking the polynomial greatest common divisor with its derivative. Then they can be divided out and the resulting quadratic equation solved. In general, there exist only four possible cases of quartic equations with multiple roots, which are listed below:

Multiplicity-4 (M4): when the general quartic equation can be expressed as {\displaystyle a(x-l)^{4}=0}, for some real number {\displaystyle l}. This case can always be reduced to a biquadratic equation.
Multiplicity-3 (M3): when the general quartic equation can be expressed as {\displaystyle a(x-l)^{3}(x-m)=0}, where {\displaystyle l} and {\displaystyle m} are two different real numbers. This is the only case that can never be reduced to a biquadratic equation.
Double Multiplicity-2 (DM2): when the general quartic equation can be expressed as {\displaystyle a(x-l)^{2}(x-m)^{2}=0}, where {\displaystyle l} and {\displaystyle m} are two different real numbers or a pair of non-real complex conjugate numbers. This case can also always be reduced to a biquadratic equation.
Single Multiplicity-2 (SM2): when the general quartic equation can be expressed as {\displaystyle a(x-l)^{2}(x-m)(x-n)=0}, where {\displaystyle l}, {\displaystyle m}, and {\displaystyle n} are three different real numbers, or {\displaystyle l} is a real number and {\displaystyle m} and {\displaystyle n} are a pair of non-real complex conjugate numbers. This case is divided into two subcases, those that can be reduced to a biquadratic equation and those that cannot.
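Several of the cases above reduce to a biquadratic equation; before the detailed criteria that follow, here is a minimal Python solver for that reduction, using complex arithmetic throughout (the function name is mine):

```python
import cmath

def solve_biquadratic(a0, a2, a4):
    """All four roots of a0*x^4 + a2*x^2 + a4 = 0 via the substitution
    z = x^2, using complex arithmetic throughout."""
    disc = cmath.sqrt(a2 * a2 - 4 * a0 * a4)
    roots = []
    for z in ((-a2 + disc) / (2 * a0), (-a2 - disc) / (2 * a0)):
        s = cmath.sqrt(z)
        roots.extend([s, -s])          # x = +sqrt(z) and x = -sqrt(z)
    return roots

# x^4 - 5x^2 + 4 = 0 has the roots +/-1 and +/-2:
print(solve_biquadratic(1, -5, 4))
```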
Consider the case in which the three non-monic coefficients of the depressed quartic equation, {\displaystyle x^{4}+px^{2}+qx+r=0}, can be expressed in terms of the five coefficients of the general quartic equation as follows: {\displaystyle p={\frac {8ac-3b^{2}}{8a^{2}}}}, {\displaystyle q={\frac {b^{3}-4abc+8a^{2}d}{8a^{3}}}}, {\displaystyle r={\frac {16ab^{2}c-64a^{2}bd-3b^{4}+256a^{3}e}{256a^{4}}}}. Then, the criteria to identify a priori each case of quartic equations with multiple roots and their respective solutions are shown below.

M4. The general quartic equation corresponds to this case whenever {\displaystyle p=q=r=0}, so the four roots of this equation are given as follows: {\displaystyle x_{1}=x_{2}=x_{3}=x_{4}=-{\frac {b}{4a}}}.

M3. The general quartic equation corresponds to this case whenever {\displaystyle p^{2}=-12r>0} and {\displaystyle 27q^{2}=-8p^{3}>0}, so the four roots of this equation are given as follows if {\displaystyle q>0}: {\displaystyle x_{1}=x_{2}=x_{3}={\sqrt {-{\frac {p}{6}}}}-{\frac {b}{4a}}}, {\displaystyle x_{4}=-{\sqrt {-{\frac {3p}{2}}}}-{\frac {b}{4a}}}. Otherwise, if {\displaystyle q\leq 0}: {\displaystyle x_{1}=x_{2}=x_{3}=-{\sqrt {-{\frac {p}{6}}}}-{\frac {b}{4a}}}, {\displaystyle x_{4}={\sqrt {-{\frac {3p}{2}}}}-{\frac {b}{4a}}}.

DM2. The general quartic equation corresponds to this case whenever {\displaystyle p^{2}=4r>0=q}, so the four roots of this equation are given as follows: {\displaystyle x_{1}=x_{3}={\sqrt {-{\frac {p}{2}}}}-{\frac {b}{4a}}}, {\displaystyle x_{2}=x_{4}=-{\sqrt {-{\frac {p}{2}}}}-{\frac {b}{4a}}}.

Biquadratic SM2. The general quartic equation corresponds to this subcase of the SM2 equations whenever {\displaystyle p\neq q=r=0}, so the four roots of this equation are given as follows: {\displaystyle x_{1}=x_{2}=-{\frac {b}{4a}}}, {\displaystyle x_{3}={\sqrt {-p}}-{\frac {b}{4a}}}, {\displaystyle x_{4}=-{\sqrt {-p}}-{\frac {b}{4a}}}.

Non-Biquadratic SM2. The general quartic equation corresponds to this subcase of the SM2 equations whenever {\displaystyle (p^{2}+12r)^{3}=[p(p^{2}-36r)+{\frac {27}{2}}q^{2}]^{2}>0\neq {q}}, so the four roots of this equation are given by the following formula: {\displaystyle x={\frac {1}{2}}\left[\xi {\sqrt {s_{1}}}\pm {\sqrt {2{\biggl (}s_{2}-{\frac {\xi q}{\sqrt {s_{1}}}}{\biggr )}}}\right]-{\frac {b}{4a}}}, where: {\displaystyle s_{1}={\frac {9q^{2}-32pr}{p^{2}+12r}}>0}, {\displaystyle s_{2}=-{\frac {2p(p^{2}-4r)+9q^{2}}{2(p^{2}+12r)}}\neq 0}, {\displaystyle \xi =\pm 1}.

== The general case ==

To begin, the quartic must first be converted to a depressed quartic.

=== Converting to a depressed quartic ===

Let {\displaystyle Ax^{4}+Bx^{3}+Cx^{2}+Dx+E=0} be the general quartic equation which it is desired to solve. Divide both sides by A, {\displaystyle \ x^{4}+{B \over A}x^{3}+{C \over A}x^{2}+{D \over A}x+{E \over A}=0\ .}
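As an aside before continuing the reduction, the closed formulas for p, q and r given above translate directly into code; a minimal sketch with an M4 example (the function name is mine):

```python
def depressed_coeffs(a, b, c, d, e):
    """Coefficients (p, q, r) of the depressed quartic x^4 + p x^2 + q x + r
    obtained from a x^4 + b x^3 + c x^2 + d x + e by the formulas above."""
    p = (8*a*c - 3*b*b) / (8*a*a)
    q = (b**3 - 4*a*b*c + 8*a*a*d) / (8*a**3)
    r = (16*a*b*b*c - 64*a*a*b*d - 3*b**4 + 256*a**3*e) / (256*a**4)
    return p, q, r

# (x - 1)^4 = x^4 - 4x^3 + 6x^2 - 4x + 1 is the M4 case: p = q = r = 0.
print(depressed_coeffs(1, -4, 6, -4, 1))   # (0.0, 0.0, 0.0)
```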
The first step, if B is not already zero, should be to eliminate the x3 term. To do this, change variables from x to u, such that {\displaystyle \ x=u-{B \over 4A}\ .} Then {\displaystyle \ \left(u-{B \over 4A}\right)^{4}+{B \over A}\left(u-{B \over 4A}\right)^{3}+{C \over A}\left(u-{B \over 4A}\right)^{2}+{D \over A}\left(u-{B \over 4A}\right)+{E \over A}=0\ .} Expanding the powers of the binomials produces {\displaystyle \ \left(u^{4}-{B \over A}u^{3}+{6u^{2}B^{2} \over 16A^{2}}-{4uB^{3} \over 64A^{3}}+{B^{4} \over 256A^{4}}\right)+{B \over A}\left(u^{3}-{3u^{2}B \over 4A}+{3uB^{2} \over 16A^{2}}-{B^{3} \over 64A^{3}}\right)+{C \over A}\left(u^{2}-{uB \over 2A}+{B^{2} \over 16A^{2}}\right)+{D \over A}\left(u-{B \over 4A}\right)+{E \over A}=0\ .} Collecting the same powers of u yields {\displaystyle \ u^{4}+\left({-3B^{2} \over 8A^{2}}+{C \over A}\right)u^{2}+\left({B^{3} \over 8A^{3}}-{BC \over 2A^{2}}+{D \over A}\right)u+\left({-3B^{4} \over 256A^{4}}+{CB^{2} \over 16A^{3}}-{BD \over 4A^{2}}+{E \over A}\right)=0\ .}

Now rename the coefficients of u. Let {\displaystyle {\begin{aligned}a&={-3B^{2} \over 8A^{2}}+{C \over A}\ ,\\b&={B^{3} \over 8A^{3}}-{BC \over 2A^{2}}+{D \over A}\ ,\\c&={-3B^{4} \over 256A^{4}}+{CB^{2} \over 16A^{3}}-{BD \over 4A^{2}}+{E \over A}\ .\end{aligned}}} The resulting equation is {\displaystyle u^{4}+au^{2}+bu+c=0\qquad \qquad (1)} which is a depressed quartic equation.

If {\displaystyle \ b=0\ } then we have the special case of a biquadratic equation, which is easily solved, as explained above. Note that the general solution, given below, will not work for the special case {\displaystyle \ b=0\ ;} the equation must then be solved as a biquadratic. In either case, once the depressed quartic is solved for u, substituting those values into {\displaystyle \ x=u-{B \over 4A}\ } produces the values for x that solve the original quartic.

=== Solving a depressed quartic when b ≠ 0 ===

After converting to the depressed quartic equation {\displaystyle u^{4}+au^{2}+bu+c=0} and excluding the special case b = 0, which is solved as a biquadratic, we assume from here on that b ≠ 0. We will separate the terms left and right as {\displaystyle u^{4}=-au^{2}-bu-c} and add in terms to both sides which make them both into perfect squares.

Let y be any solution of this cubic equation: {\displaystyle 2y^{3}-ay^{2}-2cy+(ac-{\tfrac {1}{4}}b^{2})=(2y-a)(y^{2}-c)-{\tfrac {1}{4}}b^{2}=0\ .} Then (since b ≠ 0) {\displaystyle 2y-a\neq 0} so we may divide by it, giving {\displaystyle y^{2}-c={\frac {b^{2}}{4(2y-a)}}\ .} Then
{\displaystyle (u^{2}+y)^{2}=u^{4}+2yu^{2}+y^{2}=(2y-a)u^{2}-bu+(y^{2}-c)=(2y-a)u^{2}-bu+{\frac {b^{2}}{\ 4(2y-a)\ }}=\left({\sqrt {2y-a\ }}\,u-{\frac {b}{2{\sqrt {2y-a\ }}}}\right)^{2}\ .} Subtracting, we get the difference of two squares, which is the product of the sum and difference of their roots: {\displaystyle (u^{2}+y)^{2}-\left({\sqrt {2y-a\ }}\,u-{\frac {b}{2{\sqrt {2y-a\ }}}}\right)^{2}=\left(u^{2}+y+{\sqrt {2y-a\ }}\,u-{\frac {b}{2{\sqrt {2y-a\ }}}}\right)\left(u^{2}+y-{\sqrt {2y-a\ }}\,u+{\frac {b}{2{\sqrt {2y-a\ }}}}\right)=0} which can be solved by applying the quadratic formula to each of the two factors. So the possible values of u are: {\displaystyle u={\tfrac {1}{2}}\left(-{\sqrt {2y-a\ }}+{\sqrt {-2y-a+{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ ,} {\displaystyle u={\tfrac {1}{2}}\left(-{\sqrt {2y-a\ }}-{\sqrt {-2y-a+{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ ,} {\displaystyle u={\tfrac {1}{2}}\left({\sqrt {2y-a\ }}+{\sqrt {-2y-a-{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ ,} or {\displaystyle u={\tfrac {1}{2}}\left({\sqrt {2y-a\ }}-{\sqrt {-2y-a-{\frac {2b}{\sqrt {2y-a\ }}}\ }}\right)\ .} Using another y from among the three roots of the cubic simply causes these same four values of u to appear in a different order.

The solutions of the cubic are: {\displaystyle \ y={\frac {a}{6}}+w-{\frac {p}{3w}}\ } where {\displaystyle \ w={\sqrt[{3}]{-{\frac {q}{2}}+{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}\ }}\ }}} using any one of the three possible cube roots (a wise strategy is to choose the sign of the square root that makes the absolute value of w as large as possible), and {\displaystyle \ p=-{\frac {a^{2}}{12}}-c\ ,} {\displaystyle \ q=-{\frac {a^{3}}{108}}+{\frac {ac}{3}}-{\frac {b^{2}}{8}}\ .}

=== Ferrari's solution ===

Otherwise, the depressed quartic can be solved by means of a method discovered by Lodovico Ferrari. Once the depressed quartic has been obtained, the next step is to add the valid identity {\displaystyle \left(u^{2}+a\right)^{2}-u^{4}-2au^{2}=a^{2}} to equation (1), yielding {\displaystyle \left(u^{2}+a\right)^{2}+bu+c=au^{2}+a^{2}.\qquad \qquad (2)} The effect has been to fold up the u4 term into a perfect square: (u2 + a)2. The second term, au2, did not disappear, but its sign has changed and it has been moved to the right side. The next step is to insert a variable y into the perfect square on the left side of equation (2), and a corresponding 2y into the coefficient of u2 on the right side.
To accomplish these insertions, the following valid formulas will be added to equation (2): {\displaystyle {\begin{aligned}(u^{2}+a+y)^{2}-(u^{2}+a)^{2}&=2y(u^{2}+a)+y^{2}\ \ \\&=2yu^{2}+2ya+y^{2},\end{aligned}}} and {\displaystyle 0=(a+2y)u^{2}-2yu^{2}-au^{2}\,} These two formulas, added together, produce {\displaystyle \left(u^{2}+a+y\right)^{2}-\left(u^{2}+a\right)^{2}=\left(a+2y\right)u^{2}-au^{2}+2ya+y^{2}\qquad \qquad (y{\hbox{-insertion}})\,} which added to equation (2) produces {\displaystyle \left(u^{2}+a+y\right)^{2}+bu+c=\left(a+2y\right)u^{2}+\left(2ya+y^{2}+a^{2}\right).\,} This is equivalent to {\displaystyle \left(u^{2}+a+y\right)^{2}=\left(a+2y\right)u^{2}-bu+\left(y^{2}+2ya+a^{2}-c\right).\qquad \qquad (3)}

The objective now is to choose a value for y such that the right side of equation (3) becomes a perfect square. This can be done by letting the discriminant of the quadratic function become zero. To explain this, first expand a perfect square so that it equals a quadratic function: {\displaystyle \left(su+t\right)^{2}=\left(s^{2}\right)u^{2}+\left(2st\right)u+\left(t^{2}\right).\,} The quadratic function on the right side has three coefficients. It can be verified that squaring the second coefficient and then subtracting four times the product of the first and third coefficients yields zero: {\displaystyle \left(2st\right)^{2}-4\left(s^{2}\right)\left(t^{2}\right)=0.\,} Therefore, to make the right side of equation (3) into a perfect square, the following equation must be solved: {\displaystyle (-b)^{2}-4\left(2y+a\right)\left(y^{2}+2ya+a^{2}-c\right)=0.\,} Multiply the binomial with the polynomial, {\displaystyle b^{2}-4\left(2y^{3}+5ay^{2}+\left(4a^{2}-2c\right)y+\left(a^{3}-ac\right)\right)=0\,} Divide both sides by −4, and move the −b2/4 to the right, {\displaystyle 2y^{3}+5ay^{2}+\left(4a^{2}-2c\right)y+\left(a^{3}-ac-{\frac {b^{2}}{4}}\right)=0} Divide both sides by 2, {\displaystyle y^{3}+{\frac {5}{2}}ay^{2}+\left(2a^{2}-c\right)y+\left({\frac {a^{3}}{2}}-{\frac {ac}{2}}-{\frac {b^{2}}{8}}\right)=0.\qquad \qquad (4)} This is a cubic equation in y. Solve for y using any method for solving such equations (e.g. conversion to a reduced cubic and application of Cardano's formula). Any of the three possible roots will do.

==== Folding the second perfect square ====

With the value for y so selected, it is now known that the right side of equation (3) is a perfect square of the form {\displaystyle \left(s^{2}\right)u^{2}+(2st)u+\left(t^{2}\right)=\left(\left({\sqrt {s^{2}}}\right)u+{(2st) \over 2{\sqrt {s^{2}}}}\right)^{2}} (This is correct for both signs of square root, as long as the same sign is taken for both square roots. A ± is redundant, as it would be absorbed by another ± a few equations further down this page.) so that it can be folded: {\displaystyle (a+2y)u^{2}+(-b)u+\left(y^{2}+2ya+a^{2}-c\right)=\left(\left({\sqrt {a+2y}}\right)u+{(-b) \over 2{\sqrt {a+2y}}}\right)^{2}.} Note: If b ≠ 0 then a + 2y ≠ 0. If b = 0 then this would be a biquadratic equation, which we solved earlier.
Therefore equation (3) becomes {\displaystyle \left(u^{2}+a+y\right)^{2}=\left(\left({\sqrt {a+2y}}\right)u-{b \over 2{\sqrt {a+2y}}}\right)^{2}.\qquad \qquad (5)} Equation (5) has a pair of folded perfect squares, one on each side of the equation. The two perfect squares balance each other. If two squares are equal, then the sides of the two squares are also equal, as shown by: {\displaystyle u^{2}+a+y=\pm _{s}\left(\left({\sqrt {a+2y}}\right)u-{b \over 2{\sqrt {a+2y}}}\right).\qquad \qquad (5')} Collecting like powers of u produces {\displaystyle u^{2}+\left(\mp _{s}{\sqrt {a+2y}}\right)u+\left(a+y\pm _{s}{b \over 2{\sqrt {a+2y}}}\right)=0.\qquad \qquad (6)} Note: The subscript s of {\displaystyle \pm _{s}} and {\displaystyle \mp _{s}} is to note that they are dependent.

Equation (6) is a quadratic equation for u. Its solution is {\displaystyle u={\frac {\pm _{s}{\sqrt {a+2y}}\pm _{t}{\sqrt {(a+2y)-4\left(a+y\pm _{s}{b \over 2{\sqrt {a+2y}}}\right)}}}{2}}.} Simplifying, one gets {\displaystyle u={\pm _{s}{\sqrt {a+2y}}\pm _{t}{\sqrt {-\left(3a+2y\pm _{s}{2b \over {\sqrt {a+2y}}}\right)}} \over 2}.} This is the solution of the depressed quartic, therefore the solutions of the original quartic equation are {\displaystyle x=-{B \over 4A}+{\pm _{s}{\sqrt {a+2y}}\pm _{t}{\sqrt {-\left(3a+2y\pm _{s}{2b \over {\sqrt {a+2y}}}\right)}} \over 2}.} Remember: The two {\displaystyle \pm _{s}} come from the same place in equation (5'), and should both have the same sign, while the sign of {\displaystyle \pm _{t}} is independent.

==== Summary of Ferrari's method ====

Given the quartic equation {\displaystyle Ax^{4}+Bx^{3}+Cx^{2}+Dx+E=0,\,} its solution can be found by means of the following calculations: {\displaystyle a=-{3B^{2} \over 8A^{2}}+{C \over A},} {\displaystyle b={B^{3} \over 8A^{3}}-{BC \over 2A^{2}}+{D \over A},} {\displaystyle c=-{3B^{4} \over 256A^{4}}+{CB^{2} \over 16A^{3}}-{BD \over 4A^{2}}+{E \over A}.} If {\displaystyle \,b=0,} then {\displaystyle x=-{B \over 4A}\pm _{s}{\sqrt {-a\pm _{t}{\sqrt {a^{2}-4c}} \over 2}}\qquad {\mbox{(for }}b=0{\mbox{ only)}}.} Otherwise, continue with {\displaystyle P=-{a^{2} \over 12}-c,} {\displaystyle Q=-{a^{3} \over 108}+{ac \over 3}-{b^{2} \over 8},} {\displaystyle R=-{Q \over 2}\pm {\sqrt {{Q^{2} \over 4}+{P^{3} \over 27}}},} (either sign of the square root will do) {\displaystyle U={\sqrt[{3}]{R}},} (there are 3 complex roots, any one of them will do) {\displaystyle y=-{5 \over 6}a+{\begin{cases}-{\sqrt[{3}]{Q}}&{\text{if }}U=0\\U-{P \over 3U}&{\text{if }}U\neq 0,\end{cases}}} {\displaystyle W={\sqrt {a+2y}}} and finally {\displaystyle x=-{B \over 4A}+{\pm _{s}W\pm _{t}{\sqrt {-\left(3a+2y\pm _{s}{2b \over W}\right)}} \over 2}.} The two ±s must have the same sign, the ±t is independent. To get all roots, compute x for (±s,±t) = (+,+); (+,−); (−,+); (−,−). This formula handles repeated roots without problem.

Ferrari was the first to discover one of these labyrinthine solutions. The equation which he solved was {\displaystyle x^{4}+6x^{2}-60x+36=0} which was already in depressed form. It has a pair of solutions which can be found with the set of formulas shown above.
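The summary above can be transcribed almost line by line into Python; the following sketch is a direct, unoptimized transcription (complex arithmetic throughout, with no attention paid to numerical robustness or degenerate coefficients), tried on Ferrari's own example:

```python
import cmath

def solve_quartic(A, B, C, D, E):
    """All four roots of A x^4 + B x^3 + C x^2 + D x + E = 0, following
    the summary of Ferrari's method above."""
    a = -3*B*B / (8*A*A) + C/A
    b = B**3 / (8*A**3) - B*C / (2*A*A) + D/A
    c = -3*B**4 / (256*A**4) + C*B*B / (16*A**3) - B*D / (4*A*A) + E/A
    if b == 0:   # biquadratic special case
        return [-B/(4*A) + s * cmath.sqrt((-a + t * cmath.sqrt(a*a - 4*c)) / 2)
                for s in (1, -1) for t in (1, -1)]
    P = -a*a / 12 - c
    Q = -a**3 / 108 + a*c/3 - b*b/8
    R = -Q/2 + cmath.sqrt(Q*Q/4 + P**3/27)   # either sign of the root will do
    U = R ** (1/3)                           # any of the three cube roots will do
    y = -5*a/6 + (U - P/(3*U) if U != 0 else -Q ** (1/3))
    W = cmath.sqrt(a + 2*y)                  # nonzero since b != 0
    roots = []
    for s in (1, -1):                        # the two +/-_s signs must agree
        inner = cmath.sqrt(-(3*a + 2*y + s * 2*b/W))
        for t in (1, -1):                    # +/-_t is independent
            roots.append(-B/(4*A) + (s*W + t*inner) / 2)
    return roots

# Ferrari's own example, x^4 + 6x^2 - 60x + 36 = 0:
for r in solve_quartic(1, 0, 6, -60, 36):
    print(r)
```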
=== Ferrari's solution in the special case of real coefficients ===

If the coefficients of the quartic equation are real, then the nested depressed cubic equation {\displaystyle v^{3}+Pv+Q=0} that is solved in the summary above also has real coefficients, thus it has at least one real root. Furthermore, the cubic function {\displaystyle C(v)=v^{3}+Pv+Q,} where P and Q are as given above, has the properties that {\displaystyle C\left({a \over 3}\right)={-b^{2} \over 8}<0} and {\displaystyle \lim _{v\to \infty }C(v)=\infty ,} where a and b are the depressed-quartic coefficients given by (1). This means that the cubic C(v) = 0 has a real root greater than {\displaystyle a \over 3}, and therefore that the cubic equation (4) in y has a real root greater than {\displaystyle -a \over 2}. Using this root, the term {\displaystyle {\sqrt {a+2y}}} in (6) is always real, which ensures that the two quadratic equations (6) have real coefficients.

=== Obtaining alternative solutions the hard way ===

It can happen that the formulas above yield only one solution, because not all four sign patterns are tried, and that this solution is complex. It may also be the case that one is only looking for a real solution. Let x1 denote the complex solution. If all the original coefficients A, B, C, D and E are real (which should be the case when one desires only real solutions), then there is another complex solution x2 which is the complex conjugate of x1. If the other two roots are denoted as x3 and x4, then the quartic equation can be expressed as {\displaystyle (x-x_{1})(x-x_{2})(x-x_{3})(x-x_{4})=0,\,} but this quartic equation is equivalent to the product of two quadratic equations: {\displaystyle (x-x_{1})(x-x_{2})=0\qquad \qquad (9)} and {\displaystyle (x-x_{3})(x-x_{4})=0.\qquad \qquad (10)} Since {\displaystyle x_{2}=x_{1}^{\star }} then {\displaystyle {\begin{aligned}(x-x_{1})(x-x_{2})&=x^{2}-(x_{1}+x_{1}^{\star })x+x_{1}x_{1}^{\star }\\&=x^{2}-2\operatorname {Re} (x_{1})x+[\operatorname {Re} (x_{1})]^{2}+[\operatorname {Im} (x_{1})]^{2}.\end{aligned}}} Let {\displaystyle a=-2\operatorname {Re} (x_{1}),} {\displaystyle b=\left[\operatorname {Re} (x_{1})\right]^{2}+\left[\operatorname {Im} (x_{1})\right]^{2}} so that equation (9) becomes {\displaystyle x^{2}+ax+b=0.\qquad \qquad (11)} Also let there be (unknown) variables w and v such that equation (10) becomes {\displaystyle x^{2}+wx+v=0.\qquad \qquad (12)} Multiplying equations (11) and (12) produces {\displaystyle x^{4}+(a+w)x^{3}+(b+wa+v)x^{2}+(wb+va)x+vb=0.\qquad \qquad (13)} Comparing equation (13) to the original quartic equation, it can be seen that {\displaystyle a+w={B \over A},} {\displaystyle b+wa+v={C \over A},} {\displaystyle wb+va={D \over A},} and {\displaystyle vb={E \over A}.} Therefore {\displaystyle w={B \over A}-a={B \over A}+2\operatorname {Re} (x_{1}),} {\displaystyle v={E \over Ab}={\frac {E}{A\left(\left[\operatorname {Re} (x_{1})\right]^{2}+\left[\operatorname {Im} (x_{1})\right]^{2}\right)}}.} Equation (12) can be solved for x, yielding {\displaystyle x_{3}={-w+{\sqrt {w^{2}-4v}} \over 2},} {\displaystyle x_{4}={-w-{\sqrt {w^{2}-4v}} \over 2}.} One of these two solutions should be the desired real solution.
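A short sketch of this recovery step: given one complex root x1 of a real-coefficient quartic, reconstruct w and v from the comparisons above and solve equation (12); the function name and example polynomial are my own illustrations:

```python
import cmath

def other_two_roots(A, B, E, x1):
    """Given one complex root x1 of a real-coefficient quartic
    A x^4 + B x^3 + C x^2 + D x + E = 0 (so its conjugate is also a
    root), recover the two remaining roots."""
    a = -2 * x1.real
    b = x1.real**2 + x1.imag**2
    w = B / A - a                    # from a + w = B/A
    v = E / (A * b)                  # from v b = E/A
    disc = cmath.sqrt(w * w - 4 * v)
    return (-w + disc) / 2, (-w - disc) / 2

# x^4 - 2x^3 + x^2 + 2x - 2 = (x^2 - 2x + 2)(x^2 - 1) has the complex
# root 1 + i; the remaining roots are 1 and -1:
print(other_two_roots(1, -2, -2, complex(1, 1)))
```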
Let 0 = x 4 + b x 3 + c x 2 + d x + e = ( x 2 + p x + q ) ( x 2 + r x + s ) = x 4 + ( p + r ) x 3 + ( q + s + p r ) x 2 + ( p s + q r ) x + q s {\displaystyle {\begin{aligned}0&=x^{4}+bx^{3}+cx^{2}+dx+e\\&=\left(x^{2}+px+q\right)\left(x^{2}+rx+s\right)\\&=x^{4}+(p+r)x^{3}+(q+s+pr)x^{2}+(ps+qr)x+qs\end{aligned}}} By equating coefficients, this results in the following set of simultaneous equations: b = p + r c = q + s + p r d = p s + q r e = q s {\displaystyle {\begin{aligned}b&=p+r\\c&=q+s+pr\\d&=ps+qr\\e&=qs\end{aligned}}} This is harder to solve than it looks, but if we start again with a depressed quartic where b = 0 {\displaystyle b=0} , which can be obtained by substituting ( x − b / 4 ) {\displaystyle (x-b/4)} for x {\displaystyle x} , then r = − p {\displaystyle r=-p} , and: c + p 2 = s + q d / p = s − q e = s q {\displaystyle {\begin{aligned}c+p^{2}&=s+q\\d/p&=s-q\\e&=sq\end{aligned}}} It's now easy to eliminate both s {\displaystyle s} and q {\displaystyle q} by doing the following: ( c + p 2 ) 2 − ( d / p ) 2 = ( s + q ) 2 − ( s − q ) 2 = 4 s q = 4 e {\displaystyle {\begin{aligned}\left(c+p^{2}\right)^{2}-(d/p)^{2}&=(s+q)^{2}-(s-q)^{2}\\&=4sq\\&=4e\end{aligned}}} If we set P = p 2 {\displaystyle P=p^{2}} , then this equation turns into the cubic equation: P 3 + 2 c P 2 + ( c 2 − 4 e ) P − d 2 = 0 {\displaystyle P^{3}+2cP^{2}+\left(c^{2}-4e\right)P-d^{2}=0} which is solved elsewhere. Once you have p {\displaystyle p} , then: r = − p 2 s = c + p 2 + d / p 2 q = c + p 2 − d / p {\displaystyle {\begin{aligned}r&=-p\\2s&=c+p^{2}+d/p\\2q&=c+p^{2}-d/p\end{aligned}}} The symmetries in this solution are easy to see. There are three roots of the cubic, corresponding to the three ways that a quartic can be factored into two quadratics, and choosing positive or negative values of p {\displaystyle p} for the square root of P {\displaystyle P} merely exchanges the two quadratics with one another. === Möbius transformation method === A suitably chosen Möbius transformation can transform a quartic equation into a quadratic equation in the new variable squared. This is a known method. Finding such a Möbius transformation involves solving a cubic equation and so simplifies the problem. For example, start with the depressed quartic equation with unity leading coefficient and with neither a 1 {\displaystyle a_{1}} nor a 0 {\displaystyle a_{0}} equal to zero: x 4 + a 2 x 2 + a 1 x + a 0 = 0 {\displaystyle x^{4}+a_{2}x^{2}+a_{1}x+a_{0}=0} and do the Möbius transformation: x = A + B y 1 + y {\displaystyle x={\frac {A+By}{1+y}}} Set the first and third order coefficients of the resulting quartic equation in y {\displaystyle y} to zero. After some algebra, one finds A + B {\displaystyle A+B} is to be obtained from the cubic equation a 1 ( A + B ) 3 + ( 4 a 0 − 2 a 1 a 2 − a 2 2 ) ( A + B ) 2 − 2 a 1 a 2 ( A + B ) − a 1 2 = 0 {\displaystyle a_{1}(A+B)^{3}+(4a_{0}-2a_{1}a_{2}-{a_{2}}^{2})(A+B)^{2}-2a_{1}a_{2}(A+B)-{a_{1}}^{2}=0} and, regarding A + B {\displaystyle A+B} as known, A {\displaystyle A} is to be obtained from the quadratic equation 2 ( A + B ) A 2 − 2 ( A + B ) 2 A − a 2 ( A + B ) − a 1 = 0 {\displaystyle 2(A+B)A^{2}-2(A+B)^{2}A-a_{2}(A+B)-a_{1}=0} Solving the resulting quadratic equation for y 2 {\displaystyle y^{2}} gives two values for y 2 {\displaystyle y^{2}} and each square root of y 2 {\displaystyle y^{2}} has two values, giving a total of four solutions, as expected. 
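Returning to the "quick and memorable" factorization above: once the resolvent cubic in P = p^2 is solved numerically, the split into two quadratics is immediate. A sketch (NumPy; it assumes d != 0, so that every root P of the cubic is nonzero):

import numpy as np

def split_depressed_quartic(c, d, e):
    # resolvent cubic P^3 + 2c P^2 + (c^2 - 4e) P - d^2 = 0, with P = p^2
    P = np.roots([1, 2*c, c*c - 4*e, -d*d])[0]   # any of the three roots works
    p = np.sqrt(P + 0j)
    s = (c + P + d/p) / 2
    q = (c + P - d/p) / 2
    return p, q, s          # the factors are (x^2 + p x + q)(x^2 - p x + s)

p, q, s = split_depressed_quartic(-5.0, 2.0, 3.0)
print(q + s - p*p, p*(s - q), q*s)   # recovers c, d, e up to rounding

Choosing a different root of the cubic, or the opposite sign of the square root, merely permutes or swaps the two quadratic factors, exactly as described above.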
The cubic equation in A + B {\displaystyle {\textbf {A}}+{\textbf {B}}} given earlier is the same as P 2 − Q ( A + B ) 2 = 0 {\displaystyle P^{2}-Q(A+B)^{2}=0} , where P ≡ b 1 − b 3 2 ( A − B ) = 2 A B ( A + B ) + a 2 ( A + B ) + a 1 {\displaystyle P\equiv {\frac {b_{1}-b_{3}}{2(A-B)}}=2\,A\,B\,(A+B)+a_{2}(A+B)+a_{1}} Q ≡ B b 1 − A b 3 A − B = 4 A 2 B 2 − a 1 ( A + B ) − 4 a 0 = 0 {\displaystyle Q\equiv {\frac {B\,b_{1}-A\,b_{3}}{A-B}}=4A^{2}B^{2}-a_{1}(A+B)-4a_{0}=0} Here bi are the coefficients of the quartic polynomial in y. This shows how this equation was obtained. === Galois theory and factorization === The symmetric group S4 on four elements has the Klein four-group as a normal subgroup. This suggests using a resolvent whose roots may be variously described as a discrete Fourier transform or a Hadamard matrix transform of the roots. Suppose ri for i from 0 to 3 are roots of x 4 + b x 3 + c x 2 + d x + e = 0 ( 1 ) {\displaystyle x^{4}+bx^{3}+cx^{2}+dx+e=0\qquad (1)} If we now set s 0 = 1 2 ( r 0 + r 1 + r 2 + r 3 ) , s 1 = 1 2 ( r 0 − r 1 + r 2 − r 3 ) , s 2 = 1 2 ( r 0 + r 1 − r 2 − r 3 ) , s 3 = 1 2 ( r 0 − r 1 − r 2 + r 3 ) , {\displaystyle {\begin{aligned}s_{0}&={\tfrac {1}{2}}(r_{0}+r_{1}+r_{2}+r_{3}),\\s_{1}&={\tfrac {1}{2}}(r_{0}-r_{1}+r_{2}-r_{3}),\\s_{2}&={\tfrac {1}{2}}(r_{0}+r_{1}-r_{2}-r_{3}),\\s_{3}&={\tfrac {1}{2}}(r_{0}-r_{1}-r_{2}+r_{3}),\end{aligned}}} then since the transformation is an involution, we may express the roots in terms of the four si in exactly the same way. Since we know the value s0 = −b/2, we really only need the values for s1, s2 and s3. These we may find by expanding the polynomial ( z 2 − s 1 2 ) ( z 2 − s 2 2 ) ( z 2 − s 3 2 ) ( 2 ) {\displaystyle \left(z^{2}-s_{1}^{2}\right)\left(z^{2}-s_{2}^{2}\right)\left(z^{2}-s_{3}^{2}\right)\qquad (2)} which if we make the simplifying assumption that b = 0, is equal to z 6 + 2 c z 4 + ( c 2 − 4 e ) z 2 − d 2 ( 3 ) {\displaystyle z^{6}+2cz^{4}+\left(c^{2}-4e\right)z^{2}-d^{2}\qquad (3)} This polynomial is of degree six, but only of degree three in z2, and so the corresponding equation is solvable. By trial we can determine which three roots are the correct ones, and hence find the solutions of the quartic. We can remove any requirement for trial by using a root of the same resolvent polynomial for factoring; if w is any root of (3), and if F 1 = x 2 + w x + 1 2 w 2 + 1 2 c − 1 2 ⋅ c 2 w d − 1 2 ⋅ w 5 d − c w 3 d + 2 e w d {\displaystyle F_{1}=x^{2}+wx+{\frac {1}{2}}w^{2}+{\frac {1}{2}}c-{\frac {1}{2}}\cdot {\frac {c^{2}w}{d}}-{\frac {1}{2}}\cdot {\frac {w^{5}}{d}}-{\frac {cw^{3}}{d}}+2{\frac {ew}{d}}} F 2 = x 2 − w x + 1 2 w 2 + 1 2 c + 1 2 ⋅ w 5 d + c w 3 d − 2 e w d + 1 2 ⋅ c 2 w d {\displaystyle F_{2}=x^{2}-wx+{\frac {1}{2}}w^{2}+{\frac {1}{2}}c+{\frac {1}{2}}\cdot {\frac {w^{5}}{d}}+{\frac {cw^{3}}{d}}-2{\frac {ew}{d}}+{\frac {1}{2}}\cdot {\frac {c^{2}w}{d}}} then F 1 F 2 = x 4 + c x 2 + d x + e ( 4 ) {\displaystyle F_{1}F_{2}=x^{4}+cx^{2}+dx+e\qquad \qquad (4)} We therefore can solve the quartic by solving for w and then solving for the roots of the two factors using the quadratic formula. === Approximate methods === The methods described above are, in principle, exact root-finding methods. It is also possible to use successive approximation methods which iteratively converge towards the roots, such as the Durand–Kerner method. Iterative methods are the only ones available for quintic and higher-order equations, beyond trivial or special cases. 
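As a concrete instance of such an iteration, here is a bare-bones Durand–Kerner sketch in Python (a sketch only: fixed sweep count, no convergence test; the starting constant 0.4 + 0.9i is the customary choice of a point that is not a root of unity):

def durand_kerner(coeffs, sweeps=100):
    # roots of the monic polynomial x^n + coeffs[0] x^(n-1) + ... + coeffs[-1]
    n = len(coeffs)
    def p(x):
        y = complex(1)
        for c in coeffs:
            y = y*x + c          # Horner evaluation
        return y
    roots = [(0.4 + 0.9j)**k for k in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            denom = 1
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / denom   # Weierstrass correction
    return roots

print(durand_kerner([0, 6, -60, 36]))   # Ferrari's quartic again, solved iteratively

Each sweep updates every approximation simultaneously, and all four roots converge together for generic starting points.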
== See also == Linear equation Quadratic equation Cubic equation Quintic equation Polynomial Newton's method Principal equation form
Wikipedia/Quartic_equation
Commutative algebra, first known as ideal theory, is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers Z {\displaystyle \mathbb {Z} } ; and p-adic integers. Commutative algebra is the main technical tool of algebraic geometry, and many results and concepts of commutative algebra are strongly related to geometrical concepts. The study of rings that are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras. == Overview == Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry. Several concepts of commutative algebra have been developed in relation with algebraic number theory, such as Dedekind rings (the main class of commutative rings occurring in algebraic number theory), integral extensions, and valuation rings. Polynomial rings in several indeterminates over a field are examples of commutative rings. Since algebraic geometry is fundamentally the study of the common zeros of polynomials in these rings, many results and concepts of algebraic geometry have counterparts in commutative algebra, and their names often recall their geometric origin; for example "Krull dimension", "localization of a ring", "local ring", "regular ring". An affine algebraic variety corresponds to a prime ideal in a polynomial ring, and the points of such an affine variety correspond to the maximal ideals that contain this prime ideal. The Zariski topology, originally defined on an algebraic variety, has been extended to the sets of the prime ideals of any commutative ring; for this topology, the closed sets are the sets of prime ideals that contain a given ideal. The spectrum of a ring is a ringed space formed by the prime ideals equipped with the Zariski topology, and the localizations of the ring at the open sets of a basis of this topology. This is the starting point of scheme theory, a generalization of algebraic geometry introduced by Grothendieck, which is strongly based on commutative algebra, and has induced, in turn, many developments of commutative algebra. == History == The subject, first known as ideal theory, began with Richard Dedekind's work on ideals, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a more abstract approach to replace the more concrete and computationally oriented methods grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly influenced Emmy Noether, who recast many earlier results in terms of an ascending chain condition, now known as the Noetherian condition. Another important milestone was the work of Hilbert's student Emanuel Lasker, who introduced primary ideals and proved the first version of the Lasker–Noether theorem. The main figure responsible for the birth of commutative algebra as a mature subject was Wolfgang Krull, who introduced the fundamental notions of localization and completion of a ring, as well as that of regular local rings.
He established the concept of the Krull dimension of a ring, first for Noetherian rings before moving on to expand his theory to cover general valuation rings and Krull rings. To this day, Krull's principal ideal theorem is widely considered the single most important foundational theorem in commutative algebra. These results paved the way for the introduction of commutative algebra into algebraic geometry, an idea which would revolutionize the latter subject. Much of the modern development of commutative algebra emphasizes modules. Both ideals of a ring R and R-algebras are special cases of R-modules, so module theory encompasses both ideal theory and the theory of ring extensions. Though it was already incipient in Kronecker's work, the modern approach to commutative algebra using module theory is usually credited to Krull and Noether. == Main tools and results == === Noetherian rings === A Noetherian ring, named after Emmy Noether, is a ring in which every ideal is finitely generated; that is, all elements of any ideal can be written as linear combinations of a finite set of elements, with coefficients in the ring. Many commonly considered commutative rings are Noetherian, in particular every field, the ring of the integers, and every polynomial ring in one or several indeterminates over them. The fact that polynomial rings over a field are Noetherian is called Hilbert's basis theorem. Moreover, many ring constructions preserve the Noetherian property. In particular, if a commutative ring R is Noetherian, the same is true for every polynomial ring over it, and for every quotient ring, localization, or completion of the ring. The importance of the Noetherian property lies in its ubiquity and also in the fact that many important theorems of commutative algebra require that the involved rings are Noetherian. This is the case, in particular, of the Lasker–Noether theorem, the Krull intersection theorem, and Nakayama's lemma. Furthermore, if a ring is Noetherian, then it satisfies the descending chain condition on prime ideals, which implies that every Noetherian local ring has a finite Krull dimension. === Primary decomposition === An ideal Q of a ring is said to be primary if Q is proper and whenever xy ∈ Q, either x ∈ Q or y^n ∈ Q for some positive integer n. In Z, the primary ideals are precisely the ideals of the form (p^e) where p is prime and e is a positive integer. Thus, a primary decomposition of (n) corresponds to representing (n) as the intersection of finitely many primary ideals. The Lasker–Noether theorem may be seen as a certain generalization of the fundamental theorem of arithmetic: for any primary decomposition of an ideal I, the set of all radicals, that is, the set {Rad(Q1), ..., Rad(Qt)}, remains the same. In fact, it turns out that (for a Noetherian ring) the set is precisely the assassinator of the module R/I; that is, the set of all annihilators of R/I (viewed as a module over R) that are prime. === Localization === The localization is a formal way to introduce the "denominators" to a given ring or a module. That is, it introduces a new ring/module out of an existing one so that it consists of fractions m s {\displaystyle {\frac {m}{s}}} , where the denominators s range in a given subset S of R. The archetypal example is the construction of the ring Q of rational numbers from the ring Z of integers.
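In Z, both primary decomposition and localization are completely explicit, which makes them easy to check mechanically. A small sketch (SymPy's factorint and Python 3.9+'s math.lcm; the choice n = 360 is arbitrary):

from fractions import Fraction
from math import lcm
from sympy import factorint

# primary decomposition of the ideal (360) in Z: the primary ideals (p**e)
components = [p**e for p, e in factorint(360).items()]
print(components)                  # [8, 9, 5]
print(lcm(*components) == 360)     # True: in Z, (a) intersect (b) equals (lcm(a, b))

# localization of Z at S = Z \ {0} is Q itself: fractions m/s with s in S
print(Fraction(3, 4) + Fraction(1, 6))   # 11/12

Here each Rad((p**e)) is the prime ideal (p), matching the radicals statement above.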
=== Completion === A completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have simpler structure than the general ones and Hensel's lemma applies to them. === Zariski topology on prime ideals === The Zariski topology defines a topology on the spectrum of a ring (the set of prime ideals). In this formulation, the Zariski-closed sets are taken to be the sets V ( I ) = { P ∈ Spec ( A ) ∣ I ⊆ P } {\displaystyle V(I)=\{P\in \operatorname {Spec} \,(A)\mid I\subseteq P\}} where A is a fixed commutative ring and I is an ideal. This is defined in analogy with the classical Zariski topology, where closed sets in affine space are those defined by polynomial equations. To see the connection with the classical picture, note that for any set S of polynomials (over an algebraically closed field), it follows from Hilbert's Nullstellensatz that the points of V(S) (in the old sense) are exactly the tuples (a1, ..., an) such that the ideal (x1 - a1, ..., xn - an) contains S; moreover, these are maximal ideals and by the "weak" Nullstellensatz, an ideal of any affine coordinate ring is maximal if and only if it is of this form. Thus, V(S) is "the same as" the maximal ideals containing S. Grothendieck's innovation in defining Spec was to replace maximal ideals with all prime ideals; in this formulation it is natural to simply generalize this observation to the definition of a closed set in the spectrum of a ring. == Connections with algebraic geometry == Commutative algebra (in the form of polynomial rings and their quotients, used in the definition of algebraic varieties) has always been a part of algebraic geometry. However, in the late 1950s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces, which form a category that is antiequivalent (dual) to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Zariski topology in the sense of Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc. Nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions, leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. == See also == List of commutative algebra topics Glossary of commutative algebra Combinatorial commutative algebra Gröbner basis Homological algebra
Wikipedia/Commutative_algebra
An independent equation is an equation in a system of simultaneous equations which cannot be derived algebraically from the other equations. The concept typically arises in the context of linear equations. If it is possible to duplicate one of the equations in a system by multiplying each of the other equations by some number (potentially a different number for each equation) and summing the resulting equations, then that equation is dependent on the others. But if this is not possible, then that equation is independent of the others. If an equation is independent of the other equations in its system, then it provides information beyond that which is provided by the other equations. In contrast, if an equation is dependent on the others, then it provides no information not contained in the others collectively, and the equation can be dropped from the system without any information loss. The number of independent equations in a system equals the rank of the augmented matrix of the system—the system's coefficient matrix with one additional column appended, that column being the column vector of constants. The number of independent equations in a system of consistent equations (a system that has at least one solution) can never be greater than the number of unknowns. Equivalently, if a system has more independent equations than unknowns, it is inconsistent and has no solutions. The concepts of dependence and independence of systems are partially generalized in numerical linear algebra by the condition number, which (roughly) measures how close a system of equations is to being dependent (a condition number of infinity indicates a dependent system, while a system of orthogonal equations is maximally independent and has a condition number close to 1). == See also == Linear algebra Indeterminate system Independent variable
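The rank criterion above is easy to check numerically. In the small system sketched below (our own example), the third equation is the sum of the first two, so only two equations are independent:

import numpy as np

# x + 2y = 5, 3x - y = 1, and their sum 4x + y = 6
augmented = np.array([[1.0,  2.0, 5.0],
                      [3.0, -1.0, 1.0],
                      [4.0,  1.0, 6.0]])   # [coefficients | constants]
print(np.linalg.matrix_rank(augmented))    # 2 independent equations
print(np.linalg.cond(augmented))           # huge (effectively infinite): dependent rows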
Wikipedia/Independent_equation
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding: data compression (or source coding), error control (or channel coding), cryptographic coding, and line coding. Data compression attempts to remove unwanted redundancy from the data of a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller, for purposes such as reducing Internet traffic. Data compression and error correction may be studied in combination. Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes. == History of coding theory == In 1948, Claude Shannon published "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. Shannon's paper focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory. The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting a fourth. Richard Hamming won the Turing Award in 1968 for his work at Bell Labs in numerical methods, automatic coding systems, and error-detecting and error-correcting codes. He invented the concepts known as Hamming codes, Hamming windows, Hamming numbers, and Hamming distance. In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973. The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3. == Source coding == The aim of source coding is to take the source data and make it smaller. === Definition === Data can be seen as a random variable X : Ω → X {\displaystyle X:\Omega \to {\mathcal {X}}} , where x ∈ X {\displaystyle x\in {\mathcal {X}}} appears with probability P [ X = x ] {\displaystyle \mathbb {P} [X=x]} . Data are encoded by strings (words) over an alphabet Σ {\displaystyle \Sigma } . A code is a function C : X → Σ ∗ {\displaystyle C:{\mathcal {X}}\to \Sigma ^{*}} (or Σ + {\displaystyle \Sigma ^{+}} if the empty string is not part of the alphabet). C ( x ) {\displaystyle C(x)} is the code word associated with x {\displaystyle x} . Length of the code word is written as l ( C ( x ) ) .
{\displaystyle l(C(x)).} Expected length of a code is l ( C ) = ∑ x ∈ X l ( C ( x ) ) P [ X = x ] . {\displaystyle l(C)=\sum _{x\in {\mathcal {X}}}l(C(x))\mathbb {P} [X=x].} The concatenation of code words C ( x 1 , … , x k ) = C ( x 1 ) C ( x 2 ) ⋯ C ( x k ) {\displaystyle C(x_{1},\ldots ,x_{k})=C(x_{1})C(x_{2})\cdots C(x_{k})} . The code word of the empty string is the empty string itself: C ( ϵ ) = ϵ {\displaystyle C(\epsilon )=\epsilon } === Properties === C : X → Σ ∗ {\displaystyle C:{\mathcal {X}}\to \Sigma ^{*}} is non-singular if injective. C : X ∗ → Σ ∗ {\displaystyle C:{\mathcal {X}}^{*}\to \Sigma ^{*}} is uniquely decodable if injective. C : X → Σ ∗ {\displaystyle C:{\mathcal {X}}\to \Sigma ^{*}} is instantaneous if C ( x 1 ) {\displaystyle C(x_{1})} is not a proper prefix of C ( x 2 ) {\displaystyle C(x_{2})} (and vice versa). === Principle === Entropy of a source is the measure of information. Basically, source codes try to reduce the redundancy present in the source, and represent the source with fewer bits that carry more information. Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is called entropy encoding. Various techniques used by source coding schemes try to achieve the limit of entropy of the source. C(x) ≥ H(x), where H(x) is entropy of source (bitrate), and C(x) is the bitrate after compression. In particular, no source coding scheme can be better than the entropy of the source. === Example === Facsimile transmission uses a simple run length code. Source coding removes all data superfluous to the need of the transmitter, decreasing the bandwidth required for transmission. == Channel coding == The purpose of channel coding theory is to find codes which transmit quickly, contain many valid code words and can correct or at least detect many errors. While not mutually exclusive, performance in these areas is a trade-off. So, different codes are optimal for different applications. The needed properties of this code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches. CDs use cross-interleaved Reed–Solomon coding to spread the data out over the disk. Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we will examine the three repetitions bit by bit and take a majority vote. The twist on this is that we do not merely send the bits in order. We interleave them. The block of data bits is first divided into 4 smaller blocks. Then we cycle through the block and send one bit from the first, then the second, etc. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code, this may not appear effective. However, there are more powerful codes known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used. Other codes are more appropriate for different applications. Deep space communications are limited by the thermal noise of the receiver which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise, present in the telephone network and also modeled better as a continuous disturbance. Cell phones are subject to rapid fading. 
The high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. Again, there is a class of channel codes that are designed to combat fading. === Linear codes === The term algebraic coding theory denotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched. Algebraic coding theory is basically divided into two major types of codes: linear block codes and convolutional codes. It analyzes three properties of a code: the code word length, the total number of valid code words, and the minimum distance between two valid code words (measured mainly with the Hamming distance, sometimes with other distances such as the Lee distance). ==== Linear block codes ==== Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property. Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n,m,dmin) where n is the length of the codeword in symbols, m is the number of source symbols that will be used for encoding at once, and dmin is the minimum Hamming distance for the code. There are many types of linear block codes, such as cyclic codes (e.g., Hamming codes), repetition codes, parity codes, polynomial codes (e.g., BCH codes), Reed–Solomon codes, algebraic geometric codes, Reed–Muller codes, perfect codes, and locally recoverable codes. Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful (24,12) Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is) the dimensions refer to the length of the codeword as defined above. The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called "perfect" codes. The only nontrivial and useful perfect codes are the distance-3 Hamming codes with parameters satisfying (2^r − 1, 2^r − 1 − r, 3), and the [23,12,7] binary and [11,6,5] ternary Golay codes. Another code property is the number of neighbors that a single codeword may have. Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly. The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes.
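To make the minimum-distance parameter dmin concrete, here is a tiny plain-Python sketch using the three-fold repetition code from the CD discussion earlier:

def hamming(u, v):
    # number of positions in which two equal-length words differ
    return sum(a != b for a, b in zip(u, v))

code = ["000", "111"]   # the 3-fold binary repetition code
d_min = min(hamming(u, v) for u in code for v in code if u != v)
print(d_min)            # 3: any single-bit error is corrected by majority vote

In the (n, m, dmin) notation above, this is a (3, 1, 3) code.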
It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers. Properties of linear block codes are used in many applications. For example, the syndrome-coset uniqueness property of linear block codes is used in trellis shaping, one of the best-known shaping codes. ==== Convolutional codes ==== The idea behind a convolutional code is to make every codeword symbol be the weighted sum of the various input message symbols. This is like the convolution used in LTI systems to find the output of a system when the input and impulse response are known. The output of a convolutional encoder is, accordingly, the convolution of the input bit stream with the responses of the encoder's shift registers (a minimal encoder sketch is given at the end of this passage). Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. However, they often offer greater simplicity of implementation than a block code of equal power. The encoder is usually a simple circuit which has state memory and some feedback logic, normally XOR gates. The decoder can be implemented in software or firmware. The Viterbi algorithm is the optimum algorithm used to decode convolutional codes. There are simplifications to reduce the computational load. They rely on searching only the most likely paths. Although not optimum, they have generally been found to give good results in low noise environments. Convolutional codes are used in voiceband modems (V.32, V.17, V.34) and in GSM mobile phones, as well as satellite and military communication devices. == Cryptographic coding == Cryptography or cryptographic coding is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that block adversaries; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted.
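As referenced in the convolutional-codes discussion above, here is a minimal rate-1/2 feedforward encoder in Python (the taps 111 and 101 are the classic constraint-length-3 textbook pair; the function name is ours):

def conv_encode(bits, K=3, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # K-bit shift register
        out.append(bin(state & g1).count("1") % 2)    # parity of taps g1
        out.append(bin(state & g2).count("1") % 2)    # parity of taps g2
    return out

print(conv_encode([1, 0, 1, 1]))   # two output bits per input bit

A Viterbi decoder would then search the trellis of this shift-register state machine for the most likely transmitted path.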
There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms. == Line coding == A line code (also called digital baseband modulation or digital baseband transmission method) is a code chosen for use within a communications system for baseband transmission purposes. Line coding is often used for digital data transport. It consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of digital data on a transmission link is called line encoding. The common types of line encoding are unipolar, polar, bipolar, and Manchester encoding. == Other applications of coding theory == Another concern of coding theory is designing codes that help synchronization. A code may be designed so that a phase shift can be easily detected and corrected and that multiple signals can be sent on the same channel. Another application of codes, used in some mobile phone systems, is code-division multiple access (CDMA). Each phone is assigned a code sequence that is approximately uncorrelated with the codes of other phones. When transmitting, the code word is used to modulate the data bits representing the voice message. At the receiver, a demodulation process is performed to recover the data. The properties of this class of codes allow many users (with different codes) to use the same radio channel at the same time. To the receiver, the signals of other users will appear to the demodulator only as low-level noise. Another general class of codes is the automatic repeat-request (ARQ) codes. In these codes the sender adds redundancy to each message for error checking, usually by adding check bits. If the check bits are not consistent with the rest of the message when it arrives, the receiver will ask the sender to retransmit the message. All but the simplest wide area network protocols use ARQ. Common protocols include SDLC (IBM), TCP (Internet), X.25 (International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet. Is it a new one or is it a retransmission? Typically numbering schemes are used, as in TCP (see "RFC 793", Internet Engineering Task Force, September 1981). === Group testing === Group testing uses codes in a different way. Consider a large group of items in which a very few are different in a particular way (e.g., defective products or infected test subjects). The idea of group testing is to determine which items are "different" by using as few tests as possible. The origin of the problem has its roots in the Second World War when the United States Army Air Forces needed to test its soldiers for syphilis. === Analog coding === Information is encoded analogously in the neural networks of brains, in analog signal processing, and analog electronics. Aspects of analog coding include analog error correction, analog data compression and analog encryption. == Neural coding == Neural coding is a neuroscience-related field concerned with how sensory and other information is represented in the brain by networks of neurons.
The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. It is thought that neurons can encode both digital and analog information, and that neurons follow the principles of information theory and compress information, and detect and correct errors in the signals that are sent throughout the brain and wider nervous system. == See also == Coding gain Covering code Error correction code Folded Reed–Solomon code Group testing Hamming distance, Hamming weight Lee distance List of algebraic coding theory topics Spatial coding and MIMO in multiple antenna research Spatial diversity coding is spatial coding that transmits replicas of the information signal along different spatial paths, so as to increase the reliability of the data transmission. Spatial interference cancellation coding Spatial multiplex coding Timeline of information theory, data compression, and error correcting codes
Wikipedia/Coding_theory
Algebraic number theory is a branch of number theory that uses the techniques of abstract algebra to study the integers, rational numbers, and their generalizations. Number-theoretic questions are expressed in terms of properties of algebraic objects such as algebraic number fields and their rings of integers, finite fields, and function fields. These properties, such as whether a ring admits unique factorization, the behavior of ideals, and the Galois groups of fields, can resolve questions of primary importance in number theory, like the existence of solutions to Diophantine equations. == History == === Diophantus === The beginnings of algebraic number theory can be traced to Diophantine equations, named after the 3rd-century Alexandrian mathematician, Diophantus, who studied them and developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two given numbers A and B, respectively: A = x + y {\displaystyle A=x+y\ } B = x 2 + y 2 . {\displaystyle B=x^{2}+y^{2}.\ } Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation x2 + y2 = z2 are given by the Pythagorean triples, originally solved by the Babylonians (c. 1800 BC). Solutions to linear Diophantine equations, such as 26x + 65y = 13, may be found using the Euclidean algorithm (c. 5th century BC). Diophantus's major work was the Arithmetica, of which only a portion has survived. === Fermat === Fermat's Last Theorem was first conjectured by Pierre de Fermat in 1637, famously in the margin of a copy of Arithmetica where he claimed he had a proof that was too large to fit in the margin. No successful proof was published until 1995 despite the efforts of countless mathematicians during the 358 intervening years. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century. === Gauss === One of the founding works of algebraic number theory, the Disquisitiones Arithmeticae (Latin: Arithmetical Investigations) is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798 when Gauss was 21 and first published in 1801 when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds important new results of his own. Before the Disquisitiones was published, number theory consisted of a collection of isolated theorems and conjectures. Gauss brought the work of his predecessors together with his own original work into a systematic framework, filled in gaps, corrected unsound proofs, and extended the subject in numerous ways. The Disquisitiones was the starting point for the work of other nineteenth century European mathematicians including Ernst Kummer, Peter Gustav Lejeune Dirichlet and Richard Dedekind. Many of the annotations given by Gauss are in effect announcements of further research of his own, some of which remained unpublished. They must have appeared particularly cryptic to his contemporaries; we can now read them as containing the germs of the theories of L-functions and complex multiplication, in particular. === Dirichlet === In a couple of papers in 1838 and 1839 Peter Gustav Lejeune Dirichlet proved the first class number formula, for quadratic forms (later refined by his student Leopold Kronecker). 
The formula, which Jacobi called a result "touching the utmost of human acumen", opened the way for similar results regarding more general number fields. Based on his research of the structure of the unit group of quadratic fields, he proved the Dirichlet unit theorem, a fundamental result in algebraic number theory. He first used the pigeonhole principle, a basic counting argument, in the proof of a theorem in diophantine approximation, later named Dirichlet's approximation theorem after him. He published important contributions to Fermat's last theorem, for which he proved the cases n = 5 and n = 14, and to the biquadratic reciprocity law. The Dirichlet divisor problem, for which he found the first results, is still an unsolved problem in number theory despite later contributions by other researchers. === Dedekind === Richard Dedekind's study of Lejeune Dirichlet's work was what led him to his later study of algebraic number fields and ideals. In 1863, he published Lejeune Dirichlet's lectures on number theory as Vorlesungen über Zahlentheorie ("Lectures on Number Theory") about which it has been written that: "Although the book is assuredly based on Dirichlet's lectures, and although Dedekind himself referred to the book throughout his life as Dirichlet's, the book itself was entirely written by Dedekind, for the most part after Dirichlet's death." (Edwards 1983) The 1879 and 1894 editions of the Vorlesungen included supplements introducing the notion of an ideal, fundamental to ring theory. (The word "Ring", introduced later by Hilbert, does not appear in Dedekind's work.) Dedekind defined an ideal as a subset of a set of numbers, composed of algebraic integers that satisfy polynomial equations with integer coefficients. The concept underwent further development in the hands of Hilbert and, especially, of Emmy Noether. Ideals generalize Ernst Eduard Kummer's ideal numbers, devised as part of Kummer's 1843 attempt to prove Fermat's Last Theorem. === Hilbert === David Hilbert unified the field of algebraic number theory with his 1897 treatise Zahlbericht (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770. As with the finiteness theorem, he used an existence proof that shows there must be solutions for the problem rather than providing a mechanism to produce the answers. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area. He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi.
=== Modern theory === Around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama observed a possible link between two apparently completely distinct branches of mathematics: elliptic curves and modular forms. The resulting modularity theorem (at the time known as the Taniyama–Shimura conjecture) states that every elliptic curve is modular, meaning that it can be associated with a unique modular form. It was initially dismissed as unlikely or highly speculative, but was taken more seriously when number theorist André Weil found evidence supporting it, yet no proof; as a result the "astounding" conjecture was often known as the Taniyama–Shimura–Weil conjecture. It became a part of the Langlands program, a list of important conjectures needing proof or disproof. From 1993 to 1994, Andrew Wiles provided a proof of the modularity theorem for semistable elliptic curves, which, together with Ribet's theorem, provided a proof for Fermat's Last Theorem. Almost every mathematician at the time had previously considered both Fermat's Last Theorem and the Modularity Theorem either impossible or virtually impossible to prove, even given the most cutting-edge developments. Wiles first announced his proof in June 1993 in a version that was soon recognized as having a serious gap at a key point. The proof was corrected by Wiles, partly in collaboration with Richard Taylor, and the final, widely accepted version was released in September 1994, and formally published in 1995. The proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques not available to Fermat. == Basic notions == === Failure of unique factorization === An important property of the ring of integers is that it satisfies the fundamental theorem of arithmetic, that every (positive) integer has a factorization into a product of prime numbers, and this factorization is unique up to the ordering of the factors. This may no longer be true in the ring of integers O of an algebraic number field K. A prime element is an element p of O such that if p divides a product ab, then it divides one of the factors a or b. This property is closely related to primality in the integers, because any positive integer satisfying this property is either 1 or a prime number. However, it is strictly weaker. For example, −2 is not a prime number because it is negative, but it is a prime element. If factorizations into prime elements are permitted, then, even in the integers, there are alternative factorizations such as 6 = 2 ⋅ 3 = ( − 2 ) ⋅ ( − 3 ) . {\displaystyle 6=2\cdot 3=(-2)\cdot (-3).} In general, if u is a unit, meaning a number with a multiplicative inverse in O, and if p is a prime element, then up is also a prime element. Numbers such as p and up are said to be associate. In the integers, the primes p and −p are associate, but only one of these is positive. Requiring that prime numbers be positive selects a unique element from among a set of associated prime elements. When K is not the rational numbers, however, there is no analog of positivity. For example, in the Gaussian integers Z[i], the numbers 1 + 2i and −2 + i are associate because the latter is the product of the former by i, but there is no way to single out one as being more canonical than the other.
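The associate ambiguity in Z[i] is easy to exhibit with Python's built-in complex numbers (the arithmetic is exact here, since all real and imaginary parts are integers):

z = 1 + 2j
units = (1, 1j, -1, -1j)                           # the four units of Z[i]
associates = [z*u for u in units]
print(associates)                                  # (1+2j), (-2+1j), (-1-2j), (2-1j)
print({w.real**2 + w.imag**2 for w in associates}) # {5.0}: all share the same norm

No invariant like positivity is available to select a canonical associate among them.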
This leads to equations such as 5 = ( 1 + 2 i ) ( 1 − 2 i ) = ( 2 + i ) ( 2 − i ) , {\displaystyle 5=(1+2i)(1-2i)=(2+i)(2-i),} which prove that in Z[i], it is not true that factorizations are unique up to the order of the factors. For this reason, one adopts the definition of unique factorization used in unique factorization domains (UFDs). In a UFD, the prime elements occurring in a factorization are only expected to be unique up to units and their ordering. However, even with this weaker definition, many rings of integers in algebraic number fields do not admit unique factorization. There is an algebraic obstruction called the ideal class group. When the ideal class group is trivial, the ring is a UFD. When it is not, there is a distinction between a prime element and an irreducible element. An irreducible element x is an element such that if x = yz, then either y or z is a unit. These are the elements that cannot be factored any further. Every element in O admits a factorization into irreducible elements, but it may admit more than one. This is because, while all prime elements are irreducible, some irreducible elements may not be prime. For example, consider the ring Z[√-5]. In this ring, the numbers 3, 2 + √-5 and 2 - √-5 are irreducible. This means that the number 9 has two factorizations into irreducible elements, 9 = 3 2 = ( 2 + − 5 ) ( 2 − − 5 ) . {\displaystyle 9=3^{2}=(2+{\sqrt {-5}})(2-{\sqrt {-5}}).} This equation shows that 3 divides the product (2 + √-5)(2 - √-5) = 9. If 3 were a prime element, then it would divide 2 + √-5 or 2 - √-5, but it does not, because all elements divisible by 3 are of the form 3a + 3b√-5. Similarly, 2 + √-5 and 2 - √-5 divide the product 32, but neither of these elements divides 3 itself, so neither of them are prime. As there is no sense in which the elements 3, 2 + √-5 and 2 - √-5 can be made equivalent, unique factorization fails in Z[√-5]. Unlike the situation with units, where uniqueness could be repaired by weakening the definition, overcoming this failure requires a new perspective. === Factorization into prime ideals === If I is an ideal in O, then there is always a factorization I = p 1 e 1 ⋯ p t e t , {\displaystyle I={\mathfrak {p}}_{1}^{e_{1}}\cdots {\mathfrak {p}}_{t}^{e_{t}},} where each p i {\displaystyle {\mathfrak {p}}_{i}} is a prime ideal, and where this expression is unique up to the order of the factors. In particular, this is true if I is the principal ideal generated by a single element. This is the strongest sense in which the ring of integers of a general number field admits unique factorization. In the language of ring theory, it says that rings of integers are Dedekind domains. When O is a UFD, every prime ideal is generated by a prime element. Otherwise, there are prime ideals which are not generated by prime elements. In Z[√-5], for instance, the ideal (2, 1 + √-5) is a prime ideal which cannot be generated by a single element. Historically, the idea of factoring ideals into prime ideals was preceded by Ernst Kummer's introduction of ideal numbers. These are numbers lying in an extension field E of K. This extension field is now known as the Hilbert class field. By the principal ideal theorem, every prime ideal of O generates a principal ideal of the ring of integers of E. A generator of this principal ideal is called an ideal number. Kummer used these as a substitute for the failure of unique factorization in cyclotomic fields. 
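The irreducibility claims for Z[√-5] above can be checked with the multiplicative norm N(a + b√-5) = a^2 + 5b^2: a proper factor of an element of norm 9 would itself need norm 3, and a short search shows no such element exists (the small search bounds below suffice, since the norm grows quadratically):

norm = lambda a, b: a*a + 5*b*b              # multiplicative norm on Z[sqrt(-5)]

print(norm(3, 0), norm(2, 1), norm(2, -1))   # 9 9 9: both factorizations of 9
print(any(norm(a, b) == 3                    # no element of norm 3 exists ...
          for a in range(-2, 3) for b in range(-1, 2)))   # ... so 3 is irreducible

The same argument applies to 2 + √-5 and 2 - √-5, whose proper factors would also need norm 3; this is exactly the failure of unique factorization that Kummer's ideal numbers were invented to repair.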
These eventually led Richard Dedekind to introduce a forerunner of ideals and to prove unique factorization of ideals. An ideal which is prime in the ring of integers in one number field may fail to be prime when extended to a larger number field. Consider, for example, the prime numbers. The corresponding ideals pZ are prime ideals of the ring Z. However, when this ideal is extended to the Gaussian integers to obtain pZ[i], it may or may not be prime. For example, the factorization 2 = (1 + i)(1 − i) implies that 2 Z [ i ] = ( 1 + i ) Z [ i ] ⋅ ( 1 − i ) Z [ i ] = ( ( 1 + i ) Z [ i ] ) 2 ; {\displaystyle 2\mathbf {Z} [i]=(1+i)\mathbf {Z} [i]\cdot (1-i)\mathbf {Z} [i]=((1+i)\mathbf {Z} [i])^{2};} note that because 1 + i = (1 − i) ⋅ i, the ideals generated by 1 + i and 1 − i are the same. A complete answer to the question of which ideals remain prime in the Gaussian integers is provided by Fermat's theorem on sums of two squares. It implies that for an odd prime number p, pZ[i] is a prime ideal if p ≡ 3 (mod 4) and is not a prime ideal if p ≡ 1 (mod 4). This, together with the observation that the ideal (1 + i)Z[i] is prime, provides a complete description of the prime ideals in the Gaussian integers. Generalizing this simple result to more general rings of integers is a basic problem in algebraic number theory. Class field theory accomplishes this goal when K is an abelian extension of Q (that is, a Galois extension with abelian Galois group). === Ideal class group === Unique factorization fails if and only if there are prime ideals that fail to be principal. The object which measures the failure of prime ideals to be principal is called the ideal class group. Defining the ideal class group requires enlarging the set of ideals in a ring of algebraic integers so that they admit a group structure. This is done by generalizing ideals to fractional ideals. A fractional ideal is an additive subgroup J of K which is closed under multiplication by elements of O, meaning that xJ ⊆ J if x ∈ O. All ideals of O are also fractional ideals. If I and J are fractional ideals, then the set IJ of all products of an element in I and an element in J is also a fractional ideal. This operation makes the set of non-zero fractional ideals into a group. The group identity is the ideal (1) = O, and the inverse of J is a (generalized) ideal quotient: J − 1 = ( O : J ) = { x ∈ K : x J ⊆ O } . {\displaystyle J^{-1}=(O:J)=\{x\in K:xJ\subseteq O\}.} The principal fractional ideals, meaning the ones of the form Ox where x ∈ K×, form a subgroup of the group of all non-zero fractional ideals. The quotient of the group of non-zero fractional ideals by this subgroup is the ideal class group. Two fractional ideals I and J represent the same element of the ideal class group if and only if there exists an element x ∈ K such that xI = J. Therefore, the ideal class group makes two fractional ideals equivalent if one is as close to being principal as the other is. The ideal class group is generally denoted Cl K, Cl O, or Pic O (with the last notation identifying it with the Picard group in algebraic geometry). The number of elements in the class group is called the class number of K. The class number of Q(√-5) is 2. This means that there are only two ideal classes, the class of principal fractional ideals, and the class of a non-principal fractional ideal such as (2, 1 + √-5). The ideal class group has another description in terms of divisors. These are formal objects which represent possible factorizations of numbers. 
The divisor group Div K is defined to be the free abelian group generated by the prime ideals of O. There is a group homomorphism from K×, the group of non-zero elements of K under multiplication, to Div K. Suppose that x ∈ K satisfies ( x ) = p 1 e 1 ⋯ p t e t . {\displaystyle (x)={\mathfrak {p}}_{1}^{e_{1}}\cdots {\mathfrak {p}}_{t}^{e_{t}}.} Then div x is defined to be the divisor div ⁡ x = ∑ i = 1 t e i [ p i ] . {\displaystyle \operatorname {div} x=\sum _{i=1}^{t}e_{i}[{\mathfrak {p}}_{i}].} The kernel of div is the group of units in O, while the cokernel is the ideal class group. In the language of homological algebra, this says that there is an exact sequence of abelian groups (written multiplicatively), 1 → O × → K × → div Div ⁡ K → Cl ⁡ K → 1. {\displaystyle 1\to O^{\times }\to K^{\times }{\xrightarrow {\text{div}}}\operatorname {Div} K\to \operatorname {Cl} K\to 1.} === Real and complex embeddings === Some number fields, such as Q(√2), can be specified as subfields of the real numbers. Others, such as Q(√−1), cannot. Abstractly, such a specification corresponds to a field homomorphism K → R or K → C. These are called real embeddings and complex embeddings, respectively. A real quadratic field Q(√a), with a ∈ Q, a > 0, and a not a perfect square, is so-called because it admits two real embeddings but no complex embeddings. These are the field homomorphisms which send √a to √a and to −√a, respectively. Dually, an imaginary quadratic field Q(√−a) admits no real embeddings but admits a conjugate pair of complex embeddings. One of these embeddings sends √−a to √−a, while the other sends it to its complex conjugate, −√−a. Conventionally, the number of real embeddings of K is denoted r1, while the number of conjugate pairs of complex embeddings is denoted r2. The signature of K is the pair (r1, r2). It is a theorem that r1 + 2r2 = d, where d is the degree of K. Considering all embeddings at once determines a function M : K → R r 1 ⊕ C r 2 {\displaystyle M\colon K\to \mathbf {R} ^{r_{1}}\oplus \mathbf {C} ^{r_{2}}} , or equivalently M : K → R r 1 ⊕ R 2 r 2 . {\displaystyle M\colon K\to \mathbf {R} ^{r_{1}}\oplus \mathbf {R} ^{2r_{2}}.} This is called the Minkowski embedding. The subspace of the codomain fixed by complex conjugation is a real vector space of dimension d called Minkowski space. Because the Minkowski embedding is defined by field homomorphisms, multiplication of elements of K by an element x ∈ K corresponds to multiplication by a diagonal matrix in the Minkowski embedding. The dot product on Minkowski space corresponds to the trace form ⟨ x , y ⟩ = Tr ⁡ ( x y ) {\displaystyle \langle x,y\rangle =\operatorname {Tr} (xy)} . The image of O under the Minkowski embedding is a d-dimensional lattice. If B is a basis for this lattice, then det(B^T B) is the discriminant of O. The discriminant is denoted Δ or D. The covolume of the image of O is | Δ | {\displaystyle {\sqrt {|\Delta |}}} . === Places === Real and complex embeddings can be put on the same footing as prime ideals by adopting a perspective based on valuations. Consider, for example, the integers. In addition to the usual absolute value function |·| : Q → R, there are p-adic absolute value functions |·|p : Q → R, defined for each prime number p, which measure divisibility by p. Ostrowski's theorem states that these are all possible absolute value functions on Q (up to equivalence). Therefore, absolute values are a common language to describe both the real embedding of Q and the prime numbers.
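As a small numerical illustration of the Minkowski embedding (added here; not from the original article), the following Python sketch embeds O = Z[√2] into R^2 via its two real embeddings and recovers the discriminant Δ = 8 as det(B^T B):

    import math

    s = math.sqrt(2)
    # Rows: the two real embeddings sqrt(2) -> +sqrt(2) and sqrt(2) -> -sqrt(2);
    # columns: the images of the basis {1, sqrt(2)} of Z[sqrt(2)].
    B = [[1.0, s],
         [1.0, -s]]

    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    print(round(det * det))           # 8 = discriminant of Z[sqrt(2)]
    print(math.sqrt(abs(det * det)))  # covolume of the lattice, sqrt(|Delta|)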
A place of an algebraic number field is an equivalence class of absolute value functions on K. There are two types of places. There is a p {\displaystyle {\mathfrak {p}}} -adic absolute value for each prime ideal p {\displaystyle {\mathfrak {p}}} of O, and, like the p-adic absolute values, it measures divisibility. These are called finite places. The other type of place is specified using a real or complex embedding of K and the standard absolute value function on R or C. These are infinite places. Because absolute values are unable to distinguish between a complex embedding and its conjugate, a complex embedding and its conjugate determine the same place. Therefore, there are r1 real places and r2 complex places. Because places encompass the primes, places are sometimes referred to as primes. When this is done, finite places are called finite primes and infinite places are called infinite primes. If v is a valuation corresponding to an absolute value, then one frequently writes v ∣ ∞ {\displaystyle v\mid \infty } to mean that v is an infinite place and v ∤ ∞ {\displaystyle v\nmid \infty } to mean that it is a finite place. Considering all the places of the field together produces the adele ring of the number field. The adele ring allows one to simultaneously track all the data available using absolute values. This produces significant advantages in situations where the behavior at one place can affect the behavior at other places, as in the Artin reciprocity law. ==== Places at infinity geometrically ==== There is a geometric analogy for places at infinity which holds on the function fields of curves. For example, let k = F q {\displaystyle k=\mathbb {F} _{q}} and X / k {\displaystyle X/k} be a smooth, projective, algebraic curve. The function field F = k ( X ) {\displaystyle F=k(X)} has many absolute values, or places, and each corresponds to a point on the curve. If X {\displaystyle X} is the projective completion of an affine curve X ^ ⊂ A n {\displaystyle {\hat {X}}\subset \mathbb {A} ^{n}} then the points in X − X ^ {\displaystyle X-{\hat {X}}} correspond to the places at infinity. Then, the completion of F {\displaystyle F} at one of these points gives an analogue of the p {\displaystyle p} -adics. For example, if X = P 1 {\displaystyle X=\mathbb {P} ^{1}} then its function field is isomorphic to k ( t ) {\displaystyle k(t)} where t {\displaystyle t} is an indeterminate and the field F {\displaystyle F} is the field of fractions of polynomials in t {\displaystyle t} . Then, a place v p {\displaystyle v_{p}} at a point p ∈ X {\displaystyle p\in X} measures the order of vanishing or the order of a pole of a fraction of polynomials p ( x ) / q ( x ) {\displaystyle p(x)/q(x)} at the point p {\displaystyle p} . For example, if p = [ 2 : 1 ] {\displaystyle p=[2:1]} , so on the affine chart x 1 ≠ 0 {\displaystyle x_{1}\neq 0} this corresponds to the point 2 ∈ A 1 {\displaystyle 2\in \mathbb {A} ^{1}} , the valuation v 2 {\displaystyle v_{2}} measures the order of vanishing of p ( x ) {\displaystyle p(x)} minus the order of vanishing of q ( x ) {\displaystyle q(x)} at 2 {\displaystyle 2} .
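The p-adic absolute values mentioned above are simple to compute on Q. A minimal Python sketch (illustrative only; padic_abs is our own helper name) implements |x|_p = p^(-v), where p^v exactly divides x:

    from fractions import Fraction

    def padic_abs(x, p):
        x = Fraction(x)
        if x == 0:
            return 0.0
        v = 0
        num, den = x.numerator, x.denominator
        while num % p == 0:   # count factors of p in the numerator
            num //= p
            v += 1
        while den % p == 0:   # factors of p in the denominator count negatively
            den //= p
            v -= 1
        return float(p) ** (-v)

    print(padic_abs(12, 2))             # 0.25, since 12 = 2^2 * 3
    print(padic_abs(Fraction(5, 8), 2)) # 8.0, since 8 = 2^3 divides the denominator
    print(padic_abs(9, 3))              # 0.111..., i.e. 1/9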
The function field of the completion at the place v 2 {\displaystyle v_{2}} is then k ( ( t − 2 ) ) {\displaystyle k((t-2))} which is the field of formal Laurent series in the variable t − 2 {\displaystyle t-2} , so an element is of the form a − k ( t − 2 ) − k + ⋯ + a − 1 ( t − 2 ) − 1 + a 0 + a 1 ( t − 2 ) + a 2 ( t − 2 ) 2 + ⋯ = ∑ n = − k ∞ a n ( t − 2 ) n {\displaystyle {\begin{aligned}&a_{-k}(t-2)^{-k}+\cdots +a_{-1}(t-2)^{-1}+a_{0}+a_{1}(t-2)+a_{2}(t-2)^{2}+\cdots \\&=\sum _{n=-k}^{\infty }a_{n}(t-2)^{n}\end{aligned}}} for some k ∈ N {\displaystyle k\in \mathbb {N} } . For the place at infinity, this corresponds to the function field k ( ( 1 / t ) ) {\displaystyle k((1/t))} whose elements are Laurent series of the form ∑ n = − k ∞ a n ( 1 / t ) n {\displaystyle \sum _{n=-k}^{\infty }a_{n}(1/t)^{n}} . === Units === The integers have only two units, 1 and −1. Other rings of integers may admit more units. The Gaussian integers have four units, the previous two as well as ±i. The Eisenstein integers Z[exp(2πi / 3)] have six units. The integers in real quadratic number fields have infinitely many units. For example, in Z[√3], every power of 2 + √3 is a unit, and all these powers are distinct. In general, the group of units of O, denoted O×, is a finitely generated abelian group. The fundamental theorem of finitely generated abelian groups therefore implies that it is a direct sum of a torsion part and a free part. Reinterpreting this in the context of a number field, the torsion part consists of the roots of unity that lie in O. This group is cyclic. The free part is described by Dirichlet's unit theorem. This theorem says that the rank of the free part is r1 + r2 − 1. Thus, for example, the only fields for which the rank of the free part is zero are Q and the imaginary quadratic fields. A more precise statement giving the structure of O× ⊗Z Q as a Galois module for the Galois group of K/Q is also possible. The free part of the unit group can be studied using the infinite places of K. Consider the function { L : K × → R r 1 + r 2 L ( x ) = ( log ⁡ | x | v ) v {\displaystyle {\begin{cases}L:K^{\times }\to \mathbf {R} ^{r_{1}+r_{2}}\\L(x)=(\log |x|_{v})_{v}\end{cases}}} where v varies over the infinite places of K and |·|v is the absolute value associated with v. The function L is a homomorphism from K× to a real vector space. It can be shown that the image of O× is a lattice that spans the hyperplane defined by x 1 + ⋯ + x r 1 + r 2 = 0. {\displaystyle x_{1}+\cdots +x_{r_{1}+r_{2}}=0.} The covolume of this lattice is the regulator of the number field. One of the simplifications made possible by working with the adele ring is that there is a single object, the idele class group, that describes both the quotient by this lattice and the ideal class group. === Zeta function === The Dedekind zeta function of a number field, analogous to the Riemann zeta function, is an analytic object which describes the behavior of prime ideals in K. When K is an abelian extension of Q, Dedekind zeta functions are products of Dirichlet L-functions, with there being one factor for each Dirichlet character. The trivial character corresponds to the Riemann zeta function. When K is a Galois extension, the Dedekind zeta function is the Artin L-function of the regular representation of the Galois group of K, and it has a factorization in terms of irreducible Artin representations of the Galois group. The zeta function is related to the other invariants described above by the class number formula.
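As a numeric illustration of the unit group in the real quadratic case above (added here, not part of the original article), the following Python sketch lists powers of the unit 2 + √3 in Z[√3], checks that each has norm a^2 − 3b^2 = 1, and verifies that the logarithmic embedding of a unit lands on the hyperplane x1 + x2 = 0:

    import math

    # Elements of Z[sqrt(3)] as pairs (a, b) meaning a + b*sqrt(3).
    def mul(x, y):
        a, b = x
        c, d = y
        return (a * c + 3 * b * d, a * d + b * c)

    u, x = (2, 1), (1, 0)
    for _ in range(4):
        x = mul(x, u)
        a, b = x
        print(x, a * a - 3 * b * b)  # (2,1) 1, (7,4) 1, (26,15) 1, (97,56) 1

    # Logarithmic embedding over the two real places sqrt(3) -> +/- sqrt(3):
    # for a unit, the two coordinates sum to 0.
    a, b = 2, 1
    v1, v2 = a + b * math.sqrt(3), a - b * math.sqrt(3)
    print(math.log(abs(v1)) + math.log(abs(v2)))  # ~0.0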
=== Local fields === Completing a number field K at a place w gives a complete field. If the valuation is Archimedean, one obtains R or C; if it is non-Archimedean and lies over a prime p of the rationals, one obtains a finite extension K w / Q p : {\displaystyle K_{w}/\mathbf {Q} _{p}:} a complete, discretely valued field with finite residue field. This process simplifies the arithmetic of the field and allows the local study of problems. For example, the Kronecker–Weber theorem can be deduced easily from the analogous local statement. The philosophy behind the study of local fields is largely motivated by geometric methods. In algebraic geometry, it is common to study varieties locally at a point by localizing to a maximal ideal. Global information can then be recovered by gluing together local data. This spirit is adopted in algebraic number theory. Given a prime in the ring of algebraic integers in a number field, it is desirable to study the field locally at that prime. Therefore, one localizes the ring of algebraic integers to that prime and then completes the fraction field much in the spirit of geometry. == Major results == === Finiteness of the class group === One of the classical results in algebraic number theory is that the ideal class group of an algebraic number field K is finite. This is a consequence of Minkowski's theorem since there are only finitely many integral ideals with norm less than a fixed positive integer. The order of the class group is called the class number, and is often denoted by the letter h. === Dirichlet's unit theorem === Dirichlet's unit theorem provides a description of the structure of the multiplicative group of units O× of the ring of integers O. Specifically, it states that O× is isomorphic to G × Zr, where G is the finite cyclic group consisting of all the roots of unity in O, and r = r1 + r2 − 1 (where r1 (respectively, r2) denotes the number of real embeddings (respectively, pairs of conjugate non-real embeddings) of K). In other words, O× is a finitely generated abelian group of rank r1 + r2 − 1 whose torsion consists of the roots of unity in O. === Reciprocity laws === In terms of the Legendre symbol, the law of quadratic reciprocity for positive odd primes states ( p q ) ( q p ) = ( − 1 ) p − 1 2 q − 1 2 . {\displaystyle \left({\frac {p}{q}}\right)\left({\frac {q}{p}}\right)=(-1)^{{\frac {p-1}{2}}{\frac {q-1}{2}}}.} A reciprocity law is a generalization of the law of quadratic reciprocity. There are several different ways to express reciprocity laws. The early reciprocity laws found in the 19th century were usually expressed in terms of a power residue symbol (p/q) generalizing the quadratic reciprocity symbol, that describes when a prime number is an nth power residue modulo another prime, and gave a relation between (p/q) and (q/p). Hilbert reformulated the reciprocity laws as saying that a product over p of Hilbert symbols (a,b/p), taking values in roots of unity, is equal to 1. Artin's reformulated reciprocity law states that the Artin symbol from ideals (or ideles) to elements of a Galois group is trivial on a certain subgroup. Several more recent generalizations express reciprocity laws using cohomology of groups or representations of adelic groups or algebraic K-groups, and their relationship with the original quadratic reciprocity law can be hard to see. === Class number formula === The class number formula relates many important invariants of a number field to a special value of its Dedekind zeta function.
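Quadratic reciprocity is easy to verify computationally. A small Python sketch (an illustration added here; it computes the Legendre symbol via Euler's criterion, (a/p) = a^((p-1)/2) mod p):

    def legendre(a, p):
        r = pow(a, (p - 1) // 2, p)  # Euler's criterion
        return -1 if r == p - 1 else r

    # Verify the reciprocity law for all pairs of distinct odd primes below 50.
    primes = [p for p in range(3, 50, 2) if all(p % d for d in range(2, p))]
    for p in primes:
        for q in primes:
            if p != q:
                lhs = legendre(p, q) * legendre(q, p)
                rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
                assert lhs == rhs
    print("quadratic reciprocity verified for odd primes < 50")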
== Related areas == Algebraic number theory interacts with many other mathematical disciplines. It uses tools from homological algebra. Via the analogy of function fields vs. number fields, it relies on techniques and ideas from algebraic geometry. Moreover, the study of higher-dimensional schemes over Z instead of number rings is referred to as arithmetic geometry. Algebraic number theory is also used in the study of arithmetic hyperbolic 3-manifolds. == See also == Class field theory Kummer theory Locally compact field Tamagawa number == Notes == Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2000), Cohomology of Number Fields, Grundlehren der Mathematischen Wissenschaften, vol. 323, Berlin: Springer-Verlag, ISBN 978-3-540-66671-4, MR 1737196, Zbl 0948.11001 == Further reading == === Introductory texts === Stein, William (2012), Algebraic Number Theory, A Computational Approach (PDF) Ireland, Kenneth; Rosen, Michael (2013), A classical introduction to modern number theory, vol. 84, Springer, doi:10.1007/978-1-4757-2103-4, ISBN 978-1-4757-2103-4 Stewart, Ian; Tall, David (2015), Algebraic Number Theory and Fermat's Last Theorem, CRC Press, ISBN 978-1-4987-3840-8 === Intermediate texts === Marcus, Daniel A. (2018), Number Fields (2nd ed.), Springer, ISBN 978-3-319-90233-3 === Graduate level texts === Cassels, J. W. S.; Fröhlich, Albrecht, eds. (2010) [1967], Algebraic number theory (2nd ed.), London: London Mathematical Society, ISBN 978-0-9502734-2-6, MR 0215665 Fröhlich, Albrecht; Taylor, Martin J. (1993), Algebraic number theory, Cambridge Studies in Advanced Mathematics, vol. 27, Cambridge University Press, ISBN 0-521-43834-9, MR 1215934 Lang, Serge (1994), Algebraic number theory, Graduate Texts in Mathematics, vol. 110 (2 ed.), New York: Springer-Verlag, ISBN 978-0-387-94225-4, MR 1282723 Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021. == External links == Media related to Algebraic number theory at Wikimedia Commons "Algebraic number theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Algebraic_number_theory
In mathematics, the symmetric algebra S(V) (also denoted Sym(V)) on a vector space V over a field K is a commutative algebra over K that contains V, and is, in some sense, minimal for this property. Here, "minimal" means that S(V) satisfies the following universal property: for every linear map f from V to a commutative algebra A, there is a unique algebra homomorphism g : S(V) → A such that f = g ∘ i, where i is the inclusion map of V in S(V). If B is a basis of V, the symmetric algebra S(V) can be identified, through a canonical isomorphism, with the polynomial ring K[B], where the elements of B are considered as indeterminates. Therefore, the symmetric algebra over V can be viewed as a "coordinate free" polynomial ring over V. The symmetric algebra S(V) can be built as the quotient of the tensor algebra T(V) by the two-sided ideal generated by the elements of the form x ⊗ y − y ⊗ x. All these definitions and properties extend naturally to the case where V is a module (not necessarily a free one) over a commutative ring. == Construction == === From tensor algebra === It is possible to use the tensor algebra T(V) to describe the symmetric algebra S(V). In fact, S(V) can be defined as the quotient algebra of T(V) by the two-sided ideal generated by the commutators v ⊗ w − w ⊗ v . {\displaystyle v\otimes w-w\otimes v.} It is straightforward to verify that the resulting algebra satisfies the universal property stated in the introduction. Because of the universal property of the tensor algebra, a linear map f from V to a commutative algebra A extends to an algebra homomorphism T ( V ) → A {\displaystyle T(V)\rightarrow A} , which factors through S(V) because A is commutative. The extension of f to an algebra homomorphism S ( V ) → A {\displaystyle S(V)\rightarrow A} is unique because V generates S(V) as a K-algebra. This results also directly from a general result of category theory, which asserts that the composition of two left adjoint functors is also a left adjoint functor. Here, the forgetful functor from commutative algebras to vector spaces or modules (forgetting the multiplication) is the composition of the forgetful functors from commutative algebras to associative algebras (forgetting commutativity), and from associative algebras to vector spaces or modules (forgetting the multiplication). As the tensor algebra and the quotient by commutators are left adjoint to these forgetful functors, their composition is left adjoint to the forgetful functor from commutative algebras to vector spaces or modules, and this proves the desired universal property. === From polynomial ring === The symmetric algebra S(V) can also be built from polynomial rings. If V is a K-vector space or a free K-module, with a basis B, let K[B] be the polynomial ring that has the elements of B as indeterminates. The homogeneous polynomials of degree one form a vector space or a free module that can be identified with V. It is straightforward to verify that this makes K[B] a solution to the universal problem stated in the introduction. This implies that K[B] and S(V) are canonically isomorphic, and can therefore be identified. This results also immediately from general considerations of category theory, since free modules and polynomial rings are free objects of their respective categories. If V is a module that is not free, it can be written V = L / M , {\displaystyle V=L/M,} where L is a free module, and M is a submodule of L.
In this case, one has S ( V ) = S ( L / M ) = S ( L ) / ⟨ M ⟩ , {\displaystyle S(V)=S(L/M)=S(L)/\langle M\rangle ,} where ⟨ M ⟩ {\displaystyle \langle M\rangle } is the ideal generated by M. (Here, equals signs mean equality up to a canonical isomorphism.) Again this can be proved by showing that one has a solution of the universal property, and this can be done either by a straightforward but boring computation, or by using category theory, and more specifically, the fact that a quotient is the solution of the universal problem for morphisms that map to zero a given subset. (Depending on the case, the kernel is a normal subgroup, a submodule or an ideal, and the usual definition of quotients can be viewed as a proof of the existence of a solution of the universal problem.) == Grading == The symmetric algebra is a graded algebra. That is, it is a direct sum S ( V ) = ⨁ n = 0 ∞ S n ( V ) , {\displaystyle S(V)=\bigoplus _{n=0}^{\infty }S^{n}(V),} where S n ( V ) , {\displaystyle S^{n}(V),} called the nth symmetric power of V, is the vector subspace or submodule generated by the products of n elements of V. (The second symmetric power S 2 ( V ) {\displaystyle S^{2}(V)} is sometimes called the symmetric square of V). This can be proved by various means. One follows from the tensor-algebra construction: the tensor algebra is graded, and the symmetric algebra is its quotient by a homogeneous ideal, namely the ideal generated by all x ⊗ y − y ⊗ x , {\displaystyle x\otimes y-y\otimes x,} where x and y are in V, that is, homogeneous of degree one. In the case of a vector space or a free module, the gradation is the gradation of the polynomials by the total degree. A non-free module can be written as L / M, where L is a free module with basis B; its symmetric algebra is the quotient of the (graded) symmetric algebra of L (a polynomial ring) by the homogeneous ideal generated by the elements of M, which are homogeneous of degree one. One can also define S n ( V ) {\displaystyle S^{n}(V)} as the solution of the universal problem for n-linear symmetric functions from V into a vector space or a module, and then verify that the direct sum of all S n ( V ) {\displaystyle S^{n}(V)} satisfies the universal problem for the symmetric algebra. == Relationship with symmetric tensors == As the symmetric algebra of a vector space is a quotient of the tensor algebra, an element of the symmetric algebra is not a tensor, and, in particular, is not a symmetric tensor. However, symmetric tensors are strongly related to the symmetric algebra. A symmetric tensor of degree n is an element of Tn(V) that is invariant under the action of the symmetric group S n . {\displaystyle {\mathcal {S}}_{n}.} More precisely, given σ ∈ S n , {\displaystyle \sigma \in {\mathcal {S}}_{n},} the transformation v 1 ⊗ ⋯ ⊗ v n ↦ v σ ( 1 ) ⊗ ⋯ ⊗ v σ ( n ) {\displaystyle v_{1}\otimes \cdots \otimes v_{n}\mapsto v_{\sigma (1)}\otimes \cdots \otimes v_{\sigma (n)}} defines a linear endomorphism of Tn(V). A symmetric tensor is a tensor that is invariant under all these endomorphisms. The symmetric tensors of degree n form a vector subspace (or module) Symn(V) ⊂ Tn(V). The symmetric tensors are the elements of the direct sum ⨁ n = 0 ∞ Sym n ⁡ ( V ) , {\displaystyle \textstyle \bigoplus _{n=0}^{\infty }\operatorname {Sym} ^{n}(V),} which is a graded vector space (or a graded module). It is not an algebra, as the tensor product of two symmetric tensors is not symmetric in general.
Let π n {\displaystyle \pi _{n}} be the restriction to Symn(V) of the canonical surjection T n ( V ) → S n ( V ) . {\displaystyle T^{n}(V)\to S^{n}(V).} If n! is invertible in the ground field (or ring), then π n {\displaystyle \pi _{n}} is an isomorphism. This is always the case with a ground field of characteristic zero. The inverse isomorphism is the linear map defined (on products of n vectors) by the symmetrization v 1 ⋯ v n ↦ 1 n ! ∑ σ ∈ S n v σ ( 1 ) ⊗ ⋯ ⊗ v σ ( n ) . {\displaystyle v_{1}\cdots v_{n}\mapsto {\frac {1}{n!}}\sum _{\sigma \in S_{n}}v_{\sigma (1)}\otimes \cdots \otimes v_{\sigma (n)}.} The map π n {\displaystyle \pi _{n}} is not injective if the characteristic is less than n+1; for example π 2 ( x ⊗ y + y ⊗ x ) = 2 x y {\displaystyle \pi _{2}(x\otimes y+y\otimes x)=2xy} is zero in characteristic two. Over a ring of characteristic zero, π n {\displaystyle \pi _{n}} can be non-surjective; for example, over the integers, if x and y are two linearly independent elements of V = S1(V) that are not in 2V, then x y ∉ π n ( Sym 2 ⁡ ( V ) ) , {\displaystyle xy\not \in \pi _{n}(\operatorname {Sym} ^{2}(V)),} since 1 2 ( x ⊗ y + y ⊗ x ) ∉ Sym 2 ⁡ ( V ) . {\displaystyle {\frac {1}{2}}(x\otimes y+y\otimes x)\not \in \operatorname {Sym} ^{2}(V).} In summary, over a field of characteristic zero, the symmetric tensors and the symmetric algebra form two isomorphic graded vector spaces. They can thus be identified as far as only the vector space structure is concerned, but they cannot be identified as soon as products are involved. Moreover, this isomorphism does not extend to the cases of fields of positive characteristic and rings that do not contain the rational numbers. == Categorical properties == Given a module V over a commutative ring K, the symmetric algebra S(V) can be defined by the following universal property: For every K-linear map f from V to a commutative K-algebra A, there is a unique K-algebra homomorphism g : S ( V ) → A {\displaystyle g:S(V)\to A} such that f = g ∘ i , {\displaystyle f=g\circ i,} where i is the inclusion of V in S(V). As for every universal property, as soon as a solution exists, this defines uniquely the symmetric algebra, up to a canonical isomorphism. It follows that all properties of the symmetric algebra can be deduced from the universal property. This section is devoted to the main properties that belong to category theory. The symmetric algebra is a functor from the category of K-modules to the category of K-commutative algebra, since the universal property implies that every module homomorphism f : V → W {\displaystyle f:V\to W} can be uniquely extended to an algebra homomorphism S ( f ) : S ( V ) → S ( W ) . {\displaystyle S(f):S(V)\to S(W).} The universal property can be reformulated by saying that the symmetric algebra is a left adjoint to the forgetful functor that sends a commutative algebra to its underlying module. == Symmetric algebra of an affine space == One can analogously construct the symmetric algebra on an affine space. The key difference is that the symmetric algebra of an affine space is not a graded algebra, but a filtered algebra: one can determine the degree of a polynomial on an affine space, but not its homogeneous parts. For instance, given a linear polynomial on a vector space, one can determine its constant part by evaluating at 0. On an affine space, there is no distinguished point, so one cannot do this (choosing a point turns an affine space into a vector space).
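The symmetrization map described above can be made concrete for tensors over Q. A minimal Python sketch (added here for illustration; the dict-of-index-tuples encoding of tensors is our own choice):

    import math
    from itertools import permutations
    from fractions import Fraction

    def symmetrize(indices):
        # (1/n!) * sum over sigma of e_{sigma(1)} (x) ... (x) e_{sigma(n)},
        # with a tensor stored as {index tuple: coefficient}.
        coeff = Fraction(1, math.factorial(len(indices)))
        tensor = {}
        for sigma in permutations(indices):
            tensor[sigma] = tensor.get(sigma, 0) + coeff
        return tensor

    print(symmetrize((0, 1)))  # {(0, 1): Fraction(1, 2), (1, 0): Fraction(1, 2)}
    print(symmetrize((0, 0)))  # {(0, 0): Fraction(1, 1)}: x^2 maps to x (x) x

The first output encodes the statement that the product xy corresponds to the symmetric tensor (x ⊗ y + y ⊗ x)/2, the element that fails to exist over the integers in the non-surjectivity example above.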
== Analogy with exterior algebra == The S^k are functors comparable to the exterior powers; here, though, the dimension grows with k; it is given by dim ⁡ ( S k ( V ) ) = ( n + k − 1 k ) {\displaystyle \operatorname {dim} (S^{k}(V))={\binom {n+k-1}{k}}} where n is the dimension of V. This binomial coefficient is the number of n-variable monomials of degree k. In fact, the symmetric algebra and the exterior algebra appear as the isotypical components of the trivial and sign representation of the action of S n {\displaystyle S_{n}} on the tensor product V ⊗ n {\displaystyle V^{\otimes n}} (for example over the complex field). == As a Hopf algebra == The symmetric algebra can be given the structure of a Hopf algebra. See Tensor algebra for details. == As a universal enveloping algebra == The symmetric algebra S(V) is the universal enveloping algebra of an abelian Lie algebra, i.e. one in which the Lie bracket is identically 0. == See also == exterior algebra, the alternating algebra analog graded-symmetric algebra, a common generalization of a symmetric algebra and an exterior algebra Weyl algebra, a quantum deformation of the symmetric algebra by a symplectic form Clifford algebra, a quantum deformation of the exterior algebra by a quadratic form Proj construction § Proj of a quasi-coherent sheaf, an application of symmetric algebras in algebraic geometry
Wikipedia/Symmetric_algebra
In mathematics, a zero (also sometimes called a root) of a real-, complex-, or generally vector-valued function f {\displaystyle f} , is a member x {\displaystyle x} of the domain of f {\displaystyle f} such that f ( x ) {\displaystyle f(x)} vanishes at x {\displaystyle x} ; that is, the function f {\displaystyle f} attains the value of 0 at x {\displaystyle x} , or equivalently, x {\displaystyle x} is a solution to the equation f ( x ) = 0 {\displaystyle f(x)=0} . A "zero" of a function is thus an input value that produces an output of 0. A root of a polynomial is a zero of the corresponding polynomial function. The fundamental theorem of algebra shows that any non-zero polynomial has a number of roots at most equal to its degree, and that the number of roots and the degree are equal when one considers the complex roots (or more generally, the roots in an algebraically closed extension) counted with their multiplicities. For example, the polynomial f {\displaystyle f} of degree two, defined by f ( x ) = x 2 − 5 x + 6 = ( x − 2 ) ( x − 3 ) {\displaystyle f(x)=x^{2}-5x+6=(x-2)(x-3)} has the two roots (or zeros) that are 2 and 3. f ( 2 ) = 2 2 − 5 × 2 + 6 = 0 and f ( 3 ) = 3 2 − 5 × 3 + 6 = 0. {\displaystyle f(2)=2^{2}-5\times 2+6=0{\text{ and }}f(3)=3^{2}-5\times 3+6=0.} If the function maps real numbers to real numbers, then its zeros are the x {\displaystyle x} -coordinates of the points where its graph meets the x-axis. An alternative name for such a point ( x , 0 ) {\displaystyle (x,0)} in this context is an x {\displaystyle x} -intercept. == Solution of an equation == Every equation in the unknown x {\displaystyle x} may be rewritten as f ( x ) = 0 {\displaystyle f(x)=0} by regrouping all the terms in the left-hand side. It follows that the solutions of such an equation are exactly the zeros of the function f {\displaystyle f} . In other words, a "zero of a function" is precisely a "solution of the equation obtained by equating the function to 0", and the study of zeros of functions is exactly the same as the study of solutions of equations. == Polynomial roots == Every real polynomial of odd degree has an odd number of real roots (counting multiplicities); likewise, a real polynomial of even degree must have an even number of real roots. Consequently, real odd polynomials must have at least one real root (because the smallest odd whole number is 1), whereas even polynomials may have none. This principle can be proven by reference to the intermediate value theorem: since polynomial functions are continuous, the function value must cross zero, in the process of changing from negative to positive or vice versa (which always happens for polynomials of odd degree). === Fundamental theorem of algebra === The fundamental theorem of algebra states that every polynomial of degree n {\displaystyle n} has n {\displaystyle n} complex roots, counted with their multiplicities. The non-real roots of polynomials with real coefficients come in conjugate pairs. Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots. == Computing roots == There are many methods for computing accurate approximations of roots of functions, the best being Newton's method, see Root-finding algorithm. For polynomials, there are specialized algorithms that are more efficient and may provide all roots or all real roots; see Polynomial root-finding and Real-root isolation.
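As an illustration of the root-finding methods just mentioned (a sketch added here, not part of the original article), Newton's method applied to the quadratic f(x) = x^2 − 5x + 6 used earlier finds either zero, depending on the starting point:

    # Newton's method: iterate x <- x - f(x)/f'(x) to approximate a zero of f.
    def newton(f, fprime, x, steps=20):
        for _ in range(steps):
            x = x - f(x) / fprime(x)
        return x

    f = lambda x: x * x - 5 * x + 6   # zeros at 2 and 3
    fp = lambda x: 2 * x - 5

    print(newton(f, fp, 0.0))   # -> 2.0
    print(newton(f, fp, 10.0))  # -> 3.0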
Some polynomials, including all those of degree no greater than 4, can have all their roots expressed algebraically in terms of their coefficients; see Solution in radicals. == Zero set == In various areas of mathematics, the zero set of a function is the set of all its zeros. More precisely, if f : X → R {\displaystyle f:X\to \mathbb {R} } is a real-valued function (or, more generally, a function taking values in some additive group), its zero set is f − 1 ( 0 ) {\displaystyle f^{-1}(0)} , the inverse image of { 0 } {\displaystyle \{0\}} in X {\displaystyle X} . Under the same hypothesis on the codomain of the function, a level set of a function f {\displaystyle f} is the zero set of the function f − c {\displaystyle f-c} for some c {\displaystyle c} in the codomain of f . {\displaystyle f.} The zero set of a linear map is also known as its kernel. The cozero set of the function f : X → R {\displaystyle f:X\to \mathbb {R} } is the complement of the zero set of f {\displaystyle f} (i.e., the subset of X {\displaystyle X} on which f {\displaystyle f} is nonzero). === Applications === In algebraic geometry, the first definition of an algebraic variety is through zero sets. Specifically, an affine algebraic set is the intersection of the zero sets of several polynomials, in a polynomial ring k [ x 1 , … , x n ] {\displaystyle k\left[x_{1},\ldots ,x_{n}\right]} over a field. In this context, a zero set is sometimes called a zero locus. In analysis and geometry, any closed subset of R n {\displaystyle \mathbb {R} ^{n}} is the zero set of a smooth function defined on all of R n {\displaystyle \mathbb {R} ^{n}} . This extends to any smooth manifold as a corollary of paracompactness. In differential geometry, zero sets are frequently used to define manifolds. An important special case is the case that f {\displaystyle f} is a smooth function from R p {\displaystyle \mathbb {R} ^{p}} to R n {\displaystyle \mathbb {R} ^{n}} . If zero is a regular value of f {\displaystyle f} , then the zero set of f {\displaystyle f} is a smooth manifold of dimension m = p − n {\displaystyle m=p-n} by the regular value theorem. For example, the unit m {\displaystyle m} -sphere in R m + 1 {\displaystyle \mathbb {R} ^{m+1}} is the zero set of the real-valued function f ( x ) = ‖ x ‖ 2 − 1 {\displaystyle f(x)=\Vert x\Vert ^{2}-1} . == See also == Root-finding algorithm Bolzano's theorem, a continuous function that takes opposite signs at the end points of an interval has at least a zero in the interval. Gauss–Lucas theorem, the complex zeros of the derivative of a polynomial lie inside the convex hull of the roots of the polynomial. Marden's theorem, a refinement of Gauss–Lucas theorem for polynomials of degree three Sendov's conjecture, a conjectured refinement of Gauss–Lucas theorem zero at infinity Zero crossing, property of the graph of a function near a zero Zeros and poles of holomorphic functions == Further reading == Weisstein, Eric W. "Root". MathWorld.
Wikipedia/Zero_of_a_function
In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term rng, pronounced like rung, is meant to suggest that it is a ring without i, that is, without the requirement for an identity element. There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms (see Ring (mathematics) § History). The term rng was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity. A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space. Rngs appear in the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields == Definition == Formally, a rng is a set R with two binary operations (+, ·) called addition and multiplication such that (R, +) is an abelian group, (R, ·) is a semigroup, and multiplication distributes over addition. A rng homomorphism is a function f: R → S from one rng to another such that f(x + y) = f(x) + f(y) and f(x · y) = f(x) · f(y) for all x and y in R. If R and S are rings, then a ring homomorphism R → S is the same as a rng homomorphism R → S that maps 1 to 1. == Examples == All rings are rngs. A simple example of a rng that is not a ring is given by the even integers with the ordinary addition and multiplication of integers. Another example is given by the set of all 3-by-3 real matrices whose bottom row is zero. Both of these examples are instances of the general fact that every (one- or two-sided) ideal is a rng. Rngs often appear naturally in functional analysis when linear operators on infinite-dimensional vector spaces are considered. Take for instance any infinite-dimensional vector space V and consider the set of all linear operators f : V → V with finite rank (i.e. dim f(V) < ∞). Together with addition and composition of operators, this is a rng, but not a ring. Another example is the rng of all real sequences that converge to 0, with component-wise operations. Also, many test function spaces occurring in the theory of distributions consist of functions decreasing to zero at infinity, like e.g. Schwartz space. Thus, the function everywhere equal to one, which would be the only possible identity element for pointwise multiplication, cannot exist in such spaces, which therefore are rngs (for pointwise addition and multiplication). In particular, the real-valued continuous functions with compact support defined on some topological space, together with pointwise addition and multiplication, form a rng; this is not a ring unless the underlying space is compact. === Example: even integers === The set 2Z of even integers is closed under addition and multiplication and has an additive identity, 0, so it is a rng, but it does not have a multiplicative identity, so it is not a ring. In 2Z, the only multiplicative idempotent is 0, the only nilpotent is 0, and the only element with a reflexive inverse is 0.
=== Example: finite quinary sequences === The direct sum T = ⨁ i = 1 ∞ Z / 5 Z {\textstyle {\mathcal {T}}=\bigoplus _{i=1}^{\infty }\mathbf {Z} /5\mathbf {Z} } equipped with coordinate-wise addition and multiplication is a rng with the following properties: Its idempotent elements form a lattice with no upper bound. Every element x has a reflexive inverse, namely an element y such that xyx = x and yxy = y. For every finite subset of T {\displaystyle {\mathcal {T}}} , there exists an idempotent in T {\displaystyle {\mathcal {T}}} that acts as an identity for the entire subset: the sequence with a one at every position where a sequence in the subset has a non-zero element at that position, and zero in every other position. == Adjoining an identity element (Dorroh extension) == Every rng R can be enlarged to a ring R^ by adjoining an identity element. A general way in which to do this is to formally add an identity element 1 and let R^ consist of integral linear combinations of 1 and elements of R, with the premise that no nonzero integral multiple of 1 coincides with or is contained in R. That is, elements of R^ are of the form n · 1 + r, where n is an integer and r ∈ R. Multiplication is defined by linearity: (n1 · 1 + r1) · (n2 · 1 + r2) = (n1n2) · 1 + (n1r2 + n2r1 + r1r2). More formally, we can take R^ to be the cartesian product Z × R and define addition and multiplication by (n1, r1) + (n2, r2) = (n1 + n2, r1 + r2) and (n1, r1) · (n2, r2) = (n1n2, n1r2 + n2r1 + r1r2). The multiplicative identity of R^ is then (1, 0). There is a natural rng homomorphism j : R → R^ defined by j(r) = (0, r). This map has the following universal property: if S is any ring with identity and f : R → S is any rng homomorphism, then there is a unique ring homomorphism g : R^ → S such that f = g ∘ j. The map g can be defined by g(n, r) = n · 1S + f(r). There is a natural surjective ring homomorphism R^ → Z which sends (n, r) to n. The kernel of this homomorphism is the image of R in R^. Since j is injective, we see that R is embedded as a (two-sided) ideal in R^ with the quotient ring R^/R isomorphic to Z. It follows that every rng is an ideal in some ring, and every ideal of a ring is a rng. Note that j is never surjective. So, even when R already has an identity element, the ring R^ will be a larger one with a different identity. The ring R^ is often called the Dorroh extension of R after the American mathematician Joe Lee Dorroh, who first constructed it. The process of adjoining an identity element to a rng can be formulated in the language of category theory. If we denote the category of all rings and ring homomorphisms by Ring and the category of all rngs and rng homomorphisms by Rng, then Ring is a (nonfull) subcategory of Rng. The construction of R^ given above yields a left adjoint to the inclusion functor I : Ring → Rng. Notice that Ring is not a reflective subcategory of Rng because the inclusion functor is not full. == Properties weaker than having an identity == There are several properties that have been considered in the literature that are weaker than having an identity element, but not so general. For example: Rings with enough idempotents: A rng R is said to be a ring with enough idempotents when there exists a subset E of R given by orthogonal (i.e. ef = 0 for all e ≠ f in E) idempotents (i.e. e^2 = e for all e in E) such that R = ⊕e∈E eR = ⊕e∈E Re. Rings with local units: A rng R is said to be a ring with local units in case for every finite set r1, r2, ..., rt in R we can find e in R such that e^2 = e and eri = ri = rie for every i. s-unital rings: A rng R is said to be s-unital in case for every finite set r1, r2, ..., rt in R we can find s in R such that sri = ri = ris for every i. Firm rings: A rng R is said to be firm if the canonical homomorphism R ⊗R R → R given by r ⊗ s ↦ rs is an isomorphism.
Idempotent rings: A rng R is said to be idempotent (or an irng) in case R^2 = R, that is, for every element r of R we can find elements ri and si in R such that r = ∑ i r i s i {\textstyle r=\sum _{i}r_{i}s_{i}} . It is not difficult to check that each of these properties is weaker than having an identity element and weaker than the property preceding it. Rings are rings with enough idempotents, using E = {1}. A ring with enough idempotents that has no identity is for example the ring of infinite matrices over a field with just a finite number of nonzero entries. Those matrices with a 1 in precisely one entry of the main diagonal and 0's in all other entries are the orthogonal idempotents. Rings with enough idempotents are rings with local units as can be seen by taking finite sums of the orthogonal idempotents to satisfy the definition. Rings with local units are in particular s-unital; s-unital rings are firm and firm rings are idempotent. == Rng of square zero == A rng of square zero is a rng R such that xy = 0 for all x and y in R. Any abelian group can be made a rng of square zero by defining the multiplication so that xy = 0 for all x and y; thus every abelian group is the additive group of some rng. The only rng of square zero with a multiplicative identity is the zero ring {0}. Any additive subgroup of a rng of square zero is an ideal. Thus a rng of square zero is simple if and only if its additive group is a simple abelian group, i.e., a cyclic group of prime order. == Unital homomorphism == Given two unital algebras A and B, an algebra homomorphism is unital if it maps the identity element of A to the identity element of B. If the associative algebra A over the field K is not unital, one can adjoin an identity element as follows: take A × K as underlying K-vector space and define multiplication ∗ by (x, r) ∗ (y, s) = (xy + sx + ry, rs) for x, y in A and r, s in K. Then ∗ is an associative operation with identity element (0, 1). The old algebra A is contained in the new one, and in fact A × K is the "most general" unital algebra containing A, in the sense of universal constructions. == See also == Semiring
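Returning to the Dorroh extension described earlier, the construction on Z × R is short enough to execute directly. A minimal Python sketch (illustrative; the class name Dorroh is ours), taking R to be the rng 2Z of even integers:

    # Dorroh extension: adjoin an identity to a rng R by working in Z x R.
    class Dorroh:
        def __init__(self, n, r):
            self.n, self.r = n, r  # represents n*1 + r, with r in R = 2Z

        def __add__(self, other):
            return Dorroh(self.n + other.n, self.r + other.r)

        def __mul__(self, other):
            # (n, r) * (m, s) = (n*m, n*s + m*r + r*s)
            return Dorroh(self.n * other.n,
                          self.n * other.r + other.n * self.r + self.r * other.r)

        def __repr__(self):
            return f"({self.n}, {self.r})"

    one = Dorroh(1, 0)   # the adjoined identity
    x = Dorroh(0, 4)     # the element 4 of R, embedded via j
    print(one * x, x * one)              # (0, 4) (0, 4): (1, 0) acts as identity
    print(Dorroh(0, 2) * Dorroh(0, 6))   # (0, 12): multiplication restricted to R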
Wikipedia/Rng_(algebra)
In algebra, a sextic (or hexic) polynomial is a polynomial of degree six. A sextic equation is a polynomial equation of degree six—that is, an equation whose left hand side is a sextic polynomial and whose right hand side is zero. More precisely, it has the form: a x 6 + b x 5 + c x 4 + d x 3 + e x 2 + f x + g = 0 , {\displaystyle ax^{6}+bx^{5}+cx^{4}+dx^{3}+ex^{2}+fx+g=0,\,} where a ≠ 0 and the coefficients a, b, c, d, e, f, g may be integers, rational numbers, real numbers, complex numbers or, more generally, members of any field. A sextic function is a function defined by a sextic polynomial. Because they have an even degree, sextic functions appear similar to quartic functions when graphed, except they may possess an additional local maximum and local minimum each. The derivative of a sextic function is a quintic function. Since a sextic function is defined by a polynomial with even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If the leading coefficient a is positive, then the function increases to positive infinity at both sides and thus the function has a global minimum. Likewise, if a is negative, the sextic function decreases to negative infinity and has a global maximum. == Solvable sextics == Some sixth degree equations, such as ax^6 + dx^3 + g = 0, can be solved by factorizing into radicals, but other sextics cannot. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals which gave rise to the field of Galois theory. It follows from Galois theory that a sextic equation is solvable in terms of radicals if and only if its Galois group is contained either in the group of order 48 which stabilizes a partition of the set of the roots into three subsets of two roots or in the group of order 72 which stabilizes a partition of the set of the roots into two subsets of three roots. There are formulas to test either case, and, if the equation is solvable, compute the roots in term of radicals. == Examples == Watt's curve, which arose in the context of early work on the steam engine, is a sextic in two variables. One method of solving the cubic equation involves transforming variables to obtain a sextic equation having terms only of degrees 6, 3, and 0, which can be solved as a quadratic equation in the cube of the variable. == Etymology == The describer "sextic" comes from the Latin stem for 6 or 6th ("sex-t-"), and the Greek suffix meaning "pertaining to" ("-ic"). The much less common "hexic" uses Greek for both its stem (hex- 6) and its suffix (-ik-). In both cases, the prefix refers to the degree of the function. Often, these types of functions will simply be referred to as "6th degree functions". == See also == Cayley's sextic Cubic function Septic equation
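The quadratic-in-x^3 trick mentioned above for equations of the form a·x^6 + d·x^3 + g = 0 can be carried out numerically. A Python sketch (illustrative only; solve_sextic is our own helper): substitute y = x^3, solve the quadratic a·y^2 + d·y + g = 0, and take the three complex cube roots of each solution y.

    import cmath

    def solve_sextic(a, d, g):
        disc = cmath.sqrt(d * d - 4 * a * g)
        roots = []
        for y in [(-d + disc) / (2 * a), (-d - disc) / (2 * a)]:
            r, theta = cmath.polar(y)
            for k in range(3):  # the three cube roots of y
                roots.append(cmath.rect(r ** (1 / 3),
                                        (theta + 2 * cmath.pi * k) / 3))
        return roots

    # x^6 - 9x^3 + 8 = 0 factors as (x^3 - 1)(x^3 - 8): real roots 1 and 2.
    for x in solve_sextic(1, -9, 8):
        print(x, x**6 - 9 * x**3 + 8)  # residuals ~ 0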
Wikipedia/Sextic_equation
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals. Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects. == Definition == Informally, a field is a set, along with two operations defined on that set: an addition operation a + b and a multiplication operation a ⋅ b, both of which behave as they do for rational numbers and real numbers. This includes the existence of an additive inverse −a for all elements a and of a multiplicative inverse b^−1 for every nonzero element b. This allows the definition of the so-called inverse operations, subtraction a − b and division a / b, as a − b = a + (−b) and a / b = a ⋅ b^−1. Often the product a ⋅ b is represented by juxtaposition, as ab. === Classic definition === Formally, a field is a set F together with two binary operations on F called addition and multiplication. A binary operation on F is a mapping F × F → F, that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of a and b is called the sum of a and b, and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, and is denoted a ⋅ b. These operations are required to satisfy the following properties, referred to as field axioms. These axioms are required to hold for all elements a, b, c of the field F: Associativity of addition and multiplication: a + (b + c) = (a + b) + c, and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c. Commutativity of addition and multiplication: a + b = b + a, and a ⋅ b = b ⋅ a. Additive and multiplicative identity: there exist two distinct elements 0 and 1 in F such that a + 0 = a and a ⋅ 1 = a. Additive inverses: for every a in F, there exists an element in F, denoted −a, called the additive inverse of a, such that a + (−a) = 0. Multiplicative inverses: for every a ≠ 0 in F, there exists an element in F, denoted by a^−1 or 1/a, called the multiplicative inverse of a, such that a ⋅ a^−1 = 1.
Distributivity of multiplication over addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c). An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements form a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition. Even more succinctly: a field is a commutative ring where 0 ≠ 1 and all nonzero elements are invertible under multiplication. === Alternative definition === Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −a = (−1)a. == Examples == === Rational numbers === Rational numbers have been widely used a long time before the elaboration of the concept of field. They are numbers that can be written as fractions a/b, where a and b are integers, and b ≠ 0. The additive inverse of such a fraction is −a/b, and the multiplicative inverse (provided that a ≠ 0) is b/a, which can be seen as follows: b a ⋅ a b = b a a b = 1. {\displaystyle {\frac {b}{a}}\cdot {\frac {a}{b}}={\frac {ba}{ab}}=1.} The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows: a b ⋅ ( c d + e f ) = a b ⋅ ( c d ⋅ f f + e f ⋅ d d ) = a b ⋅ ( c f d f + e d f d ) = a b ⋅ c f + e d d f = a ( c f + e d ) b d f = a c f b d f + a e d b d f = a c b d + a e b f = a b ⋅ c d + a b ⋅ e f . {\displaystyle {\begin{aligned}&{\frac {a}{b}}\cdot \left({\frac {c}{d}}+{\frac {e}{f}}\right)\\[6pt]={}&{\frac {a}{b}}\cdot \left({\frac {c}{d}}\cdot {\frac {f}{f}}+{\frac {e}{f}}\cdot {\frac {d}{d}}\right)\\[6pt]={}&{\frac {a}{b}}\cdot \left({\frac {cf}{df}}+{\frac {ed}{fd}}\right)={\frac {a}{b}}\cdot {\frac {cf+ed}{df}}\\[6pt]={}&{\frac {a(cf+ed)}{bdf}}={\frac {acf}{bdf}}+{\frac {aed}{bdf}}={\frac {ac}{bd}}+{\frac {ae}{bf}}\\[6pt]={}&{\frac {a}{b}}\cdot {\frac {c}{d}}+{\frac {a}{b}}\cdot {\frac {e}{f}}.\end{aligned}}} === Real and complex numbers === The real numbers R, with the usual operations of addition and multiplication, also form a field. The complex numbers C consist of expressions a + bi, with a, b real, where i is the imaginary unit, i.e., a (non-real) number satisfying i^2 = −1. Addition and multiplication of complex numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold for C. For example, the distributive law enforces (a + bi)(c + di) = ac + bci + adi + bdi^2 = (ac − bd) + (bc + ad)i. It is immediate that this is again an expression of the above type, and so the complex numbers form a field.
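The product rule for complex numbers derived above is easy to sanity-check against Python's built-in complex type (an illustrative sketch added here, not part of the original article):

    # Check (a + bi)(c + di) = (ac - bd) + (bc + ad)i against built-in arithmetic.
    from random import randint

    for _ in range(1000):
        a, b, c, d = (randint(-9, 9) for _ in range(4))
        lhs = complex(a, b) * complex(c, d)
        rhs = complex(a * c - b * d, b * c + a * d)
        assert lhs == rhs
    print("product rule verified")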
Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines. === Constructible numbers === In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. Square roots of constructible numbers are themselves constructible, though not necessarily contained within Q: given collinear segments AB of length p and BD of length one, construct a semicircle over AD (center at the midpoint C); it intersects the perpendicular line through B in a point F, at a distance of exactly h = p {\displaystyle h={\sqrt {p}}} from B. Not all real numbers are constructible. It can be shown that 2 3 {\displaystyle {\sqrt[{3}]{2}}} is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks. === A field with four elements === In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called O, I, A, and B. The notation is chosen such that O plays the role of the additive identity element (denoted 0 in the axioms above), and I is the multiplicative identity (denoted 1 in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example, A ⋅ (B + A) = A ⋅ I = A, which equals A ⋅ B + A ⋅ A = I + B = A, as required by the distributivity. This field is called a finite field or Galois field with four elements, and is denoted F4 or GF(4). The subset consisting of O and I is also a field, known as the binary field F2 or GF(2). == Elementary notions == In this section, F denotes an arbitrary field and a and b are arbitrary elements of F. === Consequences of the definition === One has a ⋅ 0 = 0 and −a = (−1) ⋅ a. In particular, one may deduce the additive inverse of every element as soon as one knows −1. If ab = 0 then a or b must be 0, since, if a ≠ 0, then b = (a^−1a)b = a^−1(ab) = a^−1 ⋅ 0 = 0. This means that every field is an integral domain.
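The original article presents addition and multiplication tables for F4 that were lost in this extraction. The following Python sketch (our own reconstruction, realizing F4 as F2[x]/(x^2 + x + 1) with O, I, A, B encoded as the two-bit integers 0, 1, 2, 3) regenerates the multiplication table and repeats the distributivity check A ⋅ (B + A) = A performed in the text:

    def add(u, v):
        return u ^ v  # coefficient-wise addition mod 2

    def mul(u, v):
        prod = 0
        for i in range(2):        # polynomial multiplication over F2
            if (v >> i) & 1:
                prod ^= u << i
        if (prod >> 2) & 1:       # reduce modulo x^2 + x + 1 (binary 0b111)
            prod ^= 0b111
        return prod

    names = "OIAB"
    for u in range(4):
        print([names[mul(u, v)] for v in range(4)])
    # A*B = I and B*B = A: every nonzero element is invertible, so this is a field.

    assert mul(2, add(3, 2)) == 2   # A * (B + A) = A * I = A, as in the text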
In addition, the following properties are true for any elements a and b: −0 = 0 1^−1 = 1 (−(−a)) = a (−a) ⋅ b = a ⋅ (−b) = −(a ⋅ b) (a^−1)^−1 = a if a ≠ 0 === Additive and multiplicative groups of a field === The axioms of a field F imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by (F, +) when denoting it simply as F could be confusing. Similarly, the nonzero elements of F form an abelian group under multiplication, called the multiplicative group, and denoted by ( F ∖ { 0 } , ⋅ ) {\displaystyle (F\smallsetminus \{0\},\cdot )} or just F ∖ { 0 } {\displaystyle F\smallsetminus \{0\}} , or F×. A field may thus be defined as set F equipped with two operations denoted as an addition and a multiplication such that F is an abelian group under addition, F ∖ { 0 } {\displaystyle F\smallsetminus \{0\}} is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −a and a^−1 are uniquely determined by a. The requirement 1 ≠ 0 is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields. Every finite subgroup of the multiplicative group of a field is cyclic (see Root of unity § Cyclic groups). === Characteristic === In addition to the multiplication of two elements of F, it is possible to define the product n ⋅ a of an arbitrary element a of F by a positive integer n to be the n-fold sum a + a + ... + a (which is an element of F). If there is no positive integer such that n ⋅ 1 = 0, then F is said to have characteristic 0. For example, the field of rational numbers Q has characteristic 0 since n ⋅ 1 = n is never zero for any positive integer n. Otherwise, if there is a positive integer n satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by p and the field is said to have characteristic p then. For example, the field F4 has characteristic 2 since (in the notation of the four-element field F4 above) I + I = O. If F has characteristic p, then p ⋅ a = 0 for all a in F. This implies that (a + b)^p = a^p + b^p, since all other binomial coefficients appearing in the binomial formula are divisible by p. Here, a^p := a ⋅ a ⋅ ⋯ ⋅ a (p factors) is the pth power, i.e., the p-fold product of the element a. Therefore, the Frobenius map F → F : x ↦ x^p is compatible with the addition in F (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic p quite different from fields of characteristic 0. === Subfields and prime fields === A subfield E of a field F is a subset of F that is a field with respect to the field operations of F. Equivalently E is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that 1 ∊ E, that for all a, b ∊ E both a + b and a ⋅ b are in E, and that for all a ≠ 0 in E, both −a and 1/a are in E. Field homomorphisms are maps φ: E → F between two fields such that φ(e1 + e2) = φ(e1) + φ(e2), φ(e1e2) = φ(e1) φ(e2), and φ(1E) = 1F, where e1 and e2 are arbitrary elements of E. All field homomorphisms are injective.
=== Subfields and prime fields === A subfield E of a field F is a subset of F that is a field with respect to the field operations of F. Equivalently, E is a subset of F that contains 1, and is closed under addition, multiplication, additive inverses and multiplicative inverses of nonzero elements. This means that 1 ∊ E, that for all a, b ∊ E both a + b and a ⋅ b are in E, and that for all a ≠ 0 in E, both −a and 1/a are in E. Field homomorphisms are maps φ: E → F between two fields such that φ(e1 + e2) = φ(e1) + φ(e2), φ(e1e2) = φ(e1) φ(e2), and φ(1E) = 1F, where e1 and e2 are arbitrary elements of E. All field homomorphisms are injective. If φ is also surjective, it is called an isomorphism (and the fields E and F are called isomorphic). A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field F contains a prime field. If the characteristic of F is p (a prime number), the prime field is isomorphic to the finite field Fp introduced below. Otherwise the prime field is isomorphic to Q. == Finite fields == Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F4 is a field with four elements. Its subfield F2 is the smallest field, because by definition a field has at least two distinct elements, 0 and 1. The simplest finite fields, those of prime order, are most directly accessible using modular arithmetic. For a fixed positive integer n, arithmetic "modulo n" means working with the numbers Z/nZ = {0, 1, ..., n − 1}. The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by n and taking the remainder as the result. This construction yields a field precisely if n is a prime number. For example, taking the prime n = 2 results in the above-mentioned field F2. For n = 4 and more generally, for any composite number (i.e., any number n which can be expressed as a product n = r ⋅ s of two strictly smaller natural numbers), Z/nZ is not a field: the product of the two non-zero elements r and s is zero, since r ⋅ s = 0 in Z/nZ, which, as was explained above, prevents Z/nZ from being a field. The field Z/pZ with p elements (p being prime) constructed in this way is usually denoted by Fp. Every finite field F has q = p^n elements, where p is prime and n ≥ 1. This statement holds since F may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say n, which implies the asserted statement. A field with q = p^n elements can be constructed as the splitting field of the polynomial f(x) = x^q − x. Such a splitting field is an extension of Fp in which the polynomial f has q zeros. This means f has as many zeros as possible since the degree of f is q. For q = 2^2 = 4, it can be checked case by case using the above multiplication table that all four elements of F4 satisfy the equation x^4 = x, so they are zeros of f. By contrast, in F2, f has only two zeros (namely 0 and 1), so f does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of the finite field with q elements, denoted by Fq or GF(q).
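The dichotomy above between prime and composite n can be checked by brute force. A small sketch in plain Python, searching for multiplicative inverses directly:

```python
# Z/nZ is a field exactly when every nonzero residue has a
# multiplicative inverse modulo n, which happens iff n is prime.
def is_field(n):
    return all(any(a * b % n == 1 for b in range(1, n))
               for a in range(1, n))

assert [n for n in range(2, 13) if is_field(n)] == [2, 3, 5, 7, 11]

# For a prime modulus, inverses can also be computed directly
# (supported since Python 3.8):
assert pow(3, -1, 7) == 5        # 3 * 5 = 15 = 1 (mod 7)
```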
== History == Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros x1, x2, x3 of a cubic polynomial in the expression (x1 + ωx2 + ω^2 x3)^3 (with ω being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown x to a quadratic equation for x^3. Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation x^p = 1 for a prime p and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular p-gon can be constructed if p = 2^(2^k) + 1. Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group. In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore (1893), who defined it as follows: "By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system." In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X). Examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) then proved the transcendence of e and π, respectively. The first clear definition of an abstract field is due to Weber (1893). In particular, Heinrich Martin Weber's notion included the field Fp. Giuseppe Veronese (1891) studied the field of formal power series, which led Hensel (1904) to introduce the field of p-adic numbers. Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Artin & Schreier (1927) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem. == Constructing fields == === Constructing fields from rings === A commutative ring is a set that is equipped with an addition and a multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses a^−1. For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer n is not itself an integer, unless n = ±1. In the hierarchy of algebraic structures, fields can be characterized as the commutative rings R in which every nonzero element is a unit (that is, every nonzero element is invertible).
Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and R. Fields are also precisely the commutative rings in which (0) is the only prime ideal. Given a commutative ring R, there are two ways to construct a field related to R, i.e., two ways of modifying R such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields Fp. ==== Field of fractions ==== Given an integral domain R, its field of fractions Q(R) is built with the fractions of two elements of R exactly as Q is constructed from the integers. More precisely, the elements of Q(R) are the fractions a/b where a and b are in R, and b ≠ 0. Two fractions a/b and c/d are equal if and only if ad = bc. The operations on the fractions work exactly as for rational numbers. For example, a/b + c/d = (ad + bc)/(bd). It is straightforward to show that, if the ring is an integral domain, the set of the fractions forms a field. The field F(x) of the rational fractions over a field (or an integral domain) F is the field of fractions of the polynomial ring F[x]. The field F((x)) of Laurent series ∑_{i=k}^∞ ai x^i (k ∈ Z, ai ∈ F) over a field F is the field of fractions of the ring F[[x]] of formal power series (in which k ≥ 0). Since any Laurent series is a fraction of a power series by a power of x (as opposed to an arbitrary quotient of two power series), the representation as fractions is, however, less essential in this situation. ==== Residue fields ==== In addition to the field of fractions, which embeds R injectively into a field, a field can be obtained from a commutative ring R by means of a surjective map onto a field F. Any field obtained in this way is a quotient R/m, where m is a maximal ideal of R. If R has only one maximal ideal m, this field is called the residue field of R. The ideal generated by a single polynomial f in the polynomial ring R = E[X] (over a field E) is maximal if and only if f is irreducible over E, i.e., if f cannot be expressed as the product of two polynomials in E[X] of smaller degree. This yields a field F = E[X]/(f(X)). This field F contains an element x (namely the residue class of X) which satisfies the equation f(x) = 0. For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies f(i) = 0, where f(X) = X^2 + 1. Moreover, f is irreducible over R, which implies that the map that sends a polynomial f(X) ∊ R[X] to f(i) yields an isomorphism R[X]/(X^2 + 1) ≅ C.
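The quotient construction E[X]/(f(X)) can be made concrete for E = F2. The sketch below uses f(X) = X^3 + X + 1, which is irreducible over F2 since it has no root there; the bitmask encoding of polynomials is our own choice. The result is the field with eight elements.

```python
# Polynomials over F2 as bitmasks: bit i is the coefficient of X^i.
f = 0b1011                       # X^3 + X + 1

def mulmod(u, v, f):
    """Multiply u, v in F2[X], then reduce modulo f."""
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        v >>= 1
    while r.bit_length() >= f.bit_length():
        r ^= f << (r.bit_length() - f.bit_length())
    return r

x = 0b010                        # the residue class of X
x3 = mulmod(mulmod(x, x, f), x, f)
assert x3 ^ x ^ 1 == 0           # x satisfies f(x) = x^3 + x + 1 = 0

# every nonzero residue class is invertible, so the quotient is a
# field with 2^3 = 8 elements (the finite field F8):
els = range(1, 8)
assert all(any(mulmod(a, b, f) == 1 for b in els) for a in els)
```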
=== Constructing fields within a bigger field === Fields can be constructed inside a given bigger container field. Suppose we are given a field E and a field F containing E as a subfield. For any element x of F, there is a smallest subfield of F containing E and x, called the subfield of F generated by x and denoted E(x). The passage from E to E(x) is referred to as adjoining the element x to E. More generally, for a subset S ⊂ F, there is a minimal subfield of F containing E and S, denoted by E(S). The compositum of two subfields E and E′ of some field F is the smallest subfield of F containing both E and E′. The compositum can be used to construct the biggest subfield of F satisfying a certain property, for example the biggest subfield of F that is, in the language introduced below, algebraic over E. === Field extensions === The notion of a subfield E ⊂ F can also be regarded from the opposite point of view, by referring to F being a field extension (or just extension) of E, denoted by F / E, and read "F over E". A basic datum of a field extension is its degree [F : E], i.e., the dimension of F as an E-vector space. It satisfies the formula [G : E] = [G : F] [F : E] for any tower of fields E ⊂ F ⊂ G. Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F4 / F2 are of degree 2, whereas R / Q is an infinite extension. ==== Algebraic extensions ==== A pivotal notion in the study of field extensions F / E is that of an algebraic element. An element x ∈ F is algebraic over E if it is a root of a polynomial with coefficients in E, that is, if it satisfies a polynomial equation en x^n + en−1 x^(n−1) + ⋯ + e1 x + e0 = 0, with en, ..., e0 in E, and en ≠ 0. For example, the imaginary unit i in C is algebraic over R, and even over Q, since it satisfies the equation i^2 + 1 = 0. A field extension in which every element of F is algebraic over E is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula. The subfield E(x) generated by an element x, as above, is an algebraic extension of E if and only if x is an algebraic element. That is to say, if x is algebraic, all other elements of E(x) are necessarily algebraic as well. Moreover, the degree of the extension E(x) / E, i.e., the dimension of E(x) as an E-vector space, equals the minimal degree n such that there is a polynomial equation involving x, as above. If this degree is n, then the elements of E(x) have the form ∑_{k=0}^{n−1} ak x^k, with ak ∈ E. For example, the field Q(i) of Gaussian rationals is the subfield of C consisting of all numbers of the form a + bi where both a and b are rational numbers: summands of the form i^2 (and similarly for higher exponents) do not have to be considered here, since a + bi + ci^2 can be simplified to a − c + bi.
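The same mechanics work for E = Q and x = √2, which is algebraic of degree 2 over Q (it satisfies x^2 − 2 = 0), so every element of Q(√2) has the form a + b√2 with rational a, b. A minimal sketch; the class name and the sample element are our own choices:

```python
from fractions import Fraction as Fr

class QSqrt2:
    """a + b*sqrt(2) with rational a, b: the general element
    sum a_k x^k with k < n = 2 from the text."""
    def __init__(self, a, b):
        self.a, self.b = Fr(a), Fr(b)
    def __add__(self, o):
        return QSqrt2(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + b r)(c + d r) = (ac + 2bd) + (ad + bc) r, since r^2 = 2
        return QSqrt2(self.a * o.a + 2 * self.b * o.b,
                      self.a * o.b + self.b * o.a)
    def inverse(self):
        # rationalize by the conjugate; the norm a^2 - 2b^2 is nonzero
        # for nonzero elements because sqrt(2) is irrational
        n = self.a ** 2 - 2 * self.b ** 2
        return QSqrt2(self.a / n, -self.b / n)

r = QSqrt2(0, 1)
assert (r * r).a == 2 and (r * r).b == 0      # r^2 = 2
y = QSqrt2(3, 5)
z = y * y.inverse()
assert (z.a, z.b) == (1, 0)                   # nonzero => invertible
```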
==== Transcendence bases ==== The above-mentioned field of rational fractions E(X), where X is an indeterminate, is not an algebraic extension of E since there is no polynomial equation with coefficients in E whose zero is X. Elements, such as X, which are not algebraic are called transcendental. Informally speaking, the indeterminate X and its powers do not interact with elements of E. A similar construction can be carried out with a set of indeterminates, instead of just one. Once again, the field extension E(x) / E discussed above is a key example: if x is not algebraic (i.e., x is not a root of a polynomial with coefficients in E), then E(x) is isomorphic to E(X). This isomorphism is obtained by substituting x for X in rational fractions. A subset S of a field F is a transcendence basis if it is algebraically independent over E (i.e., its elements do not satisfy any nontrivial polynomial relation with coefficients in E) and if F is an algebraic extension of E(S). Any field extension F / E has a transcendence basis. Thus, field extensions can be split into ones of the form E(S) / E (purely transcendental extensions) and algebraic extensions. === Closure operations === A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation fn x^n + fn−1 x^(n−1) + ⋯ + f1 x + f0 = 0, with coefficients fn, ..., f0 ∈ F, n > 0, has a solution x ∊ F. By the fundamental theorem of algebra, C is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed since the equation x^2 + 1 = 0 does not have any rational or real solution. A field containing F is called an algebraic closure of F if it is algebraic over F (roughly speaking, not too big compared to F) and is algebraically closed (big enough to contain solutions of all polynomial equations). By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field F is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and F is elementarily equivalent to R. Such fields are also known as real closed fields. Any field F has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as the algebraic closure and denoted F̄. For example, the algebraic closure Q̄ of Q is called the field of algebraic numbers. The field F̄ is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of Fq is exceptionally simple. It is the union of the finite fields containing Fq (the ones of order q^n). For any algebraically closed field F of characteristic 0, the algebraic closure of the field F((t)) of Laurent series is the field of Puiseux series, obtained by adjoining roots of t. == Fields with additional structure == Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas. === Ordered fields === A field F is called an ordered field if any two elements can be compared, so that x + y ≥ 0 and xy ≥ 0 whenever x ≥ 0 and y ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that the equation x1^2 + x2^2 + ⋯ + xn^2 = 0 only has the solution x1 = x2 = ⋯ = xn = 0. The set of all possible orders on a fixed field F is in bijection with the set of ring homomorphisms from the Witt ring W(F) of quadratic forms over F to Z. An Archimedean field is an ordered field such that for each element there exists a finite expression 1 + 1 + ⋯ + 1 whose value is greater than that element; that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (elements smaller than all positive rational numbers); or, equivalently, the field is isomorphic to a subfield of R. An ordered field is Dedekind-complete if all least upper bounds, greatest lower bounds (see Dedekind cut) and limits that should exist do exist. More formally, each bounded subset of F is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit.
Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals. The hyperreals R* form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers, which are larger, respectively smaller, than any positive real number. The hyperreals form the foundational basis of non-standard analysis. === Topological fields === Another refinement of the notion of a field is a topological field, in which the set F is a topological space, such that all operations of the field (addition, multiplication, the maps a ↦ −a and a ↦ a^−1) are continuous maps with respect to the topology of the space. The topology of all the fields discussed below is induced from a metric, i.e., a function d : F × F → R, that measures a distance between any two elements of F. The completion of F is another field in which, informally speaking, the "gaps" in the original field F are filled, if there are any. For example, any irrational number x, such as x = √2, is a "gap" in the rationals Q in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers p/q: the distance between x and p/q, given by the absolute value |x − p/q|, can be made as small as desired. For example, completing Q with respect to the usual absolute value yields the field R of real numbers; a zero sequence, i.e., a sequence whose limit (for n → ∞) is zero, is in this case given by 1/n. Completing Q with respect to the p-adic absolute value instead yields the field Qp of p-adic numbers; there, p^n is a zero sequence. The field Qp is used in number theory and p-adic analysis. The algebraic closure Q̄p carries a unique norm extending the one on Qp, but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by Cp.
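How the completion "fills gaps" can be illustrated with Q7: the equation x^2 = 2 has no rational solution, but it has one in the 7-adic integers, and successive 7-adic approximations can be computed by Newton/Hensel lifting. A sketch; the choices p = 7 and eight digits are arbitrary, and pow(·, −1, m) needs Python 3.8 or later.

```python
p, digits = 7, 8
x, mod = 3, p                  # 3^2 = 2 (mod 7): the first approximation
for _ in range(digits - 1):
    mod *= p
    # one Newton step x <- x - (x^2 - 2)/(2x), computed modulo p^j;
    # it refines a square root mod p^(j-1) to one mod p^j
    x = (x - (x * x - 2) * pow(2 * x, -1, mod)) % mod

assert (x * x - 2) % mod == 0  # x^2 = 2 holds modulo 7^8
print(x, "squared is 2 modulo", mod)
```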
==== Local fields ==== The following topological fields are called local fields: finite extensions of Qp (local fields of characteristic zero) and finite extensions of Fp((t)), the field of Laurent series over Fp (local fields of characteristic p). These two types of local fields share some fundamental similarities. In this relation, the elements p ∈ Qp and t ∈ Fp((t)) (referred to as uniformizers) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in Fp. (However, since the addition in Qp is done using carrying, which is not the case in Fp((t)), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper: Any first-order statement that is true for almost all Qp is also true for almost all Fp((t)). An application of this is the Ax–Kochen theorem describing zeros of homogeneous polynomials in Qp. Tamely ramified extensions of both fields are in bijection to one another. Adjoining arbitrary p-power roots of p (in Qp), respectively of t (in Fp((t))), yields (infinite) extensions of these fields known as perfectoid fields. Strikingly, the Galois groups of these two fields are isomorphic, which is the first glimpse of a remarkable parallel between these two fields: Gal(Qp(p^(1/p^∞))) ≅ Gal(Fp((t))(t^(1/p^∞))). === Differential fields === Differential fields are fields equipped with a derivation, i.e., a way of taking derivatives of elements of the field. For example, the field R(X), together with the standard derivative of rational functions, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations.
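A derivation can be made concrete on the polynomial part of R(X). The sketch below stores a polynomial as its little-endian coefficient list and checks the two defining properties of a derivation, additivity and the Leibniz rule; the two test polynomials are arbitrary choices.

```python
def D(f):
    """Formal derivative of f = [c0, c1, c2, ...] (f = c0 + c1 X + ...)."""
    return [i * c for i, c in enumerate(f)][1:] or [0]

def add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def mul(f, g):
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    return r

def canon(f):                     # strip trailing zeros for comparison
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

f, g = [1, 0, 3], [2, 5]          # 3X^2 + 1 and 5X + 2
assert canon(D(add(f, g))) == canon(add(D(f), D(g)))        # additivity
assert canon(D(mul(f, g))) == canon(add(mul(D(f), g), mul(f, D(g))))
# the last line is the Leibniz rule D(fg) = D(f)g + fD(g)
```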
== Galois theory == Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions F / E, which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form F = E[X]/(f(X)), where f is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of f are contained in F and that f has only simple zeros. The latter condition is always satisfied if E has characteristic 0. For a finite Galois extension, the Galois group Gal(F/E) is the group of field automorphisms of F that are trivial on E (i.e., the bijections σ : F → F that preserve addition and multiplication and that send elements of E to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal(F/E) and the set of intermediate extensions of the extension F/E. By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of f cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving nth roots. For example, the symmetric group Sn is not solvable for n ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem: f(X) = X^5 − 4X + 2 (and E = Q); f(X) = X^n + an−1 X^(n−1) + ⋯ + a0 (where f is regarded as a polynomial in X with coefficients in E(a0, ..., an−1), for some indeterminates ai, E is any field, and n ≥ 5). The tensor product of fields is not usually a field. For example, a finite extension F / E of degree n is a Galois extension if and only if there is an isomorphism of F-algebras F ⊗E F ≅ F^n. This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects. == Invariants of fields == Basic invariants of a field F include the characteristic and the transcendence degree of F over its prime field. The latter is defined as the maximal number of elements in F that are algebraically independent over the prime field. Two algebraically closed fields E and F are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, Q̄p, Cp and C are isomorphic (but not isomorphic as topological fields). === Model theory of fields === In model theory, a branch of mathematical logic, two fields E and F are called elementarily equivalent if every mathematical statement that is true for E is also true for F and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, the addition and the multiplication). A typical example, for n > 0, n an integer, is φ(E) = "any polynomial of degree n in E has a zero in E". The set of such formulas for all n expresses that E is algebraically closed. The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field F of characteristic zero. Moreover, any fixed statement φ holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic. If U is an ultrafilter on a set I, and Fi is a field for every i in I, the ultraproduct of the Fi with respect to U is a field. It is denoted by ulim_{i→∞} Fi, since it behaves in several ways as a limit of the fields Fi: Łoś's theorem states that any first-order statement that holds for all but finitely many Fi also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism ulim_{p→∞} F̄p ≅ C. The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes p) ulimp Qp ≅ ulimp Fp((t)). In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : F → F×). === Absolute Galois group === For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal(F) is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of F. By elementary means, the group Gal(Fq) can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only finite extensions of Fq are the fields Fq^n for n > 0, and that the Galois groups of these finite extensions are given by Gal(Fq^n / Fq) = Z/nZ. A description in terms of generators and relations is also known for the Galois groups of p-adic number fields (finite extensions of Qp). Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple F-algebras, can be reinterpreted as a Galois cohomology group, namely Br(F) = H^2(F, Gm). === K-theory === Milnor K-theory is defined as Kn^M(F) = F× ⊗ ⋯ ⊗ F× / ⟨x ⊗ (1 − x) | x ∈ F ∖ {0, 1}⟩. The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism Kn^M(F)/l ≅ H^n(F, μl^⊗n). Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field.
For example, the process of taking the determinant of an invertible matrix leads to an isomorphism K1(F) = F×. Matsumoto's theorem shows that K2(F) agrees with K2^M(F). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general. == Applications == === Linear algebra and commutative algebra === If a ≠ 0, then the equation ax = b has a unique solution x in a field F, namely x = a^−1 b. This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis. The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of integers. === Finite fields: cryptography and coding theory === A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing a^n = a ⋅ a ⋅ ⋯ ⋅ a (n factors, for an integer n ≥ 1) in a (large) finite field Fq, can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution n to an equation a^n = b. In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form y^2 = x^3 + ax + b. Finite fields are also used in coding theory and combinatorics.
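The asymmetry can be seen even at toy scale: exponentiation takes O(log n) multiplications (square-and-multiply, as in Python's built-in pow), while the naive inverse search below takes time linear in the group order. The values p = 101, g = 2, n = 57 are toy choices; real systems use fields whose order has hundreds of digits, where the brute force becomes hopeless.

```python
p, g, n = 101, 2, 57
b = pow(g, n, p)              # fast discrete exponentiation

def discrete_log(g, b, p):    # naive search, O(p) in the worst case
    a = 1
    for k in range(p):
        if a == b:
            return k
        a = a * g % p
    return None

assert discrete_log(g, b, p) == n
```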
=== Geometry: field of functions === Functions on a suitable topological space X into a field F can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain: (f ⋅ g)(x) = f(x) ⋅ g(x). This makes these functions a commutative F-algebra. To have a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form f(x)/g(x), form a field, called the field of functions. This occurs in two main cases. When X is a complex manifold, one considers the algebra of holomorphic functions, i.e., complex differentiable functions; their ratios form the field of meromorphic functions on X. The function field of an algebraic variety X (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. The function field of the n-dimensional space over a field F is F(x1, ..., xn), i.e., the field consisting of ratios of polynomials in n indeterminates. The function field of X is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing X by a (slightly) smaller subvariety. The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of F(X), is invariant under birational equivalence. For curves (i.e., when the dimension is one), the function field F(X) is very close to X: if X is smooth and proper (the analogue of being compact), X can be reconstructed, up to isomorphism, from its field of functions. In higher dimensions the function field remembers less, but still carries decisive information about X. The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field. === Number theory: global fields === Global fields are in the limelight in algebraic number theory and arithmetic geometry. They are, by definition, number fields (finite extensions of Q) or function fields over Fq (finite extensions of Fq(t)). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne). Cyclotomic fields are among the most intensely studied number fields. They are of the form Q(ζn), where ζn is a primitive nth root of unity, i.e., a complex number ζ that satisfies ζ^n = 1 and ζ^m ≠ 1 for all 0 < m < n. For n being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation x^n + y^n = z^n. Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Qp and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Qp, whose solutions can easily be described. Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem whether any finite group is the Galois group Gal(F/Q) for some number field F. Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Qab of Q: it is the field Q(ζn, n ≥ 2) obtained by adjoining all primitive nth roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of the maximal abelian extension Fab of general number fields F. For imaginary quadratic fields, F = Q(√−d), d > 0, the theory of complex multiplication describes Fab using elliptic curves. For general number fields, no such explicit description is known.
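The polynomials behind the cyclotomic fields above can be computed explicitly: since x^n − 1 is the product of the cyclotomic polynomials Φd over all divisors d of n, the minimal polynomial Φn of ζn over Q is obtained by dividing out the Φd for the proper divisors d. A small sketch with little-endian integer coefficient lists:

```python
from functools import lru_cache

def polydiv(num, den):
    """Exact division of integer polynomials; den must be monic here."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] // den[-1]
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    assert not any(num), "division was not exact"
    return q

@lru_cache(maxsize=None)
def cyclotomic(n):
    poly = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            poly = polydiv(poly, cyclotomic(d))
    return tuple(poly)

print(cyclotomic(1))   # (-1, 1)           ->  x - 1
print(cyclotomic(5))   # (1, 1, 1, 1, 1)   ->  x^4 + x^3 + x^2 + x + 1
print(cyclotomic(12))  # (1, 0, -1, 0, 1)  ->  x^4 - x^2 + 1
```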
== Related notions == In addition to the additional structure that fields may enjoy, fields admit various other related notions. Since in any field 0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields Fp, as p tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields, such as quasifields, near-fields and semifields. There are also proper classes with field structure, which are sometimes called Fields, with a capital "F". The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well. === Division rings === Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field; sometimes associativity is weakened as well. Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor. Wedderburn's little theorem states that all finite division rings are fields.
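The quaternions just mentioned can be sketched directly; the tuple encoding of a + bi + cj + dk and the exact rational coordinates in the inverse check are our own choices.

```python
from fractions import Fraction as Fr

def qmul(p, q):
    # multiplication follows i^2 = j^2 = k^2 = ijk = -1
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k = -ji

def qinv(q):                      # conjugate over the squared norm
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d     # nonzero whenever q is nonzero
    return (a/n, -b/n, -c/n, -d/n)

q = (Fr(1), Fr(2), Fr(3), Fr(4))
assert qmul(q, qinv(q)) == (1, 0, 0, 0)   # every nonzero q is invertible
```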
Wikipedia/Field_theory_(mathematics)
In mathematics, Galois theory, originally introduced by Évariste Galois, provides a connection between field theory and group theory. This connection, the fundamental theorem of Galois theory, allows reducing certain problems in field theory to group theory, which makes them simpler and easier to understand. Galois introduced the subject for studying roots of polynomials. This allowed him to characterize the polynomial equations that are solvable by radicals in terms of properties of the permutation group of their roots—an equation is by definition solvable by radicals if its roots may be expressed by a formula involving only integers, nth roots, and the four basic arithmetic operations. This widely generalizes the Abel–Ruffini theorem, which asserts that a general polynomial of degree at least five cannot be solved by radicals. Galois theory has been used to solve classic problems including showing that two problems of antiquity cannot be solved as they were stated (doubling the cube and trisecting the angle), and characterizing the regular polygons that are constructible (this characterization was previously given by Gauss, but without the proof that the list of constructible polygons was complete; all known proofs that this characterization is complete require Galois theory). Galois' work was published by Joseph Liouville fourteen years after Galois' death. The theory took longer to become popular among mathematicians and to be well understood. Galois theory has been generalized to Galois connections and Grothendieck's Galois theory. == Application to classical problems == The birth and development of Galois theory was caused by the following question, which was one of the main open mathematical questions until the beginning of the 19th century: Does there exist a formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc.)? The Abel–Ruffini theorem provides a counterexample proving that there are polynomial equations for which such a formula cannot exist. Galois' theory provides a much more complete answer to this question, by explaining why it is possible to solve some equations, including all those of degree four or lower, in the above manner, and why it is not possible for most equations of degree five or higher. Furthermore, it provides a means of determining whether a particular equation can be solved that is both conceptually clear and easily expressed as an algorithm. Galois' theory also gives a clear insight into questions concerning problems in compass and straightedge construction. It gives an elegant characterization of the ratios of lengths that can be constructed with this method. Using this, it becomes relatively easy to answer such classical problems of geometry as: Which regular polygons are constructible? Why is it not possible to trisect every angle using a compass and a straightedge? Why is doubling the cube not possible with the same method? == History == === Pre-history === Galois' theory originated in the study of symmetric functions – the coefficients of a monic polynomial are (up to sign) the elementary symmetric polynomials in the roots. For instance, (x – a)(x – b) = x^2 – (a + b)x + ab, where 1, a + b and ab are the elementary symmetric polynomials of degree 0, 1 and 2 in two variables.
This was first formalized by the 16th-century French mathematician François Viète, in Viète's formulas, for the case of positive real roots. In the opinion of the 18th-century British mathematician Charles Hutton, the expression of the coefficients of a polynomial in terms of the roots (not only for positive roots) was first understood by the 17th-century French mathematician Albert Girard; Hutton writes: ...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation. In this vein, the discriminant is a symmetric function in the roots that reflects properties of the roots – it is zero if and only if the polynomial has a multiple root, and for quadratic and cubic polynomials it is positive if and only if all roots are real and distinct, and negative if and only if there is a pair of distinct complex conjugate roots. See Discriminant § Low degrees for details. The cubic was first partly solved by the 15th–16th-century Italian mathematician Scipione del Ferro, who did not however publish his results; this method, though, only solved one type of cubic equation. This solution was then rediscovered independently in 1535 by Niccolò Fontana Tartaglia, who shared it with Gerolamo Cardano, asking him not to publish it. Cardano then extended this to numerous other cases, using similar arguments; see more details at Cardano's method. After discovering del Ferro's earlier work, Cardano felt that Tartaglia's method was no longer secret, and thus he published his solution in his 1545 Ars Magna. His student Lodovico Ferrari solved the quartic polynomial; his solution was also included in Ars Magna. In this book, however, Cardano did not provide a "general formula" for the solution of a cubic equation, as he had neither complex numbers at his disposal, nor the algebraic notation to be able to describe a general cubic equation. With the benefit of modern notation and complex numbers, the formulae in this book do work in the general case, but Cardano did not know this. It was Rafael Bombelli who managed to understand how to work with complex numbers in order to solve all forms of cubic equation. A further step was the 1770 paper Réflexions sur la résolution algébrique des équations by the French-Italian mathematician Joseph Louis Lagrange, in his method of Lagrange resolvents, where he analyzed Cardano's and Ferrari's solutions of cubics and quartics by considering them in terms of permutations of the roots, which yielded an auxiliary polynomial of lower degree, providing a unified understanding of the solutions and laying the groundwork for group theory and Galois' theory. Crucially, however, he did not consider composition of permutations. Lagrange's method did not extend to quintic equations or higher, because the resolvent had higher degree. In 1799 Paolo Ruffini came close to proving that the quintic has no general solution by radicals; his key insight was to use permutation groups, not just a single permutation. His proof contained a gap, which Cauchy considered minor, though this was not patched until the work of the Norwegian mathematician Niels Henrik Abel, who published a proof in 1824, thus establishing the Abel–Ruffini theorem.
While Ruffini and Abel established that the general quintic could not be solved, some particular quintics, such as x^5 − 1 = 0, can be solved. The precise criterion by which a given quintic or higher polynomial can be determined to be solvable was given by Évariste Galois, who showed that a polynomial is solvable if and only if the permutation group of its roots – in modern terms, its Galois group – has a certain structure – in modern terms, is a solvable group. This group is always solvable for polynomials of degree four or less, but not always so for polynomials of degree five and greater, which explains why there is no general solution in higher degrees. === Galois' writings === In 1830 Galois (at the age of 18) submitted to the Paris Academy of Sciences a memoir on his theory of solvability by radicals; Galois' paper was ultimately rejected in 1831 as being too sketchy and for giving a condition in terms of the roots of the equation instead of its coefficients. Galois then died in a duel in 1832, and his paper, "Mémoire sur les conditions de résolubilité des équations par radicaux", remained unpublished until 1846, when it was published by Joseph Liouville accompanied by some of his own explanations. Prior to this publication, Liouville announced Galois' result to the Academy in a speech he gave on 4 July 1843. According to Allan Clark, Galois's characterization "dramatically supersedes the work of Abel and Ruffini." === Aftermath === Galois' theory was notoriously difficult for his contemporaries to understand, especially to the level where they could expand on it. For example, in his 1846 commentary, Liouville completely missed the group-theoretic core of Galois' method. Joseph Alfred Serret, who attended some of Liouville's talks, included Galois' theory in the 1866 third edition of his textbook Cours d'algèbre supérieure. Serret's pupil, Camille Jordan, had an even better understanding, reflected in his 1870 book Traité des substitutions et des équations algébriques. Outside France, Galois' theory remained more obscure for a longer period. In Britain, Cayley failed to grasp its depth, and popular British algebra textbooks did not even mention Galois' theory until well after the turn of the century. In Germany, Kronecker's writings focused more on Abel's result. Dedekind wrote little about Galois' theory, but lectured on it at Göttingen in 1858, showing a very good understanding. Eugen Netto's books of the 1880s, based on Jordan's Traité, made Galois theory accessible to a wider German and American audience, as did Heinrich Martin Weber's 1895 algebra textbook. == Permutation group approach == Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, A^2 + 5B^3 = 7. The central idea of Galois' theory is to consider permutations (or rearrangements) of the roots such that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. Originally, the theory had been developed for algebraic equations whose coefficients are rational numbers. It extends naturally to equations with coefficients in any field, but this will not be considered in the simple examples below. These permutations together form a permutation group, also called the Galois group of the polynomial, which is explicitly described in the following examples.
=== Quadratic equation === Consider the quadratic equation x^2 − 4x + 1 = 0. By using the quadratic formula, we find that the two roots are A = 2 + √3 and B = 2 − √3. Examples of algebraic equations satisfied by A and B include A + B = 4 and AB = 1. If we exchange A and B in either of the last two equations we obtain another true statement. For example, the equation A + B = 4 becomes B + A = 4. It is more generally true that this holds for every possible algebraic relation between A and B such that all coefficients are rational; that is, in any such relation, swapping A and B yields another true relation. This results from the theory of symmetric polynomials, which, in this case, may be replaced by formula manipulations involving the binomial theorem. One might object that A and B are related by the algebraic equation A − B − 2√3 = 0, which does not remain true when A and B are exchanged. However, this relation is not considered here, because it has the coefficient −2√3, which is not rational. We conclude that the Galois group of the polynomial x^2 − 4x + 1 consists of two permutations: the identity permutation which leaves A and B untouched, and the transposition permutation which exchanges A and B. As all groups with two elements are isomorphic, this Galois group is isomorphic to the multiplicative group {1, −1}. A similar discussion applies to any quadratic polynomial ax^2 + bx + c, where a, b and c are rational numbers. If the polynomial has rational roots, for example x^2 − 4x + 4 = (x − 2)^2, or x^2 − 3x + 2 = (x − 2)(x − 1), then the Galois group is trivial; that is, it contains only the identity permutation. In this example, if A = 2 and B = 1 then A − B = 1 is no longer true when A and B are swapped. If it has two irrational roots, for example x^2 − 2, then the Galois group contains two permutations, just as in the above example.
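These relations can be sanity-checked numerically; the floating-point sketch below is an illustration, not a proof.

```python
import math
A = 2 + math.sqrt(3)
B = 2 - math.sqrt(3)
for X, Y in [(A, B), (B, A)]:        # the identity and the transposition
    assert math.isclose(X + Y, 4)    # A + B = 4 survives the swap
    assert math.isclose(X * Y, 1)    # AB = 1 survives the swap
# the relation A - B - 2*sqrt(3) = 0, with its irrational coefficient,
# does not survive the swap:
assert not math.isclose(B - A - 2 * math.sqrt(3), 0)
```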
=== Quartic equation === Consider the polynomial x^4 − 10x^2 + 1. Completing the square in an unusual way, it can also be written as (x^2 − 1)^2 − 8x^2 = (x^2 − 1 − 2x√2)(x^2 − 1 + 2x√2). By applying the quadratic formula to each factor, one sees that the four roots are A = √2 + √3, B = √2 − √3, C = −√2 + √3, D = −√2 − √3. Among the 24 possible permutations of these four roots, four are particularly simple, those consisting of changing the sign of 0, 1, or 2 of the square roots. They form a group that is isomorphic to the Klein four-group. Galois theory implies that, since the polynomial is irreducible, the Galois group has at least four elements. For proving that the Galois group consists of these four permutations, it thus suffices to show that every element of the Galois group is determined by the image of A, which can be shown as follows. The members of the Galois group must preserve any algebraic equation with rational coefficients involving A, B, C and D. Among these equations, we have: AB = −1, AC = 1, A + D = 0. It follows that, if φ is a permutation that belongs to the Galois group, we must have: φ(B) = −1/φ(A), φ(C) = 1/φ(A), φ(D) = −φ(A). This implies that the permutation is well defined by the image of A, and that the Galois group has 4 elements, which are: (A, B, C, D) → (A, B, C, D) (identity); (A, B, C, D) → (B, A, D, C) (change of sign of √3); (A, B, C, D) → (C, D, A, B) (change of sign of √2); (A, B, C, D) → (D, C, B, A) (change of sign of both square roots). This implies that the Galois group is isomorphic to the Klein four-group.
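The counting argument can be replayed by brute force over all 24 permutations, keeping those that preserve the three rational relations above; only the four listed survive. A numerical sketch:

```python
from itertools import permutations
from math import sqrt, isclose

A, B = sqrt(2) + sqrt(3), sqrt(2) - sqrt(3)
C, D = -sqrt(2) + sqrt(3), -sqrt(2) - sqrt(3)
roots = (A, B, C, D)

def preserves(images):
    s = dict(zip(roots, images))     # the candidate permutation
    return (isclose(s[A] * s[B], -1) and isclose(s[A] * s[C], 1)
            and isclose(s[A] + s[D], 0, abs_tol=1e-9))

group = [p for p in permutations(roots) if preserves(p)]
assert len(group) == 4               # the Klein four-group
```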
== Modern approach by field theory == In the modern approach, one starts with a field extension L/K (read "L over K"), and examines the group of automorphisms of L that fix K. See the article on Galois groups for further explanation and examples. The connection between the two approaches is as follows. The coefficients of the polynomial in question should be chosen from the base field K. The top field L should be the field obtained by adjoining the roots of the polynomial in question to the base field K. Any permutation of the roots which respects algebraic equations as described above gives rise to an automorphism of L/K, and vice versa. In the first example above, we were studying the extension Q(√3)/Q, where Q is the field of rational numbers, and Q(√3) is the field obtained from Q by adjoining √3. In the second example, we were studying the extension Q(A,B,C,D)/Q. There are several advantages to the modern approach over the permutation group approach. It permits a far simpler statement of the fundamental theorem of Galois theory. The use of base fields other than Q is crucial in many areas of mathematics. For example, in algebraic number theory, one often does Galois theory using number fields, finite fields or local fields as the base field. It allows one to more easily study infinite extensions. Again this is important in algebraic number theory, where for example one often discusses the absolute Galois group of Q, defined to be the Galois group of K/Q where K is an algebraic closure of Q. It allows for consideration of inseparable extensions. This issue does not arise in the classical framework, since it was always implicitly assumed that arithmetic took place in characteristic zero, but nonzero characteristic arises frequently in number theory and in algebraic geometry. It removes the rather artificial reliance on chasing roots of polynomials. That is, different polynomials may yield the same extension fields, and the modern approach recognizes the connection between these polynomials. == Solvable groups and solution by radicals == The notion of a solvable group in group theory allows one to determine whether a polynomial is solvable in radicals, depending on whether its Galois group has the property of solvability. In essence, each field extension L/K corresponds to a factor group in a composition series of the Galois group. If a factor group in the composition series is cyclic of order n, and if in the corresponding field extension L/K the field K already contains a primitive nth root of unity, then it is a radical extension and the elements of L can then be expressed using the nth root of some element of K. If all the factor groups in its composition series are cyclic, the Galois group is called solvable, and all of the elements of the corresponding field can be found by repeatedly taking roots, products, and sums of elements from the base field (usually Q). One of the great triumphs of Galois theory was the proof that for every n > 4, there exist polynomials of degree n which are not solvable by radicals (this was proven independently, using a similar method, by Niels Henrik Abel a few years before, and is the Abel–Ruffini theorem), and a systematic way for testing whether a specific polynomial is solvable by radicals. The Abel–Ruffini theorem results from the fact that for n > 4 the symmetric group Sn contains a simple, noncyclic, normal subgroup, namely the alternating group An. === A non-solvable quintic example === Van der Waerden cites the polynomial f(x) = x^5 − x − 1. By the rational root theorem, this has no rational zeroes. Neither does it have linear factors modulo 2 or 3. The Galois group of f(x) modulo 2 is cyclic of order 6, because f(x) modulo 2 factors into polynomials of degrees 2 and 3, (x^2 + x + 1)(x^3 + x^2 + 1). f(x) modulo 3 has no linear or quadratic factor, and hence is irreducible. Thus its modulo 3 Galois group contains an element of order 5. It is known that a Galois group modulo a prime is isomorphic to a subgroup of the Galois group over the rationals. A permutation group on 5 objects with elements of orders 6 and 5 must be the symmetric group S5, which is therefore the Galois group of f(x). This is one of the simplest examples of a non-solvable quintic polynomial. According to Serge Lang, Emil Artin was fond of this example.
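The two factorizations can be reproduced with a computer algebra system; the sketch below uses sympy (an assumed external dependency) and its modulus option for factoring over prime fields.

```python
from sympy import symbols, factor_list

x = symbols('x')
f = x**5 - x - 1

print(factor_list(f, modulus=2))   # factors of degrees 2 and 3:
                                   # (x^2 + x + 1)(x^3 + x^2 + 1)
print(factor_list(f, modulus=3))   # a single degree-5 factor:
                                   # f is irreducible modulo 3
```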
== Inverse Galois problem == The inverse Galois problem is to find a field extension with a given Galois group. As long as one does not also specify the ground field, the problem is not very difficult, and all finite groups do occur as Galois groups. For showing this, one may proceed as follows. Choose a field K and a finite group G. Cayley's theorem says that G is (up to isomorphism) a subgroup of the symmetric group S on the elements of G. Choose indeterminates {xα}, one for each element α of G, and adjoin them to K to get the field F = K({xα}). Contained within F is the field L of symmetric rational functions in the {xα}. The Galois group of F/L is S, by a basic result of Emil Artin. G acts on F by restriction of the action of S. If the fixed field of this action is M, then, by the fundamental theorem of Galois theory, the Galois group of F/M is G. On the other hand, it is an open problem whether every finite group is the Galois group of a field extension of the field Q of the rational numbers. Igor Shafarevich proved that every solvable finite group is the Galois group of some extension of Q. Various people have solved the inverse Galois problem for selected non-abelian simple groups. Existence of solutions has been shown for all but possibly one (the Mathieu group M23) of the 26 sporadic simple groups. There is even a polynomial with integral coefficients whose Galois group is the Monster group. == Inseparable extensions == In the form mentioned above, including in particular the fundamental theorem of Galois theory, the theory only considers Galois extensions, which are in particular separable. General field extensions can be split into a separable extension, followed by a purely inseparable field extension. For a purely inseparable extension F/K, there is a Galois theory where the Galois group is replaced by the vector space Der_K(F, F) of derivations, i.e., K-linear endomorphisms of F satisfying the Leibniz rule. In this correspondence, an intermediate field E is assigned the subspace Der_E(F, F) ⊂ Der_K(F, F). Conversely, a subspace V ⊂ Der_K(F, F) satisfying appropriate further conditions is mapped to {x ∈ F : f(x) = 0 for all f ∈ V}. Under the assumption F^p ⊂ K, Jacobson (1944) showed that this establishes a one-to-one correspondence. The condition imposed by Jacobson has been removed by Brantner & Waldron (2020), who give a correspondence using notions of derived algebraic geometry. == See also ==
Galois group, for more examples
Fundamental theorem of Galois theory
Differential Galois theory, for a Galois theory of differential equations
Grothendieck's Galois theory, for a vast generalization of Galois theory
Topological Galois theory
Artin–Schreier theory, a sub-field of Galois theory
== References ==
Artin, Emil (1998) [1944]. Galois Theory. Dover. ISBN 0-486-62342-4.
Bewersdorff, Jörg (2006). Galois Theory for Beginners: A Historical Perspective. The Student Mathematical Library. Vol. 35. American Mathematical Society. doi:10.1090/stml/035. ISBN 0-8218-3817-2. S2CID 118256821.
Brantner, Lukas; Waldron, Joe (2020). Purely Inseparable Galois Theory I: The Fundamental Theorem. arXiv:2010.15707.
Cardano, Gerolamo (1545). Artis Magnæ (in Latin). Archived from the original (PDF) on 2008-06-26. Retrieved 2015-01-10.
Edwards, Harold M. (1984). Galois Theory. Springer-Verlag. ISBN 0-387-90980-X. (Galois' original paper, with extensive background and commentary.)
Funkhouser, H. Gray (1930). "A short account of the history of symmetric functions of roots of equations". American Mathematical Monthly. 37 (7): 357–365. doi:10.2307/2299273. JSTOR 2299273.
"Galois theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994].
Jacobson, Nathan (1944). "Galois theory of purely inseparable fields of exponent one". Amer. J. Math. 66 (4): 645–648. doi:10.2307/2371772. JSTOR 2371772.
Jacobson, Nathan (1985). Basic Algebra I (2nd ed.). W. H. Freeman. ISBN 0-7167-1480-9. (Chapter 4 gives an introduction to the field-theoretic approach to Galois theory.)
Janelidze, G.; Borceux, Francis (2001). Galois Theories. Cambridge University Press. ISBN 978-0-521-80309-0. (This book introduces the reader to the Galois theory of Grothendieck, and some generalisations, leading to Galois groupoids.)
Lang, Serge (1994). Algebraic Number Theory. Berlin, New York: Springer-Verlag. ISBN 978-0-387-94225-4.
Postnikov, M. M. (2004). Foundations of Galois Theory. Dover Publications. ISBN 0-486-43518-0.
Rotman, Joseph (1998). Galois Theory (2nd ed.). Springer. ISBN 0-387-98541-7.
Völklein, Helmut (1996). Groups as Galois Groups: An Introduction. Cambridge University Press. ISBN 978-0-521-56280-5.
van der Waerden, Bartel Leendert (1931). Moderne Algebra (in German). Berlin: Springer.
English translation (of 2nd revised edition): Modern Algebra. New York: Frederick Ungar. 1949. (Later republished in English by Springer under the title "Algebra".) == External links == The dictionary definition of Galois theory at Wiktionary Media related to Galois theory at Wikimedia Commons
Wikipedia/Galois_theory
In mathematics, the tensor algebra of a vector space V, denoted T(V) or T•(V), is the algebra of tensors on V (of any rank) with multiplication being the tensor product. It is the free algebra on V, in the sense of being left adjoint to the forgetful functor from algebras to vector spaces: it is the "most general" algebra containing V, in the sense of the corresponding universal property (see below). The tensor algebra is important because many other algebras arise as quotient algebras of T(V). These include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras. The tensor algebra also has two coalgebra structures: a simple one, which does not make it a bialgebra but does lead to the concept of a cofree coalgebra, and a more complicated one, which yields a bialgebra and can be extended by giving an antipode to create a Hopf algebra structure. Note: In this article, all algebras are assumed to be unital and associative. The unit is explicitly required to define the coproduct.

== Construction ==
Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times:
T^k V = V^{⊗k} = V ⊗ V ⊗ ⋯ ⊗ V.
That is, T^k V consists of all tensors on V of order k. By convention T^0 V is the ground field K (as a one-dimensional vector space over itself). We then construct T(V) as the direct sum of T^k V for k = 0, 1, 2, …:
T(V) = ⨁_{k=0}^{∞} T^k V = K ⊕ V ⊕ (V ⊗ V) ⊕ (V ⊗ V ⊗ V) ⊕ ⋯.
The multiplication in T(V) is determined by the canonical isomorphism T^k V ⊗ T^ℓ V → T^{k+ℓ} V given by the tensor product, which is then extended by linearity to all of T(V). This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with T^k V serving as the grade-k subspace. This grading can be extended to a Z-grading by appending subspaces T^k V = {0} for negative integers k. The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. (It does not work for ordinary R-modules because the iterated tensor products cannot be formed.)

== Adjunction and universal property ==
The tensor algebra T(V) is also called the free algebra on the vector space V, and is functorial: the assignment V ↦ T(V) extends to linear maps between vector spaces, forming a functor from the category of K-vector spaces to the category of associative algebras. As with other free constructions, the functor T is left adjoint to the forgetful functor that sends each associative K-algebra to its underlying vector space. Explicitly, the tensor algebra satisfies the following universal property, which formally expresses the statement that it is the most general algebra containing V: Any linear map f : V → A from V to an associative algebra A over K extends uniquely to an algebra homomorphism φ : T(V) → A satisfying f = φ ∘ i, where i is the canonical inclusion of V into T(V).
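Both the graded multiplication and the universal property can be made concrete for a finite-dimensional V. The following toy model (the dict encoding and helper names are our own, chosen for illustration) stores an element of T(V) as a map from words of basis indices to coefficients; multiplication is concatenation of words, and a choice of matrices for the basis vectors induces the unique algebra homomorphism of the universal property:

```python
# Toy model of T(V): an element is a dict {word: coefficient}, where a word
# is a tuple of basis indices and the empty word () spans T^0 V = K.
import numpy as np
from collections import defaultdict

def mul(s, t):                        # the product T^k V x T^l V -> T^(k+l) V
    out = defaultdict(complex)
    for w1, c1 in s.items():
        for w2, c2 in t.items():
            out[w1 + w2] += c1 * c2   # concatenate words, multiply coefficients
    return dict(out)

e0, e1 = {(0,): 1}, {(1,): 1}
print(mul(e0, e1), mul(e1, e0))       # {(0, 1): 1} {(1, 0): 1} -- noncommutative

f = {0: np.array([[0., 1.], [0., 0.]]),      # a linear map f: V -> M_2(K) ...
     1: np.array([[0., 0.], [1., 0.]])}

def phi(elem):                        # ... and its unique algebra-map extension
    out = np.zeros((2, 2), dtype=complex)
    for word, c in elem.items():
        m = np.eye(2)                 # the empty word goes to the unit of A
        for i in word:
            m = m @ f[i]
        out += c * m
    return out

assert np.allclose(phi(mul(e0, e1)), phi(e0) @ phi(e1))   # homomorphism check
```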
As for other universal properties, the tensor algebra T(V) can be defined as the unique algebra satisfying this property (specifically, it is unique up to a unique isomorphism), but this definition requires proving that an object satisfying this property exists. The above universal property implies that T is a functor from the category of vector spaces over K to the category of K-algebras. This means that any linear map between K-vector spaces U and W extends uniquely to a K-algebra homomorphism from T(U) to T(W).

== Non-commutative polynomials ==
If V has finite dimension n, another way of looking at the tensor algebra is as the "algebra of polynomials over K in n non-commuting variables". If we take basis vectors for V, those become non-commuting variables (or indeterminates) in T(V), subject to no constraints beyond associativity, the distributive law and K-linearity. Note that the algebra of polynomials on V is not T(V), but rather T(V*): a (homogeneous) linear function on V is an element of V*; for example, coordinates x^1, …, x^n on a vector space are covectors, as they take in a vector and give out a scalar (the given coordinate of the vector).

== Quotients ==
Because of the generality of the tensor algebra, many other algebras of interest can be constructed by starting with the tensor algebra and then imposing certain relations on the generators, i.e. by constructing certain quotient algebras of T(V). Examples of this are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.

== Coalgebra ==
The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, and can further be extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra, further down. The development provided below can be equally well applied to the exterior algebra, using the wedge symbol ∧ in place of the tensor symbol ⊗; a sign must also be kept track of when permuting elements of the exterior algebra. This correspondence also lasts through the definition of the bialgebra, and on to the definition of a Hopf algebra. That is, the exterior algebra can also be given a Hopf algebra structure. Similarly, the symmetric algebra can also be given the structure of a Hopf algebra, in exactly the same fashion, by replacing everywhere the tensor product ⊗ by the symmetrized tensor product ⊗_Sym, i.e. that product where v ⊗_Sym w = w ⊗_Sym v. In each case, this is possible because the alternating product ∧ and the symmetric product ⊗_Sym obey the required consistency conditions for the definition of a bialgebra and Hopf algebra; this can be explicitly checked in the manner below. Whenever one has a product obeying these consistency conditions, the construction goes through; insofar as such a product gives rise to a quotient space, the quotient space inherits the Hopf algebra structure.
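Returning briefly to the description of T(V) as non-commuting polynomials: this can be explored directly in a computer algebra system. SymPy, for instance, supports non-commutative symbols, so the cross terms of a product stay distinct exactly as in T(V):

```python
# Basis vectors as non-commuting indeterminates: expanding (x + y)^2 keeps
# x*y and y*x as separate monomials.
from sympy import symbols, expand

x, y = symbols('x y', commutative=False)
print(expand((x + y)**2))    # x**2 + x*y + y*x + y**2
```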
In the language of category theory, one says that there is a functor T from the category of K-vector spaces to the category of K-associative algebras. But there is also a functor Λ taking vector spaces to the category of exterior algebras, and a functor Sym taking vector spaces to symmetric algebras. There is a natural map from T to each of these. Verifying that quotienting preserves the Hopf algebra structure is the same as verifying that the maps are indeed natural. === Coproduct === The coalgebra is obtained by defining a coproduct or diagonal operator Δ : T V → T V ⊠ T V {\displaystyle \Delta :TV\to TV\boxtimes TV} Here, T V {\displaystyle TV} is used as a short-hand for T ( V ) {\displaystyle T(V)} to avoid an explosion of parentheses. The ⊠ {\displaystyle \boxtimes } symbol is used to denote the "external" tensor product, needed for the definition of a coalgebra. It is being used to distinguish it from the "internal" tensor product ⊗ {\displaystyle \otimes } , which is already being used to denote multiplication in the tensor algebra (see the section Multiplication, below, for further clarification on this issue). In order to avoid confusion between these two symbols, most texts will replace ⊗ {\displaystyle \otimes } by a plain dot, or even drop it altogether, with the understanding that it is implied from context. This then allows the ⊗ {\displaystyle \otimes } symbol to be used in place of the ⊠ {\displaystyle \boxtimes } symbol. This is not done below, and the two symbols are used independently and explicitly, so as to show the proper location of each. The result is a bit more verbose, but should be easier to comprehend. The definition of the operator Δ {\displaystyle \Delta } is most easily built up in stages, first by defining it for elements v ∈ V ⊂ T V {\displaystyle v\in V\subset TV} and then by homomorphically extending it to the whole algebra. A suitable choice for the coproduct is then Δ : v ↦ v ⊠ 1 + 1 ⊠ v {\displaystyle \Delta :v\mapsto v\boxtimes 1+1\boxtimes v} and Δ : 1 ↦ 1 ⊠ 1 {\displaystyle \Delta :1\mapsto 1\boxtimes 1} where 1 ∈ K = T 0 V ⊂ T V {\displaystyle 1\in K=T^{0}V\subset TV} is the unit of the field K {\displaystyle K} . By linearity, one obviously has Δ ( k ) = k ( 1 ⊠ 1 ) = k ⊠ 1 = 1 ⊠ k {\displaystyle \Delta (k)=k(1\boxtimes 1)=k\boxtimes 1=1\boxtimes k} for all k ∈ K . {\displaystyle k\in K.} It is straightforward to verify that this definition satisfies the axioms of a coalgebra: that is, that ( i d T V ⊠ Δ ) ∘ Δ = ( Δ ⊠ i d T V ) ∘ Δ {\displaystyle (\mathrm {id} _{TV}\boxtimes \Delta )\circ \Delta =(\Delta \boxtimes \mathrm {id} _{TV})\circ \Delta } where i d T V : x ↦ x {\displaystyle \mathrm {id} _{TV}:x\mapsto x} is the identity map on T V {\displaystyle TV} . Indeed, one gets ( ( i d T V ⊠ Δ ) ∘ Δ ) ( v ) = v ⊠ 1 ⊠ 1 + 1 ⊠ v ⊠ 1 + 1 ⊠ 1 ⊠ v {\displaystyle ((\mathrm {id} _{TV}\boxtimes \Delta )\circ \Delta )(v)=v\boxtimes 1\boxtimes 1+1\boxtimes v\boxtimes 1+1\boxtimes 1\boxtimes v} and likewise for the other side. At this point, one could invoke a lemma, and say that Δ {\displaystyle \Delta } extends trivially, by linearity, to all of T V {\displaystyle TV} , because T V {\displaystyle TV} is a free object and V {\displaystyle V} is a generator of the free algebra, and Δ {\displaystyle \Delta } is a homomorphism. However, it is insightful to provide explicit expressions. 
So, for v ⊗ w ∈ T^2 V, one has (by definition) the homomorphism Δ : v ⊗ w ↦ Δ(v) ⊗ Δ(w). Expanding, one has
Δ(v ⊗ w) = (v ⊠ 1 + 1 ⊠ v) ⊗ (w ⊠ 1 + 1 ⊠ w) = (v ⊗ w) ⊠ 1 + v ⊠ w + w ⊠ v + 1 ⊠ (v ⊗ w).
In the above expansion, there is no need to ever write 1 ⊗ v, as this is just plain-old scalar multiplication in the algebra; that is, one trivially has 1 ⊗ v = 1 · v = v. The extension above preserves the algebra grading. That is,
Δ : T^2 V → ⨁_{k=0}^{2} T^k V ⊠ T^{2−k} V.
Continuing in this fashion, one can obtain an explicit expression for the coproduct acting on a homogeneous element of order m:
Δ(v_1 ⊗ ⋯ ⊗ v_m) = Δ(v_1) ⊗ ⋯ ⊗ Δ(v_m)
= Σ_{p=0}^{m} (v_1 ⊗ ⋯ ⊗ v_p) ω (v_{p+1} ⊗ ⋯ ⊗ v_m)
= Σ_{p=0}^{m} Σ_{σ ∈ Sh(p, m−p)} (v_{σ(1)} ⊗ ⋯ ⊗ v_{σ(p)}) ⊠ (v_{σ(p+1)} ⊗ ⋯ ⊗ v_{σ(m)}),
where the ω symbol (which should properly appear as ш, the sha) denotes the shuffle product. This is expressed in the second summation, which is taken over all (p, m − p)-shuffles. The shuffle is
Sh(p, q) = { σ : {1, …, p + q} → {1, …, p + q} ∣ σ is bijective, σ(1) < σ(2) < ⋯ < σ(p), and σ(p + 1) < σ(p + 2) < ⋯ < σ(p + q) }.
By convention, one takes Sh(m, 0) and Sh(0, m) to equal {id : {1, …, m} → {1, …, m}}. It is also convenient to take the pure tensor products v_{σ(1)} ⊗ ⋯ ⊗ v_{σ(p)} and v_{σ(p+1)} ⊗ ⋯ ⊗ v_{σ(m)} to equal 1 for p = 0 and p = m, respectively (the empty product in TV). The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements v_k is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right.
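For experimentation, the shuffle coproduct is conveniently computed by enumerating the possible sets of left-hand positions, which is exactly the sum over (p, m − p)-shuffles. A brute-force sketch (the encoding of TV ⊠ TV as a dict of word pairs is our own):

```python
# Coproduct of a pure tensor: word -> dict {(left word, right word): coeff},
# by choosing which positions go to the left tensor factor, in order.
from itertools import combinations
from collections import defaultdict

def coproduct(word):
    out = defaultdict(int)
    pos = range(len(word))
    for p in range(len(word) + 1):
        for left in combinations(pos, p):        # sigma(1) < ... < sigma(p)
            l = tuple(word[i] for i in left)
            r = tuple(word[i] for i in pos if i not in left)
            out[(l, r)] += 1
    return dict(out)

print(coproduct(('v', 'w')))
# {((), ('v', 'w')): 1, (('v',), ('w',)): 1, (('w',), ('v',)): 1,
#  (('v', 'w'), ()): 1}   -- matching (v⊗w)⊠1 + v⊠w + w⊠v + 1⊠(v⊗w) above
```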
Equivalently, Δ ( v 1 ⊗ ⋯ ⊗ v n ) = ∑ S ⊆ { 1 , … , n } ( ∏ k = 1 k ∈ S n v k ) ⊠ ( ∏ k = 1 k ∉ S n v k ) , {\displaystyle \Delta (v_{1}\otimes \cdots \otimes v_{n})=\sum _{S\subseteq \{1,\dots ,n\}}\left(\prod _{k=1 \atop k\in S}^{n}v_{k}\right)\boxtimes \left(\prod _{k=1 \atop k\notin S}^{n}v_{k}\right)\!,} where the products are in T V {\displaystyle TV} , and where the sum is over all subsets of { 1 , … , n } {\displaystyle \{1,\dots ,n\}} . As before, the algebra grading is preserved: Δ : T m V → ⨁ k = 0 m T k V ⊠ T ( m − k ) V {\displaystyle \Delta :T^{m}V\to \bigoplus _{k=0}^{m}T^{k}V\boxtimes T^{(m-k)}V} === Counit === The counit ϵ : T V → K {\displaystyle \epsilon :TV\to K} is given by the projection of the field component out from the algebra. This can be written as ϵ : v ↦ 0 {\displaystyle \epsilon :v\mapsto 0} for v ∈ V {\displaystyle v\in V} and ϵ : k ↦ k {\displaystyle \epsilon :k\mapsto k} for k ∈ K = T 0 V {\displaystyle k\in K=T^{0}V} . By homomorphism under the tensor product ⊗ {\displaystyle \otimes } , this extends to ϵ : x ↦ 0 {\displaystyle \epsilon :x\mapsto 0} for all x ∈ T 1 V ⊕ T 2 V ⊕ ⋯ {\displaystyle x\in T^{1}V\oplus T^{2}V\oplus \cdots } It is a straightforward matter to verify that this counit satisfies the needed axiom for the coalgebra: ( i d ⊠ ϵ ) ∘ Δ = i d = ( ϵ ⊠ i d ) ∘ Δ . {\displaystyle (\mathrm {id} \boxtimes \epsilon )\circ \Delta =\mathrm {id} =(\epsilon \boxtimes \mathrm {id} )\circ \Delta .} Working this explicitly, one has ( ( i d ⊠ ϵ ) ∘ Δ ) ( x ) = ( i d ⊠ ϵ ) ( 1 ⊠ x + x ⊠ 1 ) = 1 ⊠ ϵ ( x ) + x ⊠ ϵ ( 1 ) = 0 + x ⊠ 1 ≅ x {\displaystyle {\begin{aligned}((\mathrm {id} \boxtimes \epsilon )\circ \Delta )(x)&=(\mathrm {id} \boxtimes \epsilon )(1\boxtimes x+x\boxtimes 1)\\&=1\boxtimes \epsilon (x)+x\boxtimes \epsilon (1)\\&=0+x\boxtimes 1\\&\cong x\end{aligned}}} where, for the last step, one has made use of the isomorphism T V ⊠ K ≅ T V {\displaystyle TV\boxtimes K\cong TV} , as is appropriate for the defining axiom of the counit. == Bialgebra == A bialgebra defines both multiplication, and comultiplication, and requires them to be compatible. === Multiplication === Multiplication is given by an operator ∇ : T V ⊠ T V → T V {\displaystyle \nabla :TV\boxtimes TV\to TV} which, in this case, was already given as the "internal" tensor product. That is, ∇ : x ⊠ y ↦ x ⊗ y {\displaystyle \nabla :x\boxtimes y\mapsto x\otimes y} That is, ∇ ( x ⊠ y ) = x ⊗ y . {\displaystyle \nabla (x\boxtimes y)=x\otimes y.} The above should make it clear why the ⊠ {\displaystyle \boxtimes } symbol needs to be used: the ⊗ {\displaystyle \otimes } was actually one and the same thing as ∇ {\displaystyle \nabla } ; and notational sloppiness here would lead to utter chaos. To strengthen this: the tensor product ⊗ {\displaystyle \otimes } of the tensor algebra corresponds to the multiplication ∇ {\displaystyle \nabla } used in the definition of an algebra, whereas the tensor product ⊠ {\displaystyle \boxtimes } is the one required in the definition of comultiplication in a coalgebra. These two tensor products are not the same thing! === Unit === The unit for the algebra η : K → T V {\displaystyle \eta :K\to TV} is just the embedding, so that η : k ↦ k {\displaystyle \eta :k\mapsto k} That the unit is compatible with the tensor product ⊗ {\displaystyle \otimes } is "trivial": it is just part of the standard definition of the tensor product of vector spaces. That is, k ⊗ x = k x {\displaystyle k\otimes x=kx} for field element k and any x ∈ T V . 
{\displaystyle x\in TV.} More verbosely, the axioms for an associative algebra require the two homomorphisms (or commuting diagrams): ∇ ∘ ( η ⊠ i d T V ) = η ⊗ i d T V = η ⋅ i d T V {\displaystyle \nabla \circ (\eta \boxtimes \mathrm {id} _{TV})=\eta \otimes \mathrm {id} _{TV}=\eta \cdot \mathrm {id} _{TV}} on K ⊠ T V {\displaystyle K\boxtimes TV} , and that symmetrically, on T V ⊠ K {\displaystyle TV\boxtimes K} , that ∇ ∘ ( i d T V ⊠ η ) = i d T V ⊗ η = i d T V ⋅ η {\displaystyle \nabla \circ (\mathrm {id} _{TV}\boxtimes \eta )=\mathrm {id} _{TV}\otimes \eta =\mathrm {id} _{TV}\cdot \eta } where the right-hand side of these equations should be understood as the scalar product. === Compatibility === The unit and counit, and multiplication and comultiplication, all have to satisfy compatibility conditions. It is straightforward to see that ϵ ∘ η = i d K . {\displaystyle \epsilon \circ \eta =\mathrm {id} _{K}.} Similarly, the unit is compatible with comultiplication: Δ ∘ η = η ⊠ η ≅ η {\displaystyle \Delta \circ \eta =\eta \boxtimes \eta \cong \eta } The above requires the use of the isomorphism K ⊠ K ≅ K {\displaystyle K\boxtimes K\cong K} in order to work; without this, one loses linearity. Component-wise, ( Δ ∘ η ) ( k ) = Δ ( k ) = k ( 1 ⊠ 1 ) ≅ k {\displaystyle (\Delta \circ \eta )(k)=\Delta (k)=k(1\boxtimes 1)\cong k} with the right-hand side making use of the isomorphism. Multiplication and the counit are compatible: ( ϵ ∘ ∇ ) ( x ⊠ y ) = ϵ ( x ⊗ y ) = 0 {\displaystyle (\epsilon \circ \nabla )(x\boxtimes y)=\epsilon (x\otimes y)=0} whenever x or y are not elements of K {\displaystyle K} , and otherwise, one has scalar multiplication on the field: k 1 ⊗ k 2 = k 1 k 2 . {\displaystyle k_{1}\otimes k_{2}=k_{1}k_{2}.} The most difficult to verify is the compatibility of multiplication and comultiplication: Δ ∘ ∇ = ( ∇ ⊠ ∇ ) ∘ ( i d ⊠ τ ⊠ i d ) ∘ ( Δ ⊠ Δ ) {\displaystyle \Delta \circ \nabla =(\nabla \boxtimes \nabla )\circ (\mathrm {id} \boxtimes \tau \boxtimes \mathrm {id} )\circ (\Delta \boxtimes \Delta )} where τ ( x ⊠ y ) = y ⊠ x {\displaystyle \tau (x\boxtimes y)=y\boxtimes x} exchanges elements. The compatibility condition only needs to be verified on V ⊂ T V {\displaystyle V\subset TV} ; the full compatibility follows as a homomorphic extension to all of T V . {\displaystyle TV.} The verification is verbose but straightforward; it is not given here, except for the final result: ( Δ ∘ ∇ ) ( v ⊠ w ) = Δ ( v ⊗ w ) {\displaystyle (\Delta \circ \nabla )(v\boxtimes w)=\Delta (v\otimes w)} For v , w ∈ V , {\displaystyle v,w\in V,} an explicit expression for this was given in the coalgebra section, above. == Hopf algebra == The Hopf algebra adds an antipode to the bialgebra axioms. The antipode S {\displaystyle S} on k ∈ K = T 0 V {\displaystyle k\in K=T^{0}V} is given by S ( k ) = k {\displaystyle S(k)=k} This is sometimes called the "anti-identity". 
The antipode on v ∈ V = T 1 V {\displaystyle v\in V=T^{1}V} is given by S ( v ) = − v {\displaystyle S(v)=-v} and on v ⊗ w ∈ T 2 V {\displaystyle v\otimes w\in T^{2}V} by S ( v ⊗ w ) = S ( w ) ⊗ S ( v ) = w ⊗ v {\displaystyle S(v\otimes w)=S(w)\otimes S(v)=w\otimes v} This extends homomorphically to S ( v 1 ⊗ ⋯ ⊗ v m ) = S ( v m ) ⊗ ⋯ ⊗ S ( v 1 ) = ( − 1 ) m v m ⊗ ⋯ ⊗ v 1 {\displaystyle {\begin{aligned}S(v_{1}\otimes \cdots \otimes v_{m})&=S(v_{m})\otimes \cdots \otimes S(v_{1})\\&=(-1)^{m}v_{m}\otimes \cdots \otimes v_{1}\end{aligned}}} === Compatibility === Compatibility of the antipode with multiplication and comultiplication requires that ∇ ∘ ( S ⊠ i d ) ∘ Δ = η ∘ ϵ = ∇ ∘ ( i d ⊠ S ) ∘ Δ {\displaystyle \nabla \circ (S\boxtimes \mathrm {id} )\circ \Delta =\eta \circ \epsilon =\nabla \circ (\mathrm {id} \boxtimes S)\circ \Delta } This is straightforward to verify componentwise on k ∈ K {\displaystyle k\in K} : ( ∇ ∘ ( S ⊠ i d ) ∘ Δ ) ( k ) = ( ∇ ∘ ( S ⊠ i d ) ) ( 1 ⊠ k ) = ∇ ( 1 ⊠ k ) = 1 ⊗ k = k {\displaystyle {\begin{aligned}(\nabla \circ (S\boxtimes \mathrm {id} )\circ \Delta )(k)&=(\nabla \circ (S\boxtimes \mathrm {id} ))(1\boxtimes k)\\&=\nabla (1\boxtimes k)\\&=1\otimes k\\&=k\end{aligned}}} Similarly, on v ∈ V {\displaystyle v\in V} : ( ∇ ∘ ( S ⊠ i d ) ∘ Δ ) ( v ) = ( ∇ ∘ ( S ⊠ i d ) ) ( v ⊠ 1 + 1 ⊠ v ) = ∇ ( − v ⊠ 1 + 1 ⊠ v ) = − v ⊗ 1 + 1 ⊗ v = − v + v = 0 {\displaystyle {\begin{aligned}(\nabla \circ (S\boxtimes \mathrm {id} )\circ \Delta )(v)&=(\nabla \circ (S\boxtimes \mathrm {id} ))(v\boxtimes 1+1\boxtimes v)\\&=\nabla (-v\boxtimes 1+1\boxtimes v)\\&=-v\otimes 1+1\otimes v\\&=-v+v\\&=0\end{aligned}}} Recall that ( η ∘ ϵ ) ( k ) = η ( k ) = k {\displaystyle (\eta \circ \epsilon )(k)=\eta (k)=k} and that ( η ∘ ϵ ) ( x ) = η ( 0 ) = 0 {\displaystyle (\eta \circ \epsilon )(x)=\eta (0)=0} for any x ∈ T V {\displaystyle x\in TV} that is not in K . {\displaystyle K.} One may proceed in a similar manner, by homomorphism, verifying that the antipode inserts the appropriate cancellative signs in the shuffle, starting with the compatibility condition on T 2 V {\displaystyle T^{2}V} and proceeding by induction. == Cofree cocomplete coalgebra == One may define a different coproduct on the tensor algebra, simpler than the one given above. It is given by Δ ( v 1 ⊗ ⋯ ⊗ v k ) := ∑ j = 0 k ( v 0 ⊗ ⋯ ⊗ v j ) ⊠ ( v j + 1 ⊗ ⋯ ⊗ v k + 1 ) {\displaystyle \Delta (v_{1}\otimes \dots \otimes v_{k}):=\sum _{j=0}^{k}(v_{0}\otimes \dots \otimes v_{j})\boxtimes (v_{j+1}\otimes \dots \otimes v_{k+1})} Here, as before, one uses the notational trick v 0 = v k + 1 = 1 ∈ K {\displaystyle v_{0}=v_{k+1}=1\in K} (recalling that v ⊗ 1 = v {\displaystyle v\otimes 1=v} trivially). This coproduct gives rise to a coalgebra. It describes a coalgebra that is dual to the algebra structure on T(V∗), where V∗ denotes the dual vector space of linear maps V → F. In the same way that the tensor algebra is a free algebra, the corresponding coalgebra is termed cocomplete co-free. With the usual product this is not a bialgebra. It can be turned into a bialgebra with the product v i ⋅ v j = ( i , j ) v i + j {\displaystyle v_{i}\cdot v_{j}=(i,j)v_{i+j}} where (i,j) denotes the binomial coefficient for ( i + j i ) {\displaystyle {\tbinom {i+j}{i}}} . This bialgebra is known as the divided power Hopf algebra. The difference between this, and the other coalgebra is most easily seen in the T 2 V {\displaystyle T^{2}V} term. 
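Digressing briefly before the comparison: the inductive verification of the antipode axiom mentioned above can also be carried out by brute force on pure tensors. A self-contained sketch (the helper name is ours) checks ∇∘(S⊠id)∘Δ = η∘ε, whose value must vanish on every word of length ≥ 1:

```python
# Brute-force check of the antipode axiom on a pure tensor v_1 ⊗ ... ⊗ v_m:
# sum, over the coproduct terms, of (antipode of left factor) * (right factor).
from itertools import combinations
from collections import defaultdict

def antipode_axiom(word):
    out = defaultdict(int)
    pos = range(len(word))
    for p in range(len(word) + 1):
        for left in combinations(pos, p):
            l = tuple(word[i] for i in left)
            r = tuple(word[i] for i in pos if i not in left)
            out[l[::-1] + r] += (-1) ** len(l)   # S reverses the word, sign (-1)^len
    return {w: c for w, c in out.items() if c != 0}

print(antipode_axiom(('v',)))             # {}  (-v + v = 0)
print(antipode_axiom(('v', 'w', 'u')))    # {}  (all eight terms cancel)
```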
Returning to the comparison of the two coproducts: for the simpler one, one has that Δ(v ⊗ w) = 1 ⊠ (v ⊗ w) + v ⊠ w + (v ⊗ w) ⊠ 1 for v, w ∈ V, which is clearly missing the shuffled term w ⊠ v, as compared to before.

== See also ==
Braided vector space
Braided Hopf algebra
Monoidal category
Multilinear algebra
Fock space
Wikipedia/Tensor_algebra
The fundamental theorem of algebra, also called d'Alembert's theorem or the d'Alembert–Gauss theorem, states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero. Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed. The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division. Despite its name, it is not fundamental for modern algebra; it was named when algebra was synonymous with the theory of equations. == History == Peter Roth, in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", where "incomplete" means that at least one coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation x 4 = 4 x − 3 , {\displaystyle x^{4}=4x-3,} although incomplete, has four solutions (counting multiplicities): 1 (twice), − 1 + i 2 , {\displaystyle -1+i{\sqrt {2}},} and − 1 − i 2 . {\displaystyle -1-i{\sqrt {2}}.} As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type x4 + a4 (with a real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial x4 − 4x3 + 2x2 + 4x + 4, but he got a letter from Euler in 1742 in which it was shown that this polynomial is equal to ( x 2 − ( 2 + α ) x + 1 + 7 + α ) ( x 2 − ( 2 − α ) x + 1 + 7 − α ) , {\displaystyle \left(x^{2}-(2+\alpha )x+1+{\sqrt {7}}+\alpha \right)\left(x^{2}-(2-\alpha )x+1+{\sqrt {7}}-\alpha \right),} with α = 4 + 2 7 . {\displaystyle \alpha ={\sqrt {4+2{\sqrt {7}}}}.} Euler also pointed out that x 4 + a 4 = ( x 2 + a 2 ⋅ x + a 2 ) ( x 2 − a 2 ⋅ x + a 2 ) . {\displaystyle x^{4}+a^{4}=\left(x^{2}+a{\sqrt {2}}\cdot x+a^{2}\right)\left(x^{2}-a{\sqrt {2}}\cdot x+a^{2}\right).} A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it assumed implicitly a theorem (now known as Puiseux's theorem), which would not be proved until more than a century later and using the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts assumed implicitly Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b. 
In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z). At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood's proof had an algebraic gap. The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981). The first rigorous proof was published by Argand, an amateur mathematician, in 1806 (and revisited in 1813); it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849. The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it. None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981. Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice). However, Fred Richman proved a reformulated version of the theorem that does work. == Equivalent statements == There are several equivalent formulations of the theorem: Every univariate polynomial of positive degree with real coefficients has at least one complex root. Every univariate polynomial of positive degree with complex coefficients has at least one complex root. This implies immediately the previous assertion, as real numbers are also complex numbers. The converse results from the fact that one gets a polynomial with real coefficients by taking the product of a polynomial and its complex conjugate (obtained by replacing each coefficient with its complex conjugate). A root of this product is either a root of the given polynomial, or of its conjugate; in the latter case, the conjugate of this root is a root of the given polynomial. Every univariate polynomial of positive degree n with complex coefficients can be factorized as c ( x − r 1 ) ⋯ ( x − r n ) , {\displaystyle c(x-r_{1})\cdots (x-r_{n}),} where c , r 1 , … , r n {\displaystyle c,r_{1},\ldots ,r_{n}} are complex numbers. The n complex numbers r 1 , … , r n {\displaystyle r_{1},\ldots ,r_{n}} are the roots of the polynomial. If a root appears in several factors, it is a multiple root, and the number of its occurrences is, by definition, the multiplicity of the root. 
The proof that this statement results from the previous ones is done by recursion on n: when a root r_1 has been found, the polynomial division by x − r_1 provides a polynomial of degree n − 1 whose roots are the other roots of the given polynomial. The next two statements are equivalent to the previous ones, although they do not involve any nonreal complex number. These statements can be proved from previous factorizations by remarking that, if r is a non-real root of a polynomial with real coefficients, its complex conjugate r̄ is also a root, and (x − r)(x − r̄) is a polynomial of degree two with real coefficients (this is the complex conjugate root theorem). Conversely, if one has a factor of degree two, the quadratic formula gives a root. Every univariate polynomial with real coefficients of degree larger than two has a factor of degree two with real coefficients. Every univariate polynomial with real coefficients of positive degree can be factored as c p_1 ⋯ p_k, where c is a real number and each p_i is a monic polynomial of degree at most two with real coefficients. Moreover, one can suppose that the factors of degree two do not have any real root.

== Proofs ==
All proofs below involve some mathematical analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This requirement has led to the remark that the Fundamental Theorem of Algebra is neither fundamental nor a theorem of algebra. Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This lemma is enough to establish the general case because, given a non-constant polynomial p with complex coefficients, the polynomial q = p p̄ has only real coefficients, and, if z is a root of q, then either z or its conjugate is a root of p. Here, p̄ is the polynomial obtained by replacing each coefficient of p with its complex conjugate; the roots of p̄ are exactly the complex conjugates of the roots of p. Many non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that a polynomial function p(z) of degree n whose dominant coefficient is 1 behaves like z^n when |z| is large enough. More precisely, there is some positive real number R such that (1/2)|z^n| < |p(z)| < (3/2)|z^n| when |z| > R.

=== Real-analytic proofs ===
Even without using complex numbers, it is possible to show that a real polynomial p(x) with p(0) ≠ 0 and degree n > 2 can always be divided by some quadratic polynomial with real coefficients. In other words, for some real values a and b, the coefficients of the linear remainder on dividing p(x) by x^2 − ax − b simultaneously become zero:
p(x) = (x^2 − ax − b) q(x) + x R_{p(x)}(a, b) + S_{p(x)}(a, b),
where q(x) is a polynomial of degree n − 2. The coefficients R_{p(x)}(a, b) and S_{p(x)}(a, b) are independent of x and completely defined by the coefficients of p(x). In terms of representation, R_{p(x)}(a, b) and S_{p(x)}(a, b) are bivariate polynomials in a and b.
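The remainder pair (R, S) is easy to produce symbolically. A small SymPy sketch (the sample polynomial is ours):

```python
# Dividing a sample p(x) by x^2 - a*x - b: the linear remainder R*x + S has
# coefficients R, S that are polynomials in a and b, as claimed above.
from sympy import symbols, div, Poly, expand

x, a, b = symbols('x a b')
p = x**4 + 2*x**3 - x + 3
q, rem = div(p, x**2 - a*x - b, x)       # p = (x^2 - a*x - b)*q + rem
R, S = Poly(expand(rem), x).all_coeffs() # rem = R*x + S
print(R)                                  # a**3 + 2*a**2 + 2*a*b + 2*b - 1
print(S)                                  # a**2*b + 2*a*b + b**2 + 3
```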
In the flavor of Gauss's first (incomplete) proof of this theorem from 1799, the key is to show that for any sufficiently large negative value of b, all the roots of both R_{p(x)}(a, b) and S_{p(x)}(a, b) in the variable a are real and alternate with each other (the interlacing property). Utilizing a Sturm-like chain that contains R_{p(x)}(a, b) and S_{p(x)}(a, b) as consecutive terms, interlacing in the variable a can be shown for all consecutive pairs in the chain whenever b has a sufficiently large negative value. As S_p(a, b = 0) = p(0) has no roots, interlacing of R_{p(x)}(a, b) and S_{p(x)}(a, b) in the variable a fails at b = 0. Topological arguments can be applied on the interlacing property to show that the locus of the roots of R_{p(x)}(a, b) and S_{p(x)}(a, b) must intersect for some real-valued a and b < 0.

=== Complex-analytic proofs ===
Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The maximum modulus principle applied to 1/p(z) implies that p(z0) = 0. In other words, z0 is a zero of p(z). A variation of this proof does not require the maximum modulus principle (in fact, a similar argument also gives a proof of the maximum modulus principle for holomorphic functions). Continuing from before the principle was invoked, if a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0, we can write
p(z) = a + c_k (z − z0)^k + c_{k+1} (z − z0)^{k+1} + ⋯ + c_n (z − z0)^n.
Here, the c_j are simply the coefficients of the polynomial z ↦ p(z + z0) after expansion, and k is the index of the first non-zero coefficient following the constant term. For z sufficiently close to z0 this function has behavior asymptotically similar to the simpler polynomial q(z) = a + c_k (z − z0)^k. More precisely, the function (p(z) − q(z))/(z − z0)^{k+1} is bounded:
|(p(z) − q(z))/(z − z0)^{k+1}| ≤ M
for some positive constant M in some neighborhood of z0. Therefore, if we define θ0 = (arg(a) + π − arg(c_k))/k and let z = z0 + r e^{iθ0} trace a circle of radius r > 0 around z0, then for any sufficiently small r (so that the bound M holds), we see that
|p(z)| ≤ |q(z)| + r^{k+1} |(p(z) − q(z))/r^{k+1}|
≤ |a + (−1) c_k r^k e^{i(arg(a) − arg(c_k))}| + M r^{k+1}
= |a| − |c_k| r^k + M r^{k+1}.
When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, contradicting the definition of z0. Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|.
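The direction θ0 constructed above is easy to test numerically. A sketch (the sample polynomial and base point are arbitrary choices of ours):

```python
# Descend from z0 along theta_0 = (arg(a) + pi - arg(c_k))/k: for small r,
# |p(z0 + r e^{i theta_0})| drops below |p(z0)|.
import math
import numpy as np

p = np.poly1d([1.0, 0.0, 0.0, 1.0])      # p(z) = z^3 + 1
z0 = 1.0 + 0.5j
a = np.polyval(p, z0)                    # p(z0) != 0 here

# Taylor coefficients c_j = p^(j)(z0)/j! for j >= 1; k = first nonzero index.
c = [np.polyval(np.polyder(p, j), z0) / math.factorial(j)
     for j in range(1, p.order + 1)]
k = 1 + next(j for j, cj in enumerate(c) if abs(cj) > 1e-12)
theta0 = (np.angle(a) + np.pi - np.angle(c[k - 1])) / k

for r in (0.2, 0.1, 0.05):
    z = z0 + r * np.exp(1j * theta0)
    print(r, abs(np.polyval(p, z)) < abs(a))   # True once r is small enough
```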
Another analytic proof can be obtained along this line of thought, observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0.

Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number
(1/2πi) ∫_{c(r)} p′(z)/p(z) dz,
where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is
(1/2πi) ∫_{c(r)} (p′(z)/p(z) − n/z) dz = (1/2πi) ∫_{c(r)} (z p′(z) − n p(z))/(z p(z)) dz.
The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n, and so N = n.

Another complex-analytic proof can be given by combining linear algebra with Cauchy's theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue. The proof of the latter statement is by contradiction. Let A be a complex square matrix of size n > 0 and let I_n be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function R(z) = (z I_n − A)^{−1}, which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy's theorem implies that ∫_{c(r)} R(z) dz = 0. On the other hand, R(z) expanded as a geometric series gives
R(z) = z^{−1} (I_n − z^{−1} A)^{−1} = z^{−1} Σ_{k=0}^{∞} (1/z^k) A^k.
This formula is valid outside the closed disc of radius ‖A‖ (the operator norm of A). Let r > ‖A‖. Then
∫_{c(r)} R(z) dz = Σ_{k=0}^{∞} ∫_{c(r)} (dz/z^{k+1}) A^k = 2πi I_n
(in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue. Finally, Rouché's theorem gives perhaps the shortest proof of the theorem.
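The contour integral in the argument-principle proof can be approximated numerically. A minimal sketch (sample polynomial and radius are ours):

```python
# (1/2*pi*i) * integral over |z| = r of p'(z)/p(z) dz counts the zeros inside;
# for r larger than every root modulus it returns n = deg p.
import numpy as np

p = np.poly1d([1.0, 0.0, -2.0, 2.0])     # z^3 - 2z + 2, degree n = 3
dp = np.polyder(p)

t = np.linspace(0.0, 2.0 * np.pi, 20001)
z = 10.0 * np.exp(1j * t)                # c(r) with r = 10 > every |root|
f = np.polyval(dp, z) / np.polyval(p, z) * 1j * z   # p'/p dz, with dz = i z dt
N = np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(t)) / (2j * np.pi)
print(N)                                  # ~ 3 + 0j
```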
=== Topological proofs ===
Suppose the minimum of |p(z)| on the whole complex plane is achieved at z0; it was seen in the proof which uses Liouville's theorem that such a number must exist. We can write p(z) as a polynomial in z − z0: there is some natural number k and there are some complex numbers c_k, c_{k+1}, …, c_n such that c_k ≠ 0 and
p(z) = p(z0) + c_k (z − z0)^k + c_{k+1} (z − z0)^{k+1} + ⋯ + c_n (z − z0)^n.
If p(z0) is nonzero, it follows that if a is a kth root of −p(z0)/c_k and if t is positive and sufficiently small, then |p(z0 + ta)| < |p(z0)|, which is impossible, since |p(z0)| is the minimum of |p| on the whole complex plane.

For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle |z| = R into a closed loop, a curve P(R). We will consider what happens to the winding number of P(R) at the extremes when R is very large and when R = 0. When R is a sufficiently large number, then the leading term z^n of p(z) dominates all other terms combined; in other words,
|z^n| > |a_{n−1} z^{n−1} + ⋯ + a_0|.
When z traverses the circle R e^{iθ} once counter-clockwise (0 ≤ θ ≤ 2π), then z^n = R^n e^{inθ} winds n times counter-clockwise (0 ≤ nθ ≤ 2πn) around the origin (0,0), and P(R) likewise. At the other extreme, with |z| = 0, the curve P(0) is merely the single point p(0), which must be nonzero because p(z) is never zero. Thus p(0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P(0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously. At some R the winding number must change. But that can only happen if the curve P(R) includes the origin (0,0) for some R. But then for some z on that circle |z| = R we have p(z) = 0, contradicting our original assumption. Therefore, p(z) has at least one zero.
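The winding-number behavior at the two extremes is easy to observe numerically. A sketch (sample polynomial ours):

```python
# Winding number of P(R) = p({|z| = R}) around the origin: 0 for small R,
# n for large R, as used in the proof above.
import numpy as np

p = np.poly1d([1.0, 0.0, -2.0, 2.0])     # degree n = 3, p(0) = 2 != 0
t = np.linspace(0.0, 2.0 * np.pi, 4001)

def winding(R):
    loop = np.polyval(p, R * np.exp(1j * t))
    angles = np.unwrap(np.angle(loop))   # total change of argument along P(R)
    return round((angles[-1] - angles[0]) / (2 * np.pi))

print(winding(0.1), winding(10.0))       # 0 3
```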
=== Algebraic proofs ===
These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases): every polynomial with an odd degree and real coefficients has some real root; every non-negative real number has a square root. The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R(√−1) is algebraically closed.

==== By induction ====
As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2^k divides the degree n of p(z). Let a be the coefficient of z^n in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, …, zn in F such that
p(z) = a (z − z1)(z − z2) ⋯ (z − zn).
If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2^k m (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2^{k−1} m′ with m′ odd. For a real number t, define:
q_t(z) = ∏_{1 ≤ i < j ≤ n} (z − z_i − z_j − t z_i z_j).
Then the coefficients of q_t(z) are symmetric polynomials in the z_i with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a_1, a_2, …, (−1)^n a_n. So q_t(z) has in fact real coefficients. Furthermore, the degree of q_t(z) is n(n − 1)/2 = 2^{k−1} m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, q_t has at least one complex root; in other words, z_i + z_j + t z_i z_j is complex for two distinct elements i and j from {1, …, n}. Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that z_i + z_j + t z_i z_j and z_i + z_j + s z_i z_j are complex (for the same i and j). So, both z_i + z_j and z_i z_j are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that z_i and z_j are complex numbers, since they are roots of the quadratic polynomial z^2 − (z_i + z_j) z + z_i z_j. Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics). For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since (x^2 + 1)^k f(x) has a root, where k is chosen so that deg(f) + 2k ∈ I).

==== From Galois theory ====
Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension. Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R; thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of K/C with [M:C] = 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof.

=== Geometric proofs ===
There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A.
Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p(z) without zeros implies the existence of a flat Riemannian metric over the sphere S2. This leads to a contradiction since the sphere is not flat. A Riemannian surface (M, g) is said to be flat if its Gaussian curvature, which we denote by Kg, is identically null. Now, the Gauss–Bonnet theorem, when applied to the sphere S2, claims that ∫ S 2 K g = 4 π , {\displaystyle \int _{\mathbf {S} ^{2}}K_{g}=4\pi ,} which proves that the sphere is not flat. Let us now assume that n > 0 and p ( z ) = a 0 + a 1 z + ⋯ + a n z n ≠ 0 {\displaystyle p(z)=a_{0}+a_{1}z+\cdots +a_{n}z^{n}\neq 0} for each complex number z. Let us define p ∗ ( z ) = z n p ( 1 z ) = a 0 z n + a 1 z n − 1 + ⋯ + a n . {\displaystyle p^{*}(z)=z^{n}p\left({\tfrac {1}{z}}\right)=a_{0}z^{n}+a_{1}z^{n-1}+\cdots +a_{n}.} Obviously, p*(z) ≠ 0 for all z in C. Consider the polynomial f(z) = p(z)p*(z). Then f(z) ≠ 0 for each z in C. Furthermore, f ( 1 w ) = p ( 1 w ) p ∗ ( 1 w ) = w − 2 n p ∗ ( w ) p ( w ) = w − 2 n f ( w ) . {\displaystyle f({\tfrac {1}{w}})=p\left({\tfrac {1}{w}}\right)p^{*}\left({\tfrac {1}{w}}\right)=w^{-2n}p^{*}(w)p(w)=w^{-2n}f(w).} We can use this functional equation to prove that g, given by g = 1 | f ( w ) | 2 n | d w | 2 {\displaystyle g={\frac {1}{|f(w)|^{\frac {2}{n}}}}\,|dw|^{2}} for w in C, and g = 1 | f ( 1 w ) | 2 n | d ( 1 w ) | 2 {\displaystyle g={\frac {1}{\left|f\left({\tfrac {1}{w}}\right)\right|^{\frac {2}{n}}}}\left|d\left({\tfrac {1}{w}}\right)\right|^{2}} for w ∈ S2\{0}, is a well defined Riemannian metric over the sphere S2 (which we identify with the extended complex plane C ∪ {∞}). Now, a simple computation shows that ∀ w ∈ C : 1 | f ( w ) | 1 n K g = 1 n Δ log ⁡ | f ( w ) | = 1 n Δ Re ( log ⁡ f ( w ) ) = 0 , {\displaystyle \forall w\in \mathbf {C} :\qquad {\frac {1}{|f(w)|^{\frac {1}{n}}}}K_{g}={\frac {1}{n}}\Delta \log |f(w)|={\frac {1}{n}}\Delta {\text{Re}}(\log f(w))=0,} since the real part of an analytic function is harmonic. This proves that Kg = 0. == Corollaries == Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed, it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers. Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers: The field of complex numbers is the algebraic closure of the field of real numbers. Every polynomial in one variable z with complex coefficients is the product of a complex constant and polynomials of the form z + a with a complex. Every polynomial in one variable x with real coefficients can be uniquely written as the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x2 + ax + b with a and b real and a2 − 4b < 0 (which is the same thing as saying that the polynomial x2 + ax + b has no real roots). (By the Abel–Ruffini theorem, the real numbers a and b are not necessarily expressible in terms of the coefficients of the polynomial, the basic arithmetic operations and the extraction of n-th roots.) This implies that the number of non-real complex roots is always even and remains even when counted with their multiplicity. 
Every rational function in one variable x, with real coefficients, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)n (where n is a natural number, and a and b are real numbers), and rational functions of the form (ax + b)/(x2 + cx + d)n (where n is a natural number, and a, b, c, and d are real numbers such that c2 − 4d < 0). A corollary of this is that every rational function in one variable and real coefficients has an elementary primitive. Every algebraic extension of the real field is isomorphic either to the real field or to the complex field. == Bounds on the zeros of a polynomial == While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simplest result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial z n + a n − 1 z n − 1 + ⋯ + a 1 z + a 0 {\displaystyle z^{n}+a_{n-1}z^{n-1}+\cdots +a_{1}z+a_{0}} satisfy an inequality |ζ| ≤ R∞, where R ∞ := 1 + max { | a 0 | , … , | a n − 1 | } . {\displaystyle R_{\infty }:=1+\max\{|a_{0}|,\ldots ,|a_{n-1}|\}.} As stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm of the n-vector of coefficients a := ( a 0 , a 1 , … , a n − 1 ) , {\displaystyle a:=(a_{0},a_{1},\ldots ,a_{n-1}),} that is |ζ| ≤ Rp, where Rp is precisely the q-norm of the 2-vector ( 1 , ‖ a ‖ p ) , {\displaystyle (1,\|a\|_{p}),} q being the conjugate exponent of p, 1 p + 1 q = 1 , {\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1,} for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by R 1 := max { 1 , ∑ 0 ≤ k < n | a k | } , {\displaystyle R_{1}:=\max \left\{1,\sum _{0\leq k<n}|a_{k}|\right\},} R p := [ 1 + ( ∑ 0 ≤ k < n | a k | p ) q p ] 1 q , {\displaystyle R_{p}:=\left[1+\left(\sum _{0\leq k<n}|a_{k}|^{p}\right)^{\frac {q}{p}}\right]^{\frac {1}{q}},} for 1 < p < ∞, and in particular R 2 := ∑ 0 ≤ k ≤ n | a k | 2 {\displaystyle R_{2}:={\sqrt {\sum _{0\leq k\leq n}|a_{k}|^{2}}}} (where we define an to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial). The case of a generic polynomial of degree n, P ( z ) := a n z n + a n − 1 z n − 1 + ⋯ + a 1 z + a 0 , {\displaystyle P(z):=a_{n}z^{n}+a_{n-1}z^{n-1}+\cdots +a_{1}z+a_{0},} is of course reduced to the case of a monic, dividing all coefficients by an ≠ 0. Also, in case that 0 is not a root, i.e. a0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on 1 ζ {\displaystyle {\tfrac {1}{\zeta }}} , that is, the roots of a 0 z n + a 1 z n − 1 + ⋯ + a n − 1 z + a n . {\displaystyle a_{0}z^{n}+a_{1}z^{n-1}+\cdots +a_{n-1}z+a_{n}.} Finally, the distance | ζ − ζ 0 | {\displaystyle |\zeta -\zeta _{0}|} from the roots ζ to any point ζ 0 {\displaystyle \zeta _{0}} can be estimated from below and above, seeing ζ − ζ 0 {\displaystyle \zeta -\zeta _{0}} as zeros of the polynomial P ( z + ζ 0 ) {\displaystyle P(z+\zeta _{0})} , whose coefficients are the Taylor expansion of P(z) at z = ζ 0 . 
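These a priori bounds are easy to compare numerically with actual root moduli. A sketch (the sample polynomial is an arbitrary choice of ours):

```python
# A priori bounds R_1, R_2, R_inf on root moduli for a sample monic
# polynomial, compared with the true maximal root modulus.
import numpy as np

a = np.array([1.0, -2.0, 3.0, -1.0, 5.0])    # z^4 - 2z^3 + 3z^2 - z + 5
tail = np.abs(a[1:])                          # |a_3|, ..., |a_0|
R_inf = 1 + tail.max()                        # 1 + max |a_k|
R_1 = max(1.0, tail.sum())                    # max{1, sum |a_k|}
R_2 = np.sqrt(1 + np.sum(tail**2))            # sqrt(sum |a_k|^2), with a_n = 1

print("max |root| =", np.abs(np.roots(a)).max())
print("R_1 =", R_1, " R_2 =", R_2, " R_inf =", R_inf)
```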
Let ζ be a root of the polynomial z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0; in order to prove the inequality |ζ| ≤ R_p we can assume, of course, |ζ| > 1. Writing the equation as
−ζ^n = a_{n−1} ζ^{n−1} + ⋯ + a_1 ζ + a_0,
and using Hölder's inequality, we find
|ζ|^n ≤ ‖a‖_p ‖(ζ^{n−1}, …, ζ, 1)‖_q.
Now, if p = 1, this is
|ζ|^n ≤ ‖a‖_1 max{|ζ|^{n−1}, …, |ζ|, 1} = ‖a‖_1 |ζ|^{n−1},
thus |ζ| ≤ max{1, ‖a‖_1}. In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have
|ζ|^n ≤ ‖a‖_p (|ζ|^{q(n−1)} + ⋯ + |ζ|^q + 1)^{1/q} = ‖a‖_p ((|ζ|^{qn} − 1)/(|ζ|^q − 1))^{1/q} ≤ ‖a‖_p (|ζ|^{qn}/(|ζ|^q − 1))^{1/q},
thus
|ζ|^{nq} ≤ ‖a‖_p^q |ζ|^{qn}/(|ζ|^q − 1),
and simplifying, |ζ|^q ≤ 1 + ‖a‖_p^q. Therefore |ζ| ≤ ‖(1, ‖a‖_p)‖_q = R_p holds for all 1 ≤ p ≤ ∞.

== See also ==
Weierstrass factorization theorem, a generalization of the theorem to other entire functions
Eilenberg–Niven theorem, a generalization of the theorem to polynomials with quaternionic coefficients and variables
Hilbert's Nullstellensatz, a generalization to several variables of the assertion that complex roots exist
Bézout's theorem, a generalization to several variables of the assertion on the number of roots

== References ==
=== Citations ===
=== Historic sources ===
Cauchy, Augustin-Louis (1821), Cours d'Analyse de l'École Royale Polytechnique, 1ère partie: Analyse Algébrique, Paris: Éditions Jacques Gabay (published 1992), ISBN 978-2-87647-053-8 (tr. Course on Analysis of the Royal Polytechnic Academy, part 1: Algebraic Analysis)
Euler, Leonhard (1751), "Recherches sur les racines imaginaires des équations", Histoire de l'Académie Royale des Sciences et des Belles-Lettres de Berlin, vol. 5, Berlin, pp. 222–288, archived from the original on 2008-12-24, retrieved 2008-01-28. English translation: Euler, Leonhard (1751), "Investigations on the Imaginary Roots of Equations" (PDF), Histoire de l'Académie Royale des Sciences et des Belles-Lettres de Berlin, vol. 5, Berlin, pp. 222–288
Gauss, Carl Friedrich (1799), Demonstratio nova theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse, Helmstedt: C. G. Fleckeisen (tr. New proof of the theorem that every integral rational algebraic function of one variable can be resolved into real factors of the first or second degree).
Gauss, Carl Friedrich (1866), Carl Friedrich Gauss Werke, vol.
Band III, Königlichen Gesellschaft der Wissenschaften zu Göttingen Demonstratio nova theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse (1799), pp. 1–31., p. 1, at Google Books – first proof. Demonstratio nova altera theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse (1815 Dec), pp. 32–56., p. 32, at Google Books – second proof. Theorematis de resolubilitate functionum algebraicarum integrarum in factores reales demonstratio tertia Supplementum commentationis praecedentis (1816 Jan), pp. 57–64., p. 57, at Google Books – third proof. Beiträge zur Theorie der algebraischen Gleichungen (1849 Juli), pp. 71–103., p. 71, at Google Books – fourth proof. Kneser, Hellmuth (1940), "Der Fundamentalsatz der Algebra und der Intuitionismus", Mathematische Zeitschrift, vol. 46, pp. 287–302, doi:10.1007/BF01181442, ISSN 0025-5874, S2CID 120861330 (The Fundamental Theorem of Algebra and Intuitionism). Kneser, Martin (1981), "Ergänzung zu einer Arbeit von Hellmuth Kneser über den Fundamentalsatz der Algebra", Mathematische Zeitschrift, vol. 177, no. 2, pp. 285–287, doi:10.1007/BF01214206, ISSN 0025-5874, S2CID 122310417 (tr. An extension of a work of Hellmuth Kneser on the Fundamental Theorem of Algebra). Ostrowski, Alexander (1920), "Über den ersten und vierten Gaußschen Beweis des Fundamental-Satzes der Algebra", Carl Friedrich Gauss Werke Band X Abt. 2 (tr. On the first and fourth Gaussian proofs of the Fundamental Theorem of Algebra). Weierstraß, Karl (1891), "Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen", Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin, pp. 1085–1101 (tr. New proof of the theorem that every integral rational function of one variable can be represented as a product of linear functions of the same variable). === Recent literature === Almira, José María; Romero, Alfonso (2007), "Yet another application of the Gauss-Bonnet Theorem for the sphere", Bulletin of the Belgian Mathematical Society, vol. 14, pp. 341–342, MR 2341569 Almira, José María; Romero, Alfonso (2012), "Some Riemannian geometric proofs of the Fundamental Theorem of Algebra" (PDF), Differential Geometry – Dynamical Systems, vol. 14, pp. 1–4, MR 2914638 de Oliveira, Oswaldo Rio Branco (2011), "The Fundamental Theorem of Algebra: an elementary and direct proof", The Mathematical Intelligencer, vol. 33, no. 2, pp. 1–2, doi:10.1007/s00283-011-9199-2, MR 2813254, S2CID 5243991 de Oliveira, Oswaldo Rio Branco (2012), "The Fundamental Theorem of Algebra: from the four basic operations", The American Mathematical Monthly, vol. 119, no. 9, pp. 753–758, arXiv:1110.0165, doi:10.4169/amer.math.monthly.119.09.753, MR 2990933, S2CID 218548926 Fine, Benjamin; Rosenberger, Gerhard (1997), The Fundamental Theorem of Algebra, Undergraduate Texts in Mathematics, Berlin: Springer-Verlag, ISBN 978-0-387-94657-3, MR 1454356 Gersten, Stephen M.; Stallings, John R. (1988), "On Gauss's First Proof of the Fundamental Theorem of Algebra", Proceedings of the American Mathematical Society, vol. 103, no. 1, pp. 
331–332, doi:10.1090/S0002-9939-1988-0938691-3, ISSN 0002-9939, JSTOR 2047574, MR 0938691 Gilain, Christian (1991), "Sur l'histoire du théorème fondamental de l'algèbre: théorie des équations et calcul intégral", Archive for History of Exact Sciences, vol. 42, no. 2, pp. 91–136, doi:10.1007/BF00496870, ISSN 0003-9519, S2CID 121468210 (tr. On the history of the fundamental theorem of algebra: theory of equations and integral calculus.) Netto, Eugen; Le Vavasseur, Raymond (1916), "Les fonctions rationnelles §80–88: Le théorème fondamental", in Meyer, François; Molk, Jules (eds.), Encyclopédie des Sciences Mathématiques Pures et Appliquées, tome I, vol. 2, Éditions Jacques Gabay (published 1992), ISBN 978-2-87647-101-6 (tr. The rational functions §80–88: the fundamental theorem). Remmert, Reinhold (1991), "The Fundamental Theorem of Algebra", in Ebbinghaus, Heinz-Dieter; Hermes, Hans; Hirzebruch, Friedrich (eds.), Numbers, Graduate Texts in Mathematics 123, Berlin: Springer-Verlag, ISBN 978-0-387-97497-2 Shipman, Joseph (2007), "Improving the Fundamental Theorem of Algebra", Mathematical Intelligencer, vol. 29, no. 4, pp. 9–14, doi:10.1007/BF02986170, ISSN 0343-6993, S2CID 123089882 Smale, Steve (1981), "The Fundamental Theorem of Algebra and Complexity Theory", Bulletin of the American Mathematical Society, New Series, 4 (1): 1–36, doi:10.1090/S0273-0979-1981-14858-8 Smith, David Eugene (1959), A Source Book in Mathematics, Dover, ISBN 978-0-486-64690-9 Smithies, Frank (2000), "A forgotten paper on the fundamental theorem of algebra", Notes & Records of the Royal Society, vol. 54, no. 3, pp. 333–341, doi:10.1098/rsnr.2000.0116, ISSN 0035-9149, S2CID 145593806 Taylor, Paul (2 June 2007), Gauss's second proof of the fundamental theorem of algebra – English translation of Gauss's second proof. van der Waerden, Bartel Leendert (2003), Algebra, vol. I (7th ed.), Springer-Verlag, ISBN 978-0-387-40624-4 == External links == Algebra, fundamental theorem of at Encyclopaedia of Mathematics Fundamental Theorem of Algebra — a collection of proofs From the Fundamental Theorem of Algebra to Astrophysics: A "Harmonious" Path Gauss's first proof (in Latin) at Google Books Mizar system proof: http://mizar.org/version/current/html/polynom5.html#T74
Wikipedia/Fundamental_theorem_of_algebra
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then x 1 = x 0 − f ( x 0 ) f ′ ( x 0 ) {\displaystyle x_{1}=x_{0}-{\frac {f(x_{0})}{f'(x_{0})}}} is a better approximation of the root than x0. Geometrically, (x1, 0) is the x-intercept of the tangent of the graph of f at (x0, f(x0)): that is, the improved guess, x1, is the unique root of the linear approximation of f at the initial guess, x0. The process is repeated as x n + 1 = x n − f ( x n ) f ′ ( x n ) {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}} until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is the first in the class of Householder's methods, and is followed by Halley's method. The method can also be extended to complex functions and to systems of equations. == Description == The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by its tangent line near the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can be iterated. The best linear approximation to an arbitrary differentiable function f(x) near the point x = xn is the tangent line to the curve, with equation f ( x ) ≈ f ( x n ) + f ′ ( x n ) ( x − x n ) . {\displaystyle f(x)\approx f(x_{n})+f'(x_{n})(x-x_{n}).} The root of this linear function, the place where it intercepts the x-axis, can be taken as a closer approximate root xn+1: x n + 1 = x n − f ( x n ) f ′ ( x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.} The process can be started with any arbitrary initial guess x0, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge provided the initial guess is close enough to the unknown root and f′(x0) ≠ 0. Furthermore, for a root of multiplicity 1, the convergence is at least quadratic (see Rate of convergence) in some sufficiently small neighbourhood of the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in § Analysis below. Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f or its derivatives are computationally expensive to evaluate. == History == In the Old Babylonian period (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method, described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found in Heron of Alexandria's Metrica (1st–2nd century CE), and so it is often called Heron's method.
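In code, the iteration takes only a few lines. The "Code" section later in this article describes a Python implementation using f(x) = x2 − 2 with f′(x) = 2x, the initial guess x0 = 1, and a check that the denominator does not become too small; the listing below is a minimal sketch along those lines (the function name, tolerance, and iteration cap are illustrative choices):

    def newtons_method(f, f_prime, x0, epsilon=1e-12, max_iterations=50):
        # Return an approximate root of f, or None if the iteration fails.
        x = x0
        for _ in range(max_iterations):
            y_prime = f_prime(x)
            if abs(y_prime) < epsilon:
                return None              # derivative too small: the step would blow up
            x_next = x - f(x) / y_prime  # the Newton step
            if abs(x_next - x) < epsilon:
                return x_next            # iterates have stopped moving
            x = x_next
        return None

    # f(x) = x**2 - 2 with f'(x) = 2*x and x0 = 1 converges to sqrt(2) ≈ 1.414213562...
    print(newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))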
Jamshīd al-Kāshī used a method to solve the equation xP − N = 0, whose root is the P-th root of N, a method that was algebraically equivalent to Newton's method; a similar method was later found in Trigonometria Britannica, published by Henry Briggs in 1633. The method first appeared, in rough form, in Isaac Newton's work in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case. Newton may have derived his method from a similar, less precise method by the mathematician François Viète; however, the two methods are not the same. The essence of Viète's own method can be found in the work of the mathematician Sharaf al-Din al-Tusi. The Japanese mathematician Seki Kōwa used a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing. Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gave the generalization to systems of two equations and noted that Newton's method can be used for solving optimization problems by setting the gradient to zero. Arthur Cayley in 1879, in The Newton–Fourier imaginary problem, was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions. == Practical considerations == Newton's method is a powerful technique: if the derivative of the function at the root is nonzero, then the convergence is at least quadratic, and as the method converges on the root, the difference between the root and the approximation is squared (so the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method. === Difficulty in calculating the derivative of a function === Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function.
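As a rough sketch of that idea (the step size h, tolerance, and helper name below are illustrative assumptions, not taken from the text), the derivative in the Newton step can be replaced by the slope through two nearby points:

    def newton_fd(f, x0, h=1e-6, tol=1e-10, max_iter=50):
        # Newton-style iteration with a central-difference estimate of f'.
        x = x0
        for _ in range(max_iter):
            slope = (f(x + h) - f(x - h)) / (2 * h)  # slope through two nearby points
            x_next = x - f(x) / slope
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    print(newton_fd(lambda x: x**2 - 2, 1.0))  # approximately 1.41421356...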
Using this approximation would result in something like the secant method, whose convergence is slower than that of Newton's method. === Failure of the method to converge to the root === It is important to review the proof of quadratic convergence of Newton's method before implementing it. Specifically, one should review the assumptions made in the proof. When the method fails to converge, it is because the assumptions made in this proof are not met. For example, in some cases, if the first derivative is not well behaved in the neighborhood of a particular root, then it is possible that Newton's method will fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by the same technique (for instance, the multiplicity-aware step below, which multiplies the Newton step by a constant factor m, is equivalent to a form of over-relaxation). In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root-finding method. === Slow convergence for roots of multiplicity greater than 1 === If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together, it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity m of the root is known, the following modified algorithm preserves the quadratic convergence rate: x n + 1 = x n − m f ( x n ) f ′ ( x n ) . {\displaystyle x_{n+1}=x_{n}-m{\frac {f(x_{n})}{f'(x_{n})}}.} This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity m of the root is not known, it is possible to estimate m after carrying out one or two iterations, and then use that value to increase the rate of convergence. If the multiplicity m of the root is finite then g(x) = f(x)/f′(x) will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of g(x) recovers quadratic convergence in many cases, although it generally involves the second derivative of f(x). In a particularly simple case, if f(x) = xm then g(x) = x/m and Newton's method finds the root in a single iteration with x n + 1 = x n − g ( x n ) g ′ ( x n ) = x n − x n m 1 m = 0 . {\displaystyle x_{n+1}=x_{n}-{\frac {g(x_{n})}{g'(x_{n})}}=x_{n}-{\frac {\;{\frac {x_{n}}{m}}\;}{\frac {1}{m}}}=0\,.} === Slow convergence === The function f(x) = x2 has a root at 0. Since f is twice continuously differentiable with f″(0) ≠ 0, the theory (see § Analysis below) guarantees that Newton's method initialized sufficiently close to the root will converge. However, since the derivative f ′ is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by x n + 1 = x n − f ( x n ) f ′ ( x n ) = 1 2 x n . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}={\frac {1}{2}}x_{n}.} It is visible from this that Newton's method may be initialized anywhere and converges to zero, but only at a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved. The function f(x) = x + x4/3 also has a root at 0, where it is continuously differentiable.
Although the first derivative f ′ is nonzero at the root, the second derivative f ′′ is nonexistent there, so that quadratic convergence cannot be guaranteed. In fact the Newton iteration is given by x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n 4 / 3 3 + 4 x n 1 / 3 ≈ x n ⋅ x n 1 / 3 3 . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}={\frac {x_{n}^{4/3}}{3+4x_{n}^{1/3}}}\approx x_{n}\cdot {\frac {x_{n}^{1/3}}{3}}.} From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in the following tables, the left of which shows Newton's method applied to the above f(x) = x + x4/3 and the right of which shows Newton's method applied to f(x) = x + x2. The quadratic convergence in iteration shown on the right is illustrated by the orders of magnitude in the distance from the iterate to the true root (0,1,2,3,5,10,20,39,...) being approximately doubled from row to row. While the convergence on the left is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0,1,2,4,5,7,10,13,...). The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the function f(x) = x20 − 1 has a root at 1. Since f ′(1) ≠ 0 and f is smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root. === Convergence dependent on initialization === The function f(x) = x(1 + x2)−1/2 has a root at 0. The Newton iteration is given by x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n − x n ( 1 + x n 2 ) − 1 / 2 ( 1 + x n 2 ) − 3 / 2 = − x n 3 . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}(1+x_{n}^{2})^{-1/2}}{(1+x_{n}^{2})^{-3/2}}}=-x_{n}^{3}.} From this, it can be seen that there are three possible phenomena for a Newton iteration. If initialized strictly between ±1, the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at 1 or −1, the Newton iteration will oscillate endlessly between ±1; if initialized anywhere else, the Newton iteration will diverge. This same trichotomy occurs for f(x) = arctan x. In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the function f(x) = x(x2 − 1)(x − 3)e−(x − 1)2/2 has roots at −1, 0, 1, and 3. If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges to ∞; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges to −∞; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to 1. This kind of subtle dependence on initialization is not uncommon; it is frequently studied in the complex plane in the form of the Newton fractal. === Divergence even when initialization is close to the root === Consider the problem of finding a root of f(x) = x1/3. The Newton iteration is x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n − x n 1 / 3 1 3 x n − 2 / 3 = − 2 x n . 
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{1/3}}{{\frac {1}{3}}x_{n}^{-2/3}}}=-2x_{n}.} Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this case f is not differentiable at its root. In the above example, failure of convergence is reflected by the failure of f(xn) to get closer to zero as n increases, as well as by the fact that successive iterates are growing further and further apart. However, the function f(x) = x1/3e−x2 also has a root at 0. The Newton iteration is given by x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n ( 1 − 3 1 − 6 x n 2 ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}\left(1-{\frac {3}{1-6x_{n}^{2}}}\right).} In this example, where again f is not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with both xn + 1 − xn and f(xn) converging to zero. This is seen in the following table showing the iterates with initialization 1: Although the convergence of xn + 1 − xn in this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness of xn + 1 − xn and f(xn) might falsely identify a root. === Oscillatory behavior === It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a function f to oscillate between 0 and 1, it is only necessary that the tangent line to f at 0 intersects the x-axis at 1 and that the tangent line to f at 1 intersects the x-axis at 0. This is the case, for example, if f(x) = x3 − 2x + 2. For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 will asymptotically oscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root of f approximately equal to −1.76929. === Undefinedness of Newton's method === In some cases, it is not even possible to perform the Newton iteration. For example, if f(x) = x2 − 1, then the Newton iteration is defined by x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n − x n 2 − 1 2 x n = x n 2 + 1 2 x n . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{2}-1}{2x_{n}}}={\frac {x_{n}^{2}+1}{2x_{n}}}.} So Newton's method cannot be initialized at 0, since this would make x1 undefined. Geometrically, this is because the tangent line to f at 0 is horizontal (i.e. f ′(0) = 0), never intersecting the x-axis. Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued. If f has an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration. For example, the natural logarithm function f(x) = ln x has a root at 1, and is defined only for positive x. 
Newton's iteration in this case is given by x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n ( 1 − ln ⁡ x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}(1-\ln x_{n}).} So if the iteration is initialized at e, the next iterate is 0; if the iteration is initialized at a value larger than e, then the next iterate is negative. In either case, the method cannot be continued. == Analysis == Suppose that the function f has a zero at α, i.e., f(α) = 0, and f is differentiable in a neighborhood of α. If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence (xn) will converge to α. If f is continuously differentiable, its derivative is nonzero at α, and it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then: Δ x i + 1 = f ″ ( α ) 2 f ′ ( α ) ( Δ x i ) 2 + O ( Δ x i ) 3 , {\displaystyle \Delta x_{i+1}={\frac {f''(\alpha )}{2f'(\alpha )}}\left(\Delta x_{i}\right)^{2}+O\left(\Delta x_{i}\right)^{3}\,,} where Δ x i ≜ x i − α . {\displaystyle \Delta x_{i}\triangleq x_{i}-\alpha \,.} If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f′(α) = 0 and f″(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate 1/2. Alternatively, if f′(α) = 0 and f′(x) ≠ 0 for x ≠ α, x in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ Cr(U), then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly. However, even linear convergence is not guaranteed in pathological situations. In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood U+ of α, if f is twice differentiable in U+ and if f′ ≠ 0, f · f″ > 0 in U+, then, for each x0 in U+ the sequence xk is monotonically decreasing to α. === Proof of quadratic convergence for Newton's iterative method === According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about xn is f(α) = f(xn) + f′(xn)(α − xn) + R1, (1) where the Lagrange form of the Taylor series expansion remainder is R 1 = 1 2 ! f ″ ( ξ n ) ( α − x n ) 2 , {\displaystyle R_{1}={\frac {1}{2!}}f''(\xi _{n})\left(\alpha -x_{n}\right)^{2}\,,} where ξn lies between xn and α. Since α is the root, (1) becomes 0 = f(xn) + f′(xn)(α − xn) + (1/2)f″(ξn)(α − xn)². (2) Dividing equation (2) by f′(xn) and rearranging gives f(xn)/f′(xn) + (α − xn) = −(f″(ξn)/(2f′(xn)))(α − xn)². (3) Remembering that xn + 1 is defined by xn + 1 = xn − f(xn)/f′(xn), (4) one finds that α − x n + 1 ⏟ ε n + 1 = − f ″ ( ξ n ) 2 f ′ ( x n ) ( α − x n ⏟ ε n ) 2 .
{\displaystyle \underbrace {\alpha -x_{n+1}} _{\varepsilon _{n+1}}={\frac {-f''(\xi _{n})}{2f'(x_{n})}}{(\,\underbrace {\alpha -x_{n}} _{\varepsilon _{n}}\,)}^{2}\,.} That is, εn+1 = −(f″(ξn)/(2f′(xn)))·εn². (5) Taking the absolute value of both sides gives |εn+1| = (|f″(ξn)|/(2|f′(xn)|))·εn². (6) Equation (6) shows that the order of convergence is at least quadratic if the following conditions are satisfied: f′(x) ≠ 0 for all x ∈ I, where I is the interval [α − |ε0|, α + |ε0|]; f″(x) is continuous for all x ∈ I; and M |ε0| < 1, where M is given by M = 1 2 ( sup x ∈ I | f ″ ( x ) | ) ( sup x ∈ I 1 | f ′ ( x ) | ) . {\displaystyle M={\frac {1}{2}}\left(\sup _{x\in I}\vert f''(x)\vert \right)\left(\sup _{x\in I}{\frac {1}{\vert f'(x)\vert }}\right).\,} If these conditions hold, | ε n + 1 | ≤ M ⋅ ε n 2 . {\displaystyle \vert \varepsilon _{n+1}\vert \leq M\cdot \varepsilon _{n}^{2}\,.} === Fourier conditions === Suppose that f(x) is a strictly increasing, concave function on an interval. If it is negative at the left endpoint and positive at the right endpoint, the intermediate value theorem guarantees that there is a zero ζ of f somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration xi starting at the left endpoint is monotonically increasing and convergent, necessarily to ζ. Joseph Fourier introduced a modification of Newton's method starting at the right endpoint: y i + 1 = y i − f ( y i ) f ′ ( x i ) . {\displaystyle y_{i+1}=y_{i}-{\frac {f(y_{i})}{f'(x_{i})}}.} This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of yi must also be the zero ζ. So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the left iterate and the right iterate. If f is twice continuously differentiable, it can be proved using Taylor's theorem that lim i → ∞ y i + 1 − x i + 1 ( y i − x i ) 2 = − 1 2 f ″ ( ζ ) f ′ ( ζ ) , {\displaystyle \lim _{i\to \infty }{\frac {y_{i+1}-x_{i+1}}{(y_{i}-x_{i})^{2}}}=-{\frac {1}{2}}{\frac {f''(\zeta )}{f'(\zeta )}},} showing that this difference in locations converges quadratically to zero. All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of monotonicity and concavity are more subtle to formulate. In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of f. However, in this generalization, Newton's iteration is modified so as to be based on Taylor polynomials rather than the tangent line. In the case of concavity, this modification coincides with the standard Newton method.
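The two-sided Newton–Fourier iteration described above is easy to experiment with. In the following Python sketch (names, interval, and tolerance are illustrative; f(x) = ln x is a convenient strictly increasing, concave function with its zero ζ = 1 inside [0.5, 2]), the left iterate rises to the zero while the right iterate falls to it, so their difference bounds the error at every step:

    import math

    def newton_fourier(f, f_prime, x, y, tol=1e-12, max_iter=60):
        # x: Newton iterate from the left endpoint; y: Fourier iterate from the right.
        # Both steps reuse the derivative at x, as in the definition above.
        for _ in range(max_iter):
            slope = f_prime(x)
            x, y = x - f(x) / slope, y - f(y) / slope
            if y - x < tol:
                break
        return x, y

    left, right = newton_fourier(math.log, lambda x: 1.0 / x, 0.5, 2.0)
    print(left, right)  # both approach the zero at 1, with left <= 1 <= right

The quantity right − left gives a directly computable error bound for either iterate, which is the practical appeal of running the two sequences together.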
=== Error for n>1 variables === If we seek the root of a single function f : R n → R {\displaystyle f:\mathbf {R} ^{n}\to \mathbf {R} } then the error ϵ n = x n − α {\displaystyle \epsilon _{n}=x_{n}-\alpha } is a vector such that its components obey ϵ k ( n + 1 ) = 1 2 ( ϵ ( n ) ) T Q k ϵ ( n ) + O ( ‖ ϵ ( n ) ‖ 3 ) {\displaystyle \epsilon _{k}^{(n+1)}={\frac {1}{2}}(\epsilon ^{(n)})^{T}Q_{k}\epsilon ^{(n)}+O(\|\epsilon ^{(n)}\|^{3})} where Q k {\displaystyle Q_{k}} is a quadratic form: ( Q k ) i , j = ∑ ℓ ( ( D 2 f ) − 1 ) i , ℓ ∂ 3 f ∂ x j ∂ x k ∂ x ℓ {\displaystyle (Q_{k})_{i,j}=\sum _{\ell }((D^{2}f)^{-1})_{i,\ell }{\frac {\partial ^{3}f}{\partial x_{j}\partial x_{k}\partial x_{\ell }}}} evaluated at the root α {\displaystyle \alpha } (where D 2 f {\displaystyle D^{2}f} is the 2nd derivative Hessian matrix). == Examples == === Use of Newton's method to compute square roots === Newton's method is one of many known methods of computing square roots. Given a positive number a, the problem of finding a number x such that x2 = a is equivalent to finding a root of the function f(x) = x2 − a. The Newton iteration defined by this function is given by x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n − x n 2 − a 2 x n = 1 2 ( x n + a x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{2}-a}{2x_{n}}}={\frac {1}{2}}\left(x_{n}+{\frac {a}{x_{n}}}\right).} This happens to coincide with the "Babylonian" method of finding square roots, which consists of replacing an approximate root xn by the arithmetic mean of xn and a⁄xn. By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basic arithmetic operations. The following three tables show examples of the result of this computation for finding the square root of 612, with the iteration initialized at the values of 1, 10, and −20. Each row in a "xn" column is obtained by applying the preceding formula to the entry above it, for instance 306.5 = 1 2 ( 1 + 612 1 ) . {\displaystyle 306.5={\frac {1}{2}}\left(1+{\frac {612}{1}}\right).} The correct digits are underlined. It is seen that with only a few iterations one can obtain a solution accurate to many decimal places. The first table shows that this is true even if the Newton iteration were initialized by the very inaccurate guess of 1. When computing any nonzero square root, the first derivative of f must be nonzero at the root, and that f is a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence. This is reflected in the above tables by the fact that once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration. === Solution of cos(x) = x3 using Newton's method === Consider the problem of finding the positive number x with cos x = x3. We can rephrase that as finding the zero of f(x) = cos(x) − x3. We have f′(x) = −sin(x) − 3x2. Since cos(x) ≤ 1 for all x and x3 > 1 for x > 1, we know that our solution lies between 0 and 1. A starting value of 0 will lead to an undefined result which illustrates the importance of using a starting point close to the solution. For example, with an initial guess x0 = 0.5, the sequence given by Newton's method is: x 1 = x 0 − f ( x 0 ) f ′ ( x 0 ) = 0.5 − cos ⁡ 0.5 − 0.5 3 − sin ⁡ 0.5 − 3 × 0.5 2 = 1.112 141 637 097 … x 2 = x 1 − f ( x 1 ) f ′ ( x 1 ) = ⋮ = 0. 
_ 909 672 693 736 … x 3 = ⋮ = ⋮ = 0.86 _ 7 263 818 209 … x 4 = ⋮ = ⋮ = 0.865 47 _ 7 135 298 … x 5 = ⋮ = ⋮ = 0.865 474 033 1 _ 11 … x 6 = ⋮ = ⋮ = 0.865 474 033 102 _ … {\displaystyle {\begin{matrix}x_{1}&=&x_{0}-{\dfrac {f(x_{0})}{f'(x_{0})}}&=&0.5-{\dfrac {\cos 0.5-0.5^{3}}{-\sin 0.5-3\times 0.5^{2}}}&=&1.112\,141\,637\,097\dots \\x_{2}&=&x_{1}-{\dfrac {f(x_{1})}{f'(x_{1})}}&=&\vdots &=&{\underline {0.}}909\,672\,693\,736\dots \\x_{3}&=&\vdots &=&\vdots &=&{\underline {0.86}}7\,263\,818\,209\dots \\x_{4}&=&\vdots &=&\vdots &=&{\underline {0.865\,47}}7\,135\,298\dots \\x_{5}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,1}}11\dots \\x_{6}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,102}}\dots \end{matrix}}} The correct digits are underlined in the above example. In particular, x6 is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for x3) to 5 and 10, illustrating the quadratic convergence. == Multidimensional formulations == === Systems of equations === ==== k variables, k functions ==== One may also use Newton's method to solve systems of k equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions f : R k → R . {\displaystyle f:\mathbb {R} ^{k}\to \mathbb {R} .} This is equivalent to finding the zeroes of a single vector-valued function F : R k → R k . {\displaystyle F:\mathbb {R} ^{k}\to \mathbb {R} ^{k}.} In the formulation given above, the scalars xn are replaced by vectors xn and instead of dividing the function f(xn) by its derivative f′(xn) one instead has to left multiply the function F(xn) by the inverse of its k × k Jacobian matrix JF(xn). This results in the expression x n + 1 = x n − J F ( x n ) − 1 F ( x n ) . {\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-J_{F}(\mathbf {x} _{n})^{-1}F(\mathbf {x} _{n}).} or, by solving the system of linear equations J F ( x n ) ( x n + 1 − x n ) = − F ( x n ) {\displaystyle J_{F}(\mathbf {x} _{n})(\mathbf {x} _{n+1}-\mathbf {x} _{n})=-F(\mathbf {x} _{n})} for the unknown xn + 1 − xn. ==== k variables, m equations, with m > k ==== The k-dimensional variant of Newton's method can be used to solve systems of greater than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix J+ = (JTJ)−1JT instead of the inverse of J. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information. ==== Example ==== For example, the following set of equations needs to be solved for vector of points [ x 1 , x 2 ] , {\displaystyle \ [\ x_{1},x_{2}\ ]\ ,} given the vector of known values [ 2 , 3 ] . {\displaystyle \ [\ 2,3\ ]~.} 5 x 1 2 + x 1 x 2 2 + sin 2 ⁡ ( 2 x 2 ) = 2 e 2 x 1 − x 2 + 4 x 2 = 3 {\displaystyle {\begin{array}{lcr}5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})&=\quad 2\\e^{2\ x_{1}-x_{2}}+4\ x_{2}&=\quad 3\end{array}}} the function vector, F ( X k ) , {\displaystyle \ F(X_{k})\ ,} and Jacobian Matrix, J ( X k ) {\displaystyle \ J(X_{k})\ } for iteration k, and the vector of known values, Y , {\displaystyle \ Y\ ,} are defined below. 
F ( X k ) = [ f 1 ( X k ) f 2 ( X k ) ] = [ 5 x 1 2 + x 1 x 2 2 + sin 2 ⁡ ( 2 x 2 ) e 2 x 1 − x 2 + 4 x 2 ] k J ( X k ) = [ ∂ f 1 ( X ) ∂ x 1 , ∂ f 1 ( X ) ∂ x 2 ∂ f 2 ( X ) ∂ x 1 , ∂ f 2 ( X ) ∂ x 2 ] k = [ 10 x 1 + x 2 2 , 2 x 1 x 2 + 4 sin ⁡ ( 2 x 2 ) cos ⁡ ( 2 x 2 ) 2 e 2 x 1 − x 2 , − e 2 x 1 − x 2 + 4 ] k Y = [ 2 3 ] {\displaystyle {\begin{aligned}~&F(X_{k})~=~{\begin{bmatrix}{\begin{aligned}~&f_{1}(X_{k})\\~&f_{2}(X_{k})\end{aligned}}\end{bmatrix}}~=~{\begin{bmatrix}{\begin{aligned}~&5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})\\~&e^{2\ x_{1}-x_{2}}+4\ x_{2}\end{aligned}}\end{bmatrix}}_{k}\\~&J(X_{k})={\begin{bmatrix}~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{2}}}}~\\~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{2}}}}~\end{bmatrix}}_{k}~=~{\begin{bmatrix}{\begin{aligned}~&10\ x_{1}+x_{2}^{2}\ ,&&2\ x_{1}\ x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})\\~&2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4\end{aligned}}\end{bmatrix}}_{k}\\~&Y={\begin{bmatrix}~2~\\~3~\end{bmatrix}}\end{aligned}}} Note that F ( X k ) {\displaystyle \ F(X_{k})\ } could have been rewritten to absorb Y , {\displaystyle \ Y\ ,} and thus eliminate Y {\displaystyle Y} from the equations. The equation to solve for each iteration are [ 10 x 1 + x 2 2 , 2 x 1 x 2 + 4 sin ⁡ ( 2 x 2 ) cos ⁡ ( 2 x 2 ) 2 e 2 x 1 − x 2 , − e 2 x 1 − x 2 + 4 ] k [ c 1 c 2 ] k + 1 = [ 5 x 1 2 + x 1 x 2 2 + sin 2 ⁡ ( 2 x 2 ) − 2 e 2 x 1 − x 2 + 4 x 2 − 3 ] k {\displaystyle {\begin{aligned}{\begin{bmatrix}{\begin{aligned}~&~10\ x_{1}+x_{2}^{2}\ ,&&2x_{1}x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})~\\~&~2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4~\end{aligned}}\end{bmatrix}}_{k}{\begin{bmatrix}~c_{1}~\\~c_{2}~\end{bmatrix}}_{k+1}={\begin{bmatrix}~5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})-2~\\~e^{2\ x_{1}-x_{2}}+4\ x_{2}-3~\end{bmatrix}}_{k}\end{aligned}}} and X k + 1 = X k − C k + 1 {\displaystyle X_{k+1}~=~X_{k}-C_{k+1}} The iterations should be repeated until [ ∑ i = 1 i = 2 | f ( x i ) k − ( y i ) k | ] < E , {\displaystyle \ {\Bigg [}\sum _{i=1}^{i=2}{\Bigl |}f(x_{i})_{k}-(y_{i})_{k}{\Bigr |}{\Bigg ]}<E\ ,} where E {\displaystyle \ E\ } is a value acceptably small enough to meet application requirements. If vector X 0 {\displaystyle \ X_{0}\ } is initially chosen to be [ 1 1 ] , {\displaystyle \ {\begin{bmatrix}~1~&~1~\end{bmatrix}}\ ,} that is, x 1 = 1 , {\displaystyle \ x_{1}=1\ ,} and x 2 = 1 , {\displaystyle \ x_{2}=1\ ,} and E , {\displaystyle \ E\ ,} is chosen to be 1.10−3, then the example converges after four iterations to a value of X 4 = [ 0.567297 , − 0.309442 ] . {\displaystyle \ X_{4}=\left[~0.567297,\ -0.309442~\right]~.} ==== Iterations ==== The following iterations were made during the course of the solution. === Complex functions === When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundaries of the basins of attraction are fractals. In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. 
For example, if one uses a real initial condition to seek a root of x2 + 1, all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length. Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3. Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method will certainly converge at one of them at least. === In a Banach space === Another generalization is Newton's method to find a root of a functional F defined in a Banach space. In this case the formulation is X n + 1 = X n − ( F ′ ( X n ) ) − 1 F ( X n ) , {\displaystyle X_{n+1}=X_{n}-{\bigl (}F'(X_{n}){\bigr )}^{-1}F(X_{n}),\,} where F′(Xn) is the Fréchet derivative computed at Xn. One needs the Fréchet derivative to be boundedly invertible at each Xn in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem. ==== Nash–Moser iteration ==== In the 1950s, John Nash developed a version of the Newton's method to apply to the problem of constructing isometric embeddings of general Riemannian manifolds in Euclidean space. The loss of derivatives problem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction of smoothing operators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving an implicit function theorem for isometric embeddings. In the 1960s, Jürgen Moser showed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly in celestial mechanics. Since then, a number of mathematicians, including Mikhael Gromov and Richard Hamilton, have found generalized abstract versions of the Nash–Moser theory. In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certain Fréchet spaces. == Modifications == === Quasi-Newton methods === When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used. === Chebyshev's third-order method === Since higher-order Taylor expansions offer more accurate local approximations of a function f, it is reasonable to ask why Newton’s method relies only on a second-order Taylor approximation. In the 19th century, Russian mathematician Pafnuty Chebyshev explored this idea by developing a variant of Newton’s method that used cubic approximations. === Over p-adic numbers === In p-adic analysis, the standard method to show a polynomial equation in one variable has a p-adic root is Hensel's lemma, which uses the recursion from Newton's method on the p-adic numbers. 
Because of the more stable behavior of addition and multiplication in the p-adic numbers compared to the real numbers (specifically, the unit ball in the p-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line. === q-analog === Newton's method can be generalized with the q-analog of the usual derivative. === Modified Newton methods === ==== Maehly's procedure ==== A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found N solutions of f ( x ) = 0 {\displaystyle f(x)=0} , then the next root can be found by applying Newton's method to the next equation: F ( x ) = f ( x ) ∏ i = 1 N ( x − x i ) = 0. {\displaystyle F(x)={\frac {f(x)}{\prod _{i=1}^{N}(x-x_{i})}}=0.} This method is applied to obtain zeros of the Bessel function of the second kind. ==== Hirano's modified Newton method ==== Hirano's modified Newton method is a modification that preserves the convergence of Newton's method while avoiding instability. It was developed to solve complex polynomials. ==== Interval Newton's method ==== Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may dramatically change the value of the function; see Wilkinson's polynomial). Consider f ∈ C1(X), where X is a real interval, and suppose that we have an interval extension F′ of f′, meaning that F′ takes as input an interval Y ⊆ X and outputs an interval F′(Y) such that: F ′ ( [ y , y ] ) = { f ′ ( y ) } F ′ ( Y ) ⊇ { f ′ ( y ) ∣ y ∈ Y } . {\displaystyle {\begin{aligned}F'([y,y])&=\{f'(y)\}\\[5pt]F'(Y)&\supseteq \{f'(y)\mid y\in Y\}.\end{aligned}}} We also assume that 0 ∉ F′(X), so in particular f has at most one root in X. We then define the interval Newton operator by: N ( Y ) = m − f ( m ) F ′ ( Y ) = { m − f ( m ) z | z ∈ F ′ ( Y ) } {\displaystyle N(Y)=m-{\frac {f(m)}{F'(Y)}}=\left\{\left.m-{\frac {f(m)}{z}}~\right|~z\in F'(Y)\right\}} where m ∈ Y. Note that the hypothesis on F′ implies that N(Y) is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence: X 0 = X X k + 1 = N ( X k ) ∩ X k . {\displaystyle {\begin{aligned}X_{0}&=X\\X_{k+1}&=N(X_{k})\cap X_{k}.\end{aligned}}} The mean value theorem ensures that if there is a root of f in Xk, then it is also in Xk + 1. Moreover, the hypothesis on F′ ensures that Xk + 1 is at most half the size of Xk when m is the midpoint of Y, so this sequence converges towards [x*, x*], where x* is the root of f in X. If F′(X) strictly contains 0, the use of extended interval division produces a union of two intervals for N(X); multiple roots are therefore automatically separated and bounded. == Applications == === Minimization and maximization problems === Newton's method can be used to find a minimum or maximum of a function f(x).
The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes: x n + 1 = x n − f ′ ( x n ) f ″ ( x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f'(x_{n})}{f''(x_{n})}}.} === Multiplicative inverses of numbers and power series === An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number a, using only multiplication and subtraction, that is to say the number x such that ⁠1/x⁠ = a. We can rephrase that as finding the zero of f(x) = ⁠1/x⁠ − a. We have f′(x) = −⁠1/x2⁠. Newton's iteration is x n + 1 = x n − f ( x n ) f ′ ( x n ) = x n + 1 x n − a 1 x n 2 = x n ( 2 − a x n ) . {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}+{\frac {{\frac {1}{x_{n}}}-a}{\frac {1}{x_{n}^{2}}}}=x_{n}(2-ax_{n}).} Therefore, Newton's iteration needs only two multiplications and one subtraction. This method is also very efficient to compute the multiplicative inverse of a power series. === Solving transcendental equations === Many transcendental equations can be solved up to an arbitrary precision by using Newton's method. For example, finding the cumulative probability density function, such as a Normal distribution to fit a known probability generally involves integral functions with no known means to solve in closed form. However, computing the derivatives needed to solve them numerically with Newton's method is generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse Normal cumulative distribution. === Numerical verification for solutions of nonlinear equations === A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates. == Code == The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime. The initial guess will be x0 = 1 and the function will be f(x) = x2 − 2 so that f′(x) = 2x. Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if f′(xn) ≈ 0, since otherwise a large amount of error could be introduced. == See also == == Notes == == References == Gil, A.; Segura, J.; Temme, N. M. (2007). Numerical methods for special functions. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-634-4. Süli, Endre; Mayers, David (2003). An Introduction to Numerical Analysis. Cambridge University Press. ISBN 0-521-00794-1. == Further reading == Kendall E. Atkinson: An Introduction to Numerical Analysis, John Wiley & Sons Inc., ISBN 0-471-62489-6 (1989). Tjalling J. Ypma: "Historical development of the Newton–Raphson method", SIAM Review, vol.37, no.4, (1995), pp.531–551. doi:10.1137/1037125. Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical optimization: Theoretical and practical aspects. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. pp. xiv+490. doi:10.1007/978-3-540-35447-5. ISBN 3-540-35445-X. MR 2265882. P. Deuflhard: Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer Berlin (Series in Computational Mathematics, Vol. 35) (2004). ISBN 3-540-21099-7. C. T. 
Kelley: Solving Nonlinear Equations with Newton's Method, SIAM (Fundamentals of Algorithms, 1) (2003). ISBN 0-89871-546-6. J. M. Ortega and W. C. Rheinboldt: Iterative Solution of Nonlinear Equations in Several Variables, SIAM (Classics in Applied Mathematics) (2000). ISBN 0-89871-461-3. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Chapter 9. Root Finding and Nonlinear Sets of Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge Univ. Press. ISBN 978-0-521-88068-8. See especially Sections 9.4, 9.6, and 9.7. Avriel, Mordecai (1976). Nonlinear Programming: Analysis and Methods. Prentice Hall. pp. 216–221. ISBN 0-13-623603-0. == External links == "Newton method", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Newton's Method". MathWorld. Newton's method, Citizendium. Mathews, J., The Accelerated and Modified Newton Methods, Course notes. Wu, X., Roots of Equations, Course notes.
Wikipedia/Newton–Raphson_method
In mathematics, the graph of a function f {\displaystyle f} is the set of ordered pairs ( x , y ) {\displaystyle (x,y)} , where f ( x ) = y . {\displaystyle f(x)=y.} In the common case where x {\displaystyle x} and f ( x ) {\displaystyle f(x)} are real numbers, these pairs are Cartesian coordinates of points in a plane and often form a curve. The graphical representation of the graph of a function is also known as a plot. In the case of functions of two variables – that is, functions whose domain consists of pairs ( x , y ) {\displaystyle (x,y)} –, the graph usually refers to the set of ordered triples ( x , y , z ) {\displaystyle (x,y,z)} where f ( x , y ) = z {\displaystyle f(x,y)=z} . This is a subset of three-dimensional space; for a continuous real-valued function of two real variables, its graph forms a surface, which can be visualized as a surface plot. In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes. In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see Plot (graphics) for details. A graph of a function is a special case of a relation. In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph. However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also which set is the domain, and which set is the codomain. For example, to say that a function is onto (surjective) or not the codomain should be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms function and graph of a function since even if considered the same object, they indicate viewing it from a different perspective. == Definition == Given a function f : X → Y {\displaystyle f:X\to Y} from a set X (the domain) to a set Y (the codomain), the graph of the function is the set G ( f ) = { ( x , f ( x ) ) : x ∈ X } , {\displaystyle G(f)=\{(x,f(x)):x\in X\},} which is a subset of the Cartesian product X × Y {\displaystyle X\times Y} . In the definition of a function in terms of set theory, it is common to identify a function with its graph, although, formally, a function is formed by the triple consisting of its domain, its codomain and its graph. == Examples == === Functions of one variable === The graph of the function f : { 1 , 2 , 3 } → { a , b , c , d } {\displaystyle f:\{1,2,3\}\to \{a,b,c,d\}} defined by f ( x ) = { a , if x = 1 , d , if x = 2 , c , if x = 3 , {\displaystyle f(x)={\begin{cases}a,&{\text{if }}x=1,\\d,&{\text{if }}x=2,\\c,&{\text{if }}x=3,\end{cases}}} is the subset of the set { 1 , 2 , 3 } × { a , b , c , d } {\displaystyle \{1,2,3\}\times \{a,b,c,d\}} G ( f ) = { ( 1 , a ) , ( 2 , d ) , ( 3 , c ) } . {\displaystyle G(f)=\{(1,a),(2,d),(3,c)\}.} From the graph, the domain { 1 , 2 , 3 } {\displaystyle \{1,2,3\}} is recovered as the set of first component of each pair in the graph { 1 , 2 , 3 } = { x : ∃ y , such that ( x , y ) ∈ G ( f ) } {\displaystyle \{1,2,3\}=\{x:\ \exists y,{\text{ such that }}(x,y)\in G(f)\}} . Similarly, the range can be recovered as { a , c , d } = { y : ∃ x , such that ( x , y ) ∈ G ( f ) } {\displaystyle \{a,c,d\}=\{y:\exists x,{\text{ such that }}(x,y)\in G(f)\}} . The codomain { a , b , c , d } {\displaystyle \{a,b,c,d\}} , however, cannot be determined from the graph alone. 
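The set-of-pairs definition is easy to mirror in code. In this Python sketch (illustrative only; strings stand in for the codomain elements a, b, c, d), the graph of the finite function above is built explicitly, the domain and range are recovered from it, and nothing in the graph determines the codomain:

    f = {1: 'a', 2: 'd', 3: 'c'}            # the function as a lookup table

    graph = {(x, y) for x, y in f.items()}  # {(1, 'a'), (2, 'd'), (3, 'c')}
    domain = {x for (x, _) in graph}        # {1, 2, 3}, recovered from the graph
    range_ = {y for (_, y) in graph}        # {'a', 'c', 'd'}, also recovered
    # Whether the codomain was {'a','c','d'} or {'a','b','c','d'} cannot be
    # read off from `graph` alone.
    print(graph, domain, range_)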
The graph of the cubic polynomial on the real line f ( x ) = x 3 − 9 x {\displaystyle f(x)=x^{3}-9x} is { ( x , x 3 − 9 x ) : x is a real number } . {\displaystyle \{(x,x^{3}-9x):x{\text{ is a real number}}\}.} If this set is plotted on a Cartesian plane, the result is a curve (see figure). === Functions of two variables === The graph of the trigonometric function f ( x , y ) = sin ⁡ ( x 2 ) cos ⁡ ( y 2 ) {\displaystyle f(x,y)=\sin(x^{2})\cos(y^{2})} is { ( x , y , sin ⁡ ( x 2 ) cos ⁡ ( y 2 ) ) : x and y are real numbers } . {\displaystyle \{(x,y,\sin(x^{2})\cos(y^{2})):x{\text{ and }}y{\text{ are real numbers}}\}.} If this set is plotted on a three dimensional Cartesian coordinate system, the result is a surface (see figure). Oftentimes it is helpful to show with the graph, the gradient of the function and several level curves. The level curves can be mapped on the function surface or can be projected on the bottom plane. The second figure shows such a drawing of the graph of the function: f ( x , y ) = − ( cos ⁡ ( x 2 ) + cos ⁡ ( y 2 ) ) 2 . {\displaystyle f(x,y)=-(\cos(x^{2})+\cos(y^{2}))^{2}.} == See also == == References == == Further reading == == External links == Weisstein, Eric W. "Function Graph." From MathWorld—A Wolfram Web Resource.
Wikipedia/Graph_of_a_function
In mathematics, a cubic function is a function of the form f ( x ) = a x 3 + b x 2 + c x + d , {\displaystyle f(x)=ax^{3}+bx^{2}+cx+d,} that is, a polynomial function of degree three. In many texts, the coefficients a, b, c, and d are supposed to be real numbers, and the function is considered as a real function that maps real numbers to real numbers or as a complex function that maps complex numbers to complex numbers. In other cases, the coefficients may be complex numbers, and the function is a complex function that has the set of the complex numbers as its codomain, even when the domain is restricted to the real numbers. Setting f(x) = 0 produces a cubic equation of the form a x 3 + b x 2 + c x + d = 0 , {\displaystyle ax^{3}+bx^{2}+cx+d=0,} whose solutions are called roots of the function. The derivative of a cubic function is a quadratic function. A cubic function with real coefficients has either one or three real roots (which may not be distinct); all odd-degree polynomials with real coefficients have at least one real root. The graph of a cubic function always has a single inflection point. It may have two critical points, a local minimum and a local maximum. Otherwise, a cubic function is monotonic. The graph of a cubic function is symmetric with respect to its inflection point; that is, it is invariant under a rotation of a half turn around this point. Up to an affine transformation, there are only three possible graphs for cubic functions. Cubic functions are fundamental for cubic interpolation. == History == == Critical and inflection points == The critical points of a cubic function are its stationary points, that is the points where the slope of the function is zero. Thus the critical points of a cubic function f defined by f(x) = ax3 + bx2 + cx + d, occur at values of x such that the derivative 3 a x 2 + 2 b x + c = 0 {\displaystyle 3ax^{2}+2bx+c=0} of the cubic function is zero. The solutions of this equation are the x-values of the critical points and are given, using the quadratic formula, by x critical = − b ± b 2 − 3 a c 3 a . {\displaystyle x_{\text{critical}}={\frac {-b\pm {\sqrt {b^{2}-3ac}}}{3a}}.} The sign of the expression Δ0 = b2 − 3ac inside the square root determines the number of critical points. If it is positive, then there are two critical points, one is a local maximum, and the other is a local minimum. If b2 − 3ac = 0, then there is only one critical point, which is an inflection point. If b2 − 3ac < 0, then there are no (real) critical points. In the two latter cases, that is, if b2 − 3ac is nonpositive, the cubic function is strictly monotonic. See the figure for an example of the case Δ0 > 0. The inflection point of a function is where that function changes concavity. An inflection point occurs when the second derivative f ″ ( x ) = 6 a x + 2 b , {\displaystyle f''(x)=6ax+2b,} is zero, and the third derivative is nonzero. Thus a cubic function has always a single inflection point, which occurs at x inflection = − b 3 a . {\displaystyle x_{\text{inflection}}=-{\frac {b}{3a}}.} == Classification == The graph of a cubic function is a cubic curve, though many cubic curves are not graphs of functions. Although cubic functions depend on four parameters, their graph can have only very few shapes. In fact, the graph of a cubic function is always similar to the graph of a function of the form y = x 3 + p x . 
This similarity can be built as the composition of translations parallel to the coordinate axes, a homothecy (uniform scaling), and, possibly, a reflection (mirror image) with respect to the y-axis. A further non-uniform scaling can transform the graph into the graph of one among the three cubic functions y = x^3 + x, y = x^3, y = x^3 − x. This means that there are only three graphs of cubic functions up to an affine transformation. The above geometric transformations can be built in the following way, when starting from a general cubic function y = ax^3 + bx^2 + cx + d. Firstly, if a < 0, the change of variable x → −x allows supposing a > 0. After this change of variable, the new graph is the mirror image of the previous one, with respect to the y-axis. Then, the change of variable x = x1 − b/(3a) provides a function of the form y = a·x1^3 + p·x1 + q. This corresponds to a translation parallel to the x-axis. The change of variable y = y1 + q corresponds to a translation with respect to the y-axis, and gives a function of the form y1 = a·x1^3 + p·x1. The change of variable x1 = x2/sqrt(a), y1 = y2/sqrt(a) corresponds to a uniform scaling, and gives, after multiplication by sqrt(a), a function of the form y2 = x2^3 + p·x2, which is the simplest form that can be obtained by a similarity. Then, if p ≠ 0, the non-uniform scaling x2 = x3·sqrt(|p|), y2 = y3·sqrt(|p|^3) gives, after division by sqrt(|p|^3), y3 = x3^3 + x3·sgn(p), where sgn(p) has the value 1 or −1, depending on the sign of p. If one defines sgn(0) = 0, the latter form of the function applies to all cases (with x2 = x3 and y2 = y3). == Symmetry == For a cubic function of the form y = x^3 + px, the inflection point is thus the origin. As such a function is an odd function, its graph is symmetric with respect to the inflection point, and invariant under a rotation of a half turn around the inflection point. As these properties are invariant by similarity, the following is true for all cubic functions: the graph of a cubic function is symmetric with respect to its inflection point, and is invariant under a rotation of a half turn around the inflection point. == Collinearities == The tangent lines to the graph of a cubic function at three collinear points intercept the cubic again at collinear points. This can be seen as follows. As this property is invariant under a rigid motion, one may suppose that the function has the form f(x) = x^3 + px. If α is a real number, then the tangent to the graph of f at the point (α, f(α)) is the line {(x, f(α) + (x − α)f′(α)) : x ∈ R}.
So, the intersection points between this line and the graph of f can be obtained by solving the equation f(x) = f(α) + (x − α)f′(α), that is, x^3 + px = α^3 + pα + (x − α)(3α^2 + p), which can be rewritten x^3 − 3α^2·x + 2α^3 = 0, and factorized as (x − α)^2 (x + 2α) = 0. So, the tangent intercepts the cubic at (−2α, −8α^3 − 2pα) = (−2α, −8f(α) + 6pα). So, the function that maps a point (x, y) of the graph to the other point where the tangent intercepts the graph is (x, y) ↦ (−2x, −8y + 6px). This is an affine transformation that transforms collinear points into collinear points. This proves the claimed result. == Cubic interpolation == Given the values of a function and its derivative at two points, there is exactly one cubic function that has the same four values; it is called a cubic Hermite spline. There are two standard ways of using this fact. Firstly, if one knows, for example by physical measurement, the values of a function and its derivative at some sampling points, one can interpolate the function with a continuously differentiable function, which is a piecewise cubic function. Secondly, if the value of a function is known at several points, cubic interpolation consists of approximating the function by a continuously differentiable function, which is piecewise cubic. To obtain a uniquely defined interpolation, two more constraints must be added, such as the values of the derivatives at the endpoints, or a zero curvature at the endpoints. == References == == External links == "Cardano formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994] History of quadratic, cubic and quartic equations on MacTutor archive.
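As an illustration of the uniqueness statement in the cubic interpolation section, the sketch below solves the 4 × 4 linear system for the single cubic matching prescribed values and derivatives at x = 0 and x = 1; the unit interval and the sample data are assumptions made purely for the example:

```python
import numpy as np

def hermite_cubic(p0, m0, p1, m1):
    """Coefficients (a, b, c, d) of the unique f(x) = a x^3 + b x^2 + c x + d
    with f(0) = p0, f'(0) = m0, f(1) = p1, f'(1) = m1."""
    A = np.array([[0.0, 0.0, 0.0, 1.0],   # f(0)  = d
                  [0.0, 0.0, 1.0, 0.0],   # f'(0) = c
                  [1.0, 1.0, 1.0, 1.0],   # f(1)  = a + b + c + d
                  [3.0, 2.0, 1.0, 0.0]])  # f'(1) = 3a + 2b + c
    return np.linalg.solve(A, np.array([p0, m0, p1, m1]))

print(hermite_cubic(0.0, 1.0, 1.0, 0.0))  # the one cubic through these data
```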
Wikipedia/Cubic_function
Representation theory is a branch of mathematics that studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and their algebraic operations (for example, matrix addition, matrix multiplication). The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices such that the group operation is matrix multiplication. Representation theory is a useful method because it reduces problems in abstract algebra to problems in linear algebra, a subject that is well understood. Representations of more abstract objects in terms of familiar linear algebra can elucidate properties and simplify calculations within more abstract theories. For instance, representing a group by an infinite-dimensional Hilbert space allows methods of analysis to be applied to the theory of groups. Furthermore, representation theory is important in physics because it can describe how the symmetry group of a physical system affects the solutions of equations describing that system. Representation theory is pervasive across fields of mathematics, and its applications are diverse. In addition to its impact on algebra, representation theory generalizes Fourier analysis via harmonic analysis, is connected to geometry via invariant theory and the Erlangen program, and has an impact on number theory via automorphic forms and the Langlands program. There are many approaches to representation theory: the same objects can be studied using methods from algebraic geometry, module theory, analytic number theory, differential geometry, operator theory, algebraic combinatorics and topology. The success of representation theory has led to numerous generalizations. One of the most general is in category theory. The algebraic objects to which representation theory applies can be viewed as particular kinds of categories, and the representations as functors from the object category to the category of vector spaces. This description points to two natural generalizations: first, the algebraic objects can be replaced by more general categories; second, the target category of vector spaces can be replaced by other well-understood categories. == Definitions and concepts == Let V be a vector space over a field F. For instance, suppose V is R^n or C^n, the standard n-dimensional space of column vectors over the real or complex numbers, respectively. In this case, the idea of representation theory is to do abstract algebra concretely by using n × n matrices of real or complex numbers. There are three main sorts of algebraic objects for which this can be done: groups, associative algebras and Lie algebras. The set of all invertible n × n matrices is a group under matrix multiplication, and the representation theory of groups analyzes a group by describing ("representing") its elements in terms of invertible matrices.
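As a minimal illustration of this idea, the sketch below represents the cyclic group of order 3 by 2 × 2 rotation matrices (an assumed toy example, not one drawn from the text) and checks that the group operation, addition mod 3, becomes matrix multiplication:

```python
import numpy as np

def rho(k):
    """Represent k in Z/3 by rotation through 2*pi*k/3."""
    t = 2 * np.pi * k / 3
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# The homomorphism property: rho(g1 + g2) = rho(g1) rho(g2).
for g1 in range(3):
    for g2 in range(3):
        assert np.allclose(rho(g1) @ rho(g2), rho((g1 + g2) % 3))
```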
Matrix addition and multiplication make the set of all n × n matrices into an associative algebra, and hence there is a corresponding representation theory of associative algebras. If we replace matrix multiplication MN by the matrix commutator MN − NM, then the n × n matrices become instead a Lie algebra, leading to a representation theory of Lie algebras. This generalizes to any field F and any vector space V over F, with linear maps replacing matrices and composition replacing matrix multiplication: there is a group GL(V, F) of automorphisms of V, an associative algebra End_F(V) of all endomorphisms of V, and a corresponding Lie algebra gl(V, F). === Definition === ==== Action ==== There are two ways to define a representation. The first uses the idea of an action, generalizing the way that matrices act on column vectors by matrix multiplication. A representation of a group G or (associative or Lie) algebra A on a vector space V is a map Φ : G × V → V or Φ : A × V → V with two properties. First, for any g in G (or a in A), the map Φ(g) : V → V, v ↦ Φ(g, v), is linear (over F). Second, if we introduce the notation g · v for Φ(g, v), then for any g1, g2 in G and v in V: (2.1) e · v = v and (2.2) g1 · (g2 · v) = (g1 g2) · v, where e is the identity element of G and g1 g2 is the group product in G. The definition for associative algebras is analogous, except that associative algebras do not always have an identity element, in which case equation (2.1) is omitted. Equation (2.2) is an abstract expression of the associativity of matrix multiplication. The matrix commutator is neither associative nor equipped with an identity element, so for Lie algebras the only requirement is that for any x1, x2 in A and v in V: (2.2′) x1 · (x2 · v) − x2 · (x1 · v) = [x1, x2] · v, where [x1, x2] is the Lie bracket, which generalizes the matrix commutator MN − NM. ==== Mapping ==== The second way to define a representation focuses on the map φ sending g in G to a linear map φ(g) : V → V, which satisfies φ(g1 g2) = φ(g1) ∘ φ(g2) for all g1, g2 in G, and similarly in the other cases. This approach is both more concise and more abstract.
From this point of view: a representation of a group G on a vector space V is a group homomorphism φ : G → GL(V, F); a representation of an associative algebra A on a vector space V is an algebra homomorphism φ : A → End_F(V); a representation of a Lie algebra 𝔞 on a vector space V is a Lie algebra homomorphism φ : 𝔞 → gl(V, F). === Terminology === The vector space V is called the representation space of φ, and its dimension (if finite) is called the dimension of the representation (sometimes its degree). It is also common practice to refer to V itself as the representation when the homomorphism φ is clear from the context; otherwise the notation (V, φ) can be used to denote a representation. When V is of finite dimension n, one can choose a basis for V to identify V with F^n, and hence recover a matrix representation with entries in the field F. An effective or faithful representation is a representation (V, φ) for which the homomorphism φ is injective. === Equivariant maps and isomorphisms === If V and W are vector spaces over F, equipped with representations φ and ψ of a group G, then an equivariant map from V to W is a linear map α : V → W such that α(g · v) = g · α(v) for all g in G and v in V. In terms of φ : G → GL(V) and ψ : G → GL(W), this means α ∘ φ(g) = ψ(g) ∘ α for all g in G, that is, the corresponding diagram commutes. Equivariant maps for representations of an associative or Lie algebra are defined similarly. If α is invertible, then it is said to be an isomorphism, in which case V and W (or, more precisely, φ and ψ) are isomorphic representations, also phrased as equivalent representations. An equivariant map is often called an intertwining map of representations. Also, in the case of a group G, it is on occasion called a G-map. Isomorphic representations are, for practical purposes, "the same"; they provide the same information about the group or algebra being represented. Representation theory therefore seeks to classify representations up to isomorphism.
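A simple way to produce isomorphic representations is to conjugate by a fixed invertible matrix; the change-of-basis map itself is then an equivariant isomorphism. The following sketch illustrates this with an assumed example, a rotation representation of the cyclic group of order 4:

```python
import numpy as np

def phi(k):
    """Representation of Z/4 sending k to rotation through k*pi/2."""
    t = k * np.pi / 2
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

P = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # an arbitrary invertible change-of-basis matrix
P_inv = np.linalg.inv(P)

def psi(k):
    """An isomorphic representation: the conjugate of phi by P."""
    return P @ phi(k) @ P_inv

# alpha = P is equivariant: alpha . phi(g) = psi(g) . alpha for every g.
for k in range(4):
    assert np.allclose(P @ phi(k), psi(k) @ P)
```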
=== Subrepresentations, quotients, and irreducible representations === If (V, ψ) is a representation of (say) a group G, and W is a linear subspace of V that is preserved by the action of G, in the sense that g · w ∈ W for all w ∈ W and g ∈ G (Serre calls such W stable under G), then W is called a subrepresentation: by defining φ : G → Aut(W), where φ(g) is the restriction of ψ(g) to W, (W, φ) is a representation of G and the inclusion W ↪ V is an equivariant map. The quotient space V/W can also be made into a representation of G. If V has exactly two subrepresentations, namely the trivial subspace {0} and V itself, then the representation is said to be irreducible; if V has a proper nontrivial subrepresentation, the representation is said to be reducible. The definition of an irreducible representation implies Schur's lemma: an equivariant map α : (V, ψ) → (V′, ψ′) between irreducible representations is either the zero map or an isomorphism, since its kernel and image are subrepresentations. In particular, when V = V′, this shows that the equivariant endomorphisms of V form an associative division algebra over the underlying field F. If F is algebraically closed, the only equivariant endomorphisms of an irreducible representation are the scalar multiples of the identity. Irreducible representations are the building blocks of representation theory for many groups: if a representation V is not irreducible then it is built from a subrepresentation and a quotient that are both "simpler" in some sense; for instance, if V is finite-dimensional, then both the subrepresentation and the quotient have smaller dimension. There are counterexamples where a representation has a subrepresentation but only one nontrivial irreducible component. For example, the additive group (R, +) has the two-dimensional representation φ(a) = [[1, a], [0, 1]] (the upper triangular matrix with ones on the diagonal and a above it). The vector (1, 0)^T is fixed by this homomorphism, but the complementary subspace maps as (0, 1)^T ↦ (a, 1)^T, giving only one irreducible subrepresentation. This is true for all unipotent groups. === Direct sums and indecomposable representations === If (V, φ) and (W, ψ) are representations of (say) a group G, then the direct sum of V and W is a representation, in a canonical way, via the equation g · (v, w) = (g · v, g · w). The direct sum of two representations carries no more information about the group G than the two representations do individually. If a representation is the direct sum of two proper nontrivial subrepresentations, it is said to be decomposable. Otherwise, it is said to be indecomposable.
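The direct-sum construction just described amounts to forming block-diagonal matrices. A minimal sketch follows; the helper name direct_sum and the two constituent representations are assumptions of the example:

```python
import numpy as np

def direct_sum(A, B):
    """g acting on V (+) W: the block-diagonal matrix diag(A, B)."""
    n, m = A.shape[0], B.shape[0]
    C = np.zeros((n + m, n + m))
    C[:n, :n] = A
    C[n:, n:] = B
    return C

def rho(k):  # rotation representation of Z/4 on R^2
    t = k * np.pi / 2
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

triv = np.eye(1)  # the trivial 1-dimensional representation

# The direct sum is again a homomorphism: the blocks multiply independently.
for g1 in range(4):
    for g2 in range(4):
        lhs = direct_sum(rho(g1), triv) @ direct_sum(rho(g2), triv)
        rhs = direct_sum(rho((g1 + g2) % 4), triv)
        assert np.allclose(lhs, rhs)
```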
=== Complete reducibility === In favorable circumstances, every finite-dimensional representation is a direct sum of irreducible representations: such representations are said to be semisimple. In this case, it suffices to understand only the irreducible representations. Examples where this "complete reducibility" phenomenon occurs (at least over fields of characteristic zero) include finite groups (see Maschke's theorem), compact groups, and semisimple Lie algebras. In cases where complete reducibility does not hold, one must understand how indecomposable representations can be built from irreducible representations as extensions of a quotient by a subrepresentation. === Tensor products of representations === Suppose φ1 : G → GL(V1) and φ2 : G → GL(V2) are representations of a group G. Then we can form a representation φ1 ⊗ φ2 of G acting on the tensor product vector space V1 ⊗ V2 as follows: (φ1 ⊗ φ2)(g) = φ1(g) ⊗ φ2(g). If φ1 and φ2 are representations of a Lie algebra, then the correct formula to use is (φ1 ⊗ φ2)(X) = φ1(X) ⊗ I + I ⊗ φ2(X). This product can be recognized as the coproduct on a coalgebra. In general, the tensor product of irreducible representations is not irreducible; the process of decomposing a tensor product as a direct sum of irreducible representations is known as Clebsch–Gordan theory. In the case of the representation theory of the group SU(2) (or equivalently, of its complexified Lie algebra sl(2; C)), the decomposition is easy to work out. The irreducible representations are labeled by a parameter l that is a non-negative integer or half-integer; the representation then has dimension 2l + 1. Suppose we take the tensor product of two representations, with labels l1 and l2, where we assume l1 ≥ l2. Then the tensor product decomposes as a direct sum of one copy of each representation with label l, where l ranges from l1 − l2 to l1 + l2 in increments of 1. If, for example, l1 = l2 = 1, then the values of l that occur are 0, 1, and 2. Thus, the tensor product representation of dimension (2l1 + 1)(2l2 + 1) = 3 × 3 = 9 decomposes as a direct sum of a 1-dimensional representation (l = 0), a 3-dimensional representation (l = 1), and a 5-dimensional representation (l = 2).
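The dimension bookkeeping in this Clebsch–Gordan decomposition is easy to script; the sketch below (a hypothetical helper, with labels allowed to be integers or half-integers) lists the labels l and checks that the dimensions add up:

```python
def clebsch_gordan_labels(l1, l2):
    """Labels l in the SU(2) tensor product decomposition, assuming l1 >= l2;
    l runs from l1 - l2 to l1 + l2 in steps of 1."""
    lo, hi = l1 - l2, l1 + l2
    return [lo + k for k in range(int(hi - lo) + 1)]

labels = clebsch_gordan_labels(1, 1)
print(labels)                                  # [0, 1, 2]
dims = [int(2 * l + 1) for l in labels]
assert sum(dims) == (2 * 1 + 1) * (2 * 1 + 1)  # 1 + 3 + 5 == 9
```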
== Branches and topics == Representation theory is notable for the number of branches it has, and the diversity of the approaches to studying representations of groups and algebras. Although all the theories have in common the basic concepts discussed already, they differ considerably in detail. The differences are at least threefold: Representation theory depends upon the type of algebraic object being represented. There are several different classes of groups, associative algebras and Lie algebras, and their representation theories all have an individual flavour. Representation theory depends upon the nature of the vector space on which the algebraic object is represented. The most important distinction is between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (for example, whether or not the space is a Hilbert space, Banach space, etc.). Additional algebraic structures can also be imposed in the finite-dimensional case. Representation theory depends upon the type of field over which the vector space is defined. The most important cases are the field of complex numbers, the field of real numbers, finite fields, and fields of p-adic numbers. Additional difficulties arise for fields of positive characteristic and for fields that are not algebraically closed. === Finite groups === Group representations are a very important tool in the study of finite groups. They also arise in the applications of finite group theory to geometry and crystallography. Representations of finite groups exhibit many of the features of the general theory and point the way to other branches and topics in representation theory. Over a field of characteristic zero, the representations of a finite group G have a number of convenient properties. First, the representations of G are semisimple (completely reducible). This is a consequence of Maschke's theorem, which states that any subrepresentation V of a G-representation W has a G-invariant complement. One proof is to choose any projection π from W to V and replace it by its average πG defined by πG(x) = (1/|G|) Σ_{g ∈ G} g · π(g^{−1} · x). The map πG is equivariant, and its kernel is the required complement. The finite-dimensional G-representations can be understood using character theory: the character of a representation φ : G → GL(V) is the class function χφ : G → F defined by χφ(g) = Tr(φ(g)), where Tr is the trace. An irreducible representation of G is completely determined by its character. Maschke's theorem holds more generally for fields of positive characteristic p, such as the finite fields, as long as the prime p is coprime to the order of G. When p and |G| have a common factor, there are G-representations that are not semisimple, which are studied in a subbranch called modular representation theory. Averaging techniques also show that if F is the real or complex numbers, then any G-representation preserves an inner product ⟨·,·⟩ on V, in the sense that ⟨g · v, g · w⟩ = ⟨v, w⟩ for all g in G and v, w in V. Hence any G-representation is unitary. Unitary representations are automatically semisimple, since Maschke's result can be proven by taking the orthogonal complement of a subrepresentation.
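As a concrete instance of character theory, the permutation representation of S3 by 3 × 3 permutation matrices has character χ(g) = Tr(φ(g)), which simply counts the fixed points of g. A small sketch (the matrix convention is an assumption of the example) verifies this:

```python
import itertools
import numpy as np

def perm_matrix(p):
    """Permutation matrix of p, a tuple giving the images of 0..n-1."""
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1  # send basis vector e_i to e_{p(i)}
    return M

# chi(g) = Tr(phi(g)) equals the number of fixed points of the permutation.
for p in itertools.permutations(range(3)):
    chi = np.trace(perm_matrix(p))
    assert chi == sum(1 for i, j in enumerate(p) if i == j)
```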
When studying representations of groups that are not finite, the unitary representations provide a good generalization of the real and complex representations of a finite group. Results such as Maschke's theorem and the unitary property that rely on averaging can be generalized to more general groups by replacing the average with an integral, provided that a suitable notion of integral can be defined. This can be done for compact topological groups (including compact Lie groups), using Haar measure, and the resulting theory is known as abstract harmonic analysis. Over arbitrary fields, another class of finite groups that have a good representation theory are the finite groups of Lie type. Important examples are linear algebraic groups over finite fields. The representation theory of linear algebraic groups and Lie groups extends these examples to infinite-dimensional groups, the latter being intimately related to Lie algebra representations. The importance of character theory for finite groups has an analogue in the theory of weights for representations of Lie groups and Lie algebras. Representations of a finite group G are also linked directly to algebra representations via the group algebra F[G], which is a vector space over F with the elements of G as a basis, equipped with the multiplication operation defined by the group operation, linearity, and the requirement that the group operation and scalar multiplication commute. === Modular representations === Modular representations of a finite group G are representations over a field whose characteristic is not coprime to |G|, so that Maschke's theorem no longer holds (because |G| is not invertible in F and so one cannot divide by it). Nevertheless, Richard Brauer extended much of character theory to modular representations, and this theory played an important role in early progress towards the classification of finite simple groups, especially for simple groups whose characterization was not amenable to purely group-theoretic methods because their Sylow 2-subgroups were "too small". As well as having applications to group theory, modular representations arise naturally in other branches of mathematics, such as algebraic geometry, coding theory, combinatorics and number theory. === Unitary representations === A unitary representation of a group G is a linear representation φ of G on a real or (usually) complex Hilbert space V such that φ(g) is a unitary operator for every g ∈ G. Such representations have been widely applied in quantum mechanics since the 1920s, thanks in particular to the influence of Hermann Weyl, and this has inspired the development of the theory, most notably through the analysis of representations of the Poincaré group by Eugene Wigner. One of the pioneers in constructing a general theory of unitary representations (for any group G rather than just for particular groups useful in applications) was George Mackey, and an extensive theory was developed by Harish-Chandra and others in the 1950s and 1960s. A major goal is to describe the "unitary dual", the space of irreducible unitary representations of G. The theory is most well-developed in the case that G is a locally compact (Hausdorff) topological group and the representations are strongly continuous. For G abelian, the unitary dual is just the space of characters, while for G compact, the Peter–Weyl theorem shows that the irreducible unitary representations are finite-dimensional and the unitary dual is discrete. 
For example, if G is the circle group S1, then the characters are given by integers, and the unitary dual is Z. For non-compact G, the question of which representations are unitary is a subtle one. Although irreducible unitary representations must be "admissible" (as Harish-Chandra modules) and it is easy to detect which admissible representations have a nondegenerate invariant sesquilinear form, it is hard to determine when this form is positive definite. An effective description of the unitary dual, even for relatively well-behaved groups such as real reductive Lie groups (discussed below), remains an important open problem in representation theory. It has been solved for many particular groups, such as SL(2,R) and the Lorentz group. === Harmonic analysis === The duality between the circle group S1 and the integers Z, or more generally, between a torus Tn and Zn is well known in analysis as the theory of Fourier series, and the Fourier transform similarly expresses the fact that the space of characters on a real vector space is the dual vector space. Thus unitary representation theory and harmonic analysis are intimately related, and abstract harmonic analysis exploits this relationship, by developing the analysis of functions on locally compact topological groups and related spaces. A major goal is to provide a general form of the Fourier transform and the Plancherel theorem. This is done by constructing a measure on the unitary dual and an isomorphism between the regular representation of G on the space L2(G) of square integrable functions on G and its representation on the space of L2 functions on the unitary dual. Pontrjagin duality and the Peter–Weyl theorem achieve this for abelian and compact G respectively. Another approach involves considering all unitary representations, not just the irreducible ones. These form a category, and Tannaka–Krein duality provides a way to recover a compact group from its category of unitary representations. If the group is neither abelian nor compact, no general theory is known with an analogue of the Plancherel theorem or Fourier inversion, although Alexander Grothendieck extended Tannaka–Krein duality to a relationship between linear algebraic groups and tannakian categories. Harmonic analysis has also been extended from the analysis of functions on a group G to functions on homogeneous spaces for G. The theory is particularly well developed for symmetric spaces and provides a theory of automorphic forms (discussed below). === Lie groups === A Lie group is a group that is also a smooth manifold. Many classical groups of matrices over the real or complex numbers are Lie groups. Many of the groups important in physics and chemistry are Lie groups, and their representation theory is crucial to the application of group theory in those fields. The representation theory of Lie groups can be developed first by considering the compact groups, to which results of compact representation theory apply. This theory can be extended to finite-dimensional representations of semisimple Lie groups using Weyl's unitary trick: each semisimple real Lie group G has a complexification, which is a complex Lie group Gc, and this complex Lie group has a maximal compact subgroup K. The finite-dimensional representations of G closely correspond to those of K. A general Lie group is a semidirect product of a solvable Lie group and a semisimple Lie group (the Levi decomposition). 
The classification of representations of solvable Lie groups is intractable in general, but often easy in practical cases. Representations of semidirect products can then be analysed by means of general results called Mackey theory, which is a generalization of the methods used in Wigner's classification of representations of the Poincaré group. === Lie algebras === A Lie algebra over a field F is a vector space over F equipped with a skew-symmetric bilinear operation called the Lie bracket, which satisfies the Jacobi identity. Lie algebras arise in particular as tangent spaces to Lie groups at the identity element, leading to their interpretation as "infinitesimal symmetries". An important approach to the representation theory of Lie groups is to study the corresponding representation theory of Lie algebras, but representations of Lie algebras also have an intrinsic interest. Lie algebras, like Lie groups, have a Levi decomposition into semisimple and solvable parts, with the representation theory of solvable Lie algebras being intractable in general. In contrast, the finite-dimensional representations of semisimple Lie algebras are completely understood, after work of Élie Cartan. A representation of a semisimple Lie algebra 𝖌 is analysed by choosing a Cartan subalgebra, which is essentially a generic maximal subalgebra 𝖍 of 𝖌 on which the Lie bracket is zero ("abelian"). The representation of 𝖌 can be decomposed into weight spaces that are eigenspaces for the action of 𝖍 and the infinitesimal analogue of characters. The structure of semisimple Lie algebras then reduces the analysis of representations to easily understood combinatorics of the possible weights that can occur. ==== Infinite-dimensional Lie algebras ==== There are many classes of infinite-dimensional Lie algebras whose representations have been studied. Among these, an important class are the Kac–Moody algebras. They are named after Victor Kac and Robert Moody, who independently discovered them. These algebras form a generalization of finite-dimensional semisimple Lie algebras, and share many of their combinatorial properties. This means that they have a class of representations that can be understood in the same way as representations of semisimple Lie algebras. Affine Lie algebras are a special case of Kac–Moody algebras, which have particular importance in mathematics and theoretical physics, especially conformal field theory and the theory of exactly solvable models. Kac discovered an elegant proof of certain combinatorial identities, Macdonald identities, which is based on the representation theory of affine Kac–Moody algebras. ==== Lie superalgebras ==== Lie superalgebras are generalizations of Lie algebras in which the underlying vector space has a Z2-grading, and skew-symmetry and Jacobi identity properties of the Lie bracket are modified by signs. Their representation theory is similar to the representation theory of Lie algebras. === Linear algebraic groups === Linear algebraic groups (or more generally, affine group schemes) are analogues in algebraic geometry of Lie groups, but over more general fields than just R or C. In particular, over finite fields, they give rise to finite groups of Lie type. Although linear algebraic groups have a classification that is very similar to that of Lie groups, their representation theory is rather different (and much less well understood) and requires different techniques, since the Zariski topology is relatively weak, and techniques from analysis are no longer available. 
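Returning to the weight-space description of semisimple Lie algebra representations given above, the smallest case, sl(2), can be written out explicitly. The sketch below builds the (n + 1)-dimensional irreducible representation in a standard basis (the normalization used here is one common convention, an assumption of the example) and checks the bracket relations numerically:

```python
import numpy as np

def sl2_irrep(n):
    """Matrices E, F, H of the (n+1)-dimensional irreducible sl(2)-representation,
    in a basis v_0, ..., v_n of weight vectors with H v_k = (n - 2k) v_k."""
    H = np.diag([float(n - 2 * k) for k in range(n + 1)])
    E = np.zeros((n + 1, n + 1))
    F = np.zeros((n + 1, n + 1))
    for k in range(n):
        F[k + 1, k] = 1.0                  # F v_k = v_{k+1}
        E[k, k + 1] = (k + 1) * (n - k)    # E v_{k+1} = (k+1)(n-k) v_k
    return E, F, H

E, F, H = sl2_irrep(4)  # the 5-dimensional representation (label l = 2)
assert np.allclose(E @ F - F @ E, H)        # [E, F] = H
assert np.allclose(H @ E - E @ H, 2 * E)    # [H, E] = 2E
assert np.allclose(H @ F - F @ H, -2 * F)   # [H, F] = -2F
```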
=== Invariant theory === Invariant theory studies actions of groups on algebraic varieties from the point of view of their effect on functions, which form representations of the group. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. The modern approach analyses the decomposition of these representations into irreducibles. Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially the theories of quadratic forms and determinants. Another subject with strong mutual influence is projective geometry, where invariant theory can be used to organize the subject, and during the 1960s, new life was breathed into the subject by David Mumford in the form of his geometric invariant theory. The representation theory of semisimple Lie groups has its roots in invariant theory, and the strong links between representation theory and algebraic geometry have many parallels in differential geometry, beginning with Felix Klein's Erlangen program and Élie Cartan's connections, which place groups and symmetry at the heart of geometry. Modern developments link representation theory and invariant theory to areas as diverse as holonomy, differential operators and the theory of several complex variables. === Automorphic forms and number theory === Automorphic forms are a generalization of modular forms to more general analytic functions, perhaps of several complex variables, with similar transformation properties. The generalization involves replacing the modular group PSL2(R) and a chosen congruence subgroup by a semisimple Lie group G and a discrete subgroup Γ. Just as modular forms can be viewed as differential forms on a quotient of the upper half space H = PSL2(R)/SO(2), automorphic forms can be viewed as differential forms (or similar objects) on Γ\G/K, where K is (typically) a maximal compact subgroup of G. Some care is required, however, as the quotient typically has singularities. The quotient of a semisimple Lie group by a compact subgroup is a symmetric space, and so the theory of automorphic forms is intimately related to harmonic analysis on symmetric spaces. Before the development of the general theory, many important special cases were worked out in detail, including the Hilbert modular forms and Siegel modular forms. Important results in the theory include the Selberg trace formula and the realization by Robert Langlands that the Riemann–Roch theorem could be applied to calculate the dimension of the space of automorphic forms. The subsequent notion of "automorphic representation" has proved of great technical value for dealing with the case that G is an algebraic group, treated as an adelic algebraic group. As a result, an entire philosophy, the Langlands program, has developed around the relation between the representation-theoretic and number-theoretic properties of automorphic forms. === Associative algebras === In one sense, associative algebra representations generalize both representations of groups and of Lie algebras. A representation of a group induces a representation of a corresponding group ring or group algebra, while representations of a Lie algebra correspond bijectively to representations of its universal enveloping algebra. However, the representation theory of general associative algebras does not have all of the nice properties of the representation theory of groups and Lie algebras.
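Looking back at the invariant-theory discussion above, the classical way to manufacture an invariant polynomial for a finite group is to average over the group (the Reynolds operator). A small sympy sketch for the cyclic group of order 4 acting on the plane by quarter turns; the action and the starting polynomial are assumed toy choices:

```python
import sympy as sp

x, y = sp.symbols("x y")

def rotate(p):
    """The quarter-turn action on polynomials: (x, y) -> (-y, x)."""
    return p.subs({x: -y, y: x}, simultaneous=True)

def reynolds(p):
    """Average p over the four rotations, producing an invariant."""
    q, total = p, p
    for _ in range(3):
        q = rotate(q)
        total += q
    return sp.expand(total / 4)

inv = reynolds(x**4)
print(inv)                                 # x**4/2 + y**4/2
assert sp.expand(rotate(inv) - inv) == 0   # indeed invariant under the action
```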
==== Module theory ==== When considering representations of an associative algebra, one can forget the underlying field, and simply regard the associative algebra as a ring, and its representations as modules. This approach is surprisingly fruitful: many results in representation theory can be interpreted as special cases of results about modules over a ring. ==== Hopf algebras and quantum groups ==== Hopf algebras provide a way to improve the representation theory of associative algebras, while retaining the representation theory of groups and Lie algebras as special cases. In particular, the tensor product of two representations is a representation, as is the dual vector space. The Hopf algebras associated to groups have a commutative algebra structure, and so general Hopf algebras are known as quantum groups, although this term is often restricted to certain Hopf algebras arising as deformations of groups or their universal enveloping algebras. The representation theory of quantum groups has added surprising insights to the representation theory of Lie groups and Lie algebras, for instance through the crystal basis of Kashiwara. == History == == Generalizations == === Set-theoretic representations === A set-theoretic representation (also known as a group action or permutation representation) of a group G on a set X is given by a function ρ from G to X^X, the set of functions from X to X, such that for all g1, g2 in G and all x in X: ρ(1)[x] = x and ρ(g1 g2)[x] = ρ(g1)[ρ(g2)[x]]. This condition and the axioms for a group imply that ρ(g) is a bijection (or permutation) for all g in G. Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group S_X of X. === Representations in other categories === Every group G can be viewed as a category with a single object; morphisms in this category are just the elements of G. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor selects an object X in C and a group homomorphism from G to Aut(X), the automorphism group of X. In the case where C is Vect_F, the category of vector spaces over a field F, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of G in the category of sets. For another example, consider the category of topological spaces, Top. Representations in Top are homomorphisms from G to the homeomorphism group of a topological space X. Three types of representations closely related to linear representations are: projective representations, in the category of projective spaces, which can be described as "linear representations up to scalar transformations"; affine representations, in the category of affine spaces, for example, the Euclidean group acts affinely upon Euclidean space; and corepresentations of unitary and antiunitary groups, in the category of complex vector spaces with morphisms being linear or antilinear transformations. === Representations of categories === Since groups are categories, one can also consider representations of other categories. The simplest generalization is to monoids, which are categories with one object. Groups are monoids for which every morphism is invertible. General monoids have representations in any category. In the category of sets, these are monoid actions, but monoid representations on vector spaces and other objects can be studied.
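A minimal set-flavoured example: representing the monoid of natural numbers under addition by iterating a self-map of a finite set. The particular map below is an assumption of the sketch; since it is not a bijection, this is a genuine monoid action rather than a group action:

```python
# rho(n) applies a fixed self-map f of X n times, so rho(m + n) = rho(m) o rho(n).
X = range(6)
f = {v: (v * 2) % 6 for v in X}  # an arbitrary self-map of X (not a bijection)

def rho(n):
    def act(v):
        for _ in range(n):
            v = f[v]
        return v
    return act

for m in range(5):
    for n in range(5):
        assert all(rho(m + n)(v) == rho(m)(rho(n)(v)) for v in X)
```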
More generally, one can relax the assumption that the category being represented has only one object. In full generality, this is simply the theory of functors between categories, and little can be said. One special case has had a significant impact on representation theory, namely the representation theory of quivers. A quiver is simply a directed graph (with loops and multiple arrows allowed), but it can be made into a category (and also an algebra) by considering paths in the graph. Representations of such categories/algebras have illuminated several aspects of representation theory, for instance by allowing non-semisimple representation theory questions about a group to be reduced in some cases to semisimple representation theory questions about a quiver. == Asymptotic representation theory == For now, see the following references. Vershik, Anatoly, "Between 'very large' and 'infinite': the asymptotic representation theory", Probability and Mathematical Statistics, 33 (2): 467–476, retrieved 21 October 2022. Anatoly Vershik, Two lectures on the asymptotic representation theory and statistics of Young diagrams, in: Vershik, A. M.; Yakubovich, Y. (eds), Asymptotic Combinatorics with Applications to Mathematical Physics, Lecture Notes in Mathematics, vol. 1815, Springer, 2003. G. Olshanski, Asymptotic representation theory, lecture notes, 2009–2010. https://ncatlab.org/nlab/show/asymptotic+representation+theory == See also == == Notes == == References == Alperin, J. L. (1986), Local Representation Theory: Modular Representations as an Introduction to the Local Representation Theory of Finite Groups, Cambridge University Press, ISBN 978-0-521-44926-7. Bargmann, V. (1947), "Irreducible unitary representations of the Lorentz group", Annals of Mathematics, 48 (3): 568–640, doi:10.2307/1969129, JSTOR 1969129. Borel, Armand (2001), Essays in the History of Lie Groups and Algebraic Groups, American Mathematical Society, ISBN 978-0-8218-0288-5. Borel, Armand; Casselman, W. (1979), Automorphic Forms, Representations, and L-functions, American Mathematical Society, ISBN 978-0-8218-1435-2. Curtis, Charles W.; Reiner, Irving (1962), Representation Theory of Finite Groups and Associative Algebras, John Wiley & Sons (reedition 2006 by AMS Bookstore), ISBN 978-0-470-18975-7. Folland, Gerald B. (1995), A Course in Abstract Harmonic Analysis, CRC Press, ISBN 978-0-8493-8490-5. Fulton, William; Harris, Joe (1991), Representation Theory: A First Course, Graduate Texts in Mathematics, Readings in Mathematics, vol. 129, New York: Springer-Verlag, doi:10.1007/978-1-4612-0979-9, ISBN 978-0-387-97495-8, MR 1153249, OCLC 246650103. Gelbart, Stephen (1984), "An Elementary Introduction to the Langlands Program", Bulletin of the American Mathematical Society, 10 (2): 177–219, doi:10.1090/S0273-0979-1984-15237-6. Goodman, Roe; Wallach, Nolan R. (1998), Representations and Invariants of the Classical Groups, Cambridge University Press, ISBN 978-0-521-66348-9. Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666. Helgason, Sigurdur (1978), Differential Geometry, Lie Groups and Symmetric Spaces, Academic Press, ISBN 978-0-12-338460-7. Humphreys, James E.
(1972a), Introduction to Lie Algebras and Representation Theory, Birkhäuser, ISBN 978-0-387-90053-7. Humphreys, James E. (1972b), Linear Algebraic Groups, Graduate Texts in Mathematics, vol. 21, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90108-4, MR 0396773. James, Gordon; Liebeck, Martin (1993), Representations and Characters of Groups, Cambridge: Cambridge University Press, ISBN 978-0-521-44590-0. Jantzen, Jens Carsten (2003), Representations of Algebraic Groups, American Mathematical Society, ISBN 978-0-8218-3527-2. Kac, Victor G. (1977), "Lie superalgebras", Advances in Mathematics, 26 (1): 8–96, doi:10.1016/0001-8708(77)90017-2. Kac, Victor G. (1990), Infinite Dimensional Lie Algebras (3rd ed.), Cambridge University Press, ISBN 978-0-521-46693-6. Kim, Shoon Kyung (1999), Group Theoretical Methods and Applications to Molecules and Crystals, Cambridge University Press, ISBN 978-0-521-64062-6. Knapp, Anthony W. (2001), Representation Theory of Semisimple Groups: An Overview Based on Examples, Princeton University Press, ISBN 978-0-691-09089-4. Kostrikin, A. I.; Manin, Yuri I. (1997), Linear Algebra and Geometry, Taylor & Francis, ISBN 978-90-5699-049-7. Lam, T. Y. (1998), "Representations of finite groups: a hundred years", Notices of the AMS, 45 (3, 4): 361–372 (Part I), 465–474 (Part II). Lyubich, Yurii I. (1988), Introduction to the Theory of Banach Representations of Groups, Operator Theory: Advances and Applications, vol. 30, Basel: Birkhäuser, ISBN 978-3-7643-2207-6. Mumford, David; Fogarty, J.; Kirwan, F. (1994), Geometric Invariant Theory, Ergebnisse der Mathematik und ihrer Grenzgebiete (2) [Results in Mathematics and Related Areas (2)], vol. 34 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-56963-3, MR 0214602; MR 0719371 (2nd ed.); MR 1304906 (3rd ed.). Olver, Peter J. (1999), Classical Invariant Theory, Cambridge: Cambridge University Press, ISBN 978-0-521-55821-1. Peter, F.; Weyl, Hermann (1927), "Die Vollständigkeit der primitiven Darstellungen einer geschlossenen kontinuierlichen Gruppe", Mathematische Annalen, 97 (1): 737–755, doi:10.1007/BF01447892, S2CID 120013521. Pontrjagin, Lev S. (1934), "The theory of topological commutative groups", Annals of Mathematics, 35 (2): 361–388, doi:10.2307/1968438, JSTOR 1968438. Sally, Paul; Vogan, David A. (1989), Representation Theory and Harmonic Analysis on Semisimple Lie Groups, American Mathematical Society, ISBN 978-0-8218-1526-7. Serre, Jean-Pierre (1977), Linear Representations of Finite Groups, Springer-Verlag, ISBN 978-0387901909. Sharpe, Richard W. (1997), Differential Geometry: Cartan's Generalization of Klein's Erlangen Program, Springer, ISBN 978-0-387-94732-7. Simson, Daniel; Skowronski, Andrzej; Assem, Ibrahim (2007), Elements of the Representation Theory of Associative Algebras, Cambridge University Press, ISBN 978-0-521-88218-7. Sternberg, Shlomo (1994), Group Theory and Physics, Cambridge University Press, ISBN 978-0-521-55885-3. Tung, Wu-Ki (1985), Group Theory in Physics (1st ed.), New Jersey, London, Singapore, Hong Kong: World Scientific, ISBN 978-9971966577. Weyl, Hermann (1928), Gruppentheorie und Quantenmechanik (The Theory of Groups and Quantum Mechanics, translated H. P. Robertson, 1931 ed.), S. Hirzel, Leipzig (reprinted 1950, Dover), ISBN 978-0-486-60269-1.
Weyl, Hermann (1946), The Classical Groups: Their Invariants and Representations (2nd ed.), Princeton University Press (reprinted 1997), ISBN 978-0-691-05756-9. Wigner, Eugene P. (1939), "On unitary representations of the inhomogeneous Lorentz group", Annals of Mathematics, 40 (1): 149–204, Bibcode:1939AnMat..40..149W, doi:10.2307/1968551, JSTOR 1968551, S2CID 121773411. == External links == "Representation theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. Alexander Kirillov Jr., An Introduction to Lie Groups and Lie Algebras (2008); textbook, preliminary version downloadable as a pdf from the author's home page. Kevin Hartnett (2020), article on representation theory in Quanta Magazine.
Wikipedia/Representation_theory
Algebraic topology is a branch of mathematics that uses tools from abstract algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. == Main branches == Below are some of the main areas studied in algebraic topology: === Homotopy groups === In mathematics, homotopy groups are used in algebraic topology to classify topological spaces. The first and simplest homotopy group is the fundamental group, which records information about loops in a space. Intuitively, homotopy groups record information about the basic shape, or holes, of a topological space. === Homology === In algebraic topology and abstract algebra, homology (in part from Greek ὁμός homos "identical") is a certain general procedure to associate a sequence of abelian groups or modules with a given mathematical object such as a topological space or a group. === Cohomology === In homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups defined from a cochain complex. That is, cohomology is defined as the abstract study of cochains, cocycles, and coboundaries. Cohomology can be viewed as a method of assigning algebraic invariants to a topological space that has a more refined algebraic structure than does homology. Cohomology arises from the algebraic dualization of the construction of homology. In less abstract language, cochains in the fundamental sense should assign "quantities" to the chains of homology theory. === Manifolds === A manifold is a topological space that near each point resembles Euclidean space. Examples include the plane, the sphere, and the torus, which can all be realized in three dimensions, but also the Klein bottle and real projective plane, which cannot be embedded in three dimensions but can be embedded in four dimensions. Typically, results in algebraic topology focus on global, non-differentiable aspects of manifolds; for example Poincaré duality. === Knot theory === Knot theory is the study of mathematical knots. While inspired by knots that appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined so that it cannot be undone. In precise mathematical language, a knot is an embedding of a circle in three-dimensional Euclidean space, R^3. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R^3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. === Complexes === A simplicial complex is a topological space of a certain kind, constructed by "gluing together" points, line segments, triangles, and their n-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. A CW complex is a type of topological space introduced by J. H. C.
Whitehead to meet the needs of homotopy theory. This class of spaces is broader and has some better categorical properties than simplicial complexes, but still retains a combinatorial nature that allows for computation (often with a much smaller complex). == Method of algebraic invariants == An older name for the subject was combinatorial topology, implying an emphasis on how a space X was constructed from simpler ones (the modern standard tool for such construction is the CW complex). In the 1920s and 1930s, there was growing emphasis on investigating topological spaces by finding correspondences from them to algebraic groups, which led to the change of name to algebraic topology. The combinatorial topology name is still sometimes used to emphasize an algorithmic approach based on decomposition of spaces. In the algebraic approach, one finds a correspondence between spaces and groups that respects the relation of homeomorphism (or more general homotopy) of spaces. This allows one to recast statements about topological spaces into statements about groups, which have a great deal of manageable structure, often making these statements easier to prove. Two major ways in which this can be done are through fundamental groups, or more generally homotopy theory, and through homology and cohomology groups. The fundamental groups give us basic information about the structure of a topological space, but they are often nonabelian and can be difficult to work with. The fundamental group of a (finite) simplicial complex does have a finite presentation. Homology and cohomology groups, on the other hand, are abelian and in many important cases finitely generated. Finitely generated abelian groups are completely classified and are particularly easy to work with. == Setting in category theory == In general, all constructions of algebraic topology are functorial; the notions of category, functor and natural transformation originated here. Fundamental groups and homology and cohomology groups are not only invariants of the underlying topological space, in the sense that two topological spaces which are homeomorphic have the same associated groups, but their associated morphisms also correspond—a continuous mapping of spaces induces a group homomorphism on the associated groups, and these homomorphisms can be used to show non-existence (or, much more deeply, existence) of mappings. One of the first mathematicians to work with different types of cohomology was Georges de Rham. One can use the differential structure of smooth manifolds via de Rham cohomology, or Čech or sheaf cohomology to investigate the solvability of differential equations defined on the manifold in question. De Rham showed that all of these approaches were interrelated and that, for a closed, oriented manifold, the Betti numbers derived through simplicial homology were the same Betti numbers as those derived through de Rham cohomology. This was extended in the 1950s, when Samuel Eilenberg and Norman Steenrod generalized this approach. They defined homology and cohomology as functors equipped with natural transformations subject to certain axioms (e.g., a weak equivalence of spaces passes to an isomorphism of homology groups), verified that all existing (co)homology theories satisfied these axioms, and then proved that such an axiomatization uniquely characterized the theory. 
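Because homology groups are finitely generated abelian, they lend themselves to direct computation, anticipating the Betti-number application listed below. As a minimal sketch (the chain-complex conventions are standard, but the example itself is an assumption), the Betti numbers of a hollow triangle (a simplicial circle) can be read off from the rank of its boundary matrix:

```python
import numpy as np

# Simplicial circle: vertices {0, 1, 2}, edges {01, 12, 02}, no triangles.
# The boundary map d1 sends an edge [i, j] to j - i
# (columns index edges, rows index vertices).
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]], dtype=float)

rank_d1 = np.linalg.matrix_rank(d1)
n_vertices, n_edges = d1.shape

b0 = n_vertices - rank_d1       # dim ker d0 - rank d1, with d0 = 0
b1 = (n_edges - rank_d1) - 0    # dim ker d1 - rank d2, with d2 = 0
print(b0, b1)                   # 1 1: one connected component, one 1-dim hole
```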
== Applications == Classic applications of algebraic topology include: The Brouwer fixed point theorem: every continuous map from the unit n-disk to itself has a fixed point. The free rank of the nth homology group of a simplicial complex is the nth Betti number, which allows one to calculate the Euler–Poincaré characteristic. One can use the differential structure of smooth manifolds via de Rham cohomology, or Čech or sheaf cohomology, to investigate the solvability of differential equations defined on the manifold in question. A closed, connected manifold is orientable when its top-dimensional integral homology group is isomorphic to the integers, and is non-orientable when that group is 0. The n-sphere admits a nowhere-vanishing continuous unit vector field if and only if n is odd. (For n = 2, this is sometimes called the "hairy ball theorem".) The Borsuk–Ulam theorem: any continuous map from the n-sphere to Euclidean n-space sends at least one pair of antipodal points to the same point. Any subgroup of a free group is free. This result is quite interesting, because the statement is purely algebraic yet the simplest known proof is topological. Namely, any free group G may be realized as the fundamental group of a graph X. The main theorem on covering spaces tells us that every subgroup H of G is the fundamental group of some covering space Y of X; but every such Y is again a graph. Therefore, its fundamental group H is free. On the other hand, this type of application is also handled more simply by the use of covering morphisms of groupoids, and that technique has yielded subgroup theorems not yet proved by methods of algebraic topology; see Higgins (1971). Topological combinatorics. == Notable people == == Important theorems == == See also == == Notes == == References == Allegretti, Dylan G. L. (2008), Simplicial Sets and van Kampen's Theorem (Discusses generalized versions of van Kampen's theorem applied to topological spaces and simplicial sets). Bredon, Glen E. (1993), Topology and Geometry, Graduate Texts in Mathematics, vol. 139, Springer, ISBN 0-387-97926-3. Brown, R. (2007), Higher dimensional group theory, archived from the original on 2016-05-14, retrieved 2022-08-17 (Gives a broad view of higher-dimensional van Kampen theorems involving multiple groupoids). Brown, R.; Razak, A. (1984), "A van Kampen theorem for unions of non-connected spaces", Arch. Math., 42: 85–88, doi:10.1007/BF01198133, S2CID 122228464. Gives a general theorem on the fundamental groupoid with a set of base points of a space which is the union of open sets. Brown, R.; Hardie, K.; Kamps, H.; Porter, T. (2002), "The homotopy double groupoid of a Hausdorff space", Theory Appl. Categories, 10 (2): 71–93. Brown, R.; Higgins, P. J. (1978), "On the connection between the second relative homotopy groups of some related spaces", Proc. London Math. Soc., S3-36 (2): 193–212, doi:10.1112/plms/s3-36.2.193. The first 2-dimensional version of van Kampen's theorem. Brown, Ronald; Higgins, Philip J.; Sivera, Rafael (2011), Nonabelian Algebraic Topology: Filtered Spaces, Crossed Complexes, Cubical Homotopy Groupoids, European Mathematical Society Tracts in Mathematics, vol. 15, European Mathematical Society, arXiv:math/0407275, ISBN 978-3-03719-083-8, archived from the original on 2009-06-04. This provides a homotopy-theoretic approach to basic algebraic topology without needing a background in singular homology or the method of simplicial approximation; it contains a lot of material on crossed modules. Fraleigh, John B.
(1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1. Greenberg, Marvin J.; Harper, John R. (1981), Algebraic Topology: A First Course, Revised edition, Mathematics Lecture Note Series, Westview/Perseus, ISBN 9780805335576. A functorial, algebraic approach originally by Greenberg, with geometric flavoring added by Harper. Hatcher, Allen (2002), Algebraic Topology, Cambridge: Cambridge University Press, ISBN 0-521-79540-0. A modern, geometrically flavoured introduction to algebraic topology. Higgins, Philip J. (1971), Notes on categories and groupoids, Van Nostrand Reinhold, ISBN 9780442034061. Maunder, C. R. F. (1970), Algebraic Topology, London: Van Nostrand Reinhold, ISBN 0-486-69131-4. tom Dieck, Tammo (2008), Algebraic Topology, EMS Textbooks in Mathematics, European Mathematical Society, ISBN 978-3-03719-048-7. van Kampen, Egbert (1933), "On the connection between the fundamental groups of some related spaces", American Journal of Mathematics, 55 (1): 261–7, JSTOR 51000091. == Further reading == Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. ISBN 0-521-79160-X and ISBN 0-521-79540-0. "Algebraic topology", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. May, J. P. (1999). A Concise Course in Algebraic Topology (PDF). University of Chicago Press. Archived (PDF) from the original on 2022-10-09. Retrieved 2008-09-27. Section 2.7 provides a category-theoretic presentation of the theorem as a colimit in the category of groupoids.
Wikipedia/Algebraic_topology
In mathematics, a basic algebraic operation is any one of the common operations of elementary algebra, which include addition, subtraction, multiplication, division, raising to a whole number power, and taking roots (fractional power). These operations may be performed on numbers, in which case they are often called arithmetic operations. They may also be performed, in a similar way, on variables, algebraic expressions, and more generally, on elements of algebraic structures, such as groups and fields. An algebraic operation may also be defined more generally as a function from a Cartesian power of a given set to the same set. The term algebraic operation may also be used for operations that may be defined by compounding basic algebraic operations, such as the dot product. In calculus and mathematical analysis, algebraic operation is also used for the operations that may be defined by purely algebraic methods. For example, exponentiation with an integer or rational exponent is an algebraic operation, but not the general exponentiation with a real or complex exponent. Also, the derivative is an operation on numerical functions and algebraic expressions that is not algebraic. == Notation == Multiplication symbols are usually omitted and implied when there is no operator between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y is written as 2xy. Sometimes, multiplication symbols are replaced with either a dot or a center dot, so that x × y is written as either x . y or x · y. Plain text, programming languages, and calculators use a single asterisk to represent the multiplication symbol, and there it must be written explicitly; for example, 3x is written as 3 * x. Rather than using the ambiguous division sign (÷), division is usually represented with a vinculum, a horizontal line, as in the fraction 3/(x + 1), where the bar groups the denominator. In plain text and programming languages, a slash (also called a solidus) is used, e.g. 3 / (x + 1). Exponents are usually formatted using superscripts, as in x². In plain text, the TeX mark-up language, and some programming languages such as MATLAB and Julia, the caret symbol, ^, represents exponents, so x² is written as x ^ 2. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as x ** 2. These conventions are illustrated in the short sketch following this article. The plus–minus sign, ±, is used as shorthand for two expressions written as one, representing one expression with a plus sign and the other with a minus sign. For example, y = x ± 1 represents the two equations y = x + 1 and y = x − 1. Sometimes, it is used for denoting a positive-or-negative term such as ±x. == Arithmetic vs algebraic operations == Algebraic operations work in the same way as arithmetic operations, as can be seen in the table below. Note: the use of the letters a and b is arbitrary, and the examples would have been equally valid if x and y were used. == Properties of arithmetic and algebraic operations == == See also == Algebraic expression Algebraic function Elementary algebra Factoring a quadratic expression Order of operations == Notes == == References ==
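The following short Python sketch (an addition for illustration; the use of the sympy library is an assumption) shows the notation conventions described above: multiplication written with an explicit asterisk, exponentiation with a double asterisk, and parentheses doing the grouping work of the vinculum:

import sympy as sp

x, y = sp.symbols('x y')

# Implicit multiplication on paper becomes explicit in code:
expr = 3*x**2 + 2*x*y     # the expression written 3x² + 2xy on paper

# A vinculum groups the denominator; in code the grouping needs parentheses:
quotient = 3 / (x + 1)

print(expr)      # 3*x**2 + 2*x*y
print(quotient)  # 3/(x + 1)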
Wikipedia/Algebraic_operations
Systems science, also referred to as systems research or simply systems, is a transdisciplinary field that is concerned with understanding simple and complex systems in nature and society, which in turn informs advances in formal, natural, social, and applied knowledge throughout engineering, technology, and science itself. To systems scientists, the world can be understood as a system of systems. The field aims to develop transdisciplinary foundations that are applicable in a variety of areas, such as psychology, biology, medicine, communication, business, technology, computer science, engineering, and social sciences. Themes commonly stressed in systems science are (a) a holistic view, (b) the interaction between a system and its embedding environment, and (c) complex (often subtle) trajectories of dynamic behavior that sometimes are stable (and thus reinforcing), while at various 'boundary conditions' can become wildly unstable (and thus destructive). Concerns about Earth-scale biosphere/geosphere dynamics are an example of the nature of problems to which systems science seeks to contribute meaningful insights. == Associated fields == The systems sciences are a broad array of fields. One way of conceiving of these is in three groups: fields that have developed systems ideas primarily through theory; those that have done so primarily through practical engagements with problem situations; and those that have applied systems ideas within other disciplines. === Theoretical fields === ==== Chaos and dynamical systems ==== ==== Complexity ==== ==== Control theory ==== Affect control theory Control engineering Control systems ==== Cybernetics ==== Autopoiesis Conversation Theory Engineering Cybernetics Perceptual Control Theory Management Cybernetics Second-Order Cybernetics Cyber-Physical Systems Artificial Intelligence Synthetic Intelligence ==== Information theory ==== ==== General systems theory ==== Systems theory in anthropology Biochemical systems theory Ecological systems theory Developmental systems theory General systems theory Living systems theory LTI system theory Social systems Sociotechnical systems theory Mathematical system theory World-systems theory ==== Hierarchy Theory ==== === Practical fields === ==== Critical systems thinking ==== ==== Operations research and management science ==== ==== Soft systems methodology ==== The soft systems methodology was developed in England by academics at the University of Lancaster Systems Department through a ten-year action research programme. The main contributor is Peter Checkland (born 18 December 1930, in Birmingham, UK), a British management scientist and emeritus professor of systems at Lancaster University. ==== Systems analysis ==== Systems analysis is a branch of systems science that analyzes systems, the interactions within those systems, or their interaction with the environment, often prior to their automation as computer models. Systems analysis is closely associated with the RAND Corporation. ==== Systemic design ==== Systemic design integrates methodologies from systems thinking with advanced design practices to address complex, multi-stakeholder situations. ==== Systems dynamics ==== System dynamics is an approach to understanding the behavior of complex systems over time. It offers a "simulation technique for modeling business and social systems", which deals with internal feedback loops and time delays that affect the behavior of the entire system.
What makes system dynamics different from other approaches to studying complex systems is its use of feedback loops and of stocks and flows (a minimal numerical sketch of this view appears at the end of this article). ==== Systems engineering ==== Systems engineering (SE) is an interdisciplinary field of engineering that focuses on the development and organization of complex systems. It is the "art and science of creating whole solutions to complex problems", for example: signal processing systems, control systems and communication systems, or other forms of high-level modelling and design in specific fields of engineering. Systems science is also foundational to embedded software development, which is grounded in the requirements identified by systems engineering. Aerospace systems Biological systems engineering Earth systems engineering and management Electronic systems Enterprise systems engineering Software systems Systems analysis === Applications in other disciplines === ==== Earth system science ==== Climate systems Systems geology ==== Systems biology ==== Computational systems biology Synthetic biology Systems immunology Systems neuroscience ==== Systems chemistry ==== ==== Systems ecology ==== Ecosystem ecology Agroecology ==== Systems psychology ==== Ergonomics Family systems theory Systemic therapy == See also == == References == == Further reading == B. A. Bayraktar (1979). Education in Systems Science. p. 369. Kenneth D. Bailey, "Fifty Years of Systems Science: Further Reflections", Systems Research and Behavioral Science, 22, 2005, pp. 355–361. doi:10.1002/sres.711 Robert L. Flood, Ewart R. Carson, Dealing with Complexity: An Introduction to the Theory and Application of Systems Science (2nd Edition), 1993. George J. Klir, Facets of Systems Science (2nd Edition), Kluwer Academic/Plenum Publishers, 2001. Ervin László, Systems Science and World Order: Selected Studies, 1983. G. E. Mobus & M. C. Kalton, Principles of Systems Science, 2015, New York: Springer. Anatol Rapoport (ed.), General Systems: Yearbook of the Society for the Advancement of General Systems Theory, Society for General Systems Research, Vol. 1, 1956. Li D. Xu, "The contributions of Systems Science to Information Systems Research", Systems Research and Behavioral Science, 17, 2000, pp. 105–116. Graeme Donald Snooks, "A general theory of complex living systems: Exploring the demand side of dynamics", Complexity, vol. 13, no. 6, July/August 2008. John N. Warfield, "A proposal for Systems Science", Systems Research and Behavioral Science, 20, 2003, pp. 507–520. doi:10.1002/sres.528 Michael C. Jackson, Critical Systems Thinking and the Management of Complexity, 2019, Wiley. == External links == Principia Cybernetica Web International Federation for Systems Research Institute of System Science Knowledge (ISSK.org) International Society for the System Sciences American Society for Cybernetics UK Systems Society Cybernetics Society
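As a minimal numerical sketch of the stock-and-flow view of system dynamics mentioned above (an illustration added to this text; the model, rates, and names are assumptions, not drawn from the article), consider a single stock adjusted each time step by two feedback flows, in Python:

# One stock with a reinforcing inflow loop (births) and a balancing
# outflow loop (deaths). All parameter values are illustrative.
def simulate(stock=100.0, birth_rate=0.03, death_rate=0.01,
             dt=1.0, steps=50):
    history = [stock]
    for _ in range(steps):
        inflow = birth_rate * stock    # reinforcing feedback loop
        outflow = death_rate * stock   # balancing feedback loop
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

print(round(simulate()[-1], 2))  # net 2% growth per step, compounded

Even this toy model shows the characteristic system-dynamics behavior: the trajectory is governed by the loop structure, not by any single flow in isolation.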
Wikipedia/Systems_science
In mathematics, differential topology is the field dealing with the topological properties and smooth properties of smooth manifolds. In this sense differential topology is distinct from the closely related field of differential geometry, which concerns the geometric properties of smooth manifolds, including notions of size, distance, and rigid shape. By comparison, differential topology is concerned with coarser properties, such as the number of holes in a manifold, its homotopy type, or the structure of its diffeomorphism group. Because many of these coarser properties may be captured algebraically, differential topology has strong links to algebraic topology. The central goal of the field of differential topology is the classification of all smooth manifolds up to diffeomorphism. Since dimension is an invariant of smooth manifolds up to diffeomorphism type, this classification is often studied by classifying the (connected) manifolds in each dimension separately: In dimension 1, the only smooth manifolds up to diffeomorphism are the circle, the real number line, and, if a boundary is allowed, the half-open interval $[0,1)$ and the closed interval $[0,1]$. In dimension 2, every closed surface is classified up to diffeomorphism by its genus, the number of holes (or equivalently its Euler characteristic), and whether or not it is orientable. This is the famous classification of closed surfaces. Already in dimension two the classification of non-compact surfaces becomes difficult, due to the existence of exotic spaces such as Jacob's ladder. In dimension 3, William Thurston's geometrization conjecture, proven by Grigori Perelman, gives a partial classification of compact three-manifolds. Included in this theorem is the Poincaré conjecture, which states that any closed, simply connected three-manifold is homeomorphic (and in fact diffeomorphic) to the 3-sphere. Beginning in dimension 4, the classification becomes much more difficult, for two reasons. Firstly, every finitely presented group appears as the fundamental group of some 4-manifold, and since the fundamental group is a diffeomorphism invariant, this makes the classification of 4-manifolds at least as difficult as the classification of finitely presented groups. By the word problem for groups, which is equivalent to the halting problem, it is impossible to classify such groups algorithmically, so a full topological classification is impossible. Secondly, beginning in dimension four it is possible to have smooth manifolds that are homeomorphic but carry distinct, non-diffeomorphic smooth structures. This is true even for the Euclidean space $\mathbb{R}^{4}$, which admits many exotic smooth structures. This means that the study of differential topology in dimensions 4 and higher must use tools genuinely outside the realm of the regular continuous topology of topological manifolds. One of the central open problems in differential topology is the four-dimensional smooth Poincaré conjecture, which asks whether every smooth 4-manifold that is homeomorphic to the 4-sphere is also diffeomorphic to it. That is, does the 4-sphere admit only one smooth structure? The conjecture holds in dimensions 1, 2, and 3, by the above classification results, but the analogous statement is known to be false in dimension 7, due to the Milnor spheres.
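As a small sketch of the classification of closed surfaces just cited (an illustration added to this text; the function name is an assumption), the Euler characteristic is determined by genus and orientability, and the pair (Euler characteristic, orientability) in turn determines a closed surface up to diffeomorphism, in Python:

def euler_characteristic(genus, orientable=True):
    # Orientable surface of genus g (sphere with g handles): chi = 2 - 2g.
    # Non-orientable surface of genus k (sphere with k cross-caps): chi = 2 - k.
    return 2 - 2 * genus if orientable else 2 - genus

print(euler_characteristic(0))                    # 2: the sphere
print(euler_characteristic(1))                    # 0: the torus
print(euler_characteristic(2, orientable=False))  # 0: the Klein bottle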
Important tools in studying the differential topology of smooth manifolds include the construction of smooth topological invariants of such manifolds, such as de Rham cohomology or the intersection form, as well as smoothable topological constructions, such as smooth surgery theory or the construction of cobordisms. Morse theory is an important tool which studies smooth manifolds by considering the critical points of differentiable functions on the manifold, demonstrating how the smooth structure of the manifold enters into the set of tools available. Oftentimes more geometric or analytical techniques may be used, by equipping a smooth manifold with a Riemannian metric or by studying a differential equation on it. Care must be taken to ensure that the resulting information is insensitive to this choice of extra structure, and so genuinely reflects only the topological properties of the underlying smooth manifold. For example, the Hodge theorem provides a geometric and analytical interpretation of the de Rham cohomology, and gauge theory was used by Simon Donaldson to prove facts about the intersection form of simply connected 4-manifolds. In some cases techniques from contemporary physics may appear, such as topological quantum field theory, which can be used to compute topological invariants of smooth spaces. Famous theorems in differential topology include the Whitney embedding theorem, the hairy ball theorem, the Hopf theorem, the Poincaré–Hopf theorem, Donaldson's theorem, and the Poincaré conjecture. == Description == Differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are 'softer' than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold—that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume. On the other hand, smooth manifolds are more rigid than the topological manifolds. John Milnor discovered that some spheres have more than one smooth structure—see Exotic sphere and Donaldson's theorem. Michel Kervaire exhibited topological manifolds with no smooth structure at all. Some constructions of smooth manifold theory, such as the existence of tangent bundles, can be done in the topological setting with much more work, and others cannot. One of the main topics in differential topology is the study of special kinds of smooth mappings between manifolds, namely immersions and submersions, and the intersections of submanifolds via transversality. More generally one is interested in properties and invariants of smooth manifolds that are carried over by diffeomorphisms, another special kind of smooth mapping. Morse theory is another branch of differential topology, in which topological information about a manifold is deduced from changes in the rank of the Jacobian of a function. For a list of differential topology topics, see the following reference: List of differential geometry topics. == Differential topology versus differential geometry == Differential topology and differential geometry are first characterized by their similarity. They both study primarily the properties of differentiable manifolds, sometimes with a variety of structures imposed on them. 
One major difference lies in the nature of the problems that each subject tries to address. In one view, differential topology distinguishes itself from differential geometry by studying primarily those problems that are inherently global. Consider the example of a coffee cup and a donut. From the point of view of differential topology, the donut and the coffee cup are the same (in a sense). This is an inherently global view, though, because there is no way for the differential topologist to tell whether the two objects are the same (in this sense) by looking at just a tiny (local) piece of either of them. They must have access to each entire (global) object. From the point of view of differential geometry, the coffee cup and the donut are different because it is impossible to rotate the coffee cup in such a way that its configuration matches that of the donut. This is also a global way of thinking about the problem. But an important distinction is that the geometer does not need the entire object to decide this. By looking, for instance, at just a tiny piece of the handle, they can decide that the coffee cup is different from the donut because the handle is thinner (or more curved) than any piece of the donut. To put it succinctly, differential topology studies structures on manifolds that, in a sense, have no interesting local structure. Differential geometry studies structures on manifolds that do have an interesting local (or sometimes even infinitesimal) structure. More mathematically, for example, the problem of constructing a diffeomorphism between two manifolds of the same dimension is inherently global since locally two such manifolds are always diffeomorphic. Likewise, the problem of computing a quantity on a manifold that is invariant under differentiable mappings is inherently global, since any local invariant will be trivial in the sense that it is already exhibited in the topology of R n {\displaystyle \mathbb {R} ^{n}} . Moreover, differential topology does not restrict itself necessarily to the study of diffeomorphism. For example, symplectic topology—a subbranch of differential topology—studies global properties of symplectic manifolds. Differential geometry concerns itself with problems—which may be local or global—that always have some non-trivial local properties. Thus differential geometry may study differentiable manifolds equipped with a connection, a metric (which may be Riemannian, pseudo-Riemannian, or Finsler), a special sort of distribution (such as a CR structure), and so on. This distinction between differential geometry and differential topology is blurred, however, in questions specifically pertaining to local diffeomorphism invariants such as the tangent space at a point. Differential topology also deals with questions like these, which specifically pertain to the properties of differentiable mappings on R n {\displaystyle \mathbb {R} ^{n}} (for example the tangent bundle, jet bundles, the Whitney extension theorem, and so forth). The distinction is concise in abstract terms: Differential topology is the study of the (infinitesimal, local, and global) properties of structures on manifolds that have only trivial local moduli. Differential geometry is such a study of structures on manifolds that have one or more non-trivial local moduli. 
== See also == List of differential geometry topics Glossary of differential geometry and topology Important publications in differential geometry Important publications in differential topology Basic introduction to the mathematics of curved spacetime == Notes == == References == Bloch, Ethan D. (1996). A First Course in Geometric Topology and Differential Geometry. Boston: Birkhäuser. ISBN 978-0-8176-3840-5. Hirsch, Morris (1997). Differential Topology. Springer-Verlag. ISBN 978-0-387-90148-0. Lashof, Richard (Dec 1972). "The Tangent Bundle of a Topological Manifold". American Mathematical Monthly. 79 (10): 1090–1096. doi:10.2307/2317423. JSTOR 2317423. Kervaire, Michel A. (Dec 1960). "A manifold which does not admit any differentiable structure". Commentarii Mathematici Helvetici. 34 (1): 257–270. doi:10.1007/BF02565940. == External links == "Differential topology", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Differential_topology
In linear algebra, an eigenvector (EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector $\mathbf{v}$ of a linear transformation $T$ is scaled by a constant factor $\lambda$ when the linear transformation is applied to it: $T\mathbf{v} = \lambda\mathbf{v}$. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor $\lambda$ (possibly negative). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system. == Matrices == For an $n \times n$ matrix A and a nonzero vector $\mathbf{v}$ of length $n$, if multiplying A by $\mathbf{v}$ (denoted $A\mathbf{v}$) simply scales $\mathbf{v}$ by a factor λ, where λ is a scalar, then $\mathbf{v}$ is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as $A\mathbf{v} = \lambda\mathbf{v}$. Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations or the language of matrices. == Overview == Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation $T(\mathbf{v}) = \lambda\mathbf{v}$, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar (a short numerical check of this equation appears in the sketch below).
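As a minimal numerical check of the eigenvalue equation (an illustration added to this text; NumPy is an assumption, and the sample matrix is the one used in the worked examples later in the article):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of `vecs` are eigenvectors; `vals` are the eigenvalues.
vals, vecs = np.linalg.eig(A)

for k in range(len(vals)):
    # A v and lambda v should agree up to floating-point error.
    assert np.allclose(A @ vecs[:, k], vals[k] * vecs[:, k])

print(vals)  # the eigenvalues 3 and 1, in some order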
For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. A standard example, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like $\tfrac{d}{dx}$, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as $\tfrac{d}{dx}e^{\lambda x} = \lambda e^{\lambda x}$. Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication $A\mathbf{v} = \lambda\mathbf{v}$, where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue. If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis. == History == Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability. In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later. At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today. The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961. == Eigenvalues and eigenvectors of matrices == Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors $\mathbf{x} = \begin{bmatrix} 1 \\ -3 \\ 4 \end{bmatrix}$ and $\mathbf{y} = \begin{bmatrix} -20 \\ 60 \\ -80 \end{bmatrix}$. These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that $\mathbf{x} = \lambda\mathbf{y}$. In this case, $\lambda = -\tfrac{1}{20}$. Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A, $A\mathbf{v} = \mathbf{w}$, or $\begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix} \begin{bmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{n} \end{bmatrix} = \begin{bmatrix} w_{1} \\ w_{2} \\ \vdots \\ w_{n} \end{bmatrix}$ where, for each row, $w_{i} = A_{i1}v_{1} + A_{i2}v_{2} + \cdots + A_{in}v_{n} = \sum_{j=1}^{n} A_{ij}v_{j}$. If it occurs that v and w are scalar multiples, that is, if $A\mathbf{v} = \lambda\mathbf{v}$ (1), then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. Equation (1) can be stated equivalently as $(A - \lambda I)\mathbf{v} = \mathbf{0}$ (2), where I is the n by n identity matrix and 0 is the zero vector. === Eigenvalues and the characteristic polynomial === Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are the values of λ that satisfy the equation $\det(A - \lambda I) = 0$ (3). Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ, and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always $(-1)^{n}\lambda^{n}$. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, $\det(A - \lambda I) = (\lambda_{1} - \lambda)(\lambda_{2} - \lambda)\cdots(\lambda_{n} - \lambda)$ (4), where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A. As a brief example, which is described in more detail in the examples section later, consider the matrix $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$. Taking the determinant of (A − λI), the characteristic polynomial of A is $\det(A - \lambda I) = \begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = 3 - 4\lambda + \lambda^{2}$. Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation $(A - \lambda I)\mathbf{v} = \mathbf{0}$. In this example, the eigenvectors are any nonzero scalar multiples of $\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad \mathbf{v}_{\lambda=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers, or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers. The non-real roots of a polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues.
The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. === Spectrum of a matrix === The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix. === Algebraic multiplicity === Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k divides evenly that polynomial. Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, det ( A − λ I ) = ( λ 1 − λ ) μ A ( λ 1 ) ( λ 2 − λ ) μ A ( λ 2 ) ⋯ ( λ d − λ ) μ A ( λ d ) . {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.} If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as 1 ≤ μ A ( λ i ) ≤ n , μ A = ∑ i = 1 d μ A ( λ i ) = n . {\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}} If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue. === Eigenspaces, geometric multiplicity, and the eigenbasis for matrices === Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2), E = { v : ( A − λ I ) v = 0 } . {\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.} On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of C n {\displaystyle \mathbb {C} ^{n}} . Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. 
As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ. The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} . Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as γ A ( λ ) = n − rank ⁡ ( A − λ I ) . {\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).} Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. 1 ≤ γ A ( λ ) ≤ μ A ( λ ) ≤ n {\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n} To prove the inequality γ A ( λ ) ≤ μ A ( λ ) {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )} , consider how the definition of geometric multiplicity implies the existence of γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors v 1 , … , v γ A ( λ ) {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}} , such that A v k = λ v k {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}} . We can therefore find a (unitary) matrix V whose first γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of n − γ A ( λ ) {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating D := V T A V {\displaystyle D:=V^{T}AV} , we get a matrix whose top left block is the diagonal matrix λ I γ A ( λ ) {\displaystyle \lambda I_{\gamma _{A}(\lambda )}} . This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding − ξ V {\displaystyle -\xi V} on both sides, we get ( A − ξ I ) V = V ( D − ξ I ) {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, A − ξ I {\displaystyle A-\xi I} is similar to D − ξ I {\displaystyle D-\xi I} , and det ( A − ξ I ) = det ( D − ξ I ) {\displaystyle \det(A-\xi I)=\det(D-\xi I)} . But from the definition of D, we know that det ( D − ξ I ) {\displaystyle \det(D-\xi I)} contains a factor ( ξ − λ ) γ A ( λ ) {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}} , which means that the algebraic multiplicity of λ {\displaystyle \lambda } must satisfy μ A ( λ ) ≥ γ A ( λ ) {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )} . Suppose A has d ≤ n {\displaystyle d\leq n} distinct eigenvalues λ 1 , … , λ d {\displaystyle \lambda _{1},\ldots ,\lambda _{d}} , where the geometric multiplicity of λ i {\displaystyle \lambda _{i}} is γ A ( λ i ) {\displaystyle \gamma _{A}(\lambda _{i})} . The total geometric multiplicity of A, γ A = ∑ i = 1 d γ A ( λ i ) , d ≤ γ A ≤ n , {\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}} is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. 
If γ A = n {\displaystyle \gamma _{A}=n} , then The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space C n {\displaystyle \mathbb {C} ^{n}} . A basis of C n {\displaystyle \mathbb {C} ^{n}} can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis Any vector in C n {\displaystyle \mathbb {C} ^{n}} can be written as a linear combination of eigenvectors of A. === Additional properties === Let A {\displaystyle A} be an arbitrary n × n {\displaystyle n\times n} matrix of complex numbers with eigenvalues λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} . Each eigenvalue appears μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: The trace of A {\displaystyle A} , defined as the sum of its diagonal elements, is also the sum of all eigenvalues, tr ⁡ ( A ) = ∑ i = 1 n a i i = ∑ i = 1 n λ i = λ 1 + λ 2 + ⋯ + λ n . {\displaystyle \operatorname {tr} (A)=\sum _{i=1}^{n}a_{ii}=\sum _{i=1}^{n}\lambda _{i}=\lambda _{1}+\lambda _{2}+\cdots +\lambda _{n}.} The determinant of A {\displaystyle A} is the product of all its eigenvalues, det ( A ) = ∏ i = 1 n λ i = λ 1 λ 2 ⋯ λ n . {\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.} The eigenvalues of the k {\displaystyle k} th power of A {\displaystyle A} ; i.e., the eigenvalues of A k {\displaystyle A^{k}} , for any positive integer k {\displaystyle k} , are λ 1 k , … , λ n k {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}} . The matrix A {\displaystyle A} is invertible if and only if every eigenvalue is nonzero. If A {\displaystyle A} is invertible, then the eigenvalues of A − 1 {\displaystyle A^{-1}} are 1 λ 1 , … , 1 λ n {\textstyle {\frac {1}{\lambda _{1}}},\ldots ,{\frac {1}{\lambda _{n}}}} and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity. If A {\displaystyle A} is equal to its conjugate transpose A ∗ {\displaystyle A^{*}} , or equivalently if A {\displaystyle A} is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix. If A {\displaystyle A} is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively. If A {\displaystyle A} is unitary, every eigenvalue has absolute value | λ i | = 1 {\displaystyle |\lambda _{i}|=1} . If A {\displaystyle A} is a n × n {\displaystyle n\times n} matrix and { λ 1 , … , λ k } {\displaystyle \{\lambda _{1},\ldots ,\lambda _{k}\}} are its eigenvalues, then the eigenvalues of matrix I + A {\displaystyle I+A} (where I {\displaystyle I} is the identity matrix) are { λ 1 + 1 , … , λ k + 1 } {\displaystyle \{\lambda _{1}+1,\ldots ,\lambda _{k}+1\}} . Moreover, if α ∈ C {\displaystyle \alpha \in \mathbb {C} } , the eigenvalues of α I + A {\displaystyle \alpha I+A} are { λ 1 + α , … , λ k + α } {\displaystyle \{\lambda _{1}+\alpha ,\ldots ,\lambda _{k}+\alpha \}} . More generally, for a polynomial P {\displaystyle P} the eigenvalues of matrix P ( A ) {\displaystyle P(A)} are { P ( λ 1 ) , … , P ( λ k ) } {\displaystyle \{P(\lambda _{1}),\ldots ,P(\lambda _{k})\}} . 
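Several of the properties just listed are easy to check numerically. A brief sketch (an illustration added to this text; NumPy and the sample matrix are assumptions):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eig = np.linalg.eigvals(A)

assert np.isclose(np.trace(A), eig.sum())        # trace = sum of eigenvalues
assert np.isclose(np.linalg.det(A), eig.prod())  # det = product of eigenvalues

# Eigenvalues of A^2 are the squares of the eigenvalues of A.
assert np.allclose(np.sort(np.linalg.eigvals(A @ A)), np.sort(eig**2))

# Shifting by alpha*I shifts every eigenvalue by alpha.
alpha = 5.0
assert np.allclose(np.sort(np.linalg.eigvals(A + alpha * np.eye(2))),
                   np.sort(eig + alpha))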
=== Left and right eigenvectors === Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n {\displaystyle n\times n} matrix A {\displaystyle A} in the defining equation, equation (1), A v = λ v . {\displaystyle A\mathbf {v} =\lambda \mathbf {v} .} The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A {\displaystyle A} . In this formulation, the defining equation is u A = κ u , {\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,} where κ {\displaystyle \kappa } is a scalar and u {\displaystyle u} is a 1 × n {\displaystyle 1\times n} matrix. Any row vector u {\displaystyle u} satisfying this equation is called a left eigenvector of A {\displaystyle A} and κ {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation, A T u T = κ u T . {\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.} Comparing this equation to equation (1), it follows immediately that a left eigenvector of A {\displaystyle A} is the same as the transpose of a right eigenvector of A T {\displaystyle A^{\textsf {T}}} , with the same eigenvalue. Furthermore, since the characteristic polynomial of A T {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of A {\displaystyle A} , the left and right eigenvectors of A {\displaystyle A} are associated with the same eigenvalues. === Diagonalization and the eigendecomposition === Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, Q = [ v 1 v 2 ⋯ v n ] . {\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.} Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, A Q = [ λ 1 v 1 λ 2 v 2 ⋯ λ n v n ] . {\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.} With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then A Q = Q Λ . {\displaystyle AQ=Q\Lambda .} Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1, A = Q Λ Q − 1 , {\displaystyle A=Q\Lambda Q^{-1},} or by instead left multiplying both sides by Q−1, Q − 1 A Q = Λ . {\displaystyle Q^{-1}AQ=\Lambda .} A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. 
Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable. A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces. === Variational characterization === In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H {\displaystyle H} is the maximum value of the quadratic form x T H x / x T x {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} } . A value of x {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector. === Matrix examples === ==== Two-dimensional matrix example ==== Consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find characteristic polynomial of A, det ( A − λ I ) = | [ 2 1 1 2 ] − λ [ 1 0 0 1 ] | = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 = ( λ − 3 ) ( λ − 1 ) . {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. For λ=1, equation (2) becomes, ( A − I ) v λ = 1 = [ 1 1 1 1 ] [ v 1 v 2 ] = [ 0 0 ] {\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}} 1 v 1 + 1 v 2 = 0 {\displaystyle 1v_{1}+1v_{2}=0} Any nonzero vector with v1 = −v2 solves this equation. Therefore, v λ = 1 = [ v 1 − v 1 ] = [ 1 − 1 ] {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector. For λ=3, equation (2) becomes ( A − 3 I ) v λ = 3 = [ − 1 1 1 − 1 ] [ v 1 v 2 ] = [ 0 0 ] − 1 v 1 + 1 v 2 = 0 ; 1 v 1 − 1 v 2 = 0 {\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}} Any nonzero vector with v1 = v2 solves this equation. 
Therefore, v λ = 3 = [ v 1 v 1 ] = [ 1 1 ] {\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively. ==== Three-dimensional matrix example ==== Consider the matrix A = [ 2 0 0 0 3 4 0 4 9 ] . {\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = | [ 2 0 0 0 3 4 0 4 9 ] − λ [ 1 0 0 0 1 0 0 0 1 ] | = | 2 − λ 0 0 0 3 − λ 4 0 4 9 − λ | , = ( 2 − λ ) [ ( 3 − λ ) ( 9 − λ ) − 16 ] = − λ 3 + 14 λ 2 − 35 λ + 22. {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [ 1 0 0 ] T {\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}} , [ 0 − 2 1 ] T {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}} , and [ 0 1 2 ] T {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}} , or any nonzero multiple thereof. ==== Three-dimensional matrix example with complex eigenvalues ==== Consider the cyclic permutation matrix A = [ 0 1 0 0 0 1 1 0 0 ] . {\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.} This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are λ 1 = 1 λ 2 = − 1 2 + i 3 2 λ 3 = λ 2 ∗ = − 1 2 − i 3 2 {\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}} where i {\displaystyle i} is an imaginary unit with i 2 = − 1 {\displaystyle i^{2}=-1} . For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example, A [ 5 5 5 ] = [ 5 5 5 ] = 1 ⋅ [ 5 5 5 ] . {\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.} For the complex conjugate pair of imaginary eigenvalues, λ 2 λ 3 = 1 , λ 2 2 = λ 3 , λ 3 2 = λ 2 . {\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.} Then A [ 1 λ 2 λ 3 ] = [ λ 2 λ 3 1 ] = λ 2 ⋅ [ 1 λ 2 λ 3 ] , {\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},} and A [ 1 λ 3 λ 2 ] = [ λ 3 λ 2 1 ] = λ 3 ⋅ [ 1 λ 3 λ 2 ] . 
{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.} Therefore, the other two eigenvectors of A are complex and are v λ 2 = [ 1 λ 2 λ 3 ] T {\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}} and v λ 3 = [ 1 λ 3 λ 2 ] T {\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}} with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair, v λ 2 = v λ 3 ∗ . {\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.} ==== Diagonal matrix example ==== Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix A = [ 1 0 0 0 2 0 0 0 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 0 0 ] , v λ 2 = [ 0 1 0 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Triangular matrix example ==== A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix, A = [ 1 0 0 1 2 0 2 3 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 − 1 1 2 ] , v λ 2 = [ 0 1 − 3 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Matrix with repeated eigenvalues example ==== As in the previous example, the lower triangular matrix A = [ 2 0 0 0 1 2 0 0 0 1 3 0 0 0 1 3 ] , {\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},} has a characteristic polynomial that is the product of its diagonal elements, det ( A − λ I ) = | 2 − λ 0 0 0 1 2 − λ 0 0 0 1 3 − λ 0 0 0 1 3 − λ | = ( 2 − λ ) 2 ( 3 − λ ) 2 . 
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.} The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words, they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [ 0 1 − 1 1 ] T {\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}} and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [ 0 0 0 1 ] T {\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}} . The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section. === Eigenvector-eigenvalue identity === For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix, | v i , j | 2 = ∏ k ( λ i − λ k ( M j ) ) ∏ k ≠ i ( λ i − λ k ) , {\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},} where M j {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature. == Eigenvalues and eigenfunctions of differential operators == The definitions of eigenvalues and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces is that of the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation D f ( t ) = λ f ( t ) {\displaystyle Df(t)=\lambda f(t)} The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions. === Derivative operator example === Consider the derivative operator d d t {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation d d t f ( t ) = λ f ( t ) . {\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).} This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function f ( t ) = f ( 0 ) e λ t , {\displaystyle f(t)=f(0)e^{\lambda t},} is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples.
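The computation above can also be confirmed symbolically. A minimal sketch, assuming SymPy is available; the symbol names are arbitrary choices made here.

import sympy as sp

t, lam = sp.symbols('t lambda')
f = sp.exp(lam * t)        # candidate eigenfunction f(t) = e^(λt), i.e. f(0) = 1

# Apply the operator D = d/dt and compare the result with λ·f(t).
Df = sp.diff(f, t)
print(sp.simplify(Df - lam * f) == 0)   # True: D f = λ f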
== General definition == The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, T : V → V . {\displaystyle T:V\to V.} We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that T ( v ) = λ v . {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .} This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v. === Eigenspaces, geometric multiplicity, and the eigenbasis === Given an eigenvalue λ, consider the set E = { v : T ( v ) = λ v } , {\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},} which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. By definition of a linear transformation, T ( x + y ) = T ( x ) + T ( y ) , T ( α x ) = α T ( x ) , {\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}} for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then T ( u + v ) = λ ( u + v ) , T ( α v ) = λ ( α v ) . {\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}} So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V. If that subspace has dimension 1, it is sometimes called an eigenline. The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector. The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues. Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable. === Spectral theory === If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
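Back in finite dimensions, the eigenspace and geometric-multiplicity definitions above are easy to check numerically: the eigenspace for λ is the null space of (A − λI), and its dimension is the geometric multiplicity. A small sketch, assuming NumPy and SciPy are available, using the 4 × 4 repeated-eigenvalues matrix from the earlier example:

import numpy as np
from scipy.linalg import null_space

# The lower triangular matrix from the repeated-eigenvalues example,
# with eigenvalues 2 and 3, each of algebraic multiplicity 2.
A = np.array([[2., 0., 0., 0.],
              [1., 2., 0., 0.],
              [0., 1., 3., 0.],
              [0., 0., 1., 3.]])

for lam in (2.0, 3.0):
    E = null_space(A - lam * np.eye(4))   # basis of the eigenspace for λ
    print(f"λ = {lam}: geometric multiplicity = {E.shape[1]}")  # prints 1 and 1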
=== Associative algebras and representation theory === One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively. A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence. == Dynamic equations == The simplest difference equations have the form x t = a 1 x t − 1 + a 2 x t − 2 + ⋯ + a k x t − k . {\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.} The solution of this equation for x in terms of t is found by using its characteristic equation λ k − a 1 λ k − 1 − a 2 λ k − 2 − ⋯ − a k − 1 λ − a k = 0 , {\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,} which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations x t − 1 = x t − 1 , … , x t − k + 1 = x t − k + 1 , {\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},} giving a k-dimensional system of the first order in the stacked variable vector [ x t ⋯ x t − k + 1 ] {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ 1 , … , λ k , {\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},} for use in the solution equation x t = c 1 λ 1 t + ⋯ + c k λ k t . {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.} A similar procedure is used for solving a differential equation of the form d k x d t k + a k − 1 d k − 1 x d t k − 1 + ⋯ + a 1 d x d t + a 0 x = 0. {\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.} == Calculation == The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. === Classical method === The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetic such as floating point. ==== Eigenvalues ==== The eigenvalues of a matrix A {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n {\displaystyle n\times n} matrix is a sum of n ! {\displaystyle n!} different products.
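As a toy illustration of this classical route (a sketch assuming NumPy; it is fine for the small 3 × 3 example used earlier, but numerically fragile for large matrices, exactly as described above):

import numpy as np

A = np.array([[2., 0., 0.],
              [0., 3., 4.],
              [0., 4., 9.]])     # the 3x3 example with eigenvalues 2, 1, and 11

coeffs = np.poly(A)          # coefficients of det(λI − A) = λ³ − 14λ² + 35λ − 22
print(np.roots(coeffs))      # the eigenvalues 11, 2, and 1, in some order
print(np.linalg.eigvals(A))  # the numerically robust route used in practice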
Explicit algebraic formulas for the roots of a polynomial exist only if the degree n {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n {\displaystyle n} is the characteristic polynomial of some companion matrix of order n {\displaystyle n} .) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical. ==== Eigenvectors ==== Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix A = [ 4 1 6 3 ] {\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}} we can find its eigenvectors by solving the equation A v = 6 v {\displaystyle Av=6v} , that is [ 4 1 6 3 ] [ x y ] = 6 ⋅ [ x y ] {\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}} This matrix equation is equivalent to two linear equations { 4 x + y = 6 x 6 x + 3 y = 6 y {\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.} that is { − 2 x + y = 0 6 x − 3 y = 0 {\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.} Both equations reduce to the single linear equation y = 2 x {\displaystyle y=2x} . Therefore, any vector of the form [ a 2 a ] T {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number a {\displaystyle a} , is an eigenvector of A {\displaystyle A} with eigenvalue λ = 6 {\displaystyle \lambda =6} . The matrix A {\displaystyle A} above has another eigenvalue λ = 1 {\displaystyle \lambda =1} . A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3 x + y = 0 {\displaystyle 3x+y=0} , that is, any vector of the form [ b − 3 b ] T {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number b {\displaystyle b} . === Simple iterative methods === The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it by the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by ( A − μ I ) − 1 {\displaystyle (A-\mu I)^{-1}} ; this causes it to converge to an eigenvector of the eigenvalue closest to μ ∈ C {\displaystyle \mu \in \mathbb {C} } . If v {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of A {\displaystyle A} , then the corresponding eigenvalue can be computed as λ = v ∗ A v v ∗ v {\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}} where v ∗ {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of v {\displaystyle \mathbf {v} } .
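A minimal sketch of the iteration just described, assuming NumPy (the helper name power_iteration is chosen here for illustration), applied to the 2 × 2 matrix from the eigenvector example above:

import numpy as np

def power_iteration(A, num_iters=100, seed=0):
    """Repeatedly multiply a start vector by A, normalizing each time,
    so that it converges towards an eigenvector of the dominant eigenvalue."""
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)   # keep the entries a reasonable size
    lam = (v.conj() @ A @ v) / (v.conj() @ v)   # Rayleigh quotient
    return lam, v

A = np.array([[4., 1.],
              [6., 3.]])          # eigenvalues 6 and 1, as computed above
lam, v = power_iteration(A)
print(lam)         # ≈ 6.0
print(v / v[0])    # ≈ [1, 2], an eigenvector of the form (a, 2a)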
=== Modern methods === Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities. Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed. == Applications == === Geometric transformations === Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. Standard examples of transformations in the plane, such as scalings, rotations, shears, and squeeze mappings, can each be summarized by a 2×2 matrix together with its eigenvalues and eigenvectors. The characteristic equation for a rotation is a quadratic equation with discriminant D = − 4 ( sin ⁡ θ ) 2 {\displaystyle D=-4(\sin \theta )^{2}} , which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos ⁡ θ ± i sin ⁡ θ {\displaystyle \cos \theta \pm i\sin \theta } ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues. === Principal component analysis === The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data. Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
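A compact sketch of this procedure, assuming NumPy; the data matrix here is synthetic and purely illustrative.

import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 3)) @ np.array([[3., 1., 0.],
                                              [0., 1., 0.],
                                              [0., 0., 0.1]])   # toy data: 200 observations, 3 variables

Xc = X - X.mean(axis=0)          # center each variable
C = np.cov(Xc, rowvar=False)     # sample covariance matrix (PSD)

# eigh handles symmetric matrices and returns ascending eigenvalues,
# so reverse the order to put the largest variance first.
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

print(eigvals)           # variance explained by each principal component
scores = Xc @ eigvecs    # the data expressed in the principal-component basis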
=== Graphs === In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A {\displaystyle A} , or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or I − D − 1 / 2 A D − 1 / 2 {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where D {\displaystyle D} is a diagonal matrix with D i i {\displaystyle D_{ii}} equal to the degree of vertex v i {\displaystyle v_{i}} , and in D − 1 / 2 {\displaystyle D^{-1/2}} , the i {\displaystyle i} th diagonal entry is 1 / deg ⁡ ( v i ) {\textstyle 1/{\sqrt {\deg(v_{i})}}} . The k {\displaystyle k} th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k {\displaystyle k} th largest or k {\displaystyle k} th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering. === Markov chains === A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular, the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
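A short numerical sketch, assuming NumPy; the two-state transition matrix is hypothetical. The steady state is the left eigenvector of P for the eigenvalue 1:

import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # rows are states and sum to one (toy chain)

# Left eigenvectors of P are right eigenvectors of P transposed.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))    # index of the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi /= pi.sum()                          # normalize into a probability vector
print(pi)                               # ≈ [0.833, 0.167], the steady state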
=== Vibration analysis === Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by m x ¨ + k x = 0 {\displaystyle m{\ddot {x}}+kx=0} or m x ¨ = − k x {\displaystyle m{\ddot {x}}=-kx} That is, acceleration is proportional to position (i.e., we expect x {\displaystyle x} to be sinusoidal in time). In n {\displaystyle n} dimensions, m {\displaystyle m} becomes a mass matrix and k {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem k x = ω 2 m x {\displaystyle kx=\omega ^{2}mx} where ω 2 {\displaystyle \omega ^{2}} is the eigenvalue and ω {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k {\displaystyle k} alone. Furthermore, damped vibration, governed by m x ¨ + c x ˙ + k x = 0 {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0} leads to a so-called quadratic eigenvalue problem, ( ω 2 m + ω c + k ) x = 0. {\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.} This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system. The orthogonality properties of the eigenvectors allow decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis; the approach neatly generalizes the solution to scalar-valued vibration problems. === Tensor of moment of inertia === In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass. === Stress tensor === In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components. === Schrödinger equation === An example of an eigenvalue equation where the transformation T {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: H ψ E = E ψ E {\displaystyle H\psi _{E}=E\psi _{E}\,} where H {\displaystyle H} , the Hamiltonian, is a second-order differential operator and ψ E {\displaystyle \psi _{E}} , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E {\displaystyle E} , interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψ E {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψ E {\displaystyle \psi _{E}} and H {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . In this notation, the Schrödinger equation is: H | Ψ E ⟩ = E | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle } where | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } is an eigenstate of H {\displaystyle H} and E {\displaystyle E} represents the eigenvalue. H {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation H {\displaystyle H} to | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } .
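As a small numerical sketch (assuming NumPy; the grid parameters are illustrative choices, not part of any standard), discretizing H for a harmonic-oscillator potential turns the eigenfunction problem into an ordinary matrix eigenvalue problem:

import numpy as np

# Finite-difference Hamiltonian H = -(1/2) d²/dx² + x²/2 on a grid,
# in units where ħ = m = ω = 1; the exact energies are E_n = n + 1/2.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

main = 1.0 / dx**2 + 0.5 * x**2          # diagonal: kinetic + potential terms
off = -0.5 / dx**2 * np.ones(n - 1)      # three-point stencil neighbors
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)                # H is symmetric (Hermitian)
print(E[:4])                             # ≈ [0.5, 1.5, 2.5, 3.5]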
=== Wave transport === Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t {\displaystyle \mathbf {t} } . The eigenvectors of the transmission operator t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, τ {\displaystyle \tau } , of t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is its bimodal eigenvalue distribution with τ max = 1 {\displaystyle \tau _{\max }=1} and τ min = 0 {\displaystyle \tau _{\min }=0} . Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels. === Molecular orbitals === In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations. === Geology and glaciology === In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v 1 , v 2 , v 3 {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}} by their eigenvalues E 1 ≥ E 2 ≥ E 3 {\displaystyle E_{1}\geq E_{2}\geq E_{3}} ; v 1 {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of the clast, v 2 {\displaystyle \mathbf {v} _{2}} is the secondary and v 3 {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E 1 {\displaystyle E_{1}} , E 2 {\displaystyle E_{2}} , and E 3 {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If E 1 = E 2 = E 3 {\displaystyle E_{1}=E_{2}=E_{3}} , the fabric is said to be isotropic. If E 1 = E 2 > E 3 {\displaystyle E_{1}=E_{2}>E_{3}} , the fabric is said to be planar.
If E 1 > E 2 > E 3 {\displaystyle E_{1}>E_{2}>E_{3}} , the fabric is said to be linear. === Basic reproduction number === The basic reproduction number ( R 0 {\displaystyle R_{0}} ) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R 0 {\displaystyle R_{0}} is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t G {\displaystyle t_{G}} , from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t G {\displaystyle t_{G}} has passed. The value R 0 {\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix. === Eigenfaces === In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made. Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. == See also ==
Antieigenvalue theory
Eigenoperator
Eigenplane
Eigenmoments
Eigenvalue algorithm
Quantum states
Jordan normal form
List of numerical-analysis software
Nonlinear eigenproblem
Normal eigenvalue
Quadratic eigenvalue problem
Singular value
Spectrum of a matrix
Wikipedia/Eigenvalues_and_eigenvectors
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. Originally called infinitesimal calculus or "the calculus of infinitesimals", it has two major branches, differential calculus and integral calculus. The former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus. They make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. It is the "mathematical backbone" for dealing with problems where variables change with time or another reference variable. Infinitesimal calculus was formulated separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. The concepts and techniques found in calculus have diverse applications in science, engineering, and other branches of mathematics. == Etymology == In mathematics education, calculus is an abbreviation of both infinitesimal calculus and integral calculus, which denotes courses of elementary mathematical analysis. In Latin, the word calculus means “small pebble” (the diminutive of calx, meaning "stone"), a meaning that still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin. In addition to differential calculus and integral calculus, the term is also used for naming specific methods of computation or theories that imply some sort of computation. Examples of this usage include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, sequent calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus. == History == Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India. === Ancient precursors === ==== Egypt ==== Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulae are simple instructions, with no indication as to how they were obtained. ==== Greece ==== Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus (c. 390–337 BC) developed the method of exhaustion to prove the formulas for cone and pyramid volumes. During the Hellenistic period, this method was further developed by Archimedes (c. 287 – c. 212 BC), who combined it with a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems now treated by integral calculus.
In The Method of Mechanical Theorems he describes, for example, calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines. ==== China ==== The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. === Medieval === ==== Middle East ==== In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD), derived a formula for the sum of fourth powers. He determined the equations to calculate the area enclosed by the curve represented by y = x k {\displaystyle y=x^{k}} (which translates to the integral ∫ x k d x {\displaystyle \int x^{k}\,dx} in contemporary notation), for any given non-negative integer value of k {\displaystyle k} . He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. ==== India ==== Bhāskara II (c. 1114–1185) was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function. In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if x ≈ y {\displaystyle x\approx y} then sin ⁡ ( y ) − sin ⁡ ( x ) ≈ ( y − x ) cos ⁡ ( y ) . {\displaystyle \sin(y)-\sin(x)\approx (y-x)\cos(y).} This can be interpreted as the discovery that cosine is the derivative of sine. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus. They studied series equivalent to the Maclaurin expansions of ⁠ sin ⁡ ( x ) {\displaystyle \sin(x)} ⁠, ⁠ cos ⁡ ( x ) {\displaystyle \cos(x)} ⁠, and ⁠ arctan ⁡ ( x ) {\displaystyle \arctan(x)} ⁠ more than two hundred years before their introduction in Europe. According to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today". === Modern === Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse. A significant work was a treatise, its origin being Kepler's methods, written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first. The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time.
Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670. The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation. Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today.: 51–52  The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series. When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century.: 100  The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815. 
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi. === Foundations === In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today. Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis. In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever. Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. 
The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis. === Significance === While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work, The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking. Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization.: 341–453  Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure.: 685–700  More advanced applications include power series and Fourier series. Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes. == Principles == === Limits and infinitesimals === Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols d x {\displaystyle dx} and d y {\displaystyle dy} were taken to be infinitesimal, and the derivative d y / d x {\displaystyle dy/dx} was their ratio. The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. 
However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals. === Differential calculus === Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.: 32  In more explicit terms, the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x2. The "derivative" now takes the function f(x), defined by the expression "x2", as an input, that is, all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as it turns out. In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f(x) = x2 is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above). If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.: 18–20  If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and: m = rise run = change in y change in x = Δ y Δ x . {\displaystyle m={\frac {\text{rise}}{\text{run}}}={\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {\Delta y}{\Delta x}}.} This gives an exact value for the slope of a straight line.: 6  If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function.
If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is
\[
m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.
\]
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:
\[
\lim_{h \to 0} \frac{f(a+h) - f(a)}{h}.
\]
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.

Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function.
\[
\begin{aligned}
f'(3) &= \lim_{h \to 0} \frac{(3+h)^2 - 3^2}{h} = \lim_{h \to 0} \frac{9 + 6h + h^2 - 9}{h} \\
      &= \lim_{h \to 0} \frac{6h + h^2}{h} = \lim_{h \to 0} (6 + h) = 6.
\end{aligned}
\]
The slope of the tangent line to the squaring function at the point (3, 9) is 6; that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function, or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
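The limit computation above can also be sanity-checked numerically. This short Python sketch (an added illustration, not from the original article) evaluates the difference quotient of the squaring function at a = 3 for shrinking h; the quotients approach the exact derivative 6 = 2·3:

```python
# Added illustration: difference quotients of f(x) = x^2 at a = 3.
# Algebraically, ((3 + h)^2 - 9) / h = 6 + h, so the quotient tends to 6.
def f(x):
    return x * x

a = 3.0
for h in [1.0, 0.1, 0.01, 0.001, 1e-6]:
    q = (f(a + h) - f(a)) / h
    print(f"h = {h:<8g} difference quotient = {q:.8f}")

# The printed values approach 6, the slope of the tangent line at (3, 9).
```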
=== Leibniz notation ===
A common notation, introduced by Leibniz, for the derivative in the example above is
\[
y = x^2, \qquad \frac{dy}{dx} = 2x.
\]
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:
\[
\frac{d}{dx}(x^2) = 2x.
\]
In this usage, the dx in the denominator is read as "with respect to x". Another example of correct notation could be:
\[
g(t) = t^2 + 2t + 4, \qquad \frac{d}{dt}g(t) = 2t + 2.
\]
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.

=== Integral calculus ===
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative: F is an indefinite integral of f when f is the derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.

A motivating example is the distance traveled in a given time. If the speed is constant, only multiplication is needed:
\[
\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}.
\]
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.

When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If f(x) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between f(x) and the x-axis, between x = a and x = b.

To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h.
Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.

The symbol of integration is ∫, an elongated S chosen to suggest summation. The definite integral is written as
\[
\int_a^b f(x)\,dx
\]
and is read "the integral from a to b of f-of-x with respect to x". The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles, so that their width Δx becomes the infinitesimally small dx.

The indefinite integral, or antiderivative, is written:
\[
\int f(x)\,dx.
\]
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by:
\[
\int 2x\,dx = x^2 + C.
\]
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.

=== Fundamental theorem ===
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.

The fundamental theorem of calculus states: if a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then
\[
\int_a^b f(x)\,dx = F(b) - F(a).
\]
Furthermore, for every x in the interval (a, b),
\[
\frac{d}{dx} \int_a^x f(t)\,dt = f(x).
\]
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.
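The Riemann-sum construction and the fundamental theorem can be seen working together in a few lines of code. In this added Python sketch (an illustration under the article's distance-traveled setup, not part of the original text), the "velocity" f(x) = 2x is integrated from a = 1 to b = 4 with left-endpoint rectangles, and the approximations converge to the exact value F(4) − F(1) = 4² − 1² = 15 given by the antiderivative F(x) = x²:

```python
# Added illustration: left-endpoint Riemann sums for f(x) = 2x on [1, 4],
# compared with the exact answer from the fundamental theorem of calculus.
def f(x):
    return 2 * x

def riemann_sum(a, b, n):
    """Sum of n rectangle areas: width dx, height taken at the left endpoint."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

a, b = 1.0, 4.0
exact = b**2 - a**2          # F(b) - F(a), with antiderivative F(x) = x^2
for n in [4, 16, 64, 256, 4096]:
    approx = riemann_sum(a, b, n)
    print(f"n = {n:<5d} Riemann sum = {approx:.6f}   error = {exact - approx:.6f}")

# As n grows (i.e. dx -> 0), the sums approach 15, the exact area/distance.
```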
== Applications ==
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.

Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity, and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.

Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum with respect to time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path. Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus.

Chemistry also uses calculus in determining reaction rates and in studying radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes. Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.

In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows. In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.

== See also ==
Glossary of calculus
List of calculus topics
List of derivatives and integrals in alternative calculi
List of differentiation identities
Publications in calculus
Table of integrals

== External links ==
"Calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Calculus". MathWorld.
Topics on Calculus at PlanetMath.
Calculus Made Easy (1914) by Silvanus P. Thompson, full text in PDF
Calculus on In Our Time at the BBC
Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
The Role of Calculus in College Mathematics, from ERICDigests.org (archived 26 July 2021 at the Wayback Machine)
OpenCourseWare Calculus from the Massachusetts Institute of Technology
Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel
Daniel Kleitman, MIT. "Calculus for Beginners and Artists"
Calculus training materials at imomath.com (in English and Arabic)
The Excursion of Calculus, 1772
Wikipedia/Calculus
Game theory is the study of mathematical models of strategic interactions. It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science. Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non-zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers.

Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.

== History ==

=== Earliest results ===
In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to British diplomat James Waldegrave, analyzed a game called "le her". Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game, and the problem is now known as the Waldegrave problem. In 1838, Antoine Augustin Cournot provided a model of competition in oligopolies. Though he did not refer to it as such, he presented a solution that is the Nash equilibrium of the game in his Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). In 1883, Joseph Bertrand critiqued Cournot's model as unrealistic, providing an alternative model of price competition which would later be formalized by Francis Ysidro Edgeworth. In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined.

=== Foundation ===
The work of John von Neumann established game theory as its own independent field in the early-to-mid 20th century, with von Neumann publishing his paper On the Theory of Games of Strategy in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Von Neumann's work in game theory culminated in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.

In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric, and provided a solution to a non-trivial infinite game (known in English as the Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.

In 1950, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.

Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. The first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.

==== Prize-winning achievements ====
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994, Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.

In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge were introduced and analyzed.

In 1994, John Nash was awarded the Nobel Memorial Prize in the Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games, published in 1951. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.

In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences. In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory".
Myerson's contributions include the notion of proper equilibrium and an important graduate text, Game Theory: Analysis of Conflict. Hurwicz introduced and formalized the concept of incentive compatibility. In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.

== Different types of games ==

=== Cooperative / non-cooperative ===
A game is cooperative if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats). Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It differs from non-cooperative game theory, which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria. Cooperative game theory provides a high-level approach, as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold), provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.

=== Symmetric / asymmetric ===
A symmetric game is a game where each player earns the same payoff when making the same choice. In other words, the identity of the player does not change the resulting game facing the other player. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. The most commonly studied asymmetric games are games where the two players do not have identical strategy sets. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players yet be asymmetric.

=== Zero-sum / non-zero-sum ===
Zero-sum games (more generally, constant-sum games) are games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess. Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another. Furthermore, constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
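As a concrete illustration of the zero-sum idea, the following Python sketch (an added example, not from the original text) runs fictitious play, one of the 1950s-era learning procedures mentioned in the history section, on matching pennies. Each player repeatedly best-responds to the opponent's empirical mix of past moves; the empirical frequencies approach the game's minimax mixed strategy (1/2, 1/2), and every outcome's payoffs sum to zero by construction:

```python
# Added illustration: fictitious play on matching pennies (a zero-sum game).
# The row player wins +1 if the coins match and -1 otherwise; the column
# player's payoff is the negative of the row player's.
ROW_PAYOFF = [[1, -1],
              [-1, 1]]     # indices: 0 = Heads, 1 = Tails

counts_row = [1, 0]        # arbitrary nonzero initial beliefs (play counts)
counts_col = [0, 1]

def best_response_row(col_counts):
    total = sum(col_counts)
    ev = [sum(ROW_PAYOFF[r][c] * col_counts[c] / total for c in range(2))
          for r in range(2)]
    return max(range(2), key=lambda r: ev[r])

def best_response_col(row_counts):
    total = sum(row_counts)
    # Zero-sum: the column player minimizes the row player's expected payoff.
    ev = [sum(ROW_PAYOFF[r][c] * row_counts[r] / total for r in range(2))
          for c in range(2)]
    return min(range(2), key=lambda c: ev[c])

for t in range(10000):
    r = best_response_row(counts_col)
    c = best_response_col(counts_row)
    counts_row[r] += 1
    counts_col[c] += 1

print("empirical row mix:", [k / sum(counts_row) for k in counts_row])  # ~[0.5, 0.5]
print("empirical col mix:", [k / sum(counts_col) for k in counts_col])  # ~[0.5, 0.5]
```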
=== Simultaneous / sequential ===
Simultaneous games are games where both players move simultaneously, or, if they do not move simultaneously, where the later players are unaware of the earlier players' actions (making the moves effectively simultaneous). Sequential games (a type of dynamic game) are games where players do not make decisions simultaneously, and a player's earlier actions affect the outcome and decisions of other players. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while not knowing which of the other available actions the first player actually performed.

The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.

=== Perfect information and imperfect information ===
An important subset of sequential games consists of games of perfect information. A game with perfect information means that all players, at every move in the game, know the previous history of the game and the moves previously made by all other players. An imperfect information game is played when the players do not know all moves already made by the opponent, as is the case in a simultaneous move game. Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go. Many card games are games of imperfect information, such as poker and bridge. Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay. Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players. Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".

=== Bayesian game ===
One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleague's interpretation of the evidence at trial.
In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character.

A Bayesian game is a strategic game with incomplete information. In a strategic game, the decision makers are the players, and every player has a set of actions. A core part of the imperfect information specification is the set of states. Every state completely describes a collection of characteristics relevant to the player, such as their preferences and details about them. There must be a state for every set of features that some player believes may exist. Consider, for example, a game in which Player 1 is unsure whether Player 2 would rather date her or avoid her, while Player 2 understands Player 1's preferences as before. To be specific, suppose that Player 1 believes that Player 2 wants to date her with probability 1/2 and to avoid her with probability 1/2 (this assessment presumably comes from Player 1's experience: she faces players who want to date her half of the time and players who want to avoid her half of the time). Because of the probability involved, the analysis of this situation requires understanding the players' preferences over the random draw, even if one is only interested in pure-strategy equilibria.

=== Combinatorial games ===
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve some particular problems and answer some general questions.

Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory. A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies. Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, Go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.

=== Discrete and continuous games ===
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
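Cournot competition also lends itself to a small computational sketch. The Python example below is added for illustration; the linear demand and cost numbers are assumptions, not from the article. It finds the Cournot duopoly equilibrium by iterating each firm's best response to the other's quantity: with inverse demand P = a − b(q1 + q2) and constant marginal cost c, the iteration settles at the Nash equilibrium q1 = q2 = (a − c)/(3b):

```python
# Added illustration: Cournot duopoly as a continuous-strategy game.
# Inverse demand P = a - b*(q1 + q2); both firms have marginal cost c.
a, b, c = 100.0, 1.0, 10.0   # assumed example parameters

def best_response(q_other):
    """Quantity maximizing profit q*(P - c), given the rival's quantity.
    From the first-order condition: q = (a - c - b*q_other) / (2b)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1, q2 = 0.0, 0.0
for _ in range(50):          # iterate best responses toward the fixed point
    q1, q2 = best_response(q2), best_response(q1)

print(f"q1 = {q1:.4f}, q2 = {q2:.4f}")                 # both converge to 30
print(f"analytic equilibrium = {(a - c) / (3 * b):.4f}")  # (a - c)/(3b) = 30
```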
=== Differential games ===
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to optimal control theory. In particular, there are two types of strategies: open-loop strategies are found using the Pontryagin maximum principle, while closed-loop strategies are found using Bellman's dynamic programming method. A particular case of differential games is that of games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.

=== Evolutionary game theory ===
Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted. In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest. In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.
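One standard formal model of such adjustment is the replicator dynamic, in which a strategy's share of the population grows when its payoff exceeds the population average. The Python sketch below is an added illustration (the Hawk–Dove payoffs are assumed example values, not from the article): it iterates the discrete replicator update on a Hawk–Dove game and converges to the mixed evolutionarily stable state with a Hawk share of V/C:

```python
# Added illustration: discrete replicator dynamics on a Hawk-Dove game.
# Assumed payoffs: V = 4 (value of the resource), C = 6 (cost of fighting).
V, C = 4.0, 6.0
PAYOFF = [[(V - C) / 2, V],      # Hawk vs (Hawk, Dove)
          [0.0,         V / 2]]  # Dove vs (Hawk, Dove)
BASE = 5.0                       # background fitness, keeps fitnesses positive

x = [0.1, 0.9]                   # initial population shares (Hawk, Dove)
for step in range(200):
    fitness = [BASE + sum(PAYOFF[i][j] * x[j] for j in range(2))
               for i in range(2)]
    avg = sum(x[i] * fitness[i] for i in range(2))
    x = [x[i] * fitness[i] / avg for i in range(2)]   # replicator update

print(f"share of Hawks after 200 steps: {x[0]:.4f}")  # -> V/C = 2/3
```

The stable mixture arises because Hawks do well when rare (they mostly meet Doves) and poorly when common (they mostly fight each other), so selection pushes the population toward the interior equilibrium.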
=== Stochastic outcomes (and relation to other fields) ===
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivators, the mathematics involved is substantially the same, e.g. using Markov decision processes (MDP). Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature"). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.

For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen. (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.) General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be the partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.

=== Metagames ===
These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory. The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard, whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.

=== Mean field game theory ===
Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry.

== Representation of games ==
The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".) A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.

=== Extensive form ===
The extensive form can be used to formalize games with a time sequencing of moves. Extensive form games can be visualized using game trees (as pictured here). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. To solve any extensive form game, backward induction must be used. It involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.

The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either F or U (fair or unfair). Next in the sequence, Player 2, who has now observed Player 1's move, can choose to play either A or R (accept or reject).
Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff and the second number represents Player 2's payoff. Suppose that Player 1 chooses U and then Player 2 chooses A: Player 1 then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money, but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".

The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)

=== Normal form ===
The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3. When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form. Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.
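A normal-form game small enough to write as a matrix can be solved for pure-strategy Nash equilibria by direct enumeration. In the added Python sketch below, the (Up, Left) payoffs (4, 3) match the example in the text, while the remaining cells are hypothetical values invented for illustration; the code checks every strategy profile for profitable unilateral deviations:

```python
# Added illustration: enumerate pure-strategy Nash equilibria of a 2x2 game.
# The (Up, Left) cell = (4, 3) follows the example in the text; all other
# payoffs are assumed values chosen for this sketch.
ROWS, COLS = ["Up", "Down"], ["Left", "Right"]
PAYOFFS = {
    ("Up", "Left"): (4, 3), ("Up", "Right"): (1, 1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}

def is_nash(r, c):
    p1, p2 = PAYOFFS[(r, c)]
    # A profile is a Nash equilibrium if neither player has a profitable
    # unilateral deviation.
    row_ok = all(PAYOFFS[(r2, c)][0] <= p1 for r2 in ROWS)
    col_ok = all(PAYOFFS[(r, c2)][1] <= p2 for c2 in COLS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ROWS for c in COLS if is_nash(r, c)]
print(equilibria)   # with these payoffs: [('Up', 'Left'), ('Down', 'Right')]
```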
=== Characteristic function form ===
In cooperative game theory the characteristic function lists the payoff of each coalition. The origin of this formulation is in John von Neumann and Oskar Morgenstern's book. Formally, a characteristic function is a function
\[
v : 2^N \to \mathbb{R}
\]
from the set of all possible coalitions of players to a set of payments, satisfying v(∅) = 0. The function describes how much collective payoff a set of players can gain by forming a coalition.

=== Alternative game representations ===
Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research. In addition to classical game representations, some of the alternative representations also encode time-related aspects.

== General and applied uses ==
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.

Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.

In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules". Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.

=== Description and modeling ===
The primary use of game theory is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human rationality and/or behavior often deviates from the model of rationality as used in game theory. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.

Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).

=== Prescriptive or normative analysis ===
Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave.
Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism.

=== Economics ===
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems; and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.

This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing. The payoffs of the game are generally taken to represent the utility of individual players.

A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.

==== Managerial economics ====
Game theory also has extensive use in a specific branch of economics: managerial economics. One important use in the field of managerial economics is in analyzing strategic interactions between firms. For example, firms may be competing in a market with limited resources, and game theory can help managers understand how their decisions impact their competitors and the overall market outcomes. Game theory can also be used to analyze cooperation between firms, such as in forming strategic alliances or joint ventures. Another use of game theory in managerial economics is in analyzing pricing strategies. For example, firms may use game theory to determine the optimal pricing strategy based on how they expect their competitors to respond to their pricing decisions. Overall, game theory serves as a useful tool for analyzing strategic interactions and decision making in the context of managerial economics.

=== Business ===
The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement. CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals.
Some of the main findings in their third annual survey (2019) include:
Application of game theory to procurement activity has increased – at the time it was at 19% across all survey respondents.
65% of participants predicted that use of game theory applications would grow.
70% of respondents said that they have "only a basic or a below basic understanding" of game theory.
20% of participants had undertaken on-the-job training in game theory.
50% of respondents said that new or improved software solutions were desirable.
90% of respondents said that they do not have the software they need for their work.

=== Project management ===
Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory.

Piraveenan (2019) in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.

Piraveenan summarizes that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management:
Government-sector–private-sector games (games that model public–private partnerships)
Contractor–contractor games
Contractor–subcontractor games
Subcontractor–subcontractor games
Games involving other players

In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum games are used to model various project management scenarios.

=== Political science ===
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians. Early examples of game theory applied to political science are provided by Anthony Downs.
In his 1957 book An Economic Theory of Democracy, he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant, which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.

It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.

A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy. However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to misrepresent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.

Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.

=== Defence science and technology ===
Game theory has been used extensively to model decision-making scenarios relevant to defence applications. Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare, (ii) Information Warfare, (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare.
Many of the problems studied are concerned with sensing and tracking, for example a surface ship trying to track a hostile submarine and the submarine trying to evade being tracked, and the interdependent decision making that takes place with regards to bearing, speed, and the sensor technology activated by both vessels. Game theory has also been applied in cybersecurity: such tools can, for example, automate the transformation of public vulnerability data into models, allowing defenders to synthesize optimal defence strategies through Stackelberg equilibrium analysis. This approach enhances cyber resilience by enabling defenders to anticipate and counteract attackers’ best responses, making game theory increasingly relevant in adversarial cybersecurity environments. Ho et al. provide a broad summary of game theory applications in defence, highlighting its advantages and limitations across both physical and cyber domains.

=== Biology ===
Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the evolutionarily stable strategy (ESS), first introduced in (Maynard Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.

In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of approximate 1:1 sex ratios. (Fisher 1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren. Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication. The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics).

Biologists have used the game of chicken to analyze fighting behavior and territoriality. According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature. One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.
All of these actions increase the overall fitness of a group, but occur at a cost to the individual. Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the equation c < b × r, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the higher the incidence of altruism between them, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of 1⁄2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. The coefficient values depend heavily on the scope of the playing field: if, for example, the choice of whom to favor includes all genetic living things rather than just all relatives, and the discrepancy between all humans accounts for only approximately 1% of the diversity in the playing field, then a coefficient that was 1⁄2 in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g., epigenetics, religion, science) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller. === Computer science and logic === Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems. Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games. Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms. The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory. Game theory has multiple applications in the field of artificial intelligence and machine learning. It is often used in developing autonomous systems that can make complex decisions in uncertain environments. Other areas of application of game theory in the AI/ML context include multi-agent system formation, reinforcement learning, and mechanism design. By using game theory to model the behavior of other agents and anticipate their actions, AI/ML systems can make better decisions and operate more effectively. === Philosophy === Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention.
In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis. Following Lewis's (1969) game-theoretic account of convention, Edna Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game. Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993), Skyrms (1990), and Stalnaker (1999). The synthesis of game theory with ethics was championed by R. B. Braithwaite. The hope was that rigorous mathematical analysis of game theory might help formalize the more imprecise philosophical discussions. However, this expectation has materialized only to a limited extent. In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)). Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the prisoner's dilemma, stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1998)). === Epidemiology === Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and the financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society. == Well known examples of games == === Prisoner's dilemma === William Poundstone described the game in his 1993 book Prisoner's Dilemma: Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
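The strategic structure of this story can be made explicit with a small payoff table. In the minimal Python sketch below, payoffs are prison terms negated so that larger is better; the sentences for three of the outcomes come from the description above, while the five-year term for mutual betrayal is an illustrative assumption, as the text does not fix it.

```python
# A minimal sketch of the prisoner's dilemma described above, using
# negated years in prison as payoffs (larger is better). The 5-year
# sentence for mutual betrayal is an assumption for illustration;
# the text fixes only the other three outcomes.

ACTIONS = ["stay_silent", "betray"]

# payoff[(a_move, b_move)] = (A's payoff, B's payoff)
payoff = {
    ("stay_silent", "stay_silent"): (-2, -2),   # lesser charge for both
    ("stay_silent", "betray"):      (-10, 0),   # A serves full term, B walks
    ("betray", "stay_silent"):      (0, -10),
    ("betray", "betray"):           (-5, -5),   # assumed mutual-betrayal term
}

def best_response_of_a(b_move):
    """Return A's payoff-maximizing action against a fixed move by B."""
    return max(ACTIONS, key=lambda a: payoff[(a, b_move)][0])

for b_move in ACTIONS:
    print(f"If B plays {b_move}, A's best response is {best_response_of_a(b_move)}")
```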
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle. However, both prisoners staying silent would yield a greater reward for both of them than mutual betrayal. === Battle of the sexes === The "battle of the sexes" is a term used to describe the perceived conflict between men and women in various areas of life, such as relationships, careers, and social roles. This conflict is often portrayed in popular culture, such as movies and television shows, as a humorous or dramatic competition between the genders. This conflict can be depicted in a game theory framework; it is an example of a non-cooperative game. An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together. In this game, there are two pure strategy Nash equilibria, one for each of the two options: in each, both players choose the same option, with each equilibrium favoring a different player. If the game is also allowed to be played in mixed strategies, where each player chooses their strategy randomly, there is additionally a third Nash equilibrium in which both players randomize. However, in the context of the "battle of the sexes" game, the assumption is usually made that the game is played in pure strategies. === Ultimatum game === The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961. One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions. The ultimatum game has a variant, the dictator game. The two are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer. === Trust game === The Trust Game is an experiment designed to measure trust in economic decisions. It is also called "the investment game" and is designed to investigate trust and demonstrate its importance rather than the "rationality" of self-interest. The game was designed by Joyce Berg, John Dickhaut, and Kevin McCabe in 1995. In the game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee is completely self-interested, then they should return nothing; anticipating this, a purely self-interested investor should send nothing. However, experiments do not bear this prediction out: the outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that it will be reciprocated.
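The gap between the self-interested prediction and observed behaviour is easy to see numerically. The following minimal Python sketch assumes an endowment of 10 units and a trustee who returns half of whatever arrives; both numbers are illustrative assumptions, since only the tripling of the transfer is fixed by the description above.

```python
# A minimal sketch of the trust game described above. The endowment
# of 10 units and the trustee's "return half" rule are illustrative
# assumptions; only the tripling of the transfer comes from the text.

ENDOWMENT = 10
MULTIPLIER = 3

def play(amount_sent, return_fraction):
    """Return (investor payoff, trustee payoff) for one round."""
    tripled = MULTIPLIER * amount_sent
    returned = return_fraction * tripled
    investor = ENDOWMENT - amount_sent + returned
    trustee = tripled - returned
    return investor, trustee

# Self-interest prediction: the trustee returns nothing, so the best
# the investor can do is send nothing and keep the endowment.
print(play(0, 0.0))     # (10.0, 0.0)

# Observed behaviour is closer to: send something, get some back.
print(play(10, 0.5))    # (15.0, 15.0) -- both better off than (10, 0)
```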
=== Cournot Competition === The Cournot competition model involves firms choosing, independently and simultaneously, the quantity of a homogeneous product to produce, where the marginal cost can differ between firms and each firm's payoff is its profit. Production costs are public information, and each firm aims to find its profit-maximizing quantity based on what it believes the other firm will produce. In this game the firms would jointly prefer to produce the monopoly quantity, but each has a strong incentive to deviate and produce more, which decreases the market-clearing price. For example, when the monopoly quantity is low and the price high, a firm may be tempted to increase production with the aim of maximizing its own profit. However, this option does not provide the highest payoff, as a firm's ability to maximize profits depends on its market share and the elasticity of the market demand. The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, since each is playing the best response to the other firm's output. When the Cournot equilibrium is achieved, the firms are at the Nash equilibrium of the game. === Bertrand Competition === Bertrand competition assumes homogeneous products and a constant marginal cost, and the players choose prices. The equilibrium of price competition is where the price is equal to marginal cost, assuming complete information about the competitors' costs. At any higher price, each firm has an incentive to undercut the other, because a homogeneous product with a lower price will gain all of the market share. == In popular culture == Based on the 1998 book by Sylvia Nasar, the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash. The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games". In the 1997 film of the same name, the character Carl Jenkins referred to his military intelligence assignment as being assigned to "games and theory". The 1964 film Dr. Strangelove satirizes game-theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to equilibria that are not subgame perfect. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public. The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary ... to give yourself the minimum amount of failure". Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters. The 1974 novel Spy Story by Len Deighton explores elements of game theory in regard to Cold War army exercises. The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory.
The Joker, the prime antagonist in the 2008 film The Dark Knight, presents game theory concepts, notably the prisoner's dilemma, in a scene where he asks passengers on two different ferries to bomb the other one to save their own. In the 2018 film Crazy Rich Asians, the female lead Rachel Chu is a professor of economics and game theory at New York University. At the beginning of the film she is seen in her NYU classroom playing a game of poker with her teaching assistant and wins the game by bluffing; then in the climax of the film, she plays a game of mahjong with her boyfriend's disapproving mother Eleanor, losing the game to Eleanor on purpose but winning her approval as a result. In the 2017 film Molly's Game, Brad, an inexperienced poker player, unknowingly makes an irrational betting decision and causes his opponent Harlan to deviate from his Nash equilibrium strategy, resulting in a significant loss when Harlan loses the hand. == See also == Applied ethics – Practical application of moral considerations Bandwidth-sharing game – Type of resource allocation game Chainstore paradox – Game theory paradox Collective intentionality – Intentionality that occurs when two or more individuals undertake a task together Core (game theory) – Term in game theory Glossary of game theory Intra-household bargaining – Negotiations between members of a household to reach decisions Kingmaker scenario – Endgame situation in game theory Law and economics – Application of economic theory to analysis of legal systems Mutual assured destruction – Doctrine of military strategy Outline of artificial intelligence – Overview of and topical guide to artificial intelligence Parrondo's paradox – Paradox in game theory Precautionary principle – Risk management strategy Quantum refereed game Risk management – Identification, evaluation and control of risks Self-confirming equilibrium Tragedy of the commons – Self-interests causing depletion of a shared resource Traveler's dilemma – Non-zero-sum game thought experiment Wilson doctrine (economics) – Argument in economic theory Compositional game theory Lists List of cognitive biases List of emerging technologies List of games in game theory == Sources == Ben-David, S.; Borodin, A.; Karp, R.; Tardos, G.; Wigderson, A. (January 1994). "On the power of randomization in on-line algorithms". Algorithmica. 11 (1): 2–14. doi:10.1007/BF01294260. S2CID 26771869. Downs, Anthony (1957), An Economic Theory of Democracy, New York: Harper Fisher, Sir Ronald Aylmer (1930). The Genetical Theory of Natural Selection. Clarendon Press. Gauthier, David (1986), Morals by agreement, Oxford University Press, ISBN 978-0-19-824992-4 Grim, Patrick; Kokalis, Trina; Alai-Tafti, Ali; Kilb, Nicholas; St Denis, Paul (2004), "Making meaning happen", Journal of Experimental & Theoretical Artificial Intelligence, 16 (4): 209–243, doi:10.1080/09528130412331294715, S2CID 5737352 Harper, David; Maynard Smith, John (2003), Animal signals, Oxford University Press, ISBN 978-0-19-852685-8 Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge, MA: The MIT Press, ISBN 978-0-262-58237-7 Kavka, Gregory S. (1986). Hobbesian Moral and Political Theory. Princeton University Press. ISBN 978-0-691-02765-4.
Lewis, David (1969), Convention: A Philosophical Study, ISBN 978-0-631-23257-5 (2002 edition) Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature, 246 (5427): 15–18, Bibcode:1973Natur.246...15S, doi:10.1038/246015a0, S2CID 4224989 Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3. A modern introduction at the graduate level. Poundstone, William (1993). Prisoner's Dilemma (1st Anchor Books ed.). New York: Anchor. ISBN 0-385-41580-X. Quine, W.V.O. (1967), "Truth by Convention", Philosophical Essays for A.N. Whitehead, Russell and Russell Publishers, ISBN 978-0-8462-0970-6 Quine, W.V.O. (1960), "Carnap and Logical Truth", Synthese, 12 (4): 350–374, doi:10.1007/BF00485423, S2CID 46979744 Skyrms, Brian (1996), Evolution of the social contract, Cambridge University Press, ISBN 978-0-521-55583-8 Skyrms, Brian (2004), The stag hunt and the evolution of social structure, Cambridge University Press, ISBN 978-0-521-53392-8 Sober, Elliott; Wilson, David Sloan (1998), Unto others: the evolution and psychology of unselfish behavior, Harvard University Press, ISBN 978-0-674-93047-6 Webb, James N. (2007), Game theory: decisions, interaction and evolution, Undergraduate mathematics, Springer, ISBN 978-1-84628-423-6. Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes. == Further reading == === Textbooks and general literature === Aumann, Robert J. (1987), "game theory", The New Palgrave: A Dictionary of Economics, vol. 2, pp. 460–82. Camerer, Colin (2003), "Introduction", Behavioral Game Theory: Experiments in Strategic Interaction, Russell Sage Foundation, pp. 1–25, ISBN 978-0-691-09039-9, archived from the original on 14 May 2011, retrieved 9 February 2011. Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0. Suitable for undergraduate and business students. Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley, ISBN 978-0-201-84758-1. Suitable for upper-level undergraduates. Gaffal, Margit; Padilla Gálvez, Jesús (2014). Dynamics of Rational Negotiation: Game Theory, Language Games and Forms of Life. Springer. Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press, ISBN 978-0-691-00395-5. Suitable for advanced undergraduates. Published in Europe as Gibbons, Robert (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 978-0-7450-1159-2. Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior, Princeton University Press, ISBN 978-0-691-00943-8 Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University Press, ISBN 978-0-19-507340-9. Presents game theory in a formal way suitable for the graduate level. Joseph E. Harrington (2008), Games, strategies, and decision making, Worth, ISBN 0-7167-6630-2. Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation. Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit, Control and Optimization, New York: Dover Publications, ISBN 978-0-486-40682-4 Michael Maschler; Eilon Solan; Shmuel Zamir (2013), Game Theory, Cambridge University Press, ISBN 978-1-108-49345-1. Undergraduate textbook. Miller, James H.
(2003), Game theory at work: how to use game theory to outthink and outmaneuver your competition, New York: McGraw-Hill, ISBN 978-0-07-140020-6. Suitable for a general audience. Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7, retrieved 8 March 2016 Watson, Joel (2013), Strategy: An Introduction to Game Theory (3rd edition), New York: W.W. Norton and Co., ISBN 978-0-393-91838-0. A leading textbook at the advanced undergraduate level. McCain, Roger A. (2010). Game Theory: A Nontechnical Introduction to the Analysis of Strategy. World Scientific. ISBN 978-981-4289-65-8. === Historically important texts === Aumann, R. J.; Shapley, L. S. (1974), Values of Non-Atomic Games, Princeton University Press Cournot, A. Augustin (1838), "Recherches sur les principes mathématiques de la théorie des richesses", Libraire des Sciences Politiques et Sociales Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul Farquharson, Robin (1969), Theory of Voting, Blackwell (Yale U.P. in the U.S.), ISBN 978-0-631-12460-3 Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York: Wiley reprinted edition: R. Duncan Luce; Howard Raiffa (1989), Games and decisions: introduction and critical survey, New York: Dover Publications, ISBN 978-0-486-65943-5 Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press, ISBN 978-0-521-28884-2 Nash, John (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America, 36 (1): 48–49, Bibcode:1950PNAS...36...48N, doi:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946 Shapley, L.S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.) Shapley, L. S. (October 1953). "Stochastic Games". Proceedings of the National Academy of Sciences. 39 (10): 1095–1100. Bibcode:1953PNAS...39.1095S. doi:10.1073/pnas.39.10.1095. PMC 1063912. PMID 16589380. von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen, 100 (1): 295–320, doi:10.1007/bf01448847, S2CID 122961988 English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, p. 42. Princeton University Press. von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton University Press Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings of the Fifth International Congress of Mathematicians, 2: 501–4 === Other material === Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973), pp. 587–601. McDonald, John (1950–1996), Strategy in Poker, Business & War, W. W. Norton, ISBN 978-0-393-31457-1. A layman's introduction. Papayoanou, Paul (2010), Game Theory for Business: A Primer in Strategic Gaming, Probabilistic, ISBN 978-0-9647938-7-3. Satterthwaite, Mark Allen (April 1975). "Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions" (PDF). Journal of Economic Theory. 10 (2): 187–217. doi:10.1016/0022-0531(75)90050-2.
Siegfried, Tom (2006), A Beautiful Math, Joseph Henry Press, ISBN 978-0-309-10192-9 Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 978-0-674-21885-7 Thrall, Robert M.; Lucas, William F. (1963), "n-person games in partition function form", Naval Research Logistics Quarterly, 10 (4): 281–298, doi:10.1002/nav.3800100126 Dolev, Shlomi; Panagopoulou, Panagiota N.; Rabie, Mikaël; Schiller, Elad M.; Spirakis, Paul G. (2011). "Rationality authority for provable rational behavior". Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing. pp. 289–290. doi:10.1145/1993806.1993858. ISBN 978-1-4503-0719-2. Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh (June 2014), "Algorithms, games, and evolution", Proceedings of the National Academy of Sciences of the United States of America, 111 (29): 10620–10623, Bibcode:2014PNAS..11110620C, doi:10.1073/pnas.1406556111, PMC 4115542, PMID 24979793 == External links == James Miller (2015): Introductory Game Theory Videos. "Games, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Paul Walker: History of Game Theory Page. David Levine: Game Theory. Papers, lecture notes, and much more. Alvin Roth: "Game Theory and Experimental Economics page" — Comprehensive list of links to game theory information on the Web. Adam Kalai: Game Theory and Computer Science — Lecture notes on game theory and computer science. Mike Shor: GameTheory.net — Lecture notes, interactive illustrations and other information. Jim Ratliff's Graduate Course in Game Theory (lecture notes). Don Ross: Review of Game Theory in the Stanford Encyclopedia of Philosophy. Bruno Verbeek and Christopher Morris: Game Theory and Ethics. Elmer G. Wiens: Game Theory — Introduction, worked examples, play online two-person zero-sum games. Marek M. Kaminski: Game Theory and Politics — Syllabuses and lecture notes for game theory and political science. Websites on game theory and social interactions. Kesten Green's Conflict Forecasting — See Papers for evidence on the accuracy of forecasts from game theory and other methods. McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory. Benjamin Polak: Open Course on Game Theory at Yale — videos of the course. Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA. Antonin Kucera: Stochastic Two-Player Games. Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4) – Many person game theory; What is Mathematical Game Theory (#5) – Finale, summing up, and my own view.
Wikipedia/Game_theory
In discrete mathematics, particularly in graph theory, a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called link or line). Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges. The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A. In contrast, if an edge from a person A to a person B means that A owes money to B, then this graph is directed, because owing money is not necessarily reciprocated. Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878 due to a direct relation between mathematics and chemical structure (what he called a chemico-graphical image). == Definitions == Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures. === Graph === A graph (sometimes called an undirected graph to distinguish it from a directed graph, or a simple graph to distinguish it from a multigraph) is a pair G = (V, E), where V is a set whose elements are called vertices (singular: vertex), and E is a set of unordered pairs {v₁, v₂} of vertices, whose elements are called edges (sometimes links or lines). The vertices u and v of an edge {u, v} are called the edge's endpoints. The edge is said to join u and v and to be incident on them. A vertex may belong to no edge, in which case it is not joined to any other vertex and is called isolated. When an edge {u, v} exists, the vertices u and v are called adjacent. A multigraph is a generalization that allows multiple edges to have the same pair of endpoints. In some texts, multigraphs are simply called graphs. Sometimes, graphs are allowed to contain loops, which are edges that join a vertex to itself. To allow loops, the pairs of vertices in E must be allowed to have the same node twice. Such generalized graphs are called graphs with loops or simply graphs when it is clear from the context that loops are allowed. Generally, the vertex set V is taken to be finite (which implies that the edge set E is also finite). Sometimes infinite graphs are considered, but they are usually viewed as a special kind of binary relation, because most results on finite graphs either do not extend to the infinite case or need a rather different proof. An empty graph is a graph that has an empty set of vertices (and thus an empty set of edges). The order of a graph is its number |V| of vertices, usually denoted by n. The size of a graph is its number |E| of edges, typically denoted by m. However, in some contexts, such as for expressing the computational complexity of algorithms, the term size is used for the quantity |V| + |E| (otherwise, a non-empty graph could have size 0). The degree or valency of a vertex is the number of edges that are incident to it; for graphs with loops, a loop is counted twice.
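As a concrete illustration of the preceding definitions, the following minimal Python sketch represents a simple undirected graph as a pair of sets, with edges stored as unordered pairs (frozensets), and computes its order, size, and vertex degrees; the example graph itself is an arbitrary choice.

```python
# A minimal sketch of the definitions above: an undirected simple
# graph as a pair (V, E), with E a set of unordered pairs (frozensets)
# of vertices. The example graph is chosen arbitrarily.

V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({2, 4})}

order = len(V)   # |V|, the number of vertices, denoted n
size = len(E)    # |E|, the number of edges, denoted m

def degree(v):
    """Number of edges incident to v (no loops in a simple graph)."""
    return sum(1 for e in E if v in e)

print(order, size)                # 4 3
print({v: degree(v) for v in V})  # {1: 1, 2: 3, 3: 1, 4: 1}
```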
In a graph of order n, the maximum degree of each vertex is n − 1 (or n + 1 if loops are allowed, because a loop contributes 2 to the degree), and the maximum number of edges is n(n − 1)/2 (or n(n + 1)/2 if loops are allowed). The edges of a graph define a symmetric relation on the vertices, called the adjacency relation. Specifically, two vertices x and y are adjacent if {x, y} is an edge. A graph is fully determined by its adjacency matrix A, which is an n × n square matrix, with Aᵢⱼ specifying the number of connections from vertex i to vertex j. For a simple graph, Aᵢⱼ is either 0, indicating disconnection, or 1, indicating connection; moreover Aᵢᵢ = 0 because an edge in a simple graph cannot start and end at the same vertex. Graphs with self-loops will be characterized by some or all Aᵢᵢ being equal to a positive integer, and multigraphs (with multiple edges between vertices) will be characterized by some or all Aᵢⱼ being equal to a positive integer. Undirected graphs will have a symmetric adjacency matrix (meaning Aᵢⱼ = Aⱼᵢ). === Directed graph === A directed graph or digraph is a graph in which edges have orientations. In one restricted but very common sense of the term, a directed graph is a pair G = (V, E) comprising: V, a set of vertices (also called nodes or points); E, a set of edges (also called directed edges, directed links, directed lines, arrows, or arcs), which are ordered pairs of distinct vertices: E ⊆ {(x, y) | (x, y) ∈ V² and x ≠ y}. To avoid ambiguity, this type of object may be called precisely a directed simple graph. In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head. In one more general sense of the term allowing multiple edges, a directed graph is sometimes defined to be an ordered triple G = (V, E, ϕ) comprising: V, a set of vertices (also called nodes or points); E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs); ϕ, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices): ϕ: E → {(x, y) | (x, y) ∈ V² and x ≠ y}. To avoid ambiguity, this type of object may be called precisely a directed multigraph. A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph) (x, x), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ V². For directed multigraphs, the definition of ϕ should be modified to ϕ: E → V².
To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively. The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y. === Mixed graph === A mixed graph is a graph in which some edges may be directed and some may be undirected. It is an ordered triple G = (V, E, A) for a mixed simple graph and G = (V, E, A, ϕE, ϕA) for a mixed multigraph with V, E (the undirected edges), A (the directed edges), ϕE and ϕA defined as above. Directed and undirected graphs are special cases. === Weighted graph === A weighted graph or a network is a graph in which a number (the weight) is assigned to each edge. Such weights might represent, for example, costs, lengths, or capacities, depending on the problem at hand. Such graphs arise in many contexts, for example in shortest path problems such as the traveling salesman problem. == Types of graphs == === Oriented graph === One definition of an oriented graph is that it is a directed graph in which at most one of (x, y) and (y, x) may be edges of the graph. That is, it is a directed graph that can be formed as an orientation of an undirected (simple) graph. Some authors use "oriented graph" to mean the same as "directed graph". Some authors use "oriented graph" to mean any orientation of a given undirected graph or multigraph. === Regular graph === A regular graph is a graph in which each vertex has the same number of neighbours, i.e., every vertex has the same degree. A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k. === Complete graph === A complete graph is a graph in which each pair of vertices is joined by an edge. A complete graph contains all possible edges. === Finite graph === A finite graph is a graph in which the vertex set and the edge set are finite sets. Otherwise, it is called an infinite graph. Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated. === Connected graph === In an undirected graph, an unordered pair of vertices {x, y} is called connected if a path leads from x to y. Otherwise, the unordered pair is called disconnected. A connected graph is an undirected graph in which every unordered pair of vertices in the graph is connected. Otherwise, it is called a disconnected graph. In a directed graph, an ordered pair of vertices (x, y) is called strongly connected if a directed path leads from x to y. Otherwise, the ordered pair is called weakly connected if an undirected path leads from x to y after replacing all of its directed edges with undirected edges. Otherwise, the ordered pair is called disconnected. A strongly connected graph is a directed graph in which every ordered pair of vertices in the graph is strongly connected. Otherwise, it is called a weakly connected graph if every ordered pair of vertices in the graph is weakly connected. Otherwise it is called a disconnected graph. A k-vertex-connected graph or k-edge-connected graph is a graph in which no set of k − 1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k-vertex-connected graph is often called simply a k-connected graph.
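The definition of connectedness translates directly into an algorithmic test: an undirected graph is connected exactly when a breadth-first search from any single vertex reaches every vertex. The following minimal Python sketch assumes an adjacency-list representation; the example graphs are arbitrary choices.

```python
# A small sketch of the connectivity definition above: an undirected
# graph is connected iff a breadth-first search from any start vertex
# reaches all other vertices.

from collections import deque

adj = {1: [2], 2: [1, 3], 3: [2], 4: []}  # vertex 4 is isolated

def is_connected(adj):
    """Check whether every unordered pair of vertices is connected."""
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)

print(is_connected(adj))                           # False
print(is_connected({1: [2], 2: [1, 3], 3: [2]}))   # True
```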
=== Bipartite graph === A bipartite graph is a simple graph in which the vertex set can be partitioned into two sets, W and X, so that no two vertices in W share a common edge and no two vertices in X share a common edge. Alternatively, it is a graph with a chromatic number of 2. In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X. === Path graph === A path graph or linear graph of order n ≥ 2 is a graph in which the vertices can be listed in an order v₁, v₂, …, vₙ such that the edges are the {vᵢ, vᵢ₊₁} where i = 1, 2, …, n − 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as a subgraph of another graph, it is a path in that graph. === Planar graph === A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect. === Cycle graph === A cycle graph or circular graph of order n ≥ 3 is a graph in which the vertices can be listed in an order v₁, v₂, …, vₙ such that the edges are the {vᵢ, vᵢ₊₁} where i = 1, 2, …, n − 1, plus the edge {vₙ, v₁}. Cycle graphs can be characterized as connected graphs in which the degree of all vertices is 2. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph. === Tree === A tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees. === Polytree === A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. === Advanced classes === More advanced kinds of graphs are: Petersen graph and its generalizations; perfect graphs; cographs; chordal graphs; other graphs with large automorphism groups: vertex-transitive, arc-transitive, and distance-transitive graphs; strongly regular graphs and their generalizations, distance-regular graphs. == Properties of graphs == Two edges of a graph are called adjacent if they share a common vertex. Two edges of a directed graph are called consecutive if the head of the first one is the tail of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if the first one is the tail and the second one is the head of an edge), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident. The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object. Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat vertices as indistinguishable.
(Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled. Graphs with labels attached to edges or vertices are more generally designated as labeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (In the literature, the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.) The category of directed multigraphs permitting loops is the comma category Set ↓ D where D: Set → Set is the functor taking a set s to s × s. == Examples == The diagram is a schematic representation of the graph with vertices V = {1, 2, 3, 4, 5, 6} and edges E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}. In computer science, directed graphs are used to represent knowledge (e.g., conceptual graph), finite-state machines, and many other discrete structures. A binary relation R on a set X defines a directed graph. An element x of X is a direct predecessor of an element y of X if and only if xRy. A directed graph can model information networks such as Twitter, with one user following another. Particularly regular examples of directed graphs are given by the Cayley graphs of finitely-generated groups, as well as Schreier coset graphs. In category theory, every small category has an underlying directed multigraph whose vertices are the objects of the category, and whose edges are the arrows of the category. In the language of category theory, one says that there is a forgetful functor from the category of small categories to the category of quivers. == Graph operations == There are several operations that produce new graphs from initial ones, which might be classified into the following categories: unary operations, which create a new graph from an initial one, such as: edge contraction, line graph, dual graph, complement graph, graph rewriting; binary operations, which create a new graph from two initial ones, such as: disjoint union of graphs, cartesian product of graphs, tensor product of graphs, strong product of graphs, lexicographic product of graphs, series–parallel graphs. == Generalizations == In a hypergraph, an edge can join any positive number of vertices. An undirected graph can be seen as a simplicial complex consisting of 1-simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices. Every graph gives rise to a matroid. In model theory, a graph is just a structure. But in that case, there is no limitation on the number of edges: it can be any cardinal number, see continuous graph. In computational biology, power graph analysis introduces power graphs as an alternative representation of undirected graphs. In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids. == See also == Conceptual graph Graph (abstract data type) Graph database Graph drawing List of graph theory topics List of publications in graph theory Network theory
== Further reading == Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication ed.). New York: Dover Publications. ISBN 978-0-486-67870-2. == External links == Media related to Graph (discrete mathematics) at Wikimedia Commons Weisstein, Eric W. "Graph". MathWorld.
Wikipedia/Graph_(discrete_mathematics)
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers). Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, as for instance how irrational numbers can be approximated by fractions (Diophantine approximation). Number theory is one of the oldest branches of mathematics alongside geometry. One quirk of number theory is that it deals with statements that are simple to understand but very difficult to solve. Examples of this are Fermat's Last Theorem, which was proved 358 years after its original formulation, and Goldbach's conjecture, which has remained unsolved since the 18th century. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." It was regarded as the example of pure mathematics with no applications outside mathematics until the 1970s, when it became known that prime numbers would be used as the basis for the creation of public-key cryptography algorithms. == History == Number theory is the branch of mathematics that studies integers and their properties and relations. The integers comprise a set that extends the set of natural numbers {1, 2, 3, …} to include the number 0 and the negations of the natural numbers {−1, −2, −3, …}. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers). Number theory is closely related to arithmetic and some authors use the terms as synonyms. However, the word "arithmetic" is used today to mean the study of numerical operations and extends to the real numbers. In a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships. Traditionally, it is known as higher arithmetic. By the early twentieth century, the term number theory had been widely adopted. The term number means whole numbers, which refers to either the natural numbers or the integers. Elementary number theory studies aspects of integers that can be investigated using elementary methods such as elementary proofs. Analytic number theory, by contrast, relies on complex numbers and techniques from analysis and calculus. Algebraic number theory employs algebraic structures such as fields and rings to analyze the properties of and relations between numbers. Geometric number theory uses concepts from geometry to study numbers. Further branches of number theory are probabilistic number theory, combinatorial number theory, computational number theory, and applied number theory, which examines the application of number theory to science and technology.
=== Origins === ==== Ancient Mesopotamia ==== The earliest historical find of an arithmetical nature is a fragment of a table: Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC), a broken clay tablet, contains a list of "Pythagorean triples", that is, integers (a, b, c) such that a² + b² = c². The triples are too numerous and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity (½(x − 1/x))² + 1 = (½(x + 1/x))², which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by c/a, presumably for actual use as a "table", for example, with a view to applications. It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own many centuries later. It has been suggested instead that the table was a source of numerical examples for school problems. The Plimpton 322 tablet is the only surviving evidence of what today would be called number theory within Babylonian mathematics, though a kind of Babylonian algebra was much more developed. ==== Ancient Greece ==== Although other civilizations probably influenced Greek mathematics at the beginning, all evidence of such borrowings appears relatively late, and it is likely that Greek arithmētikḗ (the theoretical or philosophical study of numbers) is an indigenous tradition. Aside from a few fragments, most of what is known about Greek mathematics in the 6th to 4th centuries BC (the Archaic and Classical periods) comes through either the reports of contemporary non-mathematicians or references from mathematical works in the early Hellenistic period. In the case of number theory, this means largely Plato, Aristotle, and Euclid. Plato had a keen interest in mathematics, and distinguished clearly between arithmētikḗ and calculation (logistikē). Plato reports in his dialogue Theaetetus that Theodorus had proven that √3, √5, …, √17 are irrational. Theaetetus, a disciple of Theodorus's, worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. Aristotle further claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: Platonem ferunt didicisse Pythagorea omnia ("They say Plato learned all things Pythagorean"). Euclid devoted part of his Elements (Books VII–IX) to topics that belong to elementary number theory, including prime numbers and divisibility. He gave an algorithm, the Euclidean algorithm, for computing the greatest common divisor of two numbers (Prop. VII.2) and a proof implying the infinitude of primes (Prop. IX.20). There is also older material likely based on Pythagorean teachings (Prop. IX.21–34), such as "odd times even is even" and "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it". This is all that is needed to prove that √2 is irrational.
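Both the tablet's identity and Euclid's Prop. VII.2 translate directly into a few lines of modern code. The Python sketch below is illustrative only: the function names and sample inputs are arbitrary choices, and the triple-generating helper simply clears denominators in the identity.

```python
from fractions import Fraction
from math import lcm

def gcd(a, b):
    """Euclid's algorithm (Elements, Prop. VII.2): repeatedly replace
    the pair (a, b) by (b, a mod b) until the remainder vanishes."""
    while b != 0:
        a, b = b, a % b
    return a

def triple_from(x):
    """Pythagorean triple from the Old Babylonian identity
    ((x - 1/x)/2)^2 + 1 = ((x + 1/x)/2)^2, for rational x > 1,
    scaled by a common denominator to give integers."""
    a = (x - 1 / x) / 2
    c = (x + 1 / x) / 2
    m = lcm(a.denominator, c.denominator)
    return (a * m).numerator, m, (c * m).numerator

print(gcd(252, 105))              # 21
print(triple_from(Fraction(2)))   # (3, 4, 5), since 3^2 + 4^2 = 5^2
```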
Pythagoreans apparently gave great importance to the odd and the even. The discovery that √2 is irrational is credited to the early Pythagoreans, sometimes assigned to Hippasus, who was expelled or split from the Pythagorean community as a result. This forced a distinction between numbers (integers and the rationals—the subjects of arithmetic) and lengths and proportions (which may be identified with real numbers, whether rational or not). The Pythagorean tradition also spoke of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th centuries). An epigram published by Lessing in 1773 appears to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as it is known, such equations were first successfully treated by Indian mathematicians. It is not known whether Archimedes himself had a method of solution. ===== Late Antiquity ===== Aside from the elementary work of Neopythagoreans such as Nicomachus and Theon of Smyrna, the foremost authority in arithmētikḗ in Late Antiquity was Diophantus of Alexandria, who probably lived in the 3rd century AD, approximately five hundred years after Euclid. Little is known about his life, but he wrote two works that are extant: On Polygonal Numbers, a short treatise written in the Euclidean manner on the subject, and the Arithmetica, a work on pre-modern algebra (namely, the use of algebra to solve numerical problems). Six out of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form f(x, y) = z² or f(x, y, z) = w². In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought. ==== Asia ==== The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (between the third and fifth centuries). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.) The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early nineteenth century by the British missionary Alexander Wylie. There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an autochthonous tradition; in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century.
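Sunzi's exercise asks, in modern terms, for a number leaving remainder 2 when divided by 3, remainder 3 when divided by 5, and remainder 2 when divided by 7. A minimal Python sketch is below; the brute-force search over the product of the moduli stands in for the classical method (and for the kuṭṭaka discussed next), and the function name is an illustrative choice.

```python
# A small sketch of the Chinese remainder theorem exercise from the
# Sunzi Suanjing: find x with x = 2 (mod 3), x = 3 (mod 5), and
# x = 2 (mod 7). A direct search stands in for the classical method.

from math import prod

def solve_crt(residues, moduli):
    """Smallest non-negative x with x = r_i (mod m_i) for all i,
    assuming the moduli are pairwise coprime."""
    for x in range(prod(moduli)):
        if all(x % m == r for r, m in zip(residues, moduli)):
            return x

print(solve_crt([2, 3, 2], [3, 5, 7]))  # 23
```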
Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences n ≡ a₁ (mod m₁), n ≡ a₂ (mod m₂) could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalization of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations. Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century). Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke. ==== Arithmetic in the Islamic golden age ==== In the early ninth century, the caliph al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta). Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912). Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, Al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem. ==== Western Europe in the Middle Ages ==== Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' Arithmetica. === Early modern number theory === ==== Fermat ==== Pierre de Fermat (1607–1665) never published his writings but communicated through correspondence instead. Accordingly, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. Although he drew inspiration from classical sources, in his notes and letters Fermat scarcely wrote any proofs—he had no models in the area. Over his lifetime, Fermat made the following contributions to the field: One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day. In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer. Fermat's little theorem (1640): if a is not divisible by a prime p, then a^(p−1) ≡ 1 (mod p).
{\displaystyle a^{p-1}\equiv 1{\bmod {p}}.} If a and b are coprime, then a 2 + b 2 {\displaystyle a^{2}+b^{2}} is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form a 2 + b 2 {\displaystyle a^{2}+b^{2}} . These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent. In 1657, Fermat posed the problem of solving x 2 − N y 2 = 1 {\displaystyle x^{2}-Ny^{2}=1} as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat was not aware of this). He stated that a proof could be found by infinite descent. Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that x 4 + y 4 = z 4 {\displaystyle x^{4}+y^{4}=z^{4}} has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that x 3 + y 3 = z 3 {\displaystyle x^{3}+y^{3}=z^{3}} has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent). Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to x n + y n = z n {\displaystyle x^{n}+y^{n}=z^{n}} for all n ≥ 3 {\displaystyle n\geq 3} ; this claim appears in his annotations in the margins of his copy of Diophantus. ==== Euler ==== The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following: Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that p = x 2 + y 2 {\displaystyle p=x^{2}+y^{2}} if and only if p ≡ 1 mod 4 {\displaystyle p\equiv 1{\bmod {4}}} ; initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to x 4 + y 4 = z 2 {\displaystyle x^{4}+y^{4}=z^{2}} (implying the case n=4 of Fermat's last theorem, the case n=3 of which Euler also proved by a related method). Pell's equation, first misnamed by Euler. He wrote on the link between continued fractions and Pell's equation. First steps towards analytic number theory. In his work on sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function. Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form x 2 + N y 2 {\displaystyle x^{2}+Ny^{2}} , some of it prefiguring quadratic reciprocity. Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1.
In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated. ==== Lagrange, Legendre, and Gauss ==== Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations; for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them.) He also studied quadratic forms in full generality (as opposed to m X 2 + n Y 2 {\displaystyle mX^{2}+nY^{2}} ), including defining their equivalence relation, showing how to put them in reduced form, etc. Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation a x 2 + b y 2 + c z 2 = 0 {\displaystyle ax^{2}+by^{2}+cz^{2}=0} and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for n = 5 {\displaystyle n=5} (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain). Carl Friedrich Gauss (1777–1855) worked in a wide variety of fields in both mathematics and physics including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. The Disquisitiones Arithmeticae (1801), which he had written three years earlier, when he was 21, had an immense influence in the area of number theory and set its agenda for much of the 19th century. Gauss proved in this work the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory: The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic. In this way, Gauss arguably made forays towards Évariste Galois's work and the area of algebraic number theory. === Maturity and division into subfields === Starting early in the nineteenth century, the following developments gradually took place: The rise to self-consciousness of number theory (or higher arithmetic) as a field of study. The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra. The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory. Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable.
The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms). The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines for which the Fermat Prize is awarded. == Main subdivisions == === Elementary number theory === Elementary number theory deals with topics in number theory by means of basic methods in arithmetic. Its primary subjects of study are divisibility, factorization, and primality, as well as congruences in modular arithmetic. Other topics in elementary number theory include Diophantine equations, continued fractions, integer partitions, and Diophantine approximations. Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithms. Multiplication, for instance, is an operation that combines two numbers, referred to as factors, to form a single number, termed the product, such as 2 × 3 = 6 {\displaystyle 2\times 3=6} . Divisibility is a property between two nonzero integers related to division. An integer a {\displaystyle a} is said to be divisible by a nonzero integer b {\displaystyle b} if a {\displaystyle a} is a multiple of b {\displaystyle b} ; that is, if there exists an integer q {\displaystyle q} such that a = b q {\displaystyle a=bq} . An equivalent formulation is that b {\displaystyle b} divides a {\displaystyle a} , which is denoted with a vertical bar: b | a {\displaystyle b|a} . Conversely, if this were not the case, then a {\displaystyle a} would not be divided evenly by b {\displaystyle b} , resulting in a remainder. Euclid's division lemma asserts that a {\displaystyle a} can always be written as a = b q + r {\displaystyle a=bq+r} , where the remainder r {\displaystyle r} satisfies 0 ≤ r < | b | {\displaystyle 0\leq r<|b|} and accounts for the leftover quantity. Elementary number theory studies divisibility rules in order to quickly identify if a given integer is divisible by a fixed divisor. For instance, it is known that any integer is divisible by 3 if and only if its decimal digit sum is divisible by 3. A common divisor of several nonzero integers is an integer that divides all of them. The greatest common divisor (gcd) is the largest of such divisors. Two integers are said to be coprime or relatively prime to one another if their greatest common divisor is 1; that is, if 1 is their only common positive divisor. The Euclidean algorithm computes the greatest common divisor of two integers a , b {\displaystyle a,b} by means of repeatedly applying the division lemma and shifting the divisor and remainder after every step. The algorithm can be extended (the extended Euclidean algorithm) to solve a special case of linear Diophantine equations a x + b y = 1 {\displaystyle ax+by=1} . A Diophantine equation is an equation with several unknowns and integer coefficients.
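To make the procedure concrete, here is a minimal Python sketch, ours rather than the article's, of the Euclidean algorithm as just described, together with the digit-sum test for divisibility by 3:

def gcd(a, b):
    # Euclidean algorithm: repeatedly divide, shifting divisor and remainder.
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(252, 105))     # 21
print(gcd(9, 28) == 1)   # True: 9 and 28 are coprime

n = 6543
print(sum(int(d) for d in str(n)) % 3 == 0)  # True, and indeed 6543 == 3 * 2181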
Another kind of Diophantine equation is described in the Pythagorean theorem, x 2 + y 2 = z 2 {\displaystyle x^{2}+y^{2}=z^{2}} , whose solutions are called Pythagorean triples if they are all integers. Elementary number theory studies the divisibility properties of integers such as parity (even and odd numbers), prime numbers, and perfect numbers. Important number-theoretic functions include the divisor-counting function, the divisor summatory function and its modifications, and Euler's totient function. A prime number is an integer greater than 1 whose only positive divisors are 1 and the prime itself. A positive integer greater than 1 that is not prime is called a composite number. Euclid's theorem demonstrates that there are infinitely many prime numbers; the set of primes begins {2, 3, 5, 7, 11, ...}. The sieve of Eratosthenes was devised as an efficient algorithm for identifying all primes up to a given natural number by eliminating all composite numbers. Factorization is a method of expressing a number as a product. Specifically in number theory, integer factorization is the decomposition of an integer into a product of integers. The process of repeatedly applying this procedure until all factors are prime is known as prime factorization. A fundamental property of primes is shown in Euclid's lemma: if a prime divides a product of integers, then that prime divides at least one of the factors in the product. The fundamental theorem of arithmetic, also known as the unique factorization theorem, concerns prime factorization. The theorem states that every integer greater than 1 can be factorised into a product of prime numbers and that this factorisation is unique up to the order of the factors. For example, 120 {\displaystyle 120} is expressed uniquely as 2 × 2 × 2 × 3 × 5 {\displaystyle 2\times 2\times 2\times 3\times 5} or simply 2 3 × 3 × 5 {\displaystyle 2^{3}\times 3\times 5} . Modular arithmetic works with finite sets of integers and introduces the concepts of congruence and residue classes. Two integers a , b {\displaystyle a,b} are congruent modulo n {\displaystyle n} (a positive integer called the modulus) when n | ( a − b ) {\displaystyle n|(a-b)} holds; congruence modulo a fixed n {\displaystyle n} is an equivalence relation. Equivalently, performing Euclidean division on both a {\displaystyle a} and n {\displaystyle n} , and on b {\displaystyle b} and n {\displaystyle n} , yields the same remainder. This is written as a ≡ b ( mod n ) {\textstyle a\equiv b{\pmod {n}}} . In a manner analogous to the 12-hour clock, the sum of 4 and 9 is equal to 13, yet congruent to 1 modulo 12. A residue class modulo n {\displaystyle n} is a set that contains all integers congruent to a specified r {\displaystyle r} modulo n {\displaystyle n} . For example, 6 Z + 1 {\displaystyle 6\mathbb {Z} +1} contains all multiples of 6 incremented by 1. Modular arithmetic provides a range of formulas for rapidly solving congruences involving very large powers. An influential theorem is Fermat's little theorem, which states that if a prime p {\displaystyle p} is coprime to some integer a {\displaystyle a} , then a p − 1 ≡ 1 ( mod p ) {\textstyle a^{p-1}\equiv 1{\pmod {p}}} is true. Euler's theorem extends this to assert that every integer a {\displaystyle a} coprime to the modulus n {\displaystyle n} satisfies the congruence a φ ( n ) ≡ 1 ( mod n ) , {\displaystyle a^{\varphi (n)}\equiv 1{\pmod {n}},} where Euler's totient function φ {\displaystyle \varphi } counts all positive integers up to n {\displaystyle n} that are coprime to n {\displaystyle n} .
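Both theorems are easy to spot-check numerically. The following Python lines are an illustrative sketch of our own rather than anything from the article, using a naive totient:

from math import gcd

def totient(n):
    # Euler's phi: count 1 <= k <= n with gcd(k, n) == 1 (naive but clear).
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(pow(6, 13 - 1, 13))        # 1, as Fermat's little theorem predicts
print(totient(20))               # 8
print(pow(3, totient(20), 20))   # 1, as Euler's theorem predicts (gcd(3, 20) == 1)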
Modular arithmetic also provides formulas that are used to solve congruences with unknowns in a similar vein to equation solving in algebra, such as the Chinese remainder theorem. === Analytic number theory === Analytic number theory, in contrast to elementary number theory, relies on complex numbers and techniques from analysis and calculus. Analytic number theory may be defined in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or in terms of its concerns, as the study within number theory of estimates on the size and density of certain numbers (e.g., primes), as opposed to identities. It studies the distribution of primes, the behavior of number-theoretic functions, and irrational numbers. Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, many of the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture, the twin prime conjecture, the Hardy–Littlewood conjectures, the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory. Analysis is the branch of mathematics that studies the limit, defined as the value to which a sequence or function tends as the argument (or index) approaches a specific value. For example, the limit of the sequence 0.9, 0.99, 0.999, ... is 1. In the context of functions, the limit of 1 x {\textstyle {\frac {1}{x}}} as x {\displaystyle x} approaches infinity is 0. The complex numbers extend the real numbers with the imaginary unit i {\displaystyle i} defined as the solution to i 2 = − 1 {\displaystyle i^{2}=-1} . Every complex number can be expressed as x + i y {\displaystyle x+iy} , where x {\displaystyle x} is called the real part and y {\displaystyle y} is called the imaginary part. The distribution of primes, described by the function π {\displaystyle \pi } that counts all primes up to a given real number, is irregular and is a major subject of study in number theory. Elementary formulas that generate initial stretches of primes, including Euler's prime-generating polynomials, have been developed; however, these cease to produce primes once their arguments grow too large. The prime number theorem in analytic number theory provides a formalisation of the notion that prime numbers appear less commonly as their numerical value increases. It states, informally, that the function x log ⁡ ( x ) {\displaystyle {\frac {x}{\log(x)}}} approximates π ( x ) {\displaystyle \pi (x)} . A refinement involves an offset logarithmic integral, which approximates π ( x ) {\displaystyle \pi (x)} more closely. The zeta function has been demonstrated to be connected to the distribution of primes. It is defined as the series ζ ( s ) = ∑ n = 1 ∞ 1 n s = 1 1 s + 1 2 s + 1 3 s + ⋯ {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+\cdots } which converges when s {\displaystyle s} is greater than 1.
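These statements are easy to probe numerically. The sketch below is illustrative Python of our own, not anything from the article: it sieves primes to compare the prime-counting function with x/log x, and sums the first terms of ζ(2):

from math import log, pi

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for x in (10**3, 10**4, 10**5):
    print(x, len(primes_up_to(x)), round(x / log(x)))
# pi(x) next to x/log x: 168 vs 145, 1229 vs 1086, 9592 vs 8686

print(sum(1 / n**2 for n in range(1, 100_001)), pi**2 / 6)  # both ~1.6449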
Euler demonstrated a link involving the infinite product over all prime numbers, expressed as the identity ζ ( s ) = ∏ p prime ( 1 − 1 p s ) − 1 . {\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}\left(1-{\frac {1}{p^{s}}}\right)^{-1}.} Riemann extended the definition to a complex variable and conjectured that every nontrivial zero (one lying in the strip 0 < ℜ ( s ) < 1 {\displaystyle 0<\Re (s)<1} ) has real part equal to 1 2 {\textstyle {\frac {1}{2}}} . He established a connection between the nontrivial zeroes and the prime-counting function. This conjecture, now known as the Riemann hypothesis, remains unsolved; a proof would have direct consequences for the understanding of the distribution of primes. One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function. Elementary number theory works with elementary proofs, a term that excludes the use of complex numbers but may include basic analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous. For example, proofs based on complex Tauberian theorems, such as Wiener–Ikehara's, are often seen as quite enlightening yet not elementary, even though they use Fourier analysis rather than complex analysis as such. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a more advanced proof. Some subjects generally considered to be part of analytic number theory (e.g., sieve theory) are better covered by the second rather than the first definition. Small sieves, for instance, use little analysis and yet still belong to analytic number theory. === Algebraic number theory === An algebraic number is any complex number that is a solution to some polynomial equation f ( x ) = 0 {\displaystyle f(x)=0} with rational coefficients; for example, every solution x {\displaystyle x} of x 5 + ( 11 / 2 ) x 3 − 7 x 2 + 9 = 0 {\displaystyle x^{5}+(11/2)x^{3}-7x^{2}+9=0} is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or shortly number fields. Algebraic number theory studies algebraic number fields. It could be argued that the simplest kind of number fields, namely quadratic fields, were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones Arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form a + b d {\displaystyle a+b{\sqrt {d}}} , where a {\displaystyle a} and b {\displaystyle b} are rational numbers and d {\displaystyle d} is a fixed rational number whose square root is not rational.)
For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such. The grounds of the subject were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorization in algebraic number fields. (For example, in the field generated by the rationals and − 5 {\displaystyle {\sqrt {-5}}} , the number 6 {\displaystyle 6} can be factorised both as 6 = 2 ⋅ 3 {\displaystyle 6=2\cdot 3} and 6 = ( 1 + − 5 ) ( 1 − − 5 ) {\displaystyle 6=(1+{\sqrt {-5}})(1-{\sqrt {-5}})} ; all of 2 {\displaystyle 2} , 3 {\displaystyle 3} , 1 + − 5 {\displaystyle 1+{\sqrt {-5}}} and 1 − − 5 {\displaystyle 1-{\sqrt {-5}}} are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalizations of quadratic reciprocity. Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K. (For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.) Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood. Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950. An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields. === Diophantine geometry === The central problem of Diophantine geometry is to determine when a Diophantine equation has integer or rational solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object. For example, an equation in two variables defines a curve in the plane. More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve or surface. Consider, for instance, the Pythagorean equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} . One would like to know its rational solutions, namely ( x , y ) {\displaystyle (x,y)} such that x and y are both rational. This is the same as asking for all integer solutions to a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} ; any solution to the latter equation gives us a solution x = a / c {\displaystyle x=a/c} , y = b / c {\displaystyle y=b/c} to the former. 
It is also the same as asking for all points with rational coordinates on the curve described by x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} (a circle of radius 1 centered on the origin). The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation f ( x , y ) = 0 {\displaystyle f(x,y)=0} , where f {\displaystyle f} is a polynomial in two variables) depends crucially on the genus of the curve. A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial. There is also the closely linked area of Diophantine approximations: given a number x {\displaystyle x} , determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call a / q {\displaystyle a/q} (with gcd ( a , q ) = 1 {\displaystyle \gcd(a,q)=1} ) a good approximation to x {\displaystyle x} if | x − a / q | < 1 q c {\displaystyle |x-a/q|<{\frac {1}{q^{c}}}} , where c {\displaystyle c} is large. This question is of special interest if x {\displaystyle x} is an algebraic number. If x {\displaystyle x} cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental. Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry is a contemporary term for the same domain covered by Diophantine geometry, particularly when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations. === Other subfields === Probabilistic number theory starts with questions such as the following: Take an integer n at random between one and a million. How likely is it to be prime? (This is just another way of asking how many primes there are between one and a million.) How many prime divisors will n have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average? Combinatorics in number theory starts with questions like the following: Does a fairly "thick" infinite set A {\displaystyle A} contain many elements in arithmetic progression: a {\displaystyle a} , a + b , a + 2 b , a + 3 b , … , a + 10 b {\displaystyle a+b,a+2b,a+3b,\ldots ,a+10b} ? Should it be possible to write large integers as sums of elements of A {\displaystyle A} ? Computational number theory, in turn, asks two main questions: "Can this be computed?" and "Can it be computed rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring is known.
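The asymmetry between testing primality and factoring is easy to feel in practice. Below is a small Python sketch of our own, not from the article: a Fermat-style probable-prime test next to naive trial division. (Composites such as Carmichael numbers can fool the Fermat test, so real implementations use stronger tests such as Miller–Rabin.)

import random

def is_probable_prime(n, trials=20):
    # Fermat test: fast even for very large n.
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True

def trial_division(n):
    # Factor by trial division: runtime grows with the smallest factor.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(is_probable_prime(2**61 - 1))       # True, almost instantly
print(trial_division(999983 * 1000003))   # [999983, 1000003], visibly slower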
== Applications == For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime-numbered gear teeth to distribute wear evenly. In particular, number theorists such as the British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. The number theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. Schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators. In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations". Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to the continuous in numerical analysis. Number theory now has several modern applications spanning diverse areas such as: Computer science: The fast Fourier transform (FFT) algorithm, which is used to efficiently compute the discrete Fourier transform, has important applications in signal processing and data analysis. Physics: The Riemann hypothesis has connections to the distribution of prime numbers and has been studied for its potential implications in physics. Error correction codes: The theory of finite fields and algebraic geometry have been used to construct efficient error-correcting codes. Communications: The design of cellular telephone networks requires knowledge of the theory of modular forms, which is a part of analytic number theory. Study of musical scales: the concept of "equal temperament", which is the basis for most modern Western music, involves dividing the octave into 12 equal parts. This has been studied using number theory and in particular the properties of the 12th root of 2. == See also == Arithmetic dynamics Algebraic function field Arithmetic topology Finite field p-adic number List of number theoretic algorithms == Notes == == References == === Sources === This article incorporates material from the Citizendium article "Number theory", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL. == Further reading == Two of the most popular introductions to the subject are: Hardy, G. H.; Wright, E. M. (2008) [1938]. An introduction to the theory of numbers (rev. by D. R. Heath-Brown and J. H. Silverman, 6th ed.). Oxford University Press. ISBN 978-0-19-921986-5. Vinogradov, I. M. (2003) [1954]. Elements of Number Theory (reprint of the 1954 ed.). Mineola, NY: Dover Publications. Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol 1981).
Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are: Ivan M. Niven; Herbert S. Zuckerman; Hugh L. Montgomery (2008) [1960]. An introduction to the theory of numbers (reprint of the 5th 1991 ed.). John Wiley & Sons. ISBN 978-81-265-1811-1. Retrieved 2016-02-28. Rosen, Kenneth H. (2010). Elementary Number Theory (6th ed.). Pearson Education. ISBN 978-0-321-71775-7. Retrieved 2016-02-28. Popular choices for a second textbook include: Borevich, A. I.; Shafarevich, Igor R. (1966). Number theory. Pure and Applied Mathematics. Vol. 20. Boston, MA: Academic Press. ISBN 978-0-12-117850-5. MR 0195803. Serre, Jean-Pierre (1996) [1973]. A course in arithmetic. Graduate Texts in Mathematics. Vol. 7. Springer. ISBN 978-0-387-90040-7. == External links == Number Theory entry in the Encyclopedia of Mathematics Number Theory Web
Wikipedia/Number_theory
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials. The algorithm is based on Horner's rule, in which a polynomial is written in nested form: a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle {\begin{aligned}&a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\={}&a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}.\end{aligned}}} This allows the evaluation of a polynomial of degree n with only n {\displaystyle n} multiplications and n {\displaystyle n} additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations. Alternatively, the names Horner's method and Horner–Ruffini method also refer to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970. == Polynomial evaluation and long division == Given the polynomial p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n , {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},} where a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} are constant coefficients, the problem is to evaluate the polynomial at a specific value x 0 {\displaystyle x_{0}} of x . {\displaystyle x.} For this, a new sequence of constants is defined recursively as follows: b n = a n , b n − 1 = a n − 1 + b n x 0 , … , b 0 = a 0 + b 1 x 0 . {\displaystyle b_{n}=a_{n},\quad b_{n-1}=a_{n-1}+b_{n}x_{0},\quad \ldots ,\quad b_{0}=a_{0}+b_{1}x_{0}.} Then b 0 {\displaystyle b_{0}} is the value of p ( x 0 ) {\displaystyle p(x_{0})} . To see why this works, the polynomial can be written in the form p ( x ) = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle p(x)=a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}\ .} Thus, by iteratively substituting the b i {\displaystyle b_{i}} into the expression, p ( x 0 ) = a 0 + x 0 ( a 1 + x 0 ( a 2 + ⋯ + x 0 ( a n − 1 + b n x 0 ) ⋯ ) ) = a 0 + x 0 ( a 1 + x 0 ( a 2 + ⋯ + x 0 b n − 1 ) ) ⋮ = a 0 + x 0 b 1 = b 0 . {\displaystyle {\begin{aligned}p(x_{0})&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}(a_{n-1}+b_{n}x_{0})\cdots {\big )}{\Big )}\\&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}b_{n-1}{\big )}{\Big )}\\&~~\vdots \\&=a_{0}+x_{0}b_{1}\\&=b_{0}.\end{aligned}}} Now, it can be proven that p ( x ) = ( x − x 0 ) ( b n x n − 1 + b n − 1 x n − 2 + ⋯ + b 2 x + b 1 ) + b 0 . {\displaystyle p(x)=(x-x_{0})(b_{n}x^{n-1}+b_{n-1}x^{n-2}+\cdots +b_{2}x+b_{1})+b_{0}.} This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p ( x ) / ( x − x 0 ) {\displaystyle p(x)/(x-x_{0})} , with b 0 {\displaystyle b_{0}} (which is equal to p ( x 0 ) {\displaystyle p(x_{0})} ) being the division's remainder, as is demonstrated by the examples below. If x 0 {\displaystyle x_{0}} is a root of p ( x ) {\displaystyle p(x)} , then b 0 = 0 {\displaystyle b_{0}=0} (meaning the remainder is 0 {\displaystyle 0} ), which means that x − x 0 {\displaystyle x-x_{0}} is a factor of p ( x ) {\displaystyle p(x)} .
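The recurrence above translates directly into a few lines of code. The following Python sketch is ours, not part of the original article; it returns both the value b 0 and the remaining b-coefficients, which, as explained above, are the coefficients of the quotient on division by x − x 0, matching the synthetic-division examples below.

def horner(coeffs, x0):
    # coeffs are a_n, ..., a_1, a_0, highest degree first.
    b = coeffs[0]
    values = [b]
    for a in coeffs[1:]:
        b = a + b * x0          # the recurrence b_{k-1} = a_{k-1} + b_k * x0
        values.append(b)
    remainder = values.pop()    # b_0 equals p(x0)
    return remainder, values    # values now holds the quotient coefficients

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x0 = 3, the first worked example below:
print(horner([2, -6, 2, -1], 3))  # (5, [2, 0, 2]): p(3) = 5, quotient 2x^2 + 2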
To find the consecutive b {\displaystyle b} -values, start by determining b n {\displaystyle b_{n}} , which is simply equal to a n {\displaystyle a_{n}} . Then work recursively using the formula b n − 1 = a n − 1 + b n x 0 {\displaystyle b_{n-1}=a_{n-1}+b_{n}x_{0}} until you arrive at b 0 {\displaystyle b_{0}} . === Examples === Evaluate f ( x ) = 2 x 3 − 6 x 2 + 2 x − 1 {\displaystyle f(x)=2x^{3}-6x^{2}+2x-1} for x = 3 {\displaystyle x=3} . We use synthetic division as follows: x0│ x3 x2 x1 x0 3 │ 2 −6 2 −1 │ 6 0 6 └──────────────────────── 2 0 2 5 The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f ( x ) {\displaystyle f(x)} on division by x − 3 {\displaystyle x-3} is 5. But by the polynomial remainder theorem, we know that the remainder is f ( 3 ) {\displaystyle f(3)} . Thus, f ( 3 ) = 5 {\displaystyle f(3)=5} . In this example, if a 3 = 2 , a 2 = − 6 , a 1 = 2 , a 0 = − 1 {\displaystyle a_{3}=2,a_{2}=-6,a_{1}=2,a_{0}=-1} we can see that b 3 = 2 , b 2 = 0 , b 1 = 2 , b 0 = 5 {\displaystyle b_{3}=2,b_{2}=0,b_{1}=2,b_{0}=5} , the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method. As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of f ( x ) {\displaystyle f(x)} on division by x − 3 {\displaystyle x-3} . The remainder is 5. This makes Horner's method useful for polynomial long division. Divide x 3 − 6 x 2 + 11 x − 6 {\displaystyle x^{3}-6x^{2}+11x-6} by x − 2 {\displaystyle x-2} : 2 │ 1 −6 11 −6 │ 2 −8 6 └──────────────────────── 1 −4 3 0 The quotient is x 2 − 4 x + 3 {\displaystyle x^{2}-4x+3} . Let f 1 ( x ) = 4 x 4 − 6 x 3 + 3 x − 5 {\displaystyle f_{1}(x)=4x^{4}-6x^{3}+3x-5} and f 2 ( x ) = 2 x − 1 {\displaystyle f_{2}(x)=2x-1} . Divide f 1 ( x ) {\displaystyle f_{1}(x)} by f 2 ( x ) {\displaystyle f_{2}\,(x)} using Horner's method. 0.5 │ 4 −6 0 3 −5 │ 2 −2 −1 1 └─────────────────────── 2 −2 −1 1 −4 The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is f 1 ( x ) f 2 ( x ) = 2 x 3 − 2 x 2 − x + 1 − 4 2 x − 1 . {\displaystyle {\frac {f_{1}(x)}{f_{2}(x)}}=2x^{3}-2x^{2}-x+1-{\frac {4}{2x-1}}.} === Efficiency === Evaluation using the monomial form of a degree n {\displaystyle n} polynomial requires at most n {\displaystyle n} additions and ( n 2 + n ) / 2 {\displaystyle (n^{2}+n)/2} multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n {\displaystyle n} additions and 2 n − 1 {\displaystyle 2n-1} multiplications by evaluating the powers of x {\displaystyle x} by iteration. If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2 n {\displaystyle 2n} times the number of bits of x {\displaystyle x} : the evaluated polynomial has approximate magnitude x n {\displaystyle x^{n}} , and one must also store x n {\displaystyle x^{n}} itself.
By contrast, Horner's method requires only n {\displaystyle n} additions and n {\displaystyle n} multiplications, and its storage requirements are only n {\displaystyle n} times the number of bits of x {\displaystyle x} . Alternatively, Horner's method can be computed with n {\displaystyle n} fused multiply–adds. Horner's method can also be extended to evaluate the first k {\displaystyle k} derivatives of the polynomial with k n {\displaystyle kn} additions and multiplications. Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when x {\displaystyle x} is a matrix, Horner's method is not optimal. This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree- n {\displaystyle n} polynomial can be evaluated using only ⌊n/2⌋+2 multiplications and n {\displaystyle n} additions. ==== Parallel evaluation ==== A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows: p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = ( a 0 + a 2 x 2 + a 4 x 4 + ⋯ ) + ( a 1 x + a 3 x 3 + a 5 x 5 + ⋯ ) = ( a 0 + a 2 x 2 + a 4 x 4 + ⋯ ) + x ( a 1 + a 3 x 2 + a 5 x 4 + ⋯ ) = ∑ i = 0 ⌊ n / 2 ⌋ a 2 i x 2 i + x ∑ i = 0 ⌊ n / 2 ⌋ a 2 i + 1 x 2 i = p 0 ( x 2 ) + x p 1 ( x 2 ) . {\displaystyle {\begin{aligned}p(x)&=\sum _{i=0}^{n}a_{i}x^{i}\\[1ex]&=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+\left(a_{1}x+a_{3}x^{3}+a_{5}x^{5}+\cdots \right)\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+x\left(a_{1}+a_{3}x^{2}+a_{5}x^{4}+\cdots \right)\\[1ex]&=\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i}x^{2i}+x\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i+1}x^{2i}\\[1ex]&=p_{0}(x^{2})+xp_{1}(x^{2}).\end{aligned}}} More generally, the summation can be broken into k parts: p ( x ) = ∑ i = 0 n a i x i = ∑ j = 0 k − 1 x j ∑ i = 0 ⌊ n / k ⌋ a k i + j x k i = ∑ j = 0 k − 1 x j p j ( x k ) {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=\sum _{j=0}^{k-1}x^{j}\sum _{i=0}^{\lfloor n/k\rfloor }a_{ki+j}x^{ki}=\sum _{j=0}^{k-1}x^{j}p_{j}(x^{k})} where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math. 
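To make the splitting concrete, here is a small Python sketch of our own, not from the article, of the two-way even/odd decomposition; the two inner Horner chains are independent of each other, which is what a superscalar or SIMD machine can exploit:

def horner(coeffs, x):
    # Plain Horner; coeffs are a_0, a_1, ..., a_n, lowest degree first.
    acc = 0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

def split_eval(coeffs, x):
    # p(x) = p_even(x^2) + x * p_odd(x^2); the two chains are independent.
    return horner(coeffs[0::2], x * x) + x * horner(coeffs[1::2], x * x)

coeffs = [5, -1, 3, 0, 2, 7]  # 5 - x + 3x^2 + 2x^4 + 7x^5
assert horner(coeffs, 2) == split_eval(coeffs, 2) == 271

With floating-point coefficients the two orderings may differ in the last bits, which is why compilers only apply this transformation when reassociation is explicitly allowed.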
Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism. === Application to floating-point multiplication and division === Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) a i = 1 {\displaystyle a_{i}=1} , and x = 2 {\displaystyle x=2} . Then, x (or x to some power) is repeatedly factored out; in this binary numeral system (base 2), powers of 2 are repeatedly factored out. ==== Example ==== For example, to find the product of two numbers (0.15625) and m: ( 0.15625 ) m = ( 0.00101 b ) m = ( 2 − 3 + 2 − 5 ) m = ( 2 − 3 ) m + ( 2 − 5 ) m = 2 − 3 ( m + ( 2 − 2 ) m ) = 2 − 3 ( m + 2 − 2 ( m ) ) . {\displaystyle {\begin{aligned}(0.15625)m&=(0.00101_{b})m=\left(2^{-3}+2^{-5}\right)m=\left(2^{-3})m+(2^{-5}\right)m\\&=2^{-3}\left(m+\left(2^{-2}\right)m\right)=2^{-3}\left(m+2^{-2}(m)\right).\end{aligned}}} ==== Method ==== To find the product of two binary numbers d and m: 1. A register holding the intermediate result is initialized to d. 2. Begin with the least significant (rightmost) non-zero bit in m, count the number of bit positions to the next more significant non-zero bit (or, if there is none, to the position of the most significant bit), and shift the intermediate result register by that number of bits. 3. If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m. ==== Derivation ==== In general, for a binary number with bit values ( d 3 d 2 d 1 d 0 {\displaystyle d_{3}d_{2}d_{1}d_{0}} ) the product is ( d 3 2 3 + d 2 2 2 + d 1 2 1 + d 0 2 0 ) m = d 3 2 3 m + d 2 2 2 m + d 1 2 1 m + d 0 2 0 m . {\displaystyle (d_{3}2^{3}+d_{2}2^{2}+d_{1}2^{1}+d_{0}2^{0})m=d_{3}2^{3}m+d_{2}2^{2}m+d_{1}2^{1}m+d_{0}2^{0}m.} At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation: = d 0 ( m + 2 d 1 d 0 ( m + 2 d 2 d 1 ( m + 2 d 3 d 2 ( m ) ) ) ) . {\displaystyle =d_{0}\left(m+2{\frac {d_{1}}{d_{0}}}\left(m+2{\frac {d_{2}}{d_{1}}}\left(m+2{\frac {d_{3}}{d_{2}}}(m)\right)\right)\right).} The denominators all equal one (or the term is absent), so this reduces to = d 0 ( m + 2 d 1 ( m + 2 d 2 ( m + 2 d 3 ( m ) ) ) ) , {\displaystyle =d_{0}(m+2{d_{1}}(m+2{d_{2}}(m+2{d_{3}}(m)))),} or equivalently (as consistent with the "method" described above) = d 3 ( m + 2 − 1 d 2 ( m + 2 − 1 d 1 ( m + d 0 ( m ) ) ) ) . {\displaystyle =d_{3}(m+2^{-1}{d_{2}}(m+2^{-1}{d_{1}}(m+{d_{0}}(m)))).} In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor (2−1) is a right arithmetic shift, a (0) results in no operation (since 20 = 1 is the multiplicative identity element), and a (21) results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction. The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate.
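As a simplified illustration, the integer version of this shift-and-add scheme can be written in a few lines. The Python below is our own sketch, not from the article; the fractional example above additionally tracks a binary point via right shifts.

def shift_add_multiply(d, m):
    # Multiply d by m using only shifts and adds, scanning d's bits
    # most-significant-first, mirroring the nested Horner form above.
    result = 0
    for bit in bin(d)[2:]:
        result <<= 1          # multiply the intermediate result by 2
        if bit == '1':
            result += m       # absorb the next non-zero coefficient
    return result

print(shift_add_multiply(0b00101, 7))  # 35, i.e. 5 * 7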
Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space. === Other applications === Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the ai coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known. == Polynomial root finding == Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p n ( x ) {\displaystyle p_{n}(x)} of degree n {\displaystyle n} with zeros z n < z n − 1 < ⋯ < z 1 , {\displaystyle z_{n}<z_{n-1}<\cdots <z_{1},} make some initial guess x 0 {\displaystyle x_{0}} such that z 1 < x 0 {\displaystyle z_{1}<x_{0}} . Now iterate the following two steps: 1. Using Newton's method, find the largest zero z 1 {\displaystyle z_{1}} of p n ( x ) {\displaystyle p_{n}(x)} using the guess x 0 {\displaystyle x_{0}} . 2. Using Horner's method, divide out ( x − z 1 ) {\displaystyle (x-z_{1})} to obtain p n − 1 {\displaystyle p_{n-1}} . Return to step 1 but use the polynomial p n − 1 {\displaystyle p_{n-1}} and the initial guess z 1 {\displaystyle z_{1}} . These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials. === Example === Consider the polynomial p 6 ( x ) = ( x + 8 ) ( x + 5 ) ( x + 3 ) ( x − 2 ) ( x − 3 ) ( x − 7 ) {\displaystyle p_{6}(x)=(x+8)(x+5)(x+3)(x-2)(x-3)(x-7)} which can be expanded to p 6 ( x ) = x 6 + 4 x 5 − 72 x 4 − 214 x 3 + 1127 x 2 + 1602 x − 5040. {\displaystyle p_{6}(x)=x^{6}+4x^{5}-72x^{4}-214x^{3}+1127x^{2}+1602x-5040.} From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method, the first zero, 7, is found. Next, p ( x ) {\displaystyle p(x)} is divided by ( x − 7 ) {\displaystyle (x-7)} to obtain p 5 ( x ) = x 5 + 11 x 4 + 5 x 3 − 179 x 2 − 126 x + 720 {\displaystyle p_{5}(x)=x^{5}+11x^{4}+5x^{3}-179x^{2}-126x+720} . Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3. The degree 5 polynomial is now divided by ( x − 3 ) {\displaystyle (x-3)} to obtain p 4 ( x ) = x 4 + 14 x 3 + 47 x 2 − 38 x − 240 {\displaystyle p_{4}(x)=x^{4}+14x^{3}+47x^{2}-38x-240} . The zero for this polynomial is found at 2, again using Newton's method. Horner's method is now used to obtain p 3 ( x ) = x 3 + 16 x 2 + 79 x + 120 {\displaystyle p_{3}(x)=x^{3}+16x^{2}+79x+120} , which is found to have a zero at −3. This polynomial is further reduced to p 2 ( x ) = x 2 + 13 x + 40 {\displaystyle p_{2}(x)=x^{2}+13x+40} , which yields a zero of −5.
The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p 2 ( x ) {\displaystyle p_{2}(x)} and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found. == Divided difference of a polynomial == Horner's method can be modified to compute the divided difference ( p ( y ) − p ( x ) ) / ( y − x ) . {\displaystyle (p(y)-p(x))/(y-x).} Given the polynomial (as before) p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n , {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},} proceed as follows b n = a n , d n = b n , b n − 1 = a n − 1 + b n x , d n − 1 = b n − 1 + d n y , ⋮ ⋮ b 1 = a 1 + b 2 x , d 1 = b 1 + d 2 y , b 0 = a 0 + b 1 x . {\displaystyle {\begin{aligned}b_{n}&=a_{n},&\quad d_{n}&=b_{n},\\b_{n-1}&=a_{n-1}+b_{n}x,&\quad d_{n-1}&=b_{n-1}+d_{n}y,\\&{}\ \ \vdots &\quad &{}\ \ \vdots \\b_{1}&=a_{1}+b_{2}x,&\quad d_{1}&=b_{1}+d_{2}y,\\b_{0}&=a_{0}+b_{1}x.\end{aligned}}} At completion, we have p ( x ) = b 0 , p ( y ) − p ( x ) y − x = d 1 , p ( y ) = b 0 + ( y − x ) d 1 . {\displaystyle {\begin{aligned}p(x)&=b_{0},\\{\frac {p(y)-p(x)}{y-x}}&=d_{1},\\p(y)&=b_{0}+(y-x)d_{1}.\end{aligned}}} This computation of the divided difference is subject to less round-off error than evaluating p ( x ) {\displaystyle p(x)} and p ( y ) {\displaystyle p(y)} separately, particularly when x ≈ y {\displaystyle x\approx y} . Substituting y = x {\displaystyle y=x} in this method gives d 1 = p ′ ( x ) {\displaystyle d_{1}=p'(x)} , the derivative of p ( x ) {\displaystyle p(x)} . == History == Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820). Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini. Although Horner is credited with making the method accessible and practical, it was known long before Horner. 
In reverse chronological order, Horner's method was already known to: Paolo Ruffini in 1809 (see Ruffini's rule) Isaac Newton in 1669 the Chinese mathematician Zhu Shijie in the 14th century the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in a general case of cubic equation) the Chinese mathematician Jia Xian in the 11th century (Song dynasty) The Nine Chapters on the Mathematical Art, a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century). Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote: "... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way." Ulrich Libbrecht concluded: "It is obvious that this procedure is a Chinese invention ... the method was not known in India." He said that Fibonacci probably learned of it from the Arabs, who perhaps borrowed it from the Chinese. The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing. == See also == Clenshaw algorithm to evaluate polynomials in Chebyshev form De Boor's algorithm to evaluate splines in B-spline form De Casteljau's algorithm to evaluate polynomials in Bézier form Estrin's scheme to facilitate parallelization on modern computer architectures Lill's method to approximate roots graphically Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r == Notes == == References == == External links == "Horner scheme", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Qin Jiushao, Shu Shu Jiu Zhang (Cong Shu Ji Cheng ed.) For more on the root-finding application see [1] Archived 2018-09-28 at the Wayback Machine
Wikipedia/Horner's_method
In mathematics, and more specifically in ring theory, an ideal of a ring is a special subset of its elements. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any integer (even or odd) results in an even number; these closure and absorption properties are the defining properties of an ideal. An ideal can be used to construct a quotient ring in a way similar to how, in group theory, a normal subgroup can be used to construct a quotient group. Among the integers, the ideals correspond one-for-one with the non-negative integers: in this ring, every ideal is a principal ideal consisting of the multiples of a single non-negative number. However, in other rings, the ideals may not correspond directly to the ring elements, and certain properties of integers, when generalized to rings, attach more naturally to the ideals than to the elements of the ring. For instance, the prime ideals of a ring are analogous to prime numbers, and the Chinese remainder theorem can be generalized to ideals. There is a version of unique prime factorization for the ideals of a Dedekind domain (a type of ring important in number theory). The related, but distinct, concept of an ideal in order theory is derived from the notion of ideal in ring theory. A fractional ideal is a generalization of an ideal, and the usual ideals are sometimes called integral ideals for clarity. == History == Ernst Kummer invented the concept of ideal numbers to serve as the "missing" factors in number rings in which unique factorization fails; here the word "ideal" is in the sense of existing in imagination only, in analogy with "ideal" objects in geometry such as points at infinity. In 1876, Richard Dedekind replaced Kummer's undefined concept by concrete sets of numbers, sets that he called ideals, in the third edition of Dirichlet's book Vorlesungen über Zahlentheorie, to which Dedekind had added many supplements. Later the notion was extended beyond number rings to the setting of polynomial rings and other commutative rings by David Hilbert and especially Emmy Noether. == Definitions == Given a ring R, a left ideal is a subset I of R that is a subgroup of the additive group of R {\displaystyle R} that "absorbs multiplication from the left by elements of ⁠ R {\displaystyle R} ⁠"; that is, I {\displaystyle I} is a left ideal if it satisfies the following two conditions: ( I , + ) {\displaystyle (I,+)} is a subgroup of ⁠ ( R , + ) {\displaystyle (R,+)} ⁠, For every r ∈ R {\displaystyle r\in R} and every ⁠ x ∈ I {\displaystyle x\in I} ⁠, the product r x {\displaystyle rx} is in ⁠ I {\displaystyle I} ⁠. In other words, a left ideal is a left submodule of R, considered as a left module over itself. A right ideal is defined similarly, with the condition r x ∈ I {\displaystyle rx\in I} replaced by ⁠ x r ∈ I {\displaystyle xr\in I} ⁠. A two-sided ideal is a left ideal that is also a right ideal. If the ring is commutative, the three definitions are the same, and one talks simply of an ideal. In the non-commutative case, "ideal" is often used instead of "two-sided ideal". If I is a left, right or two-sided ideal, the relation x ∼ y {\displaystyle x\sim y} if and only if x − y ∈ I {\displaystyle x-y\in I} is an equivalence relation on R, and the set of equivalence classes forms a left, right or bimodule denoted R / I {\displaystyle R/I} and called the quotient of R by I.
(It is an instance of a congruence relation and is a generalization of modular arithmetic.) If the ideal I is two-sided, R / I {\displaystyle R/I} is a ring, and the function R → R / I {\displaystyle R\to R/I} that associates to each element of R its equivalence class is a surjective ring homomorphism that has the ideal as its kernel. Conversely, the kernel of a ring homomorphism is a two-sided ideal. Therefore, the two-sided ideals are exactly the kernels of ring homomorphisms. === Note on convention === By convention, a ring has the multiplicative identity. But some authors do not require a ring to have the multiplicative identity; i.e., for them, a ring is a rng. For a rng R, a left ideal I is a subrng with the additional property that r x {\displaystyle rx} is in I for every r ∈ R {\displaystyle r\in R} and every x ∈ I {\displaystyle x\in I} . (Right and two-sided ideals are defined similarly.) For a ring, an ideal I (say a left ideal) is rarely a subring; since a subring shares the same multiplicative identity with the ambient ring R, if I were a subring, for every r ∈ R {\displaystyle r\in R} , we have r = r 1 ∈ I ; {\displaystyle r=r1\in I;} i.e., I = R {\displaystyle I=R} . The notion of an ideal does not involve associativity; thus, an ideal is also defined for non-associative rings (often without the multiplicative identity) such as a Lie algebra. == Examples and properties == (For the sake of brevity, some results are stated only for left ideals but are usually also true for right ideals with appropriate notation changes.) In a ring R, the set R itself forms a two-sided ideal of R called the unit ideal. It is often also denoted by ( 1 ) {\displaystyle (1)} since it is precisely the two-sided ideal generated (see below) by the unity 1 R {\displaystyle 1_{R}} . Also, the set { 0 R } {\displaystyle \{0_{R}\}} consisting of only the additive identity 0R forms a two-sided ideal called the zero ideal and is denoted by ( 0 ) {\displaystyle (0)} . Every (left, right or two-sided) ideal contains the zero ideal and is contained in the unit ideal. A (left, right or two-sided) ideal that is not the unit ideal is called a proper ideal (as it is a proper subset). Note: a left ideal a {\displaystyle {\mathfrak {a}}} is proper if and only if it does not contain a unit element, since if u ∈ a {\displaystyle u\in {\mathfrak {a}}} is a unit element, then r = ( r u − 1 ) u ∈ a {\displaystyle r=(ru^{-1})u\in {\mathfrak {a}}} for every r ∈ R {\displaystyle r\in R} . Typically there are plenty of proper ideals. In fact, if R is a skew-field, then ( 0 ) , ( 1 ) {\displaystyle (0),(1)} are its only ideals and conversely: that is, a nonzero ring R is a skew-field if ( 0 ) , ( 1 ) {\displaystyle (0),(1)} are the only left (or right) ideals. (Proof: if x {\displaystyle x} is a nonzero element, then the principal left ideal R x {\displaystyle Rx} (see below) is nonzero and thus R x = ( 1 ) {\displaystyle Rx=(1)} ; i.e., y x = 1 {\displaystyle yx=1} for some nonzero y {\displaystyle y} . Likewise, z y = 1 {\displaystyle zy=1} for some nonzero z {\displaystyle z} . Then z = z ( y x ) = ( z y ) x = x {\displaystyle z=z(yx)=(zy)x=x} .) The even integers form an ideal in the ring Z {\displaystyle \mathbb {Z} } of all integers, since the sum of any two even integers is even, and the product of any integer with an even integer is also even; this ideal is usually denoted by 2 Z {\displaystyle 2\mathbb {Z} } .
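The two defining conditions (subgroup under addition, absorption under multiplication) can be checked mechanically for the even integers. The following Python sketch tests them by brute force on a finite window of the (infinite) ring; the helper name in_ideal and the window size are illustrative assumptions.

```python
def in_ideal(x, n):
    # membership test for the ideal nZ = {..., -n, 0, n, 2n, ...}
    return x % n == 0

n = 2  # the ideal of even integers, 2Z
window = range(-20, 21)

# subgroup condition: differences of ideal elements stay in the ideal
assert all(in_ideal(a - b, n)
           for a in window if in_ideal(a, n)
           for b in window if in_ideal(b, n))

# absorption: any ring element times an ideal element stays in the ideal
assert all(in_ideal(r * a, n)
           for r in window
           for a in window if in_ideal(a, n))
```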
More generally, the set of all integers divisible by a fixed integer n {\displaystyle n} is an ideal denoted n Z {\displaystyle n\mathbb {Z} } . In fact, every non-zero ideal of the ring Z {\displaystyle \mathbb {Z} } is generated by its smallest positive element, as a consequence of Euclidean division, so Z {\displaystyle \mathbb {Z} } is a principal ideal domain. The set of all polynomials with real coefficients that are divisible by the polynomial x 2 + 1 {\displaystyle x^{2}+1} is an ideal in the ring of all real-coefficient polynomials R [ x ] {\displaystyle \mathbb {R} [x]} . Take a ring R {\displaystyle R} and a positive integer n {\displaystyle n} . For each 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} , the set of all n × n {\displaystyle n\times n} matrices with entries in R {\displaystyle R} whose i {\displaystyle i} -th row is zero is a right ideal in the ring M n ( R ) {\displaystyle M_{n}(R)} of all n × n {\displaystyle n\times n} matrices with entries in R {\displaystyle R} . It is not a left ideal. Similarly, for each 1 ≤ j ≤ n {\displaystyle 1\leq j\leq n} , the set of all n × n {\displaystyle n\times n} matrices whose j {\displaystyle j} -th column is zero is a left ideal but not a right ideal. The ring C ( R ) {\displaystyle C(\mathbb {R} )} of all continuous functions f {\displaystyle f} from R {\displaystyle \mathbb {R} } to R {\displaystyle \mathbb {R} } under pointwise multiplication contains the ideal of all continuous functions f {\displaystyle f} such that f ( 1 ) = 0 {\displaystyle f(1)=0} . Another ideal in C ( R ) {\displaystyle C(\mathbb {R} )} is given by those functions that vanish for large enough arguments, i.e. those continuous functions f {\displaystyle f} for which there exists a number L > 0 {\displaystyle L>0} such that f ( x ) = 0 {\displaystyle f(x)=0} whenever | x | > L {\displaystyle \vert x\vert >L} . A ring is called a simple ring if it is nonzero and has no two-sided ideals other than ( 0 ) , ( 1 ) {\displaystyle (0),(1)} . Thus, a skew-field is simple and a simple commutative ring is a field. The matrix ring over a skew-field is a simple ring. If f : R → S {\displaystyle f:R\to S} is a ring homomorphism, then the kernel ker ⁡ ( f ) = f − 1 ( 0 S ) {\displaystyle \ker(f)=f^{-1}(0_{S})} is a two-sided ideal of R {\displaystyle R} . By definition, f ( 1 R ) = 1 S {\displaystyle f(1_{R})=1_{S}} , and thus if S {\displaystyle S} is not the zero ring (so 1 S ≠ 0 S {\displaystyle 1_{S}\neq 0_{S}} ), then ker ⁡ ( f ) {\displaystyle \ker(f)} is a proper ideal. More generally, for each left ideal I of S, the pre-image f − 1 ( I ) {\displaystyle f^{-1}(I)} is a left ideal. If I is a left ideal of R, then f ( I ) {\displaystyle f(I)} is a left ideal of the subring f ( R ) {\displaystyle f(R)} of S: unless f is surjective, f ( I ) {\displaystyle f(I)} need not be an ideal of S; see also § Extension and contraction of an ideal. Ideal correspondence: Given a surjective ring homomorphism f : R → S {\displaystyle f:R\to S} , there is a bijective order-preserving correspondence between the left (resp. right, two-sided) ideals of R {\displaystyle R} containing the kernel of f {\displaystyle f} and the left (resp. right, two-sided) ideals of S {\displaystyle S} : the correspondence is given by I ↦ f ( I ) {\displaystyle I\mapsto f(I)} and the pre-image J ↦ f − 1 ( J ) {\displaystyle J\mapsto f^{-1}(J)} .
Moreover, for commutative rings, this bijective correspondence restricts to prime ideals, maximal ideals, and radical ideals (see the Types of ideals section for the definitions of these ideals). If M is a left R-module and S ⊂ M {\displaystyle S\subset M} a subset, then the annihilator Ann R ⁡ ( S ) = { r ∈ R ∣ r s = 0 , s ∈ S } {\displaystyle \operatorname {Ann} _{R}(S)=\{r\in R\mid rs=0,s\in S\}} of S is a left ideal. Given ideals a , b {\displaystyle {\mathfrak {a}},{\mathfrak {b}}} of a commutative ring R, the R-annihilator of ( b + a ) / a {\displaystyle ({\mathfrak {b}}+{\mathfrak {a}})/{\mathfrak {a}}} is an ideal of R called the ideal quotient of a {\displaystyle {\mathfrak {a}}} by b {\displaystyle {\mathfrak {b}}} and is denoted by ( a : b ) {\displaystyle ({\mathfrak {a}}:{\mathfrak {b}})} ; it is an instance of idealizer in commutative algebra. Let a i , i ∈ S {\displaystyle {\mathfrak {a}}_{i},i\in S} be an ascending chain of left ideals in a ring R; i.e., S {\displaystyle S} is a totally ordered set and a i ⊂ a j {\displaystyle {\mathfrak {a}}_{i}\subset {\mathfrak {a}}_{j}} for each i < j {\displaystyle i<j} . Then the union ⋃ i ∈ S a i {\displaystyle \textstyle \bigcup _{i\in S}{\mathfrak {a}}_{i}} is a left ideal of R. (Note: this fact remains true even if R is without the unity 1.) The above fact together with Zorn's lemma proves the following: if E ⊂ R {\displaystyle E\subset R} is a possibly empty subset and a 0 ⊂ R {\displaystyle {\mathfrak {a}}_{0}\subset R} is a left ideal that is disjoint from E, then there is an ideal that is maximal among the ideals containing a 0 {\displaystyle {\mathfrak {a}}_{0}} and disjoint from E. (Again this is still valid if the ring R lacks the unity 1.) When R ≠ 0 {\displaystyle R\neq 0} , taking a 0 = ( 0 ) {\displaystyle {\mathfrak {a}}_{0}=(0)} and E = { 1 } {\displaystyle E=\{1\}} , in particular, there exists a left ideal that is maximal among proper left ideals (often simply called a maximal left ideal); see Krull's theorem for more. An arbitrary union of ideals need not be an ideal, but the following is still true: given a possibly empty subset X of R, there is a smallest left ideal containing X, called the left ideal generated by X and denoted by R X {\displaystyle RX} . Such an ideal exists since it is the intersection of all left ideals containing X. Equivalently, R X {\displaystyle RX} is the set of all the (finite) left R-linear combinations of elements of X over R: R X = { r 1 x 1 + ⋯ + r n x n ∣ n ∈ N , r i ∈ R , x i ∈ X } {\displaystyle RX=\{r_{1}x_{1}+\dots +r_{n}x_{n}\mid n\in \mathbb {N} ,r_{i}\in R,x_{i}\in X\}} (since such a span is the smallest left ideal containing X). A right (resp. two-sided) ideal generated by X is defined in a similar way. For "two-sided", one has to use linear combinations from both sides; i.e., R X R = { r 1 x 1 s 1 + ⋯ + r n x n s n ∣ n ∈ N , r i ∈ R , s i ∈ R , x i ∈ X } . {\displaystyle RXR=\{r_{1}x_{1}s_{1}+\dots +r_{n}x_{n}s_{n}\mid n\in \mathbb {N} ,r_{i}\in R,s_{i}\in R,x_{i}\in X\}.} A left (resp. right, two-sided) ideal generated by a single element x is called the principal left (resp. right, two-sided) ideal generated by x and is denoted by R x {\displaystyle Rx} (resp. x R , R x R {\displaystyle xR,RxR} ). The principal two-sided ideal R x R {\displaystyle RxR} is often also denoted by ( x ) {\displaystyle (x)} .
If X = { x 1 , … , x n } {\displaystyle X=\{x_{1},\dots ,x_{n}\}} is a finite set, then R X R {\displaystyle RXR} is also written as ( x 1 , … , x n ) {\displaystyle (x_{1},\dots ,x_{n})} . There is a bijective correspondence between ideals and congruence relations (equivalence relations that respect the ring structure) on the ring: Given an ideal I {\displaystyle I} of a ring R {\displaystyle R} , let x ∼ y {\displaystyle x\sim y} if x − y ∈ I {\displaystyle x-y\in I} . Then ∼ {\displaystyle \sim } is a congruence relation on R {\displaystyle R} . Conversely, given a congruence relation ∼ {\displaystyle \sim } on R {\displaystyle R} , let I = { x ∈ R : x ∼ 0 } {\displaystyle I=\{x\in R:x\sim 0\}} . Then I {\displaystyle I} is an ideal of R {\displaystyle R} . == Types of ideals == To simplify the description, all rings are assumed to be commutative. The non-commutative case is discussed in detail in the respective articles. Ideals are important because they appear as kernels of ring homomorphisms and allow one to define factor rings. Different types of ideals are studied because they can be used to construct different types of factor rings.
Maximal ideal: A proper ideal I is called a maximal ideal if there exists no other proper ideal J with I a proper subset of J. The factor ring of a maximal ideal is a simple ring in general and is a field for commutative rings.
Minimal ideal: A nonzero ideal is called minimal if it contains no other nonzero ideal.
Zero ideal: the ideal { 0 } {\displaystyle \{0\}} .
Unit ideal: the whole ring (being the ideal generated by 1 {\displaystyle 1} ).
Prime ideal: A proper ideal I {\displaystyle I} is called a prime ideal if for any a {\displaystyle a} and b {\displaystyle b} in R {\displaystyle R} , if a b {\displaystyle ab} is in I {\displaystyle I} , then at least one of a {\displaystyle a} and b {\displaystyle b} is in I {\displaystyle I} . The factor ring of a prime ideal is a prime ring in general and is an integral domain for commutative rings.
Radical ideal or semiprime ideal: A proper ideal I is called radical or semiprime if for any a in R {\displaystyle R} , if a^n is in I for some n, then a is in I. The factor ring of a radical ideal is a semiprime ring for general rings, and is a reduced ring for commutative rings.
Primary ideal: An ideal I is called a primary ideal if for all a and b in R, if ab is in I, then at least one of a and b^n is in I for some natural number n. Every prime ideal is primary, but not conversely. A semiprime primary ideal is prime.
Principal ideal: An ideal generated by one element.
Finitely generated ideal: This type of ideal is finitely generated as a module.
Primitive ideal: A left primitive ideal is the annihilator of a simple left module.
Irreducible ideal: An ideal is said to be irreducible if it cannot be written as an intersection of ideals that properly contain it.
Comaximal ideals: Two ideals I, J are said to be comaximal if x + y = 1 {\displaystyle x+y=1} for some x ∈ I {\displaystyle x\in I} and y ∈ J {\displaystyle y\in J} .
Regular ideal: This term has multiple uses. See the article for a list.
Nil ideal: An ideal is a nil ideal if each of its elements is nilpotent.
Nilpotent ideal: An ideal is nilpotent if some power of it is zero.
Parameter ideal: an ideal generated by a system of parameters.
Perfect ideal: A proper ideal I in a Noetherian ring R {\displaystyle R} is called a perfect ideal if its grade equals the projective dimension of the associated quotient ring, grade ( I ) = proj dim ⁡ ( R / I ) {\displaystyle {\textrm {grade}}(I)={\textrm {proj}}\dim(R/I)} . A perfect ideal is unmixed.
Unmixed ideal: A proper ideal I in a Noetherian ring R {\displaystyle R} is called an unmixed ideal (in height) if the height of I is equal to the height of every associated prime P of R / I {\displaystyle R/I} . (This is stronger than saying that R / I {\displaystyle R/I} is equidimensional. See also equidimensional ring.)
Two other important terms using "ideal" are not always ideals of their ring. See their respective articles for details:
Fractional ideal: This is usually defined when R {\displaystyle R} is a commutative domain with quotient field K {\displaystyle K} . Despite their names, fractional ideals are R {\displaystyle R} -submodules of K {\displaystyle K} with a special property. If the fractional ideal is contained entirely in R {\displaystyle R} , then it is truly an ideal of R {\displaystyle R} .
Invertible ideal: Usually an invertible ideal A is defined as a fractional ideal for which there is another fractional ideal B such that AB = BA = R. Some authors may also apply "invertible ideal" to ordinary ring ideals A and B with AB = BA = R in rings other than domains.
== Ideal operations == The sum and product of ideals are defined as follows. For a {\displaystyle {\mathfrak {a}}} and b {\displaystyle {\mathfrak {b}}} , left (resp. right) ideals of a ring R, their sum is a + b := { a + b ∣ a ∈ a and b ∈ b } {\displaystyle {\mathfrak {a}}+{\mathfrak {b}}:=\{a+b\mid a\in {\mathfrak {a}}{\mbox{ and }}b\in {\mathfrak {b}}\}} , which is a left (resp. right) ideal, and, if a , b {\displaystyle {\mathfrak {a}},{\mathfrak {b}}} are two-sided, a b := { a 1 b 1 + ⋯ + a n b n ∣ a i ∈ a and b i ∈ b , i = 1 , 2 , … , n ; for n = 1 , 2 , … } , {\displaystyle {\mathfrak {a}}{\mathfrak {b}}:=\{a_{1}b_{1}+\dots +a_{n}b_{n}\mid a_{i}\in {\mathfrak {a}}{\mbox{ and }}b_{i}\in {\mathfrak {b}},i=1,2,\dots ,n;{\mbox{ for }}n=1,2,\dots \},} i.e. the product is the ideal generated by all products of the form ab with a in a {\displaystyle {\mathfrak {a}}} and b in b {\displaystyle {\mathfrak {b}}} . Note a + b {\displaystyle {\mathfrak {a}}+{\mathfrak {b}}} is the smallest left (resp. right) ideal containing both a {\displaystyle {\mathfrak {a}}} and b {\displaystyle {\mathfrak {b}}} (or the union a ∪ b {\displaystyle {\mathfrak {a}}\cup {\mathfrak {b}}} ), while the product a b {\displaystyle {\mathfrak {a}}{\mathfrak {b}}} is contained in the intersection of a {\displaystyle {\mathfrak {a}}} and b {\displaystyle {\mathfrak {b}}} . The distributive law holds for two-sided ideals a , b , c {\displaystyle {\mathfrak {a}},{\mathfrak {b}},{\mathfrak {c}}} : a ( b + c ) = a b + a c {\displaystyle {\mathfrak {a}}({\mathfrak {b}}+{\mathfrak {c}})={\mathfrak {a}}{\mathfrak {b}}+{\mathfrak {a}}{\mathfrak {c}}} , ( a + b ) c = a c + b c {\displaystyle ({\mathfrak {a}}+{\mathfrak {b}}){\mathfrak {c}}={\mathfrak {a}}{\mathfrak {c}}+{\mathfrak {b}}{\mathfrak {c}}} .
If a product is replaced by an intersection, a partial distributive law holds: a ∩ ( b + c ) ⊃ a ∩ b + a ∩ c {\displaystyle {\mathfrak {a}}\cap ({\mathfrak {b}}+{\mathfrak {c}})\supset {\mathfrak {a}}\cap {\mathfrak {b}}+{\mathfrak {a}}\cap {\mathfrak {c}}} where the equality holds if a {\displaystyle {\mathfrak {a}}} contains b {\displaystyle {\mathfrak {b}}} or c {\displaystyle {\mathfrak {c}}} . Remark: The sum and the intersection of ideals are again ideals; with these two operations as join and meet, the set of all ideals of a given ring forms a complete modular lattice. The lattice is not, in general, a distributive lattice. The three operations of intersection, sum (or join), and product make the set of ideals of a commutative ring into a quantale. If a , b {\displaystyle {\mathfrak {a}},{\mathfrak {b}}} are ideals of a commutative ring R, then a ∩ b = a b {\displaystyle {\mathfrak {a}}\cap {\mathfrak {b}}={\mathfrak {a}}{\mathfrak {b}}} in the following two cases (at least):
a + b = ( 1 ) {\displaystyle {\mathfrak {a}}+{\mathfrak {b}}=(1)}
a {\displaystyle {\mathfrak {a}}} is generated by elements that form a regular sequence modulo b {\displaystyle {\mathfrak {b}}} .
(More generally, the difference between a product and an intersection of ideals is measured by the Tor functor: Tor 1 R ⁡ ( R / a , R / b ) = ( a ∩ b ) / a b {\displaystyle \operatorname {Tor} _{1}^{R}(R/{\mathfrak {a}},R/{\mathfrak {b}})=({\mathfrak {a}}\cap {\mathfrak {b}})/{\mathfrak {a}}{\mathfrak {b}}} .) An integral domain is called a Dedekind domain if for each pair of ideals a ⊂ b {\displaystyle {\mathfrak {a}}\subset {\mathfrak {b}}} , there is an ideal c {\displaystyle {\mathfrak {c}}} such that a = b c {\displaystyle {\mathfrak {\mathfrak {a}}}={\mathfrak {b}}{\mathfrak {c}}} . It can then be shown that every nonzero ideal of a Dedekind domain can be uniquely written as a product of maximal ideals, a generalization of the fundamental theorem of arithmetic. == Examples of ideal operations == In Z {\displaystyle \mathbb {Z} } we have ( n ) ∩ ( m ) = lcm ⁡ ( n , m ) Z {\displaystyle (n)\cap (m)=\operatorname {lcm} (n,m)\mathbb {Z} } since ( n ) ∩ ( m ) {\displaystyle (n)\cap (m)} is the set of integers that are divisible by both n {\displaystyle n} and m {\displaystyle m} . Let R = C [ x , y , z , w ] {\displaystyle R=\mathbb {C} [x,y,z,w]} and let a = ( z , w ) , b = ( x + z , y + w ) , c = ( x + z , w ) {\displaystyle {\mathfrak {a}}=(z,w),{\mathfrak {b}}=(x+z,y+w),{\mathfrak {c}}=(x+z,w)} . Then,
a + b = ( z , w , x + z , y + w ) = ( x , y , z , w ) {\displaystyle {\mathfrak {a}}+{\mathfrak {b}}=(z,w,x+z,y+w)=(x,y,z,w)} and a + c = ( z , w , x ) {\displaystyle {\mathfrak {a}}+{\mathfrak {c}}=(z,w,x)}
a b = ( z ( x + z ) , z ( y + w ) , w ( x + z ) , w ( y + w ) ) = ( z 2 + x z , z y + w z , w x + w z , w y + w 2 ) {\displaystyle {\mathfrak {a}}{\mathfrak {b}}=(z(x+z),z(y+w),w(x+z),w(y+w))=(z^{2}+xz,zy+wz,wx+wz,wy+w^{2})}
a c = ( x z + z 2 , z w , x w + z w , w 2 ) {\displaystyle {\mathfrak {a}}{\mathfrak {c}}=(xz+z^{2},zw,xw+zw,w^{2})}
a ∩ b = a b {\displaystyle {\mathfrak {a}}\cap {\mathfrak {b}}={\mathfrak {a}}{\mathfrak {b}}} while a ∩ c = ( w , x z + z 2 ) ≠ a c {\displaystyle {\mathfrak {a}}\cap {\mathfrak {c}}=(w,xz+z^{2})\neq {\mathfrak {a}}{\mathfrak {c}}}
In the first computation, we see the general pattern for taking the sum of two finitely generated ideals: it is the ideal generated by the union of their generators.
In the last three we observe that products and intersections agree whenever the two ideals intersect in the zero ideal. These computations can be checked using Macaulay2. == Radical of a ring == Ideals appear naturally in the study of modules, especially in the form of a radical. For simplicity, we work with commutative rings but, with some changes, the results are also true for non-commutative rings. Let R be a commutative ring. By definition, a primitive ideal of R is the annihilator of a (nonzero) simple R-module. The Jacobson radical J = Jac ⁡ ( R ) {\displaystyle J=\operatorname {Jac} (R)} of R is the intersection of all primitive ideals. Equivalently, J = ⋂ m maximal ideals m . {\displaystyle J=\bigcap _{{\mathfrak {m}}{\text{ maximal ideals}}}{\mathfrak {m}}.} Indeed, if M {\displaystyle M} is a simple module and x is a nonzero element in M, then R x = M {\displaystyle Rx=M} and R / Ann ⁡ ( M ) = R / Ann ⁡ ( x ) ≃ M {\displaystyle R/\operatorname {Ann} (M)=R/\operatorname {Ann} (x)\simeq M} , meaning Ann ⁡ ( M ) {\displaystyle \operatorname {Ann} (M)} is a maximal ideal. Conversely, if m {\displaystyle {\mathfrak {m}}} is a maximal ideal, then m {\displaystyle {\mathfrak {m}}} is the annihilator of the simple R-module R / m {\displaystyle R/{\mathfrak {m}}} . There is also another characterization (the proof is not hard): J = { x ∈ R ∣ 1 − y x is a unit element for every y ∈ R } . {\displaystyle J=\{x\in R\mid 1-yx\,{\text{ is a unit element for every }}y\in R\}.} For a not-necessarily-commutative ring, it is a general fact that 1 − y x {\displaystyle 1-yx} is a unit element if and only if 1 − x y {\displaystyle 1-xy} is, and so this last characterization shows that the radical can be defined both in terms of left and right primitive ideals. The following simple but important fact (Nakayama's lemma) is built into the definition of a Jacobson radical: if M is a module such that J M = M {\displaystyle JM=M} , then M does not admit a maximal submodule, since if there is a maximal submodule L ⊊ M {\displaystyle L\subsetneq M} , J ⋅ ( M / L ) = 0 {\displaystyle J\cdot (M/L)=0} and so M = J M ⊂ L ⊊ M {\displaystyle M=JM\subset L\subsetneq M} , a contradiction. Since a nonzero finitely generated module admits a maximal submodule, in particular, one has: If J M = M {\displaystyle JM=M} and M is finitely generated, then M = 0 {\displaystyle M=0} . A maximal ideal is a prime ideal and so one has nil ⁡ ( R ) = ⋂ p prime ideals p ⊂ Jac ⁡ ( R ) {\displaystyle \operatorname {nil} (R)=\bigcap _{{\mathfrak {p}}{\text{ prime ideals }}}{\mathfrak {p}}\subset \operatorname {Jac} (R)} where the intersection on the left is called the nilradical of R. As it turns out, nil ⁡ ( R ) {\displaystyle \operatorname {nil} (R)} is also the set of nilpotent elements of R. If R is an Artinian ring, then Jac ⁡ ( R ) {\displaystyle \operatorname {Jac} (R)} is nilpotent and nil ⁡ ( R ) = Jac ⁡ ( R ) {\displaystyle \operatorname {nil} (R)=\operatorname {Jac} (R)} . (Proof: first note the DCC implies J n = J n + 1 {\displaystyle J^{n}=J^{n+1}} for some n. If a ⊋ Ann ⁡ ( J n ) {\displaystyle {\mathfrak {a}}\supsetneq \operatorname {Ann} (J^{n})} is an ideal properly minimal over the latter (such an ideal exists by the DCC), then J ⋅ ( a / Ann ⁡ ( J n ) ) = 0 {\displaystyle J\cdot ({\mathfrak {a}}/\operatorname {Ann} (J^{n}))=0} . That is, J n a = J n + 1 a = 0 {\displaystyle J^{n}{\mathfrak {a}}=J^{n+1}{\mathfrak {a}}=0} , a contradiction.)
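In the ring Z, where every ideal is principal, the operations in the first example above reduce to arithmetic on generators. The following Python sketch checks this by brute force on a finite window of integers; the bound and variable names are illustrative choices, and Bezout's identity is only sampled, not proved.

```python
from math import gcd

def lcm(n, m):
    return n * m // gcd(n, m)

# On generators: (n) + (m) = (gcd(n, m)), (n)(m) = (nm), (n) ∩ (m) = (lcm(n, m)).
n, m, bound = 12, 18, 200
window = range(-bound, bound + 1)
nZ = {k for k in window if k % n == 0}
mZ = {k for k in window if k % m == 0}

# intersection: common multiples of 12 and 18 are exactly the multiples of 36
assert nZ & mZ == {k for k in window if k % lcm(n, m) == 0}

# sum: every a + b with a in (12), b in (18) lies in (gcd(12, 18)) = (6);
# Bezout's identity gives the reverse inclusion, e.g. 6 = 18 - 12
assert all((a + b) % gcd(n, m) == 0 for a in nZ for b in mZ)
assert 18 - 12 == gcd(n, m)
```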
== Extension and contraction of an ideal == Let A and B be two commutative rings, and let f : A → B be a ring homomorphism. If a {\displaystyle {\mathfrak {a}}} is an ideal in A, then f ( a ) {\displaystyle f({\mathfrak {a}})} need not be an ideal in B (e.g. take f to be the inclusion of the ring of integers Z into the field of rationals Q). The extension a e {\displaystyle {\mathfrak {a}}^{e}} of a {\displaystyle {\mathfrak {a}}} in B is defined to be the ideal in B generated by f ( a ) {\displaystyle f({\mathfrak {a}})} . Explicitly, a e = { ∑ y i f ( x i ) : x i ∈ a , y i ∈ B } {\displaystyle {\mathfrak {a}}^{e}={\Big \{}\sum y_{i}f(x_{i}):x_{i}\in {\mathfrak {a}},y_{i}\in B{\Big \}}} If b {\displaystyle {\mathfrak {b}}} is an ideal of B, then f − 1 ( b ) {\displaystyle f^{-1}({\mathfrak {b}})} is always an ideal of A, called the contraction b c {\displaystyle {\mathfrak {b}}^{c}} of b {\displaystyle {\mathfrak {b}}} to A. If f : A → B is a ring homomorphism, a {\displaystyle {\mathfrak {a}}} is an ideal in A, and b {\displaystyle {\mathfrak {b}}} is an ideal in B, then:
b {\displaystyle {\mathfrak {b}}} is prime in B ⇒ {\displaystyle \Rightarrow } b c {\displaystyle {\mathfrak {b}}^{c}} is prime in A.
a e c ⊇ a {\displaystyle {\mathfrak {a}}^{ec}\supseteq {\mathfrak {a}}}
b c e ⊆ b {\displaystyle {\mathfrak {b}}^{ce}\subseteq {\mathfrak {b}}}
It is false, in general, that a {\displaystyle {\mathfrak {a}}} being prime (or maximal) in A implies that a e {\displaystyle {\mathfrak {a}}^{e}} is prime (or maximal) in B. Many classic examples of this stem from algebraic number theory. For example, consider the embedding Z → Z [ i ] {\displaystyle \mathbb {Z} \to \mathbb {Z} \left\lbrack i\right\rbrack } . In B = Z [ i ] {\displaystyle B=\mathbb {Z} \left\lbrack i\right\rbrack } , the element 2 factors as 2 = ( 1 + i ) ( 1 − i ) {\displaystyle 2=(1+i)(1-i)} where (one can show) neither of 1 + i , 1 − i {\displaystyle 1+i,1-i} are units in B. So ( 2 ) e {\displaystyle (2)^{e}} is not prime in B (and therefore not maximal either). Indeed, ( 1 ± i ) 2 = ± 2 i {\displaystyle (1\pm i)^{2}=\pm 2i} shows that ( 1 + i ) = ( ( 1 − i ) − ( 1 − i ) 2 ) {\displaystyle (1+i)=((1-i)-(1-i)^{2})} , ( 1 − i ) = ( ( 1 + i ) − ( 1 + i ) 2 ) {\displaystyle (1-i)=((1+i)-(1+i)^{2})} , and therefore ( 2 ) e = ( 1 + i ) 2 {\displaystyle (2)^{e}=(1+i)^{2}} . On the other hand, if f is surjective and a ⊇ ker ⁡ f {\displaystyle {\mathfrak {a}}\supseteq \ker f} then:
a e c = a {\displaystyle {\mathfrak {a}}^{ec}={\mathfrak {a}}} and b c e = b {\displaystyle {\mathfrak {b}}^{ce}={\mathfrak {b}}} .
a {\displaystyle {\mathfrak {a}}} is a prime ideal in A ⇔ {\displaystyle \Leftrightarrow } a e {\displaystyle {\mathfrak {a}}^{e}} is a prime ideal in B.
a {\displaystyle {\mathfrak {a}}} is a maximal ideal in A ⇔ {\displaystyle \Leftrightarrow } a e {\displaystyle {\mathfrak {a}}^{e}} is a maximal ideal in B.
Remark: Let K be a field extension of L, and let B and A be the rings of integers of K and L, respectively. Then B is an integral extension of A, and we let f be the inclusion map from A to B. The behaviour of a prime ideal a = p {\displaystyle {\mathfrak {a}}={\mathfrak {p}}} of A under extension is one of the central problems of algebraic number theory. The following is sometimes useful: a prime ideal p {\displaystyle {\mathfrak {p}}} is a contraction of a prime ideal if and only if p = p e c {\displaystyle {\mathfrak {p}}={\mathfrak {p}}^{ec}} .
(Proof: Assuming the latter, note p e B p = B p ⇒ p e {\displaystyle {\mathfrak {p}}^{e}B_{\mathfrak {p}}=B_{\mathfrak {p}}\Rightarrow {\mathfrak {p}}^{e}} intersects A − p {\displaystyle A-{\mathfrak {p}}} , a contradiction. Now, the prime ideals of B p {\displaystyle B_{\mathfrak {p}}} correspond to those in B that are disjoint from A − p {\displaystyle A-{\mathfrak {p}}} . Hence, there is a prime ideal q {\displaystyle {\mathfrak {q}}} of B, disjoint from A − p {\displaystyle A-{\mathfrak {p}}} , such that q B p {\displaystyle {\mathfrak {q}}B_{\mathfrak {p}}} is a maximal ideal containing p e B p {\displaystyle {\mathfrak {p}}^{e}B_{\mathfrak {p}}} . One then checks that q {\displaystyle {\mathfrak {q}}} lies over p {\displaystyle {\mathfrak {p}}} . The converse is obvious.) == Generalizations == Ideals can be generalized to any monoid object ( R , ⊗ ) {\displaystyle (R,\otimes )} , where R {\displaystyle R} is the object where the monoid structure has been forgotten. A left ideal of R {\displaystyle R} is a subobject I {\displaystyle I} that "absorbs multiplication from the left by elements of R {\displaystyle R} "; that is, I {\displaystyle I} is a left ideal if it satisfies the following two conditions: I {\displaystyle I} is a subobject of R {\displaystyle R} For every r ∈ ( R , ⊗ ) {\displaystyle r\in (R,\otimes )} and every x ∈ ( I , ⊗ ) {\displaystyle x\in (I,\otimes )} , the product r ⊗ x {\displaystyle r\otimes x} is in ( I , ⊗ ) {\displaystyle (I,\otimes )} . A right ideal is defined with the condition " r ⊗ x ∈ ( I , ⊗ ) {\displaystyle r\otimes x\in (I,\otimes )} " replaced by " x ⊗ r ∈ ( I , ⊗ ) {\displaystyle x\otimes r\in (I,\otimes )} ". A two-sided ideal is a left ideal that is also a right ideal, and is sometimes simply called an ideal. When R {\displaystyle R} is a commutative monoid object, the definitions of left, right, and two-sided ideal coincide, and the term ideal is used alone.
== See also ==
Modular arithmetic
Noether isomorphism theorem
Boolean prime ideal theorem
Ideal theory
Ideal (order theory)
Ideal norm
Splitting of prime ideals in Galois extensions
Ideal sheaf
== Notes ==
== References ==
== External links ==
Levinson, Jake (July 14, 2014). "The Geometric Interpretation for Extension of Ideals?". Stack Exchange.
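The factorization of 2 in Z[i] used in the extension example above can be verified with Python's built-in complex arithmetic (exact here, since all real and imaginary parts are small integers); this is a numerical illustration of the identities, not a statement about the ideals themselves.

```python
one_plus_i = 1 + 1j
one_minus_i = 1 - 1j

assert one_plus_i * one_minus_i == 2   # 2 = (1 + i)(1 - i)
assert one_plus_i ** 2 == 2j           # (1 + i)^2 = 2i
assert one_minus_i ** 2 == -2j         # (1 - i)^2 = -2i

# the two factors are associates: (1 + i) = i * (1 - i),
# which is why the extension (2)^e equals (1 + i)^2
assert 1j * one_minus_i == one_plus_i
```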
Wikipedia/Ideal_(ring_theory)
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics. The rank is commonly denoted by rank(A) or rk(A); sometimes the parentheses are not written, as in rank A. == Main definitions == In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these. The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A. A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in § Proofs that column rank = row rank, below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A. A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank. The rank of a linear map or operator Φ {\displaystyle \Phi } is defined as the dimension of its image: rank ⁡ ( Φ ) := dim ⁡ ( img ⁡ ( Φ ) ) {\displaystyle \operatorname {rank} (\Phi ):=\dim(\operatorname {img} (\Phi ))} where dim {\displaystyle \dim } is the dimension of a vector space, and img {\displaystyle \operatorname {img} } is the image of a map. == Examples == The matrix [ 1 0 1 0 1 1 0 1 1 ] {\displaystyle {\begin{bmatrix}1&0&1\\0&1&1\\0&1&1\end{bmatrix}}} has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3. The matrix A = [ 1 1 0 2 − 1 − 1 0 − 2 ] {\displaystyle A={\begin{bmatrix}1&1&0&2\\-1&-1&0&-2\end{bmatrix}}} has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose A T = [ 1 − 1 1 − 1 0 0 2 − 2 ] {\displaystyle A^{\mathrm {T} }={\begin{bmatrix}1&-1\\1&-1\\0&0\\2&-2\end{bmatrix}}} of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., rank(A) = rank(AT). == Computing the rank of a matrix == === Rank from row echelon forms === A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows. 
For example, the matrix A given by A = [ 1 2 1 − 2 − 3 1 3 5 0 ] {\displaystyle A={\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}} can be put in reduced row-echelon form by using the following elementary row operations: [ 1 2 1 − 2 − 3 1 3 5 0 ] → 2 R 1 + R 2 → R 2 [ 1 2 1 0 1 3 3 5 0 ] → − 3 R 1 + R 3 → R 3 [ 1 2 1 0 1 3 0 − 1 − 3 ] → R 2 + R 3 → R 3 [ 1 2 1 0 1 3 0 0 0 ] → − 2 R 2 + R 1 → R 1 [ 1 0 − 5 0 1 3 0 0 0 ] . {\displaystyle {\begin{aligned}{\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}&\xrightarrow {2R_{1}+R_{2}\to R_{2}} {\begin{bmatrix}1&2&1\\0&1&3\\3&5&0\end{bmatrix}}\xrightarrow {-3R_{1}+R_{3}\to R_{3}} {\begin{bmatrix}1&2&1\\0&1&3\\0&-1&-3\end{bmatrix}}\\&\xrightarrow {R_{2}+R_{3}\to R_{3}} \,\,{\begin{bmatrix}1&2&1\\0&1&3\\0&0&0\end{bmatrix}}\xrightarrow {-2R_{2}+R_{1}\to R_{1}} {\begin{bmatrix}1&0&-5\\0&1&3\\0&0&0\end{bmatrix}}~.\end{aligned}}} The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix A is 2. === Computation === When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application. == Proofs that column rank = row rank == === Proof using row reduction === The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms. Here is a variant of this proof: It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix equal the number of its nonzero entries. We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014). === Proof using linear combinations === Let A be an m × n matrix. Let the column rank of A be r, and let c1, ..., cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose ith column is formed from the coefficients giving the ith column of A as a linear combination of the r columns of C.
In other words, R is the matrix whose entries give the coefficients expressing each column of A in terms of the basis columns collected in C. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A. This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see Rank factorization.) === Proof using orthogonality === Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, …, xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, …, Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, …, cr: 0 = c 1 A x 1 + c 2 A x 2 + ⋯ + c r A x r = A ( c 1 x 1 + c 2 x 2 + ⋯ + c r x r ) = A v , {\displaystyle 0=c_{1}A\mathbf {x} _{1}+c_{2}A\mathbf {x} _{2}+\cdots +c_{r}A\mathbf {x} _{r}=A(c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r})=A\mathbf {v} ,} where v = c1x1 + c2x2 + ⋯ + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v, c 1 x 1 + c 2 x 2 + ⋯ + c r x r = 0. {\displaystyle c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r}=0.} But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that c1 = c2 = ⋯ = cr = 0. It follows that Ax1, Ax2, …, Axr are linearly independent. Now, each Axi is obviously a vector in the column space of A. So, Ax1, Ax2, …, Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least as large as r. This proves that the row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof. == Alternative definitions == In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F. === Dimension of image === Given the matrix A {\displaystyle A} , there is an associated linear mapping f : F n → F m {\displaystyle f:F^{n}\to F^{m}} defined by f ( x ) = A x . {\displaystyle f(x)=Ax.} The rank of A {\displaystyle A} is the dimension of the image of f {\displaystyle f} . This definition has the advantage that it can be applied to any linear map without need for a specific matrix. === Rank in terms of nullity === Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The rank–nullity theorem states that this definition is equivalent to the preceding one.
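The definitions so far are easy to illustrate numerically. NumPy's matrix_rank is SVD-based with a numerical tolerance, as discussed under Computation above; the matrix below is the row-reduction example from earlier, and the tolerance value is an illustrative assumption.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [3.0, 5.0, 0.0]])      # the row-reduction example: rank 2

r = np.linalg.matrix_rank(A)
assert r == 2

# row rank equals column rank: transposing does not change the rank
assert np.linalg.matrix_rank(A.T) == r

# rank-nullity: the number of (near-)zero singular values is the nullity
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))
assert r + nullity == A.shape[1]
```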
=== Column rank – dimension of column space === The rank of A is the maximal number of linearly independent columns c 1 , c 2 , … , c k {\displaystyle \mathbf {c} _{1},\mathbf {c} _{2},\dots ,\mathbf {c} _{k}} of A; this is the dimension of the column space of A (the column space being the subspace of Fm generated by the columns of A, which is in fact just the image of the linear map f associated to A). === Row rank – dimension of row space === The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A. === Decomposition rank === The rank of A is the smallest positive integer k such that A can be factored as A = C R {\displaystyle A=CR} , where C is an m × k matrix and R is a k × n matrix. In fact, for all integers k, the following are equivalent:
(1) the column rank of A is less than or equal to k,
(2) there exist k columns c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} of size m such that every column of A is a linear combination of c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} ,
(3) there exist an m × k {\displaystyle m\times k} matrix C and a k × n {\displaystyle k\times n} matrix R such that A = C R {\displaystyle A=CR} (when k is the rank, this is a rank factorization of A),
(4) there exist k rows r 1 , … , r k {\displaystyle \mathbf {r} _{1},\ldots ,\mathbf {r} _{k}} of size n such that every row of A is a linear combination of r 1 , … , r k {\displaystyle \mathbf {r} _{1},\ldots ,\mathbf {r} _{k}} ,
(5) the row rank of A is less than or equal to k.
Indeed, the following equivalences are obvious: ( 1 ) ⇔ ( 2 ) ⇔ ( 3 ) ⇔ ( 4 ) ⇔ ( 5 ) {\displaystyle (1)\Leftrightarrow (2)\Leftrightarrow (3)\Leftrightarrow (4)\Leftrightarrow (5)} . For example, to prove (3) from (2), take C to be the matrix whose columns are c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} from (2). To prove (2) from (3), take c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} to be the columns of C. It follows from the equivalence ( 1 ) ⇔ ( 5 ) {\displaystyle (1)\Leftrightarrow (5)} that the row rank is equal to the column rank. As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details. === Rank in terms of singular values === The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition A = U Σ V ∗ {\displaystyle A=U\Sigma V^{*}} . === Determinantal rank – size of largest non-vanishing minor === The rank of A is the largest order of any non-zero minor in A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.
A non-vanishing p-minor (p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent). === Tensor rank – minimum number of simple tensors === The rank of A is the smallest number k such that A can be written as a sum of k rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product c ⋅ r {\displaystyle c\cdot r} of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition. == Properties == We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above. The rank of an m × n matrix is a nonnegative integer and cannot be greater than either m or n. That is, rank ⁡ ( A ) ≤ min ( m , n ) . {\displaystyle \operatorname {rank} (A)\leq \min(m,n).} A matrix that has rank min(m, n) is said to have full rank; otherwise, the matrix is rank deficient. Only a zero matrix has rank zero. f is injective (or "one-to-one") if and only if A has rank n (in this case, we say that A has full column rank). f is surjective (or "onto") if and only if A has rank m (in this case, we say that A has full row rank). If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank). If B is any n × k matrix, then rank ⁡ ( A B ) ≤ min ( rank ⁡ ( A ) , rank ⁡ ( B ) ) . {\displaystyle \operatorname {rank} (AB)\leq \min(\operatorname {rank} (A),\operatorname {rank} (B)).} If B is an n × k matrix of rank n, then rank ⁡ ( A B ) = rank ⁡ ( A ) . {\displaystyle \operatorname {rank} (AB)=\operatorname {rank} (A).} If C is an l × m matrix of rank m, then rank ⁡ ( C A ) = rank ⁡ ( A ) . {\displaystyle \operatorname {rank} (CA)=\operatorname {rank} (A).} The rank of A is equal to r if and only if there exists an invertible m × m matrix X and an invertible n × n matrix Y such that X A Y = [ I r 0 0 0 ] , {\displaystyle XAY={\begin{bmatrix}I_{r}&0\\0&0\end{bmatrix}},} where Ir denotes the r × r identity matrix and the three zero matrices have the sizes r × (n − r), (m − r) × r and (m − r) × (n − r). Sylvester’s rank inequality: if A is an m × n matrix and B is n × k, then rank ⁡ ( A ) + rank ⁡ ( B ) − n ≤ rank ⁡ ( A B ) . {\displaystyle \operatorname {rank} (A)+\operatorname {rank} (B)-n\leq \operatorname {rank} (AB).} This is a special case of the next inequality. The inequality due to Frobenius: if AB, ABC and BC are defined, then rank ⁡ ( A B ) + rank ⁡ ( B C ) ≤ rank ⁡ ( B ) + rank ⁡ ( A B C ) . 
{\displaystyle \operatorname {rank} (AB)+\operatorname {rank} (BC)\leq \operatorname {rank} (B)+\operatorname {rank} (ABC).} Subadditivity: rank ⁡ ( A + B ) ≤ rank ⁡ ( A ) + rank ⁡ ( B ) {\displaystyle \operatorname {rank} (A+B)\leq \operatorname {rank} (A)+\operatorname {rank} (B)} when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer. The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.) If A is a matrix over the real numbers then the rank of A and the rank of its corresponding Gram matrix are equal. Thus, for real matrices rank ⁡ ( A T A ) = rank ⁡ ( A A T ) = rank ⁡ ( A ) = rank ⁡ ( A T ) . {\displaystyle \operatorname {rank} (A^{\mathrm {T} }A)=\operatorname {rank} (AA^{\mathrm {T} })=\operatorname {rank} (A)=\operatorname {rank} (A^{\mathrm {T} }).} This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by vectors x for which A T A x = 0. {\displaystyle A^{\mathrm {T} }A\mathbf {x} =0.} If this condition is fulfilled, we also have 0 = x T A T A x = | A x | 2 . {\displaystyle 0=\mathbf {x} ^{\mathrm {T} }A^{\mathrm {T} }A\mathbf {x} =\left|A\mathbf {x} \right|^{2}.} If A is a matrix over the complex numbers and A ¯ {\displaystyle {\overline {A}}} denotes the complex conjugate of A and A∗ the conjugate transpose of A (i.e., the adjoint of A), then rank ⁡ ( A ) = rank ⁡ ( A ¯ ) = rank ⁡ ( A T ) = rank ⁡ ( A ∗ ) = rank ⁡ ( A ∗ A ) = rank ⁡ ( A A ∗ ) . {\displaystyle \operatorname {rank} (A)=\operatorname {rank} ({\overline {A}})=\operatorname {rank} (A^{\mathrm {T} })=\operatorname {rank} (A^{*})=\operatorname {rank} (A^{*}A)=\operatorname {rank} (AA^{*}).} == Applications == One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions. In control theory, the rank of a matrix can be used to determine whether a linear system is controllable or observable. In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function. == Generalization == There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist. Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices. There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
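The Rouché–Capelli criterion described above is straightforward to apply numerically: compare the rank of the coefficient matrix with the rank of the augmented matrix. A minimal NumPy sketch follows; the function name and the test matrices are illustrative choices, and the tolerance handling is left to matrix_rank's defaults.

```python
import numpy as np

def classify_system(A, b):
    """Classify the linear system A x = b via the Rouché–Capelli theorem."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_aug > r:
        return "inconsistent"
    return "unique solution" if r == A.shape[1] else "infinitely many solutions"

A = np.array([[1.0, 2.0], [2.0, 4.0]])                    # rank 1
print(classify_system(A, np.array([1.0, 3.0])))           # inconsistent
print(classify_system(A, np.array([1.0, 2.0])))           # infinitely many solutions
print(classify_system(np.eye(2), np.array([1.0, 3.0])))   # unique solution
```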
== Matrices as tensors == Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details. The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and this definition agrees with matrix rank as discussed here.
== See also ==
Matroid rank
Nonnegative rank (linear algebra)
Rank (differential topology)
Multicollinearity
Linear dependence
== Notes ==
== References ==
== Sources ==
Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90093-4.
Hefferon, Jim (2020). Linear Algebra (4th ed.). Orthogonal Publishing L3C. ISBN 978-1-944325-11-4.
Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
Roman, Steven (2005). Advanced Linear Algebra. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-24766-1.
Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5.
== Further reading ==
Roger A. Horn and Charles R. Johnson (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
Kaw, Autar K. Two Chapters from the book Introduction to Matrix Algebra: 1. Vectors [1] and System of Equations [2]
Mike Brookes: Matrix Reference Manual. [3]
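The claim above that a rank-k matrix is a sum of k rank-1 matrices (the "minimum number of simple tensors" reading of rank) can be illustrated with the SVD: keeping the k nonzero singular triplets rebuilds the matrix from k outer products. A NumPy sketch using the rank-2 example matrix from the Examples section; the zero threshold is an illustrative assumption.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])   # the rank-2 example matrix

U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-10))        # numerical rank: count of nonzero singular values
assert k == 2

# A equals the sum of k rank-1 outer products sigma_i * u_i v_i^T
rank1_terms = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k)]
assert np.allclose(sum(rank1_terms), A)
```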
Wikipedia/Rank_(linear_algebra)
In mathematics, the composition operator takes two functions, f {\displaystyle f} and g {\displaystyle g} , and returns a new function h ( x ) := ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle h(x):=(g\circ f)(x)=g(f(x))} . Thus, the function g is applied after applying f to x. ( g ∘ f ) {\displaystyle (g\circ f)} is pronounced "the composition of g and f". Reverse composition, sometimes denoted f ⨾ g, applies the operation in the opposite order, applying f {\displaystyle f} first and g {\displaystyle g} second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g. The composition of functions is a special case of the composition of relations, sometimes also denoted by ∘ {\displaystyle \circ } . As a result, all properties of composition of relations are true of composition of functions, such as associativity. == Examples ==
Composition of functions on a finite set: If f = {(1, 1), (2, 3), (3, 1), (4, 2)} and g = {(1, 2), (2, 3), (3, 1), (4, 2)}, then g ∘ f = {(1, 2), (2, 1), (3, 2), (4, 3)}.
Composition of functions on an infinite set: If f: R → R (where R is the set of all real numbers) is given by f(x) = 2x + 4 and g: R → R is given by g(x) = x^3, then (g ∘ f)(x) = (2x + 4)^3 and (f ∘ g)(x) = 2x^3 + 4.
If an airplane's altitude at time t is a(t), and the air pressure at altitude x is p(x), then (p ∘ a)(t) is the pressure around the plane at time t.
Functions defined on a finite set which change the order of its elements, such as permutations, can be composed on the same set, giving the composition of permutations.
== Properties == The composition of functions is always associative—a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted. In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter. Moreover, it is often convenient to tacitly restrict the domain of f, such that f produces only values in the domain of g. For example, the composition g ∘ f of the functions f : R → (−∞,+9] defined by f(x) = 9 − x^2 and g : [0,+∞) → R defined by g ( x ) = x {\displaystyle g(x)={\sqrt {x}}} can be defined on the interval [−3,+3]. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that (f ∘ g)−1 = g−1 ∘ f−1. Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula. Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative). == Composition monoids == Suppose one has two (or more) functions f: X → X, g: X → X having the same domain and codomain; these are often called transformations.
Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is the de Rham curve. The set of all functions f: X → X is called the full transformation semigroup or symmetric semigroup on X. (One can actually define two semigroups depending on how one defines the semigroup operation as the left or right composition of functions.) If the given transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group (also known as a permutation group); and one says that the group is generated by these functions. The set of all bijective functions f: X → X (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism). In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup. == Functional powers == If Y ⊆ X, then f : X → Y {\displaystyle f:X\to Y} may compose with itself; this is sometimes denoted as f 2 {\displaystyle f^{2}} . That is, f 2(x) = (f ∘ f)(x) = f(f(x)). More generally, for any natural number n ≥ 2, the nth functional power can be defined inductively by f n = f ∘ f n−1 = f n−1 ∘ f, a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel. Repeated composition of such a function with itself is called function iteration. By convention, f 0 is defined as the identity map on f 's domain, idX. If Y = X and f: X → X admits an inverse function f −1, negative functional powers f −n are defined for n > 0 as the n-th power of the inverse function: f −n = (f −1)n. Note: If f takes its values in a ring (in particular for real or complex-valued f ), there is a risk of confusion, as f n could also stand for the n-fold product of f, e.g. f 2(x) = f(x) · f(x). For trigonometric functions, usually the latter is meant, at least for positive exponents. For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions: sin2(x) = sin(x) · sin(x). However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan−1 = arctan ≠ 1/tan. In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f 1/2. More generally, when g^n = f has a unique solution for some natural number n > 0, then f^(m/n) can be defined as g^m. Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems. To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))).
For the same purpose, f[n](x) was used by Benjamin Peirce, whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead. == Alternative notations == Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f. During the mid-20th century, some mathematicians adopted postfix notation, writing xf for f(x) and (xf)g for g(f(x)). This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence. Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this, thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition. Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation). == Composition operator == Given a function g, the composition operator Cg is defined as that operator which maps functions to functions as C g f = f ∘ g . {\displaystyle C_{g}f=f\circ g.} Composition operators are studied in the field of operator theory. == In programming languages == Function composition appears in one form or another in numerous programming languages. == Multivariate functions == Partial composition is possible for multivariate functions. The function resulting when some argument xi of the function f is replaced by the function g is called a composition of f and g in some computer engineering contexts, and is denoted f | x i = g = f ( x 1 , … , x i − 1 , g ( x 1 , x 2 , … , x n ) , x i + 1 , … , x n ) . {\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).} When g is a simple constant b, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor. f | x i = b = f ( x 1 , … , x i − 1 , b , x i + 1 , … , x n ) . {\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).} In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. Given f, an n-ary function, and n m-ary functions g1, ..., gn, the composition of f with g1, ..., gn, is the m-ary function h ( x 1 , … , x m ) = f ( g 1 ( x 1 , … , x m ) , … , g n ( x 1 , … , x m ) ) . {\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).} This is sometimes called the generalized composite or superposition of f with g1, ..., gn. The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here g1, ..., gn can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.
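In most programming languages, function composition is a one-line construction. The following is a minimal Python sketch (illustrative only; compose and superpose are ad hoc names, not standard library functions) of ordinary composition g ∘ f and of the generalized composite of f with g1, ..., gn described above:

```python
def compose(g, f):
    """Return g ∘ f, the function x ↦ g(f(x)): f is applied first."""
    return lambda x: g(f(x))

def superpose(f, *gs):
    """Generalized composite: h(x1,...,xm) = f(g1(x1,...,xm), ..., gn(x1,...,xm))."""
    return lambda *xs: f(*(g(*xs) for g in gs))

f = lambda x: 2 * x + 4
g = lambda x: x ** 3

print(compose(g, f)(1))   # (g ∘ f)(1) = (2·1 + 4)³ = 216
print(compose(f, g)(1))   # (f ∘ g)(1) = 2·1³ + 4 = 6, so composition is not commutative

# Generalized composite of a binary f with two binary argument functions:
add = lambda x, y: x + y
mul = lambda x, y: x * y
sub = lambda x, y: x - y
h = superpose(add, mul, sub)   # h(x, y) = (x·y) + (x − y)
print(h(3, 2))                 # 6 + 1 = 7
```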
A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities. The notion of commutation also finds an interesting generalization in the multivariate case; a function f of arity n is said to commute with a function g of arity m if f is a homomorphism preserving g, and vice versa, that is: f ( g ( a 11 , … , a 1 m ) , … , g ( a n 1 , … , a n m ) ) = g ( f ( a 11 , … , a n 1 ) , … , f ( a 1 m , … , a n m ) ) . {\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).} A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic. == Generalizations == Composition can be generalized to arbitrary binary relations. If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition amounts to R ∘ S = { ( x , z ) ∈ X × Z : ( ∃ y ∈ Y ) ( ( x , y ) ∈ R ∧ ( y , z ) ∈ S ) } {\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}} . Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle R∘S has been used for the infix notation of composition of relations, as well as functions. However, when the circle is used to represent composition of functions, ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)\ =\ g(f(x))} , the text sequence is reversed: the function written second is applied first. The composition is defined in the same way for partial functions, and Cayley's theorem has its analogue, called the Wagner–Preston theorem. The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition. The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula (f ∘ g)−1 = (g−1 ∘ f −1) applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories. The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions. . . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics. - Saunders Mac Lane, Mathematics: Form and Function == Typography == The composition symbol ∘ is encoded as U+2218 ∘ RING OPERATOR (&compfn;, &SmallCircle;); see the Degree symbol article for similar-appearing Unicode characters.
In TeX, it is written \circ. == See also == Cobweb plot – a graphical technique for functional composition Combinatory logic Composition ring, a formal axiomatization of the composition operation Flow (mathematics) Function composition (computer science) Function of random variable, distribution of a function of a random variable Functional decomposition Functional square root Functional equation Higher-order function Infinite compositions of analytic functions Iterated function Lambda calculus == Notes == == References == == External links == "Composite function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Composition of Functions" by Bruce Atwood, the Wolfram Demonstrations Project, 2007.
Wikipedia/Function_composition
In mathematics, more specifically algebra, abstract algebra or modern algebra is the study of algebraic structures, which are sets with specific operations acting on their elements. Algebraic structures include groups, rings, fields, modules, vector spaces, lattices, and algebras over a field. The term abstract algebra was coined in the early 20th century to distinguish it from older parts of algebra, and more specifically from elementary algebra, the use of variables to represent numbers in computation and reasoning. The abstract perspective on algebra has become so fundamental to advanced mathematics that it is simply called "algebra", while the term "abstract algebra" is seldom used except in pedagogy. Algebraic structures, with their associated homomorphisms, form mathematical categories. Category theory gives a unified framework to study properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects. For example, the structure of groups is a single object in universal algebra, which is called the variety of groups. == History == Before the nineteenth century, algebra was defined as the study of polynomials. Abstract algebra came into existence during the nineteenth century as more complex problems and solution methods developed. Concrete problems and examples came from number theory, geometry, analysis, and the solutions of algebraic equations. Most theories that are now recognized as parts of abstract algebra started as collections of disparate facts from various branches of mathematics, acquired a common theme that served as a core around which various results were grouped, and finally became unified on a basis of a common set of concepts. This unification occurred in the early decades of the 20th century and resulted in the formal axiomatic definitions of various algebraic structures such as groups, rings, and fields. This historical development is almost the opposite of the treatment found in popular textbooks, such as van der Waerden's Moderne Algebra, which start each chapter with a formal definition of a structure and then follow it with concrete examples. === Elementary algebra === The study of polynomial equations or algebraic equations has a long history. Around 1700 BC, the Babylonians were able to solve quadratic equations specified as word problems. This word-problem stage is classified as rhetorical algebra and was the dominant approach up to the 16th century. Al-Khwarizmi originated the word "algebra" in 830 AD, but his work was entirely rhetorical algebra. Fully symbolic algebra did not appear until François Viète's 1591 New Algebra, and even this had some spelled-out words that were given symbols in Descartes's 1637 La Géométrie. The formal study of solving symbolic equations led Leonhard Euler to accept what were then considered "nonsense" roots such as negative numbers and imaginary numbers, in the late 18th century. However, European mathematicians, for the most part, resisted these concepts until the middle of the 19th century. George Peacock's 1830 Treatise of Algebra was the first attempt to place algebra on a strictly symbolic basis. He distinguished a new symbolical algebra, distinct from the old arithmetical algebra. Whereas in arithmetical algebra a − b {\displaystyle a-b} is restricted to a ≥ b {\displaystyle a\geq b} , in symbolical algebra all rules of operations hold with no restrictions.
Using this Peacock could show laws such as ( − a ) ( − b ) = a b {\displaystyle (-a)(-b)=ab} , by letting a = 0 , c = 0 {\displaystyle a=0,c=0} in ( a − b ) ( c − d ) = a c + b d − a d − b c {\displaystyle (a-b)(c-d)=ac+bd-ad-bc} . Peacock used what he termed the principle of the permanence of equivalent forms to justify his argument, but his reasoning suffered from the problem of induction. For example, √a √b = √(ab) {\displaystyle {\sqrt {a}}{\sqrt {b}}={\sqrt {ab}}} holds for the nonnegative real numbers, but not for general complex numbers. === Early group theory === Several areas of mathematics led to the study of groups. Lagrange's 1770 study of the solutions of the quintic equation led to the Galois group of a polynomial. Gauss's 1801 study of Fermat's little theorem led to the ring of integers modulo n, the multiplicative group of integers modulo n, and the more general concepts of cyclic groups and abelian groups. Klein's 1872 Erlangen program studied geometry and led to symmetry groups such as the Euclidean group and the group of projective transformations. In 1874 Lie introduced the theory of Lie groups, aiming for "the Galois theory of differential equations". In 1876 Poincaré and Klein introduced the group of Möbius transformations, and its subgroups such as the modular group and Fuchsian group, based on work on automorphic functions in analysis. The abstract concept of group emerged slowly over the middle of the nineteenth century. Galois in 1832 was the first to use the term "group", signifying a collection of permutations closed under composition. Arthur Cayley's 1854 paper On the theory of groups defined a group as a set with an associative composition operation and the identity 1, today called a monoid. In 1870 Kronecker defined an abstract binary operation that was closed, commutative, associative, and had the left cancellation property b ≠ c → a ⋅ b ≠ a ⋅ c {\displaystyle b\neq c\to a\cdot b\neq a\cdot c} , similar to the modern laws for a finite abelian group. Weber's 1882 definition of a group was a closed binary operation that was associative and had left and right cancellation. Walther von Dyck in 1882 was the first to require inverse elements as part of the definition of a group. Once this abstract group concept emerged, results were reformulated in this abstract setting. For example, Sylow's theorem was reproven by Frobenius in 1887 directly from the laws of a finite group, although Frobenius remarked that the theorem followed from Cauchy's theorem on permutation groups and the fact that every finite group is a subgroup of a permutation group. Otto Hölder was particularly prolific in this area, defining quotient groups in 1889, group automorphisms in 1893, as well as simple groups. He also completed the Jordan–Hölder theorem. Dedekind and Miller independently characterized Hamiltonian groups and introduced the notion of the commutator of two elements. Burnside, Frobenius, and Molien created the representation theory of finite groups at the end of the nineteenth century. J. A. de Séguier's 1905 monograph Elements of the Theory of Abstract Groups presented many of these results in an abstract, general form, relegating "concrete" groups to an appendix, although it was limited to finite groups. The first monograph on both finite and infinite abstract groups was O. K. Schmidt's 1916 Abstract Theory of Groups.
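The group laws that emerged from this history are easy to verify mechanically for a small example. The following Python sketch (illustrative; the helper names are ad hoc) checks closure, associativity, identity, and inverses for the permutations of a three-element set, the symmetric group S3 in modern terms:

```python
from itertools import permutations

# Permutations of {0, 1, 2} represented as tuples: p[i] is the image of i.
def compose(p, q):          # first apply q, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
identity = (0, 1, 2)

# Closure and inverse laws of the symmetric group S3:
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(compose(p, inverse(p)) == identity for p in S3)
# Associativity holds for composition of any functions:
assert all(compose(compose(p, q), r) == compose(p, compose(q, r))
           for p in S3 for q in S3 for r in S3)
```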
=== Early ring theory === Noncommutative ring theory began with extensions of the complex numbers to hypercomplex numbers, specifically William Rowan Hamilton's quaternions in 1843. Many other number systems followed shortly. In 1844, Hamilton presented biquaternions, Cayley introduced octonions, and Grassmann introduced exterior algebras. James Cockle presented tessarines in 1848 and coquaternions in 1849. William Kingdon Clifford introduced split-biquaternions in 1873. In addition Cayley introduced group algebras over the real and complex numbers in 1854 and square matrices in two papers of 1855 and 1858. Once there were sufficient examples, it remained to classify them. In an 1870 monograph, Benjamin Peirce classified the more than 150 hypercomplex number systems of dimension below 6, and gave an explicit definition of an associative algebra. He defined nilpotent and idempotent elements and proved that any algebra contains one or the other. He also defined the Peirce decomposition. Frobenius in 1878 and Charles Sanders Peirce in 1881 independently proved that the only finite-dimensional division algebras over R {\displaystyle \mathbb {R} } were the real numbers, the complex numbers, and the quaternions. In the 1880s Killing and Cartan showed that semisimple Lie algebras could be decomposed into simple ones, and classified all simple Lie algebras. Inspired by this, in the 1890s Cartan, Frobenius, and Molien proved (independently) that a finite-dimensional associative algebra over R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } uniquely decomposes into the direct sum of a nilpotent algebra and a semisimple algebra that is the product of some number of simple algebras, square matrices over division algebras. Cartan was the first to define concepts such as direct sum and simple algebra, and these concepts proved quite influential. In 1907 Wedderburn extended Cartan's results to an arbitrary field, in what are now called the Wedderburn principal theorem and Artin–Wedderburn theorem. For commutative rings, several areas together led to commutative ring theory. In two papers in 1828 and 1832, Gauss formulated the Gaussian integers and showed that they form a unique factorization domain (UFD) and proved the biquadratic reciprocity law. Jacobi and Eisenstein at around the same time proved a cubic reciprocity law for the Eisenstein integers. The study of Fermat's Last Theorem led to the algebraic integers. In 1847, Gabriel Lamé thought he had proven Fermat's Last Theorem, but his proof was faulty, as he assumed all the cyclotomic fields were UFDs; yet, as Kummer pointed out, Q ( ζ 23 ) {\displaystyle \mathbb {Q} (\zeta _{23})} was not a UFD. In 1846 and 1847 Kummer introduced ideal numbers and proved unique factorization into ideal primes for cyclotomic fields. Dedekind extended this in 1871 to show that every nonzero ideal in the domain of integers of an algebraic number field is a unique product of prime ideals, a precursor of the theory of Dedekind domains. Overall, Dedekind's work created the subject of algebraic number theory. In the 1850s, Riemann introduced the fundamental concept of a Riemann surface. Riemann's methods relied on an assumption he called Dirichlet's principle, which in 1870 was questioned by Weierstrass. Much later, in 1900, Hilbert justified Riemann's approach by developing the direct method in the calculus of variations. In the 1860s and 1870s, Clebsch, Gordan, Brill, and especially M. Noether studied algebraic functions and curves.
In particular, Noether studied what conditions were required for a polynomial to be an element of the ideal generated by two algebraic curves in the polynomial ring R [ x , y ] {\displaystyle \mathbb {R} [x,y]} , although Noether did not use this modern language. In 1882 Dedekind and Weber, in analogy with Dedekind's earlier work on algebraic number theory, created a theory of algebraic function fields which allowed the first rigorous definition of a Riemann surface and a rigorous proof of the Riemann–Roch theorem. Kronecker in the 1880s, Hilbert in 1890, Lasker in 1905, and Macaulay in 1913 further investigated the ideals of polynomial rings implicit in E. Noether's work. Lasker proved a special case of the Lasker–Noether theorem, namely that every ideal in a polynomial ring is a finite intersection of primary ideals. Macaulay proved the uniqueness of this decomposition. Overall, this work led to the development of algebraic geometry. In 1801 Gauss introduced binary quadratic forms over the integers and defined their equivalence. He further defined the discriminant of these forms, which is an invariant of a binary form. Between the 1860s and 1890s invariant theory developed and became a major field of algebra. Cayley, Sylvester, Gordan and others found the Jacobian and the Hessian for binary quartic forms and cubic forms. In 1868 Gordan proved that the graded algebra of invariants of a binary form over the complex numbers was finitely generated, i.e., has a basis. Hilbert wrote a thesis on invariants in 1885 and in 1890 showed that any form of any degree or number of variables has a basis. He extended this further in 1890 to Hilbert's basis theorem. Once these theories had been developed, it was still several decades until an abstract ring concept emerged. The first axiomatic definition was given by Abraham Fraenkel in 1914. His definition was mainly the standard axioms: a set with two operations, addition, which forms a group (not necessarily commutative), and multiplication, which is associative, distributes over addition, and has an identity element. In addition, he had two axioms on "regular elements" inspired by work on the p-adic numbers, which excluded now-common rings such as the ring of integers. These allowed Fraenkel to prove that addition was commutative. Fraenkel's work aimed to transfer Steinitz's 1910 definition of fields over to rings, but it was not connected with the existing work on concrete systems. Masazo Sono's 1917 definition was the first equivalent to the present one. In 1920, Emmy Noether, in collaboration with W. Schmeidler, published a paper about the theory of ideals in which they defined left and right ideals in a ring. The following year she published a landmark paper called Idealtheorie in Ringbereichen (Ideal Theory in Rings), analyzing ascending chain conditions with regard to (mathematical) ideals. The publication gave rise to the term "Noetherian ring", and to several other mathematical objects being called Noetherian. Noted algebraist Irving Kaplansky called this work "revolutionary"; results which seemed inextricably connected to properties of polynomial rings were shown to follow from a single axiom. Artin, inspired by Noether's work, came up with the descending chain condition. These definitions marked the birth of abstract ring theory. === Early field theory === In 1801 Gauss introduced the integers mod p, where p is a prime number. Galois extended this in 1830 to finite fields with p n {\displaystyle p^{n}} elements.
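The integers mod p just mentioned are the simplest finite fields, and their defining property, that every nonzero element has a multiplicative inverse, can be checked mechanically. A minimal Python sketch (illustrative), relying only on Fermat's little theorem; the fields with p^n elements for n > 1 require polynomial arithmetic not shown here:

```python
p = 7  # a prime; the integers mod p form the finite field with p elements

# By Fermat's little theorem, a^(p-1) ≡ 1 (mod p) for a not divisible by p,
# so a^(p-2) is the multiplicative inverse of a.
for a in range(1, p):
    inv = pow(a, p - 2, p)
    assert (a * inv) % p == 1

# For a non-prime modulus, inverses need not exist: 2 has no inverse mod 6.
assert all((2 * b) % 6 != 1 for b in range(6))
```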
In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore in 1893. In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. The first clear definition of an abstract field was due to Heinrich Martin Weber in 1893. It was missing the associative law for multiplication, but covered finite fields and the fields of algebraic number theory and algebraic geometry. In 1910 Steinitz synthesized the knowledge of abstract field theory accumulated so far. He axiomatically defined fields with the modern definition, classified them by their characteristic, and proved many theorems commonly seen today. === Other major areas === Solving systems of linear equations, which led to linear algebra. === Modern algebra === The end of the 19th and the beginning of the 20th century saw a shift in the methodology of mathematics. Abstract algebra emerged around the start of the 20th century, under the name modern algebra. Its study was part of the drive for more intellectual rigor in mathematics. Initially, the assumptions in classical algebra, on which the whole of mathematics (and major parts of the natural sciences) depend, took the form of axiomatic systems. No longer satisfied with establishing properties of concrete objects, mathematicians started to turn their attention to general theory. Formal definitions of certain algebraic structures began to emerge in the 19th century. For example, results about various groups of permutations came to be seen as instances of general theorems that concern a general notion of an abstract group. Questions of structure and classification of various mathematical objects came to the forefront. These processes were occurring throughout all of mathematics but became especially pronounced in algebra. Formal definitions through primitive operations and axioms were proposed for many basic algebraic structures, such as groups, rings, and fields. Hence such things as group theory and ring theory took their places in pure mathematics. The algebraic investigations of general fields by Ernst Steinitz and of commutative and then general rings by David Hilbert, Emil Artin and Emmy Noether, building on the work of Ernst Kummer, Leopold Kronecker and Richard Dedekind, who had considered ideals in commutative rings, and of Georg Frobenius and Issai Schur, concerning representation theory of groups, came to define abstract algebra. These developments of the last quarter of the 19th century and the first quarter of the 20th century were systematically exposed in Bartel van der Waerden's Moderne Algebra, the two-volume monograph published in 1930–1931 that reoriented the idea of algebra from the theory of equations to the theory of algebraic structures. == Basic concepts == By abstracting away various amounts of detail, mathematicians have defined various algebraic structures that are used in many areas of mathematics. For instance, almost all systems studied are sets, to which the theorems of set theory apply. Those sets that have a certain binary operation defined on them form magmas, to which the concepts concerning magmas, as well as those concerning sets, apply.
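To make this layering concrete, the following Python sketch (ad hoc helper names, not from any library) tests a finite binary operation for associativity, an identity, and inverses, which are exactly the additional constraints discussed next:

```python
def is_associative(S, op):
    return all(op(a, op(b, c)) == op(op(a, b), c) for a in S for b in S for c in S)

def identity_of(S, op):
    for e in S:
        if all(op(e, a) == a == op(a, e) for a in S):
            return e
    return None

def has_inverses(S, op, e):
    return all(any(op(a, b) == e == op(b, a) for b in S) for a in S)

# Addition modulo 4 on {0, 1, 2, 3}: a magma that is in fact a group.
S = range(4)
op = lambda a, b: (a + b) % 4
e = identity_of(S, op)
print(is_associative(S, op), e, has_inverses(S, op, e))  # True 0 True
```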
We can add additional constraints on the algebraic structure, such as associativity (to form semigroups), identity and inverses (to form groups), and other more complex structures. With additional structure, more theorems can be proved, but the generality is reduced. The "hierarchy" of algebraic objects (in terms of generality) creates a hierarchy of the corresponding theories: for instance, the theorems of group theory may be used when studying rings (algebraic objects that have two binary operations with certain axioms) since a ring is a group under one of its operations. In general there is a balance between the amount of generality and the richness of the theory: more general structures usually have fewer nontrivial theorems and fewer applications. Examples of algebraic structures with a single binary operation are magmas, quasigroups, semigroups, monoids, and groups. Examples involving several operations include rings, fields, modules, vector spaces, lattices, and algebras over a field. == Branches of abstract algebra == === Group theory === A group is a set G {\displaystyle G} together with a "group product", a binary operation ⋅ : G × G → G {\displaystyle \cdot :G\times G\rightarrow G} . The group satisfies the following defining axioms (cf. Group (mathematics) § Definition): Identity: there exists an element e {\displaystyle e} such that, for each element a {\displaystyle a} in G {\displaystyle G} , it holds that e ⋅ a = a ⋅ e = a {\displaystyle e\cdot a=a\cdot e=a} . Inverse: for each element a {\displaystyle a} of G {\displaystyle G} , there exists an element b {\displaystyle b} so that a ⋅ b = b ⋅ a = e {\displaystyle a\cdot b=b\cdot a=e} . Associativity: for each triplet of elements a , b , c {\displaystyle a,b,c} in G {\displaystyle G} , it holds that ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ) {\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)} . === Ring theory === A ring is a set R {\displaystyle R} with two binary operations, addition: ( x , y ) ↦ x + y , {\displaystyle (x,y)\mapsto x+y,} and multiplication: ( x , y ) ↦ x y {\displaystyle (x,y)\mapsto xy} satisfying the following axioms. R {\displaystyle R} is a commutative group under addition. R {\displaystyle R} is a monoid under multiplication. Multiplication is distributive with respect to addition. == Applications == Because of its generality, abstract algebra is used in many fields of mathematics and science. For instance, algebraic topology uses algebraic objects to study topologies. The Poincaré conjecture, proved in 2003, asserts that the fundamental group of a manifold, which encodes information about connectedness, can be used to determine whether a manifold is a sphere or not. Algebraic number theory studies various number rings that generalize the set of integers. Using tools of algebraic number theory, Andrew Wiles proved Fermat's Last Theorem. In physics, groups are used to represent symmetry operations, and the usage of group theory can simplify differential equations. In gauge theory, the requirement of local symmetry can be used to deduce the equations describing a system. The groups that describe those symmetries are Lie groups, and the study of Lie groups and Lie algebras reveals much about the physical system; for instance, the number of force carriers in a theory is equal to the dimension of the Lie algebra, and these bosons interact with the force they mediate if the Lie algebra is nonabelian. == See also == Coding theory Group theory List of publications in abstract algebra == References == === Bibliography === == Further reading == Allenby, R. B. J. T.
(1991), Rings, Fields and Groups, Butterworth-Heinemann, ISBN 978-0-340-54440-2 Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1 Burris, Stanley N.; Sankappanavar, H. P. (1999) [1981], A Course in Universal Algebra Gilbert, Jimmie; Gilbert, Linda (2005), Elements of Modern Algebra, Thomson Brooks/Cole, ISBN 978-0-534-40264-8 Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 Sethuraman, B. A. (1996), Rings, Fields, Vector Spaces, and Group Theory: An Introduction to Abstract Algebra via Geometric Constructibility, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94848-5 Whitehead, C. (2002), Guide to Abstract Algebra (2nd ed.), Houndmills: Palgrave, ISBN 978-0-333-79447-0 Nicholson, W. Keith (2012), Introduction to Abstract Algebra (4th ed.), John Wiley & Sons, ISBN 978-1-118-13535-8 Durbin, John R. (1992), Modern Algebra: An Introduction, John Wiley & Sons == External links == Charles C. Pinter (1990) [1982], A Book of Abstract Algebra (2nd ed.), from University of Maryland
Wikipedia/Abstract_algebra
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. == Elementary examples == === Trivial example === The system of one equation in one unknown 2 x = 4 {\displaystyle 2x=4} has the solution x = 2. {\displaystyle x=2.} However, most interesting linear systems have at least two equations. === Simple nontrivial example === The simplest kind of nontrivial linear system involves two equations and two variables: 2 x + 3 y = 6 4 x + 9 y = 15 . {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} : x = 3 − 3 2 y . {\displaystyle x=3-{\frac {3}{2}}y.} Now substitute this expression for x into the bottom equation: 4 ( 3 − 3 2 y ) + 9 y = 15. {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) 
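The same substitution steps can be mirrored with exact rational arithmetic. A minimal Python sketch of the two-equation example above (illustrative, using the standard fractions module):

```python
from fractions import Fraction

# 2x + 3y = 6  and  4x + 9y = 15, solved by substitution as above.
# From the first equation: x = 3 - (3/2) y.
x_of_y = lambda y: Fraction(3) - Fraction(3, 2) * y

# Substituting into 4x + 9y = 15 gives 12 + 3y = 15, so y = (15 - 12)/3.
y = Fraction(15 - 12, 3)
x = x_of_y(y)
print(x, y)                                   # 3/2 1
assert 2*x + 3*y == 6 and 4*x + 9*y == 15     # both equations are satisfied
```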
== General form == A general system of m linear equations with n unknowns and coefficients can be written as { a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m , {\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. === Vector equation === One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. x 1 [ a 11 a 21 ⋮ a m 1 ] + x 2 [ a 12 a 22 ⋮ a m 2 ] + ⋯ + x n [ a 1 n a 2 n ⋮ a m n ] = [ b 1 b 2 ⋮ b m ] {\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors, a solution is guaranteed regardless of the right-hand side (RHS); otherwise it is not guaranteed. === Matrix equation === The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] . {\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix. == Solution set == A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: The system has infinitely many solutions. The system has a unique solution. The system has no solution.
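The matrix equation just described is the form consumed directly by numerical libraries. A short sketch, assuming NumPy is available, encodes the three-equation system from the introduction:

```python
import numpy as np

# The introductory system: 3x + 2y - z = 1, 2x - 2y + 4z = -2, -x + y/2 - z = 0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

x = np.linalg.solve(A, b)    # unique solution because A is invertible
print(x)                     # [ 1. -2. -2.]
assert np.allclose(A @ x, b)
```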
=== Geometric interpretation === For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n. === General behavior === In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system. In general, a system with the same number of equations and unknowns has a single unique solution. In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations. The following pictures illustrate this trichotomy in the case of two variables: The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns. == Properties == === Independence === The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations 3 x + 2 y = 6 and 6 x + 4 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12} are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.
For a more complicated example, the equations x − 2 y = − 1 3 x + 5 y = 8 4 x + 3 y = 7 {\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}} are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. === Consistency === A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, that may always be rewritten as the statement 0 = 1. For example, the equations 3 x + 2 y = 6 and 3 x + 2 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12} are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations x + y = 1 2 x + y = 1 3 x + 2 y = 3 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}} are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. === Equivalence === Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set. == Solving a linear system == There are several algorithms for solving a system of linear equations. === Describing the solution === When the solution set is finite, it is reduced to a single element. 
In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}} The solution set to this system can be described by the following equations: x = − 7 z − 1 and y = 3 z + 2 . {\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}} Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane, or a higher-dimensional set. Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: y = − 3 7 x + 11 7 and z = − 1 7 x − 1 7 . {\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}} Here x is the free variable, and y and z are dependent. === Elimination of variables === The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: In the first equation, solve for one of the variables in terms of the others. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown. Repeat steps 1 and 2 until the system is reduced to a single linear equation. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: { x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}} Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equation yields { y = 3 z + 2 y = 7 2 z + 1 {\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}} Since the LHS of both of these equations equals y, we can equate their RHS. This gives: 3 z + 2 = 7 2 z + 1 ⇒ z = 2 {\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}} Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} .
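The elimination procedure translates directly into code. The following Python sketch (illustrative, not a library routine) implements it in row-operation form, which the next subsection shows is equivalent to substitution; it assumes the system has a unique solution and uses exact fractions:

```python
from fractions import Fraction

def solve_by_elimination(A, b):
    """Gaussian elimination with back-substitution, assuming a unique solution."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        # Pivot: swap in a row whose leading coefficient is nonzero.
        pivot = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[pivot] = M[pivot], M[i]
        # Eliminate the variable x_i from the rows below.
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            M[r] = [a - factor * p for a, p in zip(M[r], M[i])]
    # Back-substitute from the last equation upward.
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[1, 3, -2], [3, 5, 6], [2, 4, 3]]
b = [5, 7, 8]
print(solve_by_elimination(A, b))   # [Fraction(-15, 1), Fraction(8, 1), Fraction(2, 1)]
```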
=== Row reduction === In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] . {\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}} This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above: [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 0 1 2 ] ∼ [ 1 3 − 2 5 0 1 0 8 0 0 1 2 ] ∼ [ 1 3 0 9 0 1 0 8 0 0 1 2 ] ∼ [ 1 0 0 − 15 0 1 0 8 0 0 1 2 ] . {\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}} The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. === Cramer's rule === Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}} is given by x = | 5 3 − 2 7 5 6 8 4 3 | | 1 3 − 2 3 5 6 2 4 3 | , y = | 1 5 − 2 3 7 6 2 8 3 | | 1 3 − 2 3 5 6 2 4 3 | , z = | 1 3 5 3 5 7 2 4 8 | | 1 3 − 2 3 5 6 2 4 3 | . {\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.} For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. 
(Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. === Matrix solution === If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by x = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} } where A − 1 {\displaystyle A^{-1}} is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A + {\displaystyle A^{+}} , as follows: x = A + b + ( I − A + A ) w {\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} } where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to x = A − 1 b + ( I − A − 1 A ) w = A − 1 b + ( I − I ) w = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} } as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation. === Other methods === While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. 
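A sketch of the factor-once, solve-many workflow just described, assuming SciPy is available: the LU factorization of A is computed a single time and then reused for several right-hand sides b:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])

lu, piv = lu_factor(A)   # O(n^3) factorization with partial pivoting, done once

# Each additional solve only costs O(n^2) once the factorization is in hand.
for b in (np.array([5.0, 7.0, 8.0]), np.array([1.0, 0.0, 0.0])):
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)

print(lu_solve((lu, piv), np.array([5.0, 7.0, 8.0])))  # [-15.   8.   2.]
```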
Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. One example of an iterative method is the Jacobi method, where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})} When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. There is also a quantum algorithm for linear systems of equations. == Homogeneous systems == A system of linear equations is homogeneous if all of the constant terms are zero: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0. {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}} A homogeneous system is equivalent to a matrix equation of the form A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. === Homogeneous solution set === Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties: If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. === Relation to nonhomogeneous systems === There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: A x = b and A x = 0 . 
{\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .} Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as { p + v : v is any solution to A x = 0 } . {\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.} Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. == See also == Arrangement of hyperplanes Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations Coates graph – A mathematical graph for solution of linear equations LAPACK – Software library for numerical linear algebra Linear equation over a ring Linear least squares – Least squares approximation of linear functions to data Matrix decomposition – Representation of a matrix as a product Matrix splitting – Representation of a matrix as a sum NAG Numerical Library – Software library of numerical-analysis algorithms Rybicki Press algorithm – An algorithm for inverting a matrix Simultaneous equations – Set of equations to be solved together == References == == Bibliography == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993 Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3 Whitelaw, T. A. (1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5 == Further reading == Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0. Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7. Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001. Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3. Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International. Leon, Steven J. (2006).
Linear Algebra With Applications (7th ed.). Pearson Prentice Hall. Strang, Gilbert (2005). Linear Algebra and Its Applications. Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679. == External links == Media related to System of linear equations at Wikimedia Commons
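As a concrete illustration of the Jacobi method described under Other methods above, here is a minimal Python sketch; the test matrix is a made-up strictly diagonally dominant example, a standard sufficient condition for the iteration to converge.

import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    # x_{k+1} = D^{-1} (b - (L+U) x_k), splitting A into its diagonal
    # component D and its off-diagonal component L+U.
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < tol:   # successive guesses agree
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
print(jacobi(A, b))   # agrees with np.linalg.solve(A, b) to within tol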
Wikipedia/System_of_linear_equations
In mathematics, an inequation is a statement that either an inequality (relations "greater than" and "less than", < and >) or a relation "not equal to" (≠) holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between the two sides, indicating the specific inequality relation. Some examples of inequations are: a < b {\displaystyle a<b} x + y + z ≤ 1 {\displaystyle x+y+z\leq 1} n > 1 {\displaystyle n>1} x ≠ 0 {\displaystyle x\neq 0} In some cases, the term "inequation" has a more restricted definition, reserved only for statements whose inequality relation is "not equal to" (or "distinct"). == Chains of inequations == A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain 0 ≤ a < b ≤ 1 {\displaystyle 0\leq a<b\leq 1} is shorthand for 0 ≤ a a n d a < b a n d b ≤ 1 {\displaystyle 0\leq a~~\mathrm {and} ~~a<b~~\mathrm {and} ~~b\leq 1} which also implies that 0 < b {\displaystyle 0<b} and a < 1 {\displaystyle a<1} . In rare cases, chains without such implications about distant terms are used. For example, i ≠ 0 ≠ j {\displaystyle i\neq 0\neq j} is shorthand for i ≠ 0 a n d 0 ≠ j {\displaystyle i\neq 0~~\mathrm {and} ~~0\neq j} , which does not imply i ≠ j . {\displaystyle i\neq j.} Similarly, a < b > c {\displaystyle a<b>c} is shorthand for a < b a n d b > c {\displaystyle a<b~~\mathrm {and} ~~b>c} , which does not imply any order of a {\displaystyle a} and c {\displaystyle c} . == Solving inequations == Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more generally, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions that, when substituted for the unknowns, make the inequations true propositions. Often, an additional objective expression (i.e., an optimization equation) is given, that is to be minimized or maximized by an optimal solution. For example, 0 ≤ x 1 ≤ 690 − 1.5 ⋅ x 2 ∧ 0 ≤ x 2 ≤ 530 − x 1 ∧ x 1 ≤ 640 − 0.75 ⋅ x 2 {\displaystyle 0\leq x_{1}\leq 690-1.5\cdot x_{2}\;\land \;0\leq x_{2}\leq 530-x_{1}\;\land \;x_{1}\leq 640-0.75\cdot x_{2}} is a conjunction of inequations, partly written as chains (where ∧ {\displaystyle \land } can be read as "and"); the set of its solutions is shown in blue in the picture (the red, green, and orange lines corresponding to the 1st, 2nd, and 3rd conjunct, respectively). For a larger example, see Linear programming#Example. Computer support in solving inequations is described in constraint programming; in particular, the simplex algorithm finds optimal solutions of linear inequations. The programming language Prolog III also supports solving algorithms for particular classes of inequalities (and other relations) as a basic language feature. For more, see constraint logic programming. == Combinations of meanings == Usually because of the properties of certain functions (like square roots), some inequations are equivalent to a combination of multiple others.
For example, the inequation √ f ( x ) < g ( x ) {\displaystyle \textstyle {\sqrt {f(x)}}<g(x)} is logically equivalent to the following three inequations combined: f ( x ) ≥ 0 {\displaystyle f(x)\geq 0} g ( x ) > 0 {\displaystyle g(x)>0} f ( x ) < ( g ( x ) )² {\displaystyle f(x)<\left(g(x)\right)^{2}} == See also == Apartness relation — a form of inequality in constructive mathematics Equation Equals sign Inequality (mathematics) Relational operator == References ==
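The equivalence just stated can be spot-checked numerically; in the sketch below the particular f and g are arbitrary choices, and the guard f(x) ≥ 0 on the left-hand side reflects that the original inequation is only satisfied where the square root is defined.

import math

f = lambda x: x**2 - 4   # arbitrary sample functions
g = lambda x: x + 1

for x in [-3.0, -0.5, 0.0, 1.5, 2.0, 3.0, 10.0]:
    lhs = f(x) >= 0 and math.sqrt(f(x)) < g(x)
    rhs = f(x) >= 0 and g(x) > 0 and f(x) < g(x) ** 2
    assert lhs == rhs   # the three combined inequations match the original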
Wikipedia/Inequation
In mathematics, a constant function is a function whose (output) value is the same for every input value. == Basic properties == As a real-valued function of a real-valued argument, a constant function has the general form y(x) = c or just y = c. For example, the function y(x) = 4 is the specific constant function where the output value is c = 4. The domain of this function is the set of all real numbers. The image of this function is the singleton set {4}. The independent variable x does not appear on the right side of the function expression and so its value is "vacuously substituted"; namely y(0) = 4, y(−2.7) = 4, y(π) = 4, and so on. No matter what value of x is input, the output is 4. The graph of the constant function y = c is a horizontal line in the plane that passes through the point (0, c). In the context of a polynomial in one variable x, the constant function is called a non-zero constant function because it is a polynomial of degree 0, and its general form is f(x) = c, where c is nonzero. This function has no intersection point with the x-axis, meaning it has no root (zero). On the other hand, the polynomial f(x) = 0 is the identically zero function. It is the (trivial) constant function and every x is a root. Its graph is the x-axis in the plane. The graph of a constant function is symmetric with respect to the y-axis, so a constant function is an even function. In the context where it is defined, the derivative of a function is a measure of the rate of change of function values with respect to change in input values. Because a constant function does not change, its derivative is 0. This is often written: ( x ↦ c ) ′ = 0 {\displaystyle (x\mapsto c)'=0} . The converse is also true. Namely, if y′(x) = 0 for all real numbers x, then y is a constant function. For example, given the constant function y ( x ) = − √2 {\displaystyle y(x)=-{\sqrt {2}}} , the derivative of y is the identically zero function y ′ ( x ) = ( x ↦ − √2 ) ′ = 0 {\displaystyle y'(x)=\left(x\mapsto -{\sqrt {2}}\right)'=0} . == Other properties == For functions between preordered sets, constant functions are both order-preserving and order-reversing; conversely, if f is both order-preserving and order-reversing, and if the domain of f is a lattice, then f must be constant. Every constant function whose domain and codomain are the same set X is a left zero of the full transformation monoid on X, which implies that it is also idempotent. It has zero slope or gradient. Every constant function between topological spaces is continuous. A constant function factors through the one-point set, the terminal object in the category of sets. This observation is instrumental for F. William Lawvere's axiomatization of set theory, the Elementary Theory of the Category of Sets (ETCS). For any non-empty X, every set Y is isomorphic to the set of constant functions in X → Y {\displaystyle X\to Y} . For any X and each element y in Y, there is a unique function y ~ : X → Y {\displaystyle {\tilde {y}}:X\to Y} such that y ~ ( x ) = y {\displaystyle {\tilde {y}}(x)=y} for all x ∈ X {\displaystyle x\in X} . Conversely, if a function f : X → Y {\displaystyle f:X\to Y} satisfies f ( x ) = f ( x ′ ) {\displaystyle f(x)=f(x')} for all x , x ′ ∈ X {\displaystyle x,x'\in X} , then f {\displaystyle f} is by definition a constant function. As a corollary, the one-point set is a generator in the category of sets.
Every set X {\displaystyle X} is canonically isomorphic to the function set X 1 {\displaystyle X^{1}} , or hom set hom ( 1 , X ) {\displaystyle \operatorname {hom} (1,X)} in the category of sets, where 1 is the one-point set. Because of this, and the adjunction between Cartesian products and hom in the category of sets (so there is a canonical isomorphism between functions of two variables and functions of one variable valued in functions of another (single) variable, hom ( X × Y , Z ) ≅ hom ( X , hom ( Y , Z ) ) {\displaystyle \operatorname {hom} (X\times Y,Z)\cong \operatorname {hom} (X,\operatorname {hom} (Y,Z))} ), the category of sets is a closed monoidal category with the Cartesian product of sets as tensor product and the one-point set as tensor unit. In the isomorphisms λ : 1 × X ≅ X ≅ X × 1 : ρ {\displaystyle \lambda :1\times X\cong X\cong X\times 1:\rho } natural in X, the left and right unitors are the projections p 1 {\displaystyle p_{1}} and p 2 {\displaystyle p_{2}} taking the ordered pairs ( ∗ , x ) {\displaystyle (*,x)} and ( x , ∗ ) {\displaystyle (x,*)} respectively to the element x {\displaystyle x} , where ∗ {\displaystyle *} is the unique point in the one-point set. A function on a connected set is locally constant if and only if it is constant. == References == Herrlich, Horst and Strecker, George E., Category Theory, Heldermann Verlag (2007). == External links == Weisstein, Eric W. "Constant Function". MathWorld. "Constant function". PlanetMath.
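The derivative property described above is easy to confirm symbolically; a short SymPy check (the choice of constant is arbitrary):

import sympy as sp

x = sp.symbols('x')
assert sp.diff(-sp.sqrt(2), x) == 0        # derivative of a constant is 0

# Conversely, y'(x) = 0 forces y to be constant: dsolve returns y(x) = C1.
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(x).diff(x), 0), y(x)))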
Wikipedia/Constant_function
In mathematics, an algebraic structure or algebraic system consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities (known as axioms) that these operations must satisfy. An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors). Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms). In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring. The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category. == Introduction == Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, a + (b + c) = (a + b) + c and a(bc) = (ab)c are associative laws, and a + b = b + a and ab = ba are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law. Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem. In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses. == Common axioms == === Equational axioms === An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples. 
Commutativity An operation ∗ {\displaystyle *} is commutative if x ∗ y = y ∗ x {\displaystyle x*y=y*x} for every x and y in the algebraic structure. Associativity An operation ∗ {\displaystyle *} is associative if ( x ∗ y ) ∗ z = x ∗ ( y ∗ z ) {\displaystyle (x*y)*z=x*(y*z)} for every x, y and z in the algebraic structure. Left distributivity An operation ∗ {\displaystyle *} is left-distributive with respect to another operation + {\displaystyle +} if x ∗ ( y + z ) = ( x ∗ y ) + ( x ∗ z ) {\displaystyle x*(y+z)=(x*y)+(x*z)} for every x, y and z in the algebraic structure (the second operation is denoted here as + {\displaystyle +} , because the second operation is addition in many common examples). Right distributivity An operation ∗ {\displaystyle *} is right-distributive with respect to another operation + {\displaystyle +} if ( y + z ) ∗ x = ( y ∗ x ) + ( z ∗ x ) {\displaystyle (y+z)*x=(y*x)+(z*x)} for every x, y and z in the algebraic structure. Distributivity An operation ∗ {\displaystyle *} is distributive with respect to another operation + {\displaystyle +} if it is both left-distributive and right-distributive. If the operation ∗ {\displaystyle *} is commutative, left and right distributivity are both equivalent to distributivity. === Existential axioms === Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations, and replacing the existential clause by an identity involving the new operation. More precisely, let us consider an axiom of the form "for all X there is y such that f ( X , y ) = g ( X , y ) {\displaystyle f(X,y)=g(X,y)} ", where X is a k-tuple of variables. Choosing a specific value of y for each value of X defines a function φ : X ↦ y , {\displaystyle \varphi :X\mapsto y,} which can be viewed as an operation of arity k, and the axiom becomes the identity f ( X , φ ( X ) ) = g ( X , φ ( X ) ) . {\displaystyle f(X,\varphi (X))=g(X,\varphi (X)).} The introduction of such auxiliary operation complicates slightly the statement of an axiom, but has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied consists generally of the definition of the auxiliary function, completed with straightforward verifications. Also, when computing in an algebraic structure, one generally uses explicitly the auxiliary operations. For example, in the case of numbers, the additive inverse is provided by the unary minus operation x ↦ − x . {\displaystyle x\mapsto -x.} Also, in universal algebra, a variety is a class of algebraic structures that share the same operations, and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety. Here are some of the most common existential axioms. Identity element A binary operation ∗ {\displaystyle *} has an identity element if there is an element e such that x ∗ e = x and e ∗ x = x {\displaystyle x*e=x\quad {\text{and}}\quad e*x=x} for all x in the structure. Here, the auxiliary operation is the operation of arity zero that has e as its result. Inverse element Given a binary operation ∗ {\displaystyle *} that has an identity element e, an element x is invertible if it has an inverse element, that is, if there exists an element inv ⁡ ( x ) {\displaystyle \operatorname {inv} (x)} such that inv ⁡ ( x ) ∗ x = e and x ∗ inv ⁡ ( x ) = e . 
{\displaystyle \operatorname {inv} (x)*x=e\quad {\text{and}}\quad x*\operatorname {inv} (x)=e.} For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible. === Non-equational axioms === The axioms of an algebraic structure can be any first-order formula, that is, a formula involving logical connectives (such as "and", "or" and "not"), and logical quantifiers ( ∀ , ∃ {\displaystyle \forall ,\exists } ) that apply to elements (not to subsets) of the structure. A typical such axiom is inversion in fields. This axiom cannot be reduced to axioms of preceding types. (It follows that fields do not form a variety in the sense of universal algebra.) It can be stated: "Every nonzero element of a field is invertible;" or, equivalently: the structure has a unary operation inv such that ∀ x , x = 0 or x ⋅ inv ⁡ ( x ) = 1. {\displaystyle \forall x,\quad x=0\quad {\text{or}}\quad x\cdot \operatorname {inv} (x)=1.} The operation inv can be viewed either as a partial operation that is not defined for x = 0; or as an ordinary function whose value at 0 is arbitrary and must not be used. == Common algebraic structures == === One set with operations === Simple structures: no binary operation: Set: a degenerate algebraic structure S having no operations. Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers. Semigroup: a set with an associative binary operation. Monoid: a semigroup with an identity element. Group: a monoid with a unary operation (inverse), giving rise to inverse elements. Abelian group: a group whose binary operation is commutative. Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition. Semiring: a ringoid such that S is a monoid under each operation, with addition typically assumed to be commutative. Ring: a semiring whose additive monoid is an abelian group. Division ring: a nontrivial ring in which division by nonzero elements is defined. Commutative ring: a ring in which the multiplication operation is commutative. Field: a commutative division ring (i.e. a commutative ring which contains a multiplicative inverse for every nonzero element). Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law. Complete lattice: a lattice in which arbitrary meets and joins exist. Bounded lattice: a lattice with a greatest element and least element. Distributive lattice: a lattice in which each of meet and join distributes over the other. A power set under union and intersection forms a distributive lattice. Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation. === Two sets with operations === Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M → M, which satisfies several axioms. Counting the ring operations, these systems have at least three operations. Vector space: a module where the ring R is a field or, in some contexts, a division ring. Algebra over a field: a module over a field, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to multiplication. Inner product space: a field F and vector space V with a definite bilinear form V × V → F.
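Because the axioms above are quantified equations over the carrier set, they can be checked exhaustively on a small finite structure. The following Python sketch verifies that the integers modulo n form an abelian group under addition (the function and variable names here are illustrative, not a standard API):

from itertools import product

def is_abelian_group(elements, op, e, inv):
    # Closure, associativity, identity, inverses, commutativity.
    E = set(elements)
    return (all(op(a, b) in E for a, b in product(E, repeat=2))
            and all(op(op(a, b), c) == op(a, op(b, c))
                    for a, b, c in product(E, repeat=3))
            and all(op(a, e) == a == op(e, a) for a in E)
            and all(op(a, inv(a)) == e == op(inv(a), a) for a in E)
            and all(op(a, b) == op(b, a) for a, b in product(E, repeat=2)))

n = 6
assert is_abelian_group(range(n), lambda a, b: (a + b) % n,
                        0, lambda a: (-a) % n)

Note how the existential axioms enter only through the auxiliary operations e and inv, exactly as described under Existential axioms above.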
== Hybrid structures == Algebraic structures can also coexist with added structure of non-algebraic nature, such as partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure. Topological group: a group with a topology compatible with the group operation. Lie group: a topological group with a compatible smooth manifold structure. Ordered groups, ordered rings and ordered fields: each type of structure with a compatible partial order. Archimedean group: a linearly ordered group for which the Archimedean property holds. Topological vector space: a vector space equipped with a topology compatible with the vector space operations. Normed vector space: a vector space with a compatible norm. If such a space is complete (as a metric space) then it is called a Banach space. Hilbert space: an inner product space over the real or complex numbers whose inner product gives rise to a Banach space structure. Vertex operator algebra Von Neumann algebra: a *-algebra of operators on a Hilbert space equipped with the weak operator topology. == Universal algebra == Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry). Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generates a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators: the multiplication operator m, taking two arguments, and the inverse operator i, taking one argument, and the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc., the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x, m(y,e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x,e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group. Some structures do not form varieties, because either: It is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity; Structures such as fields have some axioms that hold only for nonzero members of S. For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.
Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because ( 1 , 0 ) ⋅ ( 0 , 1 ) = ( 0 , 0 ) {\displaystyle (1,0)\cdot (0,1)=(0,0)} , but fields do not have zero divisors. == Category theory == Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure. There are various concepts in category theory that try to capture the algebraic character of a context, for instance algebraic category essentially algebraic category presentable category locally presentable category monadic functors and categories universal property. == Different meanings of "structure" == In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring structure on the set A {\displaystyle A} ", means that we have defined ring operations on the set A {\displaystyle A} . For another example, the group ( Z , + ) {\displaystyle (\mathbb {Z} ,+)} can be seen as a set Z {\displaystyle \mathbb {Z} } that is equipped with an algebraic structure, namely the operation + {\displaystyle +} . == See also == Free object Mathematical structure Signature (logic) Structure (mathematical logic) == Notes == == References == Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (2nd ed.), AMS Chelsea, ISBN 978-0-8218-1646-2 Michel, Anthony N.; Herget, Charles J. (1993), Applied Algebra and Functional Analysis, New York: Dover Publications, ISBN 978-0-486-67598-5 Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag, ISBN 978-3-540-90578-3 Category theory Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2 Taylor, Paul (1999), Practical foundations of mathematics, Cambridge University Press, ISBN 978-0-521-63107-5 == External links == Jipsen's algebra structures. Includes many structures not mentioned here. Mathworld page on abstract algebra. Stanford Encyclopedia of Philosophy: Algebra by Vaughan Pratt.
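The term-algebra construction above can be imitated concretely: represent terms over the group signature {m, i, e} as nested tuples and interpret them in a specific group, so that terms identified by the axioms evaluate to the same element of the quotient. A small Python sketch (the tuple encoding is an illustrative choice of mine):

# A term is 'e', a variable name, or a tuple ('i', t) or ('m', t1, t2).
t = ('m', ('i', 'x'), ('m', 'x', ('m', 'y', 'e')))   # example from the text

def evaluate(term, env, n=6):
    # Interpret the term in the group (Z/nZ, +): m is addition, i is negation.
    if term == 'e':
        return 0
    if isinstance(term, str):
        return env[term]
    if term[0] == 'i':
        return (-evaluate(term[1], env, n)) % n
    return (evaluate(term[1], env, n) + evaluate(term[2], env, n)) % n

env = {'x': 2, 'y': 5}
# The axiom m(x, i(x)) = e holds after interpretation, so both terms land
# in the same equivalence class of the quotient algebra:
assert evaluate(('m', 'x', ('i', 'x')), env) == evaluate('e', env)
assert evaluate(t, env) == 5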
Wikipedia/Algebraic_structures
A non-associative algebra (or distributive algebra) is an algebra over a field where the binary multiplication operation is not assumed to be associative. That is, an algebraic structure A is a non-associative algebra over a field K if it is a vector space over K and is equipped with a K-bilinear binary multiplication operation A × A → A which may or may not be associative. Examples include Lie algebras, Jordan algebras, the octonions, and three-dimensional Euclidean space equipped with the cross product operation. Since it is not assumed that the multiplication is associative, using parentheses to indicate the order of multiplications is necessary. For example, the expressions (ab)(cd), (a(bc))d and a(b(cd)) may all yield different answers. While this use of non-associative means that associativity is not assumed, it does not mean that associativity is disallowed. In other words, "non-associative" means "not necessarily associative", just as "noncommutative" means "not necessarily commutative" for noncommutative rings. An algebra is unital or unitary if it has an identity element e with ex = x = xe for all x in the algebra. For example, the octonions are unital, but Lie algebras never are. The nonassociative algebra structure of A may be studied by associating it with other associative algebras which are subalgebras of the full algebra of K-endomorphisms of A as a K-vector space. Two such are the derivation algebra and the (associative) enveloping algebra, the latter being in a sense "the smallest associative algebra containing A". More generally, some authors consider the concept of a non-associative algebra over a commutative ring R: An R-module equipped with an R-bilinear binary multiplication operation. If a structure obeys all of the ring axioms apart from associativity (for example, any R-algebra), then it is naturally a Z {\displaystyle \mathbb {Z} } -algebra, so some authors refer to non-associative Z {\displaystyle \mathbb {Z} } -algebras as non-associative rings. == Algebras satisfying identities == Ring-like structures with two binary operations and no other restrictions are a broad class, one which is too general to study. For this reason, the best-known kinds of non-associative algebras satisfy identities, or properties, which simplify multiplication somewhat. These include the following ones. === Usual properties === Let x, y and z denote arbitrary elements of the algebra A over the field K. Let powers with positive (non-zero) integer exponents be recursively defined by x¹ ≝ x and either xⁿ⁺¹ ≝ xⁿx (right powers) or xⁿ⁺¹ ≝ xxⁿ (left powers) depending on authors. Unital: there exists an element e so that ex = x = xe; in that case we can define x⁰ ≝ e. Associative: (xy)z = x(yz). Commutative: xy = yx. Anticommutative: xy = −yx. Jacobi identity: (xy)z + (yz)x + (zx)y = 0 or x(yz) + y(zx) + z(xy) = 0 depending on authors. Jordan identity: (x²y)x = x²(yx) or (xy)x² = x(yx²) depending on authors. Alternative: (xx)y = x(xy) (left alternative) and (yx)x = y(xx) (right alternative). Flexible: (xy)x = x(yx). nth power associative with n ≥ 2: xⁿ⁻ᵏxᵏ = xⁿ for all integers k so that 0 < k < n. Third power associative: x²x = xx². Fourth power associative: x³x = x²x² = xx³ (compare with fourth power commutative below). Power associative: the subalgebra generated by any element is associative, i.e., nth power associative for all n ≥ 2. nth power commutative with n ≥ 2: xⁿ⁻ᵏxᵏ = xᵏxⁿ⁻ᵏ for all integers k so that 0 < k < n. Third power commutative: x²x = xx².
Fourth power commutative: x³x = xx³ (compare with fourth power associative above). Power commutative: the subalgebra generated by any element is commutative, i.e., nth power commutative for all n ≥ 2. Nilpotent of index n ≥ 2: the product of any n elements, in any association, vanishes, but not for some n−1 elements: x₁x₂…xₙ = 0 and there exist n−1 elements so that y₁y₂…yₙ₋₁ ≠ 0 for a specific association. Nil of index n ≥ 2: power associative and xⁿ = 0 and there exists an element y so that yⁿ⁻¹ ≠ 0. === Relations between properties === For K of any characteristic: Associative implies alternative. Any two out of the three properties left alternative, right alternative, and flexible, imply the third one. Thus, alternative implies flexible. Alternative implies Jordan identity. Commutative implies flexible. Anticommutative implies flexible. Alternative implies power associative. Flexible implies third power associative. Second power associative and second power commutative are always true. Third power associative and third power commutative are equivalent. nth power associative implies nth power commutative. Nil of index 2 implies anticommutative. Nil of index 2 implies Jordan identity. Nilpotent of index 3 implies Jacobi identity. Nilpotent of index n implies nil of index N with 2 ≤ N ≤ n. Unital and nil of index n are incompatible. If K ≠ GF(2) or dim(A) ≤ 3: Jordan identity and commutative together imply power associative. If char(K) ≠ 2: Right alternative implies power associative. Similarly, left alternative implies power associative. Unital and Jordan identity together imply flexible. Jordan identity and flexible together imply power associative. Commutative and anticommutative together imply nilpotent of index 2. Anticommutative implies nil of index 2. Unital and anticommutative are incompatible. If char(K) ≠ 3: Unital and Jacobi identity are incompatible. If char(K) ∉ {2,3,5}: Commutative and x⁴ = x²x² (one of the two identities defining fourth power associative) together imply power associative. If char(K) = 0: Third power associative and x⁴ = x²x² (one of the two identities defining fourth power associative) together imply power associative. If char(K) = 2: Commutative and anticommutative are equivalent. === Associator === The associator on A is the K-multilinear map [ ⋅ , ⋅ , ⋅ ] : A × A × A → A {\displaystyle [\cdot ,\cdot ,\cdot ]:A\times A\times A\to A} given by [x,y,z] = (xy)z − x(yz). It measures the degree of nonassociativity of A {\displaystyle A} , and can be used to conveniently express some possible identities satisfied by A. Let x, y and z denote arbitrary elements of the algebra. Associative: [x,y,z] = 0. Alternative: [x,x,y] = 0 (left alternative) and [y,x,x] = 0 (right alternative). It implies that permuting any two terms changes the sign: [x,y,z] = −[x,z,y] = −[z,y,x] = −[y,x,z]; the converse holds only if char(K) ≠ 2. Flexible: [x,y,x] = 0. It implies that permuting the extremal terms changes the sign: [x,y,z] = −[z,y,x]; the converse holds only if char(K) ≠ 2. Jordan identity: [x²,y,x] = 0 or [x,y,x²] = 0 depending on authors. Third power associative: [x,x,x] = 0. The nucleus is the set of elements that associate with all others: that is, the n in A such that [n,A,A] = [A,n,A] = [A,A,n] = {0}. The nucleus is an associative subring of A.
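The associator is easy to compute in a concrete non-associative algebra. Taking R³ with the cross product (an anticommutative algebra, as noted in the Examples below), a NumPy sketch with arbitrarily chosen vectors:

import numpy as np

def associator(x, y, z, mul):
    # [x, y, z] = (xy)z - x(yz)
    return mul(mul(x, y), z) - mul(x, mul(y, z))

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
z = np.array([1.0, 2.0, 3.0])

print(associator(x, y, z, np.cross))   # [-2.  0.  0.]: not associative

# Anticommutative implies flexible: [x, y, x] = 0 ...
assert np.allclose(associator(x, y, x, np.cross), 0)
# ... and the cross product satisfies the Jacobi identity (xy)z+(yz)x+(zx)y = 0.
assert np.allclose(np.cross(np.cross(x, y), z)
                   + np.cross(np.cross(y, z), x)
                   + np.cross(np.cross(z, x), y), 0)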
=== Center === The center of A is the set of elements that commute and associate with everything in A, that is the intersection of C ( A ) = { n ∈ A | n r = r n ∀ r ∈ A } {\displaystyle C(A)=\{n\in A\ |\ nr=rn\,\forall r\in A\,\}} with the nucleus. It turns out that for elements of C(A) it is enough that two of the sets ( [ n , A , A ] , [ A , n , A ] , [ A , A , n ] ) {\displaystyle ([n,A,A],[A,n,A],[A,A,n])} are { 0 } {\displaystyle \{0\}} for the third to also be the zero set. == Examples == Euclidean space R3 with multiplication given by the vector cross product is an example of an algebra which is anticommutative and not associative. The cross product also satisfies the Jacobi identity. Lie algebras are algebras satisfying anticommutativity and the Jacobi identity. Algebras of vector fields on a differentiable manifold (if K is R or the complex numbers C) or an algebraic variety (for general K); Jordan algebras are algebras which satisfy the commutative law and the Jordan identity. Every associative algebra gives rise to a Lie algebra by using the commutator as Lie bracket. In fact every Lie algebra can either be constructed this way, or is a subalgebra of a Lie algebra so constructed. Every associative algebra over a field of characteristic other than 2 gives rise to a Jordan algebra by defining a new multiplication x*y = (xy+yx)/2. In contrast to the Lie algebra case, not every Jordan algebra can be constructed this way. Those that can are called special. Alternative algebras are algebras satisfying the alternative property. The most important examples of alternative algebras are the octonions (an algebra over the reals), and generalizations of the octonions over other fields. All associative algebras are alternative. Up to isomorphism, the only finite-dimensional real alternative, division algebras (see below) are the reals, complexes, quaternions and octonions. Power-associative algebras, are those algebras satisfying the power-associative identity. Examples include all associative algebras, all alternative algebras, Jordan algebras over a field other than GF(2) (see previous section), and the sedenions. The hyperbolic quaternion algebra over R, which was an experimental algebra before the adoption of Minkowski space for special relativity. More classes of algebras: Graded algebras. These include most of the algebras of interest to multilinear algebra, such as the tensor algebra, symmetric algebra, and exterior algebra over a given vector space. Graded algebras can be generalized to filtered algebras. Division algebras, in which multiplicative inverses exist. The finite-dimensional alternative division algebras over the field of real numbers have been classified. They are the real numbers (dimension 1), the complex numbers (dimension 2), the quaternions (dimension 4), and the octonions (dimension 8). The quaternions and octonions are not commutative. Of these algebras, all are associative except for the octonions. Quadratic algebras, which require that xx = re + sx, for some elements r and s in the ground field, and e a unit for the algebra. Examples include all finite-dimensional alternative algebras, and the algebra of real 2-by-2 matrices. Up to isomorphism the only alternative, quadratic real algebras without divisors of zero are the reals, complexes, quaternions, and octonions. 
The Cayley–Dickson algebras (where K is R), which begin with: the complex numbers C (a commutative and associative algebra); the quaternions H (an associative algebra); the octonions O (an alternative algebra); the sedenions S; the trigintaduonions T and the infinite sequence of Cayley-Dickson algebras (power-associative algebras). Hypercomplex algebras are all finite-dimensional unital R-algebras, they thus include Cayley-Dickson algebras and many more. The Poisson algebras are considered in geometric quantization. They carry two multiplications, turning them into commutative algebras and Lie algebras in different ways. Genetic algebras are non-associative algebras used in mathematical genetics. Triple systems == Properties == There are several properties that may be familiar from ring theory, or from associative algebras, which are not always true for non-associative algebras. Unlike the associative case, elements with a (two-sided) multiplicative inverse might also be a zero divisor. For example, all non-zero elements of the sedenions have a two-sided inverse, but some of them are also zero divisors. == Free non-associative algebra == The free non-associative algebra on a set X over a field K is defined as the algebra with basis consisting of all non-associative monomials, finite formal products of elements of X retaining parentheses. The product of monomials u, v is just (u)(v). The algebra is unital if one takes the empty product as a monomial. Kurosh proved that every subalgebra of a free non-associative algebra is free. == Associated algebras == An algebra A over a field K is in particular a K-vector space and so one can consider the associative algebra EndK(A) of K-linear vector space endomorphism of A. We can associate to the algebra structure on A two subalgebras of EndK(A), the derivation algebra and the (associative) enveloping algebra. === Derivation algebra === A derivation on A is a map D with the property D ( x ⋅ y ) = D ( x ) ⋅ y + x ⋅ D ( y ) . {\displaystyle D(x\cdot y)=D(x)\cdot y+x\cdot D(y)\ .} The derivations on A form a subspace DerK(A) in EndK(A). The commutator of two derivations is again a derivation, so that the Lie bracket gives DerK(A) a structure of Lie algebra. === Enveloping algebra === There are linear maps L and R attached to each element a of an algebra A: L ( a ) : x ↦ a x ; R ( a ) : x ↦ x a . {\displaystyle L(a):x\mapsto ax;\ \ R(a):x\mapsto xa\ .} Here each element L ( a ) , R ( a ) {\displaystyle L(a),R(a)} is regarded as an element of EndK(A). The associative enveloping algebra or multiplication algebra of A is the sub-associative algebra of EndK(A) generated by the left and right linear maps L ( a ) , R ( a ) {\displaystyle L(a),R(a)} . The centroid of A is the centraliser of the enveloping algebra in the endomorphism algebra EndK(A). An algebra is central if its centroid consists of the K-scalar multiples of the identity. Some of the possible identities satisfied by non-associative algebras may be conveniently expressed in terms of the linear maps: Commutative: each L(a) is equal to the corresponding R(a); Associative: any L commutes with any R; Flexible: every L(a) commutes with the corresponding R(a); Jordan: every L(a) commutes with R(a2); Alternative: every L(a)2 = L(a2) and similarly for the right. The quadratic representation Q is defined by Q ( a ) : x ↦ 2 a ⋅ ( a ⋅ x ) − ( a ⋅ a ) ⋅ x {\displaystyle Q(a):x\mapsto 2a\cdot (a\cdot x)-(a\cdot a)\cdot x\ } , or equivalently, Q ( a ) = 2 L 2 ( a ) − L ( a 2 ) . 
{\displaystyle Q(a)=2L^{2}(a)-L(a^{2})\ .} The article on universal enveloping algebras describes the canonical construction of enveloping algebras, as well as the PBW-type theorems for them. For Lie algebras, such enveloping algebras have a universal property, which does not hold, in general, for non-associative algebras. The best-known example is, perhaps the Albert algebra, an exceptional Jordan algebra that is not enveloped by the canonical construction of the enveloping algebra for Jordan algebras. == See also == List of algebras Commutative non-associative magmas, which give rise to non-associative algebras == Citations == == Notes == == References == Albert, A. Adrian (2003) [1939]. Structure of algebras. American Mathematical Society Colloquium Publ. Vol. 24 (Corrected reprint of the revised 1961 ed.). New York: American Mathematical Society. ISBN 0-8218-1024-3. Zbl 0023.19901. Albert, A. Adrian (1948a). "Power-associative rings". Transactions of the American Mathematical Society. 64: 552–593. doi:10.2307/1990399. ISSN 0002-9947. JSTOR 1990399. MR 0027750. Zbl 0033.15402. Albert, A. Adrian (1948b). "On right alternative algebras". Annals of Mathematics. 50: 318–328. doi:10.2307/1969457. JSTOR 1969457. Bremner, Murray; Murakami, Lúcia; Shestakov, Ivan (2013) [2006]. "Chapter 86: Nonassociative Algebras" (PDF). In Hogben, Leslie (ed.). Handbook of Linear Algebra (2nd ed.). CRC Press. ISBN 978-1-498-78560-0. Herstein, I. N., ed. (2011) [1965]. Some Aspects of Ring Theory: Lectures given at a Summer School of the Centro Internazionale Matematico Estivo (C.I.M.E.) held in Varenna (Como), Italy, August 23-31, 1965. C.I.M.E. Summer Schools. Vol. 37 (reprint ed.). Springer-Verlag. ISBN 3-6421-1036-3. Jacobson, Nathan (1968). Structure and representations of Jordan algebras. American Mathematical Society Colloquium Publications, Vol. XXXIX. Providence, R.I.: American Mathematical Society. ISBN 978-0-821-84640-7. MR 0251099. Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by J. Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. Zbl 0955.16001. Koecher, Max (1999). Krieg, Aloys; Walcher, Sebastian (eds.). The Minnesota notes on Jordan algebras and their applications. Lecture Notes in Mathematics. Vol. 1710. Berlin: Springer-Verlag. ISBN 3-540-66360-6. Zbl 1072.17513. Kokoris, Louis A. (1955). "Power-associative rings of characteristic two". Proceedings of the American Mathematical Society. 6 (5). American Mathematical Society: 705–710. doi:10.2307/2032920. Kurosh, A.G. (1947). "Non-associative algebras and free products of algebras". Mat. Sbornik. 20 (62). MR 0020986. Zbl 0041.16803. McCrimmon, Kevin (2004). A taste of Jordan algebras. Universitext. Berlin, New York: Springer-Verlag. doi:10.1007/b97489. ISBN 978-0-387-95447-9. MR 2014924. Zbl 1044.17001. Errata. Mikheev, I.M. (1976). "Right nilpotency in right alternative rings". Siberian Mathematical Journal. 17 (1): 178–180. doi:10.1007/BF00969304. Okubo, Susumu (2005) [1995]. Introduction to Octonion and Other Non-Associative Algebras in Physics. Montroll Memorial Lecture Series in Mathematical Physics. Vol. 2. Cambridge University Press. doi:10.1017/CBO9780511524479. ISBN 0-521-01792-0. Zbl 0841.17001. Rosenfeld, Boris (1997). Geometry of Lie groups. Mathematics and its Applications. Vol. 393. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-4390-5. Zbl 0867.53002. Rowen, Louis Halle (2008). 
Graduate Algebra: Noncommutative View. Graduate studies in mathematics. American Mathematical Society. ISBN 0-8218-8408-5. Schafer, Richard D. (1995) [1966]. An Introduction to Nonassociative Algebras. Dover. ISBN 0-486-68813-5. Zbl 0145.25601. Zhevlakov, Konstantin A.; Slin'ko, Arkadii M.; Shestakov, Ivan P.; Shirshov, Anatoly I. (1982) [1978]. Rings that are nearly associative. Translated by Smith, Harry F. ISBN 0-12-779850-1.
Wikipedia/Non-associative_algebra
In mathematics, specifically in category theory, F-algebras generalize the notion of algebraic structure. Rewriting the algebraic laws in terms of morphisms eliminates all references to quantified elements from the axioms, and these algebraic laws may then be glued together in terms of a single functor F, the signature. F-algebras can also be used to represent data structures used in programming, such as lists and trees. The main related concepts are initial F-algebras which may serve to encapsulate the induction principle, and the dual construction F-coalgebras. == Definition == If C {\displaystyle C} is a category, and F : C → C {\displaystyle F:C\rightarrow C} is an endofunctor of C {\displaystyle C} , then an F {\displaystyle F} -algebra is a tuple ( A , α ) {\displaystyle (A,\alpha )} , where A {\displaystyle A} is an object of C {\displaystyle C} and α {\displaystyle \alpha } is a C {\displaystyle C} -morphism F ( A ) → A {\displaystyle F(A)\rightarrow A} . The object A {\displaystyle A} is called the carrier of the algebra. When it is permissible from context, algebras are often referred to by their carrier only instead of the tuple. A homomorphism from an F {\displaystyle F} -algebra ( A , α ) {\displaystyle (A,\alpha )} to an F {\displaystyle F} -algebra ( B , β ) {\displaystyle (B,\beta )} is a C {\displaystyle C} -morphism f : A → B {\displaystyle f:A\rightarrow B} such that f ∘ α = β ∘ F ( f ) {\displaystyle f\circ \alpha =\beta \circ F(f)} , according to the following commutative diagram: Equipped with these morphisms, F {\displaystyle F} -algebras constitute a category. The dual construction are F {\displaystyle F} -coalgebras, which are objects A ∗ {\displaystyle A^{*}} together with a morphism α ∗ : A ∗ → F ( A ∗ ) {\displaystyle \alpha ^{*}:A^{*}\rightarrow F(A^{*})} . == Examples == === Groups === Classically, a group is a set G {\displaystyle G} with a group law m : G × G → G {\displaystyle m:G\times G\rightarrow G} , with m ( x , y ) = x ⋅ y {\displaystyle m(x,y)=x\cdot y} , satisfying three axioms: the existence of an identity element, the existence of an inverse for each element of the group, and associativity. To put this in a categorical framework, first define the identity and inverse as functions (morphisms of the set G {\displaystyle G} ) by e : 1 → G {\displaystyle e:1\rightarrow G} with e ( ∗ ) = 1 {\displaystyle e(*)=1} , and i : G → G {\displaystyle i:G\rightarrow G} with i ( x ) = x − 1 {\displaystyle i(x)=x^{-1}} . Here 1 {\displaystyle 1} denotes the set with one element 1 = { ∗ } {\displaystyle 1=\left\{*\right\}} , which allows one to identify elements x ∈ G {\displaystyle x\in G} with morphisms 1 → G {\displaystyle 1\rightarrow G} . It is then possible to write the axioms of a group in terms of functions (note how the existential quantifier is absent): ∀ x ∈ G , ∀ y ∈ G , ∀ z ∈ G , m ( m ( x , y ) , z ) = m ( x , m ( y , z ) ) {\displaystyle \forall x\in G,\forall y\in G,\forall z\in G,m(m(x,y),z)=m(x,m(y,z))} , ∀ x ∈ G , m ( e ( ∗ ) , x ) = m ( x , e ( ∗ ) ) = x {\displaystyle \forall x\in G,m(e(*),x)=m(x,e(*))=x} , ∀ x ∈ G , m ( i ( x ) , x ) = m ( x , i ( x ) ) = e ( ∗ ) {\displaystyle \forall x\in G,m(i(x),x)=m(x,i(x))=e(*)} . Then this can be expressed with commutative diagrams: Now use the coproduct (the disjoint union of sets) to glue the three morphisms in one: α = e + i + m {\displaystyle \alpha =e+i+m} according to α : 1 + G + G × G → G , ∗ ↦ 1 , x ↦ x − 1 , ( x , y ) ↦ x ⋅ y . 
{\displaystyle {\begin{matrix}\alpha :{1}+G+G\times G&\to &G,\\*&\mapsto &1,\\x&\mapsto &x^{-1},\\(x,y)&\mapsto &x\cdot y.\end{matrix}}} Thus a group is an F {\displaystyle F} -algebra where F {\displaystyle F} is the functor F ( G ) = 1 + G + G × G {\displaystyle F(G)=1+G+G\times G} . However, the reverse is not necessarily true. Some F {\displaystyle F} -algebras where F {\displaystyle F} is the functor F ( G ) = 1 + G + G × G {\displaystyle F(G)=1+G+G\times G} are not groups. The above construction is used to define group objects over an arbitrary category with finite products and a terminal object 1 {\displaystyle 1} . When the category admits finite coproducts, the group objects are F {\displaystyle F} -algebras. For example, finite groups are F {\displaystyle F} -algebras in the category of finite sets and Lie groups are F {\displaystyle F} -algebras in the category of smooth manifolds with smooth maps. === Algebraic structures === Going one step ahead of universal algebra, most algebraic structures are F-algebras. For example, abelian groups are F-algebras for the same functor F(G) = 1 + G + G×G as for groups, with an additional axiom for commutativity: m ∘ t = m {\displaystyle m\circ t=m} , where t(x,y) = (y,x) is the transpose on G×G. Monoids are F-algebras of signature F(M) = 1 + M×M. In the same vein, semigroups are F-algebras of signature F(S) = S×S. Rings, domains and fields are also F-algebras with a signature involving two laws +,•: R×R → R, an additive identity 0: 1 → R, a multiplicative identity 1: 1 → R, and an additive inverse for each element -: R → R. As all these functions share the same codomain R, they can be glued into a single signature function 1 + 1 + R + R×R + R×R → R, with axioms to express associativity, distributivity, and so on. This makes rings F-algebras on the category of sets with signature 1 + 1 + R + R×R + R×R. Alternatively, we can look at the functor F(R) = 1 + R×R in the category of abelian groups. In that context, the multiplication is a homomorphism, meaning m(x + y, z) = m(x,z) + m(y,z) and m(x,y + z) = m(x,y) + m(x,z), which are precisely the distributivity conditions. Therefore, a ring is an F-algebra of signature 1 + R×R over the category of abelian groups which satisfies two axioms (associativity and identity for the multiplication). When we come to vector spaces and modules, the signature functor includes a scalar multiplication k×E → E, and the signature F(E) = 1 + E + k×E is parametrized by k over the category of fields, or rings. Algebras over a field can be viewed as F-algebras of signature 1 + 1 + A + A×A + A×A + k×A over the category of sets, of signature 1 + A×A over the category of modules (a module with an internal multiplication), and of signature k×A over the category of rings (a ring with a scalar multiplication), when they are associative and unitary. === Lattice === Not all mathematical structures are F-algebras. For example, a poset P may be defined in categorical terms with a morphism s:P × P → Ω, on a subobject classifier (Ω = {0,1} in the category of sets and s(x,y)=1 precisely when x≤y). The axioms restricting the morphism s to define a poset can be rewritten in terms of morphisms. However, as the codomain of s is Ω and not P, it is not an F-algebra. However, lattices, which are partial orders in which every two elements have a supremum and an infimum, and in particular total orders, are F-algebras.
This is because they can equivalently be defined in terms of the algebraic operations: x∨y = sup(x,y) and x∧y = inf(x,y), subject to certain axioms (commutativity, associativity, absorption and idempotency). Thus they are F-algebras of signature P×P + P×P. It is often said that lattice theory draws on both order theory and universal algebra. === Recurrence === Consider the functor F : S e t → S e t {\displaystyle F:\mathrm {\bf {Set}} \to \mathrm {\bf {Set}} } that sends a set X {\displaystyle X} to 1 + X {\displaystyle 1+X} . Here, S e t {\displaystyle \mathrm {\bf {Set}} } denotes the category of sets, + {\displaystyle +} denotes the usual coproduct given by the disjoint union, and 1 {\displaystyle 1} is a terminal object (i.e. any singleton set). Then, the set N {\displaystyle \mathbb {N} } of natural numbers together with the function [ z e r o , s u c c ] : 1 + N → N {\displaystyle [\mathrm {zero} ,\mathrm {succ} ]:1+\mathbb {N} \to \mathbb {N} } —which is the coproduct of the functions z e r o : 1 ↦ 0 {\displaystyle \mathrm {zero} :1\mapsto 0} and s u c c : n ↦ n + 1 {\displaystyle \mathrm {succ} :n\mapsto n+1} —is an F-algebra. == Initial F-algebra == If the category of F-algebras for a given endofunctor F has an initial object, it is called an initial algebra. The algebra ( N , [ z e r o , s u c c ] ) {\displaystyle (\mathbb {N} ,[\mathrm {zero} ,\mathrm {succ} ])} in the above example is an initial algebra. Various finite data structures used in programming, such as lists and trees, can be obtained as initial algebras of specific endofunctors. Types defined by using the least fixed point construct with functor F can be regarded as an initial F-algebra, provided that parametricity holds for the type. See also Universal algebra. == Terminal F-coalgebra == In a dual way, a similar relationship exists between notions of greatest fixed point and terminal F-coalgebra. These can be used for allowing potentially infinite objects while maintaining the strong normalization property. In the strongly normalizing Charity programming language (i.e. each program terminates in it), coinductive data types can be used to achieve surprising results, enabling the definition of lookup constructs to implement such “strong” functions as the Ackermann function. == See also == Algebras for a monad Algebraic data type Catamorphism Dialgebra == Notes == == References == == External links == Categorical programming with inductive and coinductive types (Archived 2020-11-30 at the Wayback Machine) by Varmo Vene Philip Wadler: Recursive types for free! (Archived 2020-11-30 at the Wayback Machine) University of Glasgow, June 1990. Draft. Algebra and coalgebra (Archived 2019-04-27 at the Wayback Machine) from CLiki B. Jacobs, J. Rutten: A Tutorial on (Co) Algebras and (Co) Induction. Bulletin of the European Association for Theoretical Computer Science, vol. 62, 1997, Archived 2021-02-12 at the Wayback Machine Understanding F-Algebras (Archived 2020-08-04 at the Wayback Machine) by Bartosz Milewski
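The natural-numbers algebra above has a direct programming reading: an F-algebra for F(X) = 1 + X on a carrier A is a value of A (the 1 summand) together with a map A → A (the X summand), and initiality of (N, [zero, succ]) means each such algebra induces a unique fold out of N. A Python sketch of that fold (the names are mine; a functional language such as Haskell would be the more idiomatic setting):

def fold(zero_case, succ_case, n):
    # The unique homomorphism from the initial algebra N into the
    # F-algebra (zero_case, succ_case) on some carrier.
    acc = zero_case
    for _ in range(n):
        acc = succ_case(acc)
    return acc

# The same number interpreted in two different F-algebras:
assert fold(0, lambda a: a + 2, 5) == 10        # carrier int: doubling
assert fold('', lambda s: s + '|', 3) == '|||'  # carrier str: tally marks

# Homomorphism condition f(succ(n)) = succ_case(f(n)), spot-checked:
f = lambda n: fold(0, lambda a: a + 2, n)
assert all(f(n + 1) == f(n) + 2 for n in range(10))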
== Terminal F-coalgebra == In a dual way, a similar relationship exists between notions of greatest fixed point and terminal F-coalgebra. These can be used for allowing potentially infinite objects while maintaining the strong normalization property. In the strongly normalizing Charity programming language (i.e., every program in it terminates), coinductive data types can be used to achieve surprising results, enabling the definition of lookup constructs to implement "strong" functions like the Ackermann function. == See also == Algebras for a monad Algebraic data type Catamorphism Dialgebra == Notes == == References == == External links == Categorical programming with inductive and coinductive types (Archived 2020-11-30 at the Wayback Machine) by Varmo Vene Philip Wadler: Recursive types for free! (Archived 2020-11-30 at the Wayback Machine) University of Glasgow, June 1990. Draft. Algebra and coalgebra (Archived 2019-04-27 at the Wayback Machine) from CLiki B. Jacobs, J. Rutten: A Tutorial on (Co)Algebras and (Co)Induction. Bulletin of the European Association for Theoretical Computer Science, vol. 62, 1997, Archived 2021-02-12 at the Wayback Machine Understanding F-Algebras (Archived 2020-08-04 at the Wayback Machine) by Bartosz Milewski
Wikipedia/F-algebra
In algebra, the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial. The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution. This problem was completely solved in 1830 by Évariste Galois, by introducing what is now called Galois theory. Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics, to avoid confusion between old and new meanings of "algebra". == History == Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown. The fact that a complex solution always exists is the fundamental theorem of algebra, which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve in terms of radicals, that is, to express the solutions by a formula which is built with the four operations of arithmetic and with nth roots. This was done up to degree four during the 16th century. Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that some fifth degree equations cannot be solved in radicals, followed by Niels Henrik Abel's complete proof in 1824 (now known as the Abel–Ruffini theorem). Évariste Galois later introduced a theory (presently called Galois theory) to decide which equations are solvable by radicals.
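The contrast between the low-degree and general higher-degree cases can be observed with a computer algebra system. The following SymPy sketch is purely illustrative (the example polynomials are chosen by hand): it returns radical expressions for a cubic and a quartic, but only implicit algebraic numbers for a quintic with no radical solution.

```python
# Hedged illustration: degree <= 4 is solvable in radicals; a general quintic
# is not (Abel–Ruffini).  The polynomials below are arbitrary examples.
from sympy import symbols, solve

x = symbols('x')

print(solve(x**3 - 2, x))           # the three cube roots of 2, in radicals
print(solve(x**4 - 5*x**2 + 6, x))  # the four roots ±sqrt(2), ±sqrt(3)

# For this quintic no radical formula exists; SymPy answers with implicit
# algebraic numbers (CRootOf objects) instead of radical expressions.
print(solve(x**5 - x + 1, x))
```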
== Further problems == Other classical problems of the theory of equations are the following: Linear equations: this problem was solved during antiquity. Simultaneous linear equations: The general theoretical solution was provided by Gabriel Cramer in 1750. However, devising efficient methods (algorithms) to solve these systems remains an active subject of research now called linear algebra. Finding the integer solutions of an equation or of a system of equations. These problems are now called Diophantine equations, which are considered a part of number theory (see also integer programming). Systems of polynomial equations: Because of their difficulty, these systems, with few exceptions, have been studied only since the second part of the 19th century. They have led to the development of algebraic geometry. == See also == Root-finding algorithm Properties of polynomial roots Quintic function == References == https://www.britannica.com/science/mathematics/Theory-of-equations == Further reading == Uspensky, James Victor, Theory of Equations (McGraw-Hill), 1963 Dickson, Leonard E., Elementary Theory of Equations (Internet Archive), originally 1914 [1]
Wikipedia/Theory_of_equations
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability, often with the aim of achieving a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria; and from 1922 onwards, by the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research. == History == Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. == Open-loop and closed-loop (feedback) control == == Classical control theory == == Linear and nonlinear control theory == The field of control theory can be divided into two branches: Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest. Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. 
If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used. == Analysis techniques – frequency domain and time domain == Mathematical techniques for analyzing and designing control systems fall into two different categories: Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback, are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain, which are much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above. Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine. In contrast to the frequency-domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With multiple inputs and outputs, we would otherwise have to write down a Laplace transform for every input-output pair to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.
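As a minimal illustration of the state-space form, the following Python sketch builds the matrices of a first-order system for the mass-spring-damper model discussed later in this article, with parameter values assumed purely for illustration, and simulates its free response with a simple forward-Euler loop.

```python
# State-space sketch x' = A x + B u for the mass-spring-damper model
# m x'' = -K x - B x' (the numerical values m = 1, K = 4, B = 0.5 are assumed).
import numpy as np

m, K, B_damp = 1.0, 4.0, 0.5
A = np.array([[0.0, 1.0],               # d(position)/dt = velocity
              [-K / m, -B_damp / m]])   # d(velocity)/dt from Newton's law
B = np.array([[0.0],
              [1.0 / m]])               # the force input enters the velocity equation

x = np.array([[1.0], [0.0]])            # initial state: unit displacement, at rest
dt = 0.001
u = np.array([[0.0]])                   # zero input: free response
for _ in range(int(5.0 / dt)):          # five simulated seconds
    x = x + dt * (A @ x + B @ u)

print(np.linalg.eigvals(A))  # the poles: negative real parts, hence a stable system
print(x.ravel())             # the state has decayed toward [0, 0]
```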
== System interfacing == Control systems can be divided into different categories depending on the number of inputs and outputs. Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are cruise control, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker. Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments, each controlled by an actuator. The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion and contraction, stresses as it is rotated, and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems. === Classical SISO system design === The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include a lead filter, a lag filter, or both. The ultimate goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain margin, phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model. === Modern MIMO system design === Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory. == Topics in control theory == === Stability === The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input. Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. Mathematically, this means that for a causal linear system to be stable, all of the poles of its transfer function must have negative real values, i.e. the real part of each pole must be less than zero.
Practically speaking, stability requires that the transfer function's complex poles reside in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function, or inside the unit circle for discrete time, when the Z-transform is used. The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the x {\displaystyle x} axis is the real axis and the discrete Z-transform is in circular coordinates where the ρ {\displaystyle \rho } axis is the real axis. When the appropriate conditions above are satisfied, a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero. If a system in question has an impulse response of x [ n ] = 0.5 n u [ n ] {\displaystyle \ x[n]=0.5^{n}u[n]} then the Z-transform is given by X ( z ) = 1 1 − 0.5 z − 1 {\displaystyle \ X(z)={\frac {1}{1-0.5z^{-1}}}} which has a pole in z = 0.5 {\displaystyle z=0.5} (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle. However, if the impulse response were x [ n ] = 1.5 n u [ n ] {\displaystyle \ x[n]=1.5^{n}u[n]} then the Z-transform is X ( z ) = 1 1 − 1.5 z − 1 {\displaystyle \ X(z)={\frac {1}{1-1.5z^{-1}}}} which has a pole at z = 1.5 {\displaystyle z=1.5} and is not BIBO stable since the pole has a modulus strictly greater than one. Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots and Nyquist plots. Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
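The discrete-time examples above can be checked numerically. The following Python sketch (the helper function name is chosen for illustration) locates the poles with NumPy and tests whether they lie strictly inside the unit circle.

```python
# Checking the article's two discrete-time examples: a causal LTI system is
# BIBO stable when every pole lies strictly inside the unit circle.
import numpy as np

def poles_from_denominator(coeffs):
    """Roots of the transfer function denominator, written in powers of z."""
    return np.roots(coeffs)

# X(z) = 1/(1 - 0.5 z^-1) = z/(z - 0.5): denominator z - 0.5
# X(z) = 1/(1 - 1.5 z^-1) = z/(z - 1.5): denominator z - 1.5
for den in ([1.0, -0.5], [1.0, -1.5]):
    poles = poles_from_denominator(den)
    verdict = "BIBO stable" if np.all(np.abs(poles) < 1) else "not BIBO stable"
    print(poles, verdict)

# The impulse responses tell the same story:
n = np.arange(50)
print((0.5 ** n).sum())   # converges: absolutely summable response
print((1.5 ** n)[-1])     # grows without bound
```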
=== Controllability and observability === Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable. From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why the state-space representation is sometimes preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors. === Control specification === Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller) to others devoted to very particular classes of systems (especially robotics or aircraft cruise control). A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have R e [ λ ] < − λ ¯ {\displaystyle Re[\lambda ]<-{\overline {\lambda }}} , where λ ¯ {\displaystyle {\overline {\lambda }}} is a fixed value strictly greater than zero, instead of simply asking that R e [ λ ] < 0 {\displaystyle Re[\lambda ]<0} . Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this (see the sketch below). Other classes of disturbances need different types of sub-systems to be included. Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
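The following Python sketch illustrates the step-disturbance rejection just described; the first-order plant and the controller gains are assumed purely for illustration. The integral term of the PI controller supplies the integrator in the open-loop chain, so the SP-PV error returns to zero despite a constant disturbance.

```python
# Sketch of step-disturbance rejection by a PI controller (assumed plant
# y' = -y + u + d and assumed gains; not taken from the article).
dt, T = 0.001, 20.0
Kp, Ki = 2.0, 1.0       # assumed proportional and integral gains
sp = 1.0                # set point (SP)
y, integral = 0.0, 0.0  # process variable (PV) and integrator state
d = 0.5                 # constant step disturbance acting on the plant

for _ in range(int(T / dt)):
    e = sp - y                  # the SP-PV error defined in the introduction
    integral += e * dt          # the integrator that guarantees rejection
    u = Kp * e + Ki * integral  # PI control action
    y += dt * (-y + u + d)      # first-order plant with the disturbance

print(round(y, 4))  # ~1.0: zero steady-state error despite d != 0
```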
=== Model identification and robustness === A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible. System identification The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations; for example, in the case of a mass-spring-damper system we know that m x ¨ ( t ) = − K x ( t ) − B x ˙ ( t ) {\displaystyle m{\ddot {x}}(t)=-Kx(t)-\mathrm {B} {\dot {x}}(t)} . Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal. Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself accordingly in order to ensure the correct performance. Analysis Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margins. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must choose a control technique that includes them among its properties. Constraints A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold. == System classifications == === Linear systems control === For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, in general not all system states are measured, and so observers must be included and incorporated in pole placement design. === Nonlinear systems control === Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems.
These methods, e.g., feedback linearization, backstepping, sliding mode control, and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states. === Decentralized systems control === When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways; for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. === Deterministic and stochastic systems control === A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks. == Main control strategies == Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen. List of the main control techniques Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to the desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control. Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors. Stochastic control deals with control design with uncertainty in the model.
In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations. Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field. A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system. Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system. Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy. == People in systems and control == Many active and historical figures made significant contributions to control theory, including: Pierre-Simon Laplace, who invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform, which is named after him. Irmgard Flugge-Lotz, who developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems. Alexander Lyapunov, whose work in the 1890s marks the beginning of stability theory. Harold S. Black, who invented the concept of negative feedback amplifiers in 1927. He managed to develop stable negative feedback amplifiers in the 1930s. Harry Nyquist, who developed the Nyquist stability criterion for feedback systems in the 1930s. Richard Bellman, who developed dynamic programming in the 1940s. Warren E. Dixon, control theorist and professor. Kyriakos G. Vamvoudakis, who developed synchronous reinforcement learning algorithms to solve optimal control and game theoretic problems. Andrey Kolmogorov, who co-developed the Wiener–Kolmogorov filter in 1941. Norbert Wiener, who co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s. John R. Ragazzini, who introduced digital control and the use of the Z-transform in control theory (invented by Laplace) in the 1950s. Lev Pontryagin, who introduced the maximum principle and the bang-bang principle. Pierre-Louis Lions, who developed viscosity solutions into stochastic control and optimal control methods. Rudolf E. Kálmán, who pioneered the state-space approach to systems and control, introduced the notions of controllability and observability, and developed the Kalman filter for linear estimation. Ali H. Nayfeh, who was one of the main contributors to nonlinear control theory and published many books on perturbation methods. Jan C. Willems, who introduced the concept of dissipativity as a generalization of the Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of the linear matrix inequality (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory.
== See also == Examples of control systems Topics in control theory Other related topics == References == == Further reading == Levine, William S., ed. (1996). The Control Handbook. New York: CRC Press. ISBN 978-0-8493-8570-4. Karl J. Åström; Richard M. Murray (2008). Feedback Systems: An Introduction for Scientists and Engineers (PDF). Princeton University Press. ISBN 978-0-691-13576-2. Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3. Vannevar Bush (1929). Operational Circuit Analysis. John Wiley and Sons, Inc. Robert F. Stengel (1994). Optimal Control and Estimation. Dover Publications. ISBN 978-0-486-68200-6. Franklin; et al. (2002). Feedback Control of Dynamic Systems (4 ed.). New Jersey: Prentice Hall. ISBN 978-0-13-032393-4. Joseph L. Hellerstein; Dawn M. Tilbury; Sujay Parekh (2004). Feedback Control of Computing Systems. John Wiley and Sons. ISBN 978-0-471-26637-2. Diederich Hinrichsen and Anthony J. Pritchard (2005). Mathematical Systems Theory I – Modelling, State Space Analysis, Stability and Robustness. Springer. ISBN 978-3-540-44125-0. Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition (PDF). Springer. ISBN 978-0-387-98489-6. Goodwin, Graham (2001). Control System Design. Prentice Hall. ISBN 978-0-13-958653-8. Christophe Basso (2012). Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide. Artech House. ISBN 978-1608075577. Boris J. Lurie; Paul J. Enright (2019). Classical Feedback Control with Nonlinear Multi-loop Systems (3 ed.). CRC Press. ISBN 978-1-1385-4114-6. For Chemical Engineering Luyben, William (1989). Process Modeling, Simulation, and Control for Chemical Engineers. McGraw Hill. ISBN 978-0-07-039159-8. == External links == Control Tutorials for Matlab, a set of worked-through control examples solved by several different methods. Control Tuning and Best Practices Advanced control structures, free on-line simulators explaining the control theory
Wikipedia/Control_theory
Algebraic geometry is a branch of mathematics which uses abstract algebraic techniques, mainly from commutative algebra, to solve geometrical problems. Classically, it studies zeros of multivariate polynomials; the modern approach generalizes this in a few different aspects. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations. Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique. In the 20th century, algebraic geometry split into several subareas. The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field. Real algebraic geometry is the study of the real algebraic varieties. Diophantine geometry and, more generally, arithmetic geometry is the study of algebraic varieties over fields that are not algebraically closed and, specifically, over fields of interest in algebraic number theory, such as the field of rational numbers, number fields, finite fields, function fields, and p-adic fields. A large part of singularity theory is devoted to the singularities of algebraic varieties. Computational algebraic geometry is an area that has emerged at the intersection of algebraic geometry and computer algebra, with the rise of computers. It consists mainly of algorithm design and software development for the study of properties of explicitly given algebraic varieties. Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry. One key achievement of this abstract algebraic geometry is Grothendieck's scheme theory which allows one to use sheaf theory to study algebraic varieties in a way which is very similar to its use in the study of differential and analytic manifolds. This is obtained by extending the notion of point: In classical algebraic geometry, a point of an affine variety may be identified, through Hilbert's Nullstellensatz, with a maximal ideal of the coordinate ring, while the points of the corresponding affine scheme are all prime ideals of this ring. 
This means that a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points, and of algebraic number theory. Wiles' proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach. == Basic notions == === Zeros of simultaneous polynomials === In classical algebraic geometry, the main objects of interest are the vanishing sets of collections of polynomials, meaning the set of all points that simultaneously satisfy one or more polynomial equations. For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points ( x , y , z ) {\displaystyle (x,y,z)} with x 2 + y 2 + z 2 − 1 = 0. {\displaystyle x^{2}+y^{2}+z^{2}-1=0.\,} A "slanted" circle in R3 can be defined as the set of all points ( x , y , z ) {\displaystyle (x,y,z)} which satisfy the two polynomial equations x 2 + y 2 + z 2 − 1 = 0 , {\displaystyle x^{2}+y^{2}+z^{2}-1=0,\,} x + y + z = 0. {\displaystyle x+y+z=0.\,}
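Membership in such a vanishing set can be checked mechanically. The following SymPy sketch, with test points chosen by hand purely for illustration, verifies that one point lies on this slanted circle while another satisfies only the equation of the sphere.

```python
# Checking points against the polynomials defining the "slanted circle".
from sympy import symbols, sqrt, simplify

x, y, z = symbols('x y z')
S = [x**2 + y**2 + z**2 - 1, x + y + z]   # the two defining polynomials

on_circle = {x: sqrt(2)/2, y: -sqrt(2)/2, z: 0}   # an illustrative point
print([simplify(p.subs(on_circle)) for p in S])   # [0, 0]: in the vanishing set

off_circle = {x: 1, y: 0, z: 0}                   # on the sphere only
print([p.subs(off_circle) for p in S])            # [0, 1]: not in the vanishing set
```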
=== Affine varieties === First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries. A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1,...,xn] such that f(M) = p(t1,...,tn) for every point M with coordinates (t1,...,tn) in An. The property of a function to be polynomial (or regular) does not depend on the choice of a coordinate system in An. When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k. Therefore, the set of the regular functions on An is a ring, which is denoted k[An]. We say that a polynomial vanishes at a point if evaluating it at that point gives zero. Let S be a set of polynomials in k[An]. The vanishing set of S (or vanishing locus or zero set) is the set V(S) of all points in An where every polynomial in S vanishes. Symbolically, V ( S ) = { ( t 1 , … , t n ) ∣ p ( t 1 , … , t n ) = 0 for all p ∈ S } . {\displaystyle V(S)=\{(t_{1},\dots ,t_{n})\mid p(t_{1},\dots ,t_{n})=0{\text{ for all }}p\in S\}.\,} A subset of An which is V(S), for some S, is called an algebraic set. The V stands for variety (a specific type of algebraic set to be defined below). Given a subset U of An, can one recover the set of polynomials which generate it? If U is any subset of An, define I(U) to be the set of all polynomials whose vanishing set contains U. The I stands for ideal: if two polynomials f and g both vanish on U, then f+g vanishes on U, and if h is any polynomial, then hf vanishes on U, so I(U) is always an ideal of the polynomial ring k[An]. Two natural questions to ask are: Given a subset U of An, when is U = V(I(U))? Given a set S of polynomials, when is S = I(V(S))? The answer to the first question is provided by introducing the Zariski topology, a topology on An whose closed sets are the algebraic sets, and which directly reflects the algebraic structure of k[An]. Then U = V(I(U)) if and only if U is an algebraic set or equivalently a Zariski-closed set. The answer to the second question is given by Hilbert's Nullstellensatz. In one of its forms, it says that I(V(S)) is the radical of the ideal generated by S. In more abstract language, there is a Galois connection, giving rise to two closure operators; they can be identified, and naturally play a basic role in the theory; the example is elaborated at Galois connection. For various reasons we may not always want to work with the entire ideal corresponding to an algebraic set U. Hilbert's basis theorem implies that ideals in k[An] are always finitely generated. An algebraic set is called irreducible if it cannot be written as the union of two smaller algebraic sets. Any algebraic set is a finite union of irreducible algebraic sets and this decomposition is unique. Thus its elements are called the irreducible components of the algebraic set. An irreducible algebraic set is also called a variety. It turns out that an algebraic set is a variety if and only if it may be defined as the vanishing set of a prime ideal of the polynomial ring. Some authors do not make a clear distinction between algebraic sets and varieties and use irreducible variety to make the distinction when needed. === Regular functions === Just as continuous functions are the natural maps on topological spaces and smooth functions are the natural maps on differentiable manifolds, there is a natural class of functions on an algebraic set, called regular functions or polynomial functions. A regular function on an algebraic set V contained in An is the restriction to V of a regular function on An. For an algebraic set defined over the field of the complex numbers, the regular functions are smooth and even analytic. It may seem unnaturally restrictive to require that a regular function always extend to the ambient space, but it is very similar to the situation in a normal topological space, where the Tietze extension theorem guarantees that a continuous function on a closed subset always extends to the ambient topological space. Just as with the regular functions on affine space, the regular functions on V form a ring, which we denote by k[V]. This ring is called the coordinate ring of V. Since regular functions on V come from regular functions on An, there is a relationship between the coordinate rings. Specifically, if a regular function on V is the restriction of two functions f and g in k[An], then f − g is a polynomial function which is null on V and thus belongs to I(V). Thus k[V] may be identified with k[An]/I(V). === Morphism of affine varieties === Using regular functions from an affine variety to A1, we can define regular maps from one affine variety to another. First we will define a regular map from a variety into affine space: Let V be a variety contained in An. Choose m regular functions on V, and call them f1, ..., fm. We define a regular map f from V to Am by letting f = (f1, ..., fm). In other words, each fi determines one coordinate of the range of f. If V′ is a variety contained in Am, we say that f is a regular map from V to V′ if the range of f is contained in V′. The definition of the regular maps applies also to algebraic sets.
The regular maps are also called morphisms, as they make the collection of all affine algebraic sets into a category, where the objects are the affine algebraic sets and the morphisms are the regular maps. The affine varieties form a subcategory of the category of the algebraic sets. Given a regular map g from V to V′ and a regular function f of k[V′], the composition f ∘ g belongs to k[V]. The map f → f ∘ g is a ring homomorphism from k[V′] to k[V]. Conversely, every ring homomorphism from k[V′] to k[V] defines a regular map from V to V′. This defines an equivalence of categories between the category of algebraic sets and the opposite category of the finitely generated reduced k-algebras. This equivalence is one of the starting points of scheme theory. === Rational function and birational equivalence === In contrast to the preceding sections, this section concerns only varieties and not algebraic sets. On the other hand, the definitions extend naturally to projective varieties (next section), as an affine variety and its projective completion have the same field of functions. If V is an affine variety, its coordinate ring is an integral domain and has thus a field of fractions which is denoted k(V) and called the field of the rational functions on V or, shortly, the function field of V. Its elements are the restrictions to V of the rational functions over the affine space containing V. The domain of a rational function f is not V but the complement of the subvariety (a hypersurface) where the denominator of f vanishes. As with regular maps, one may define a rational map from a variety V to a variety V'. As with the regular maps, the rational maps from V to V' may be identified with the field homomorphisms from k(V') to k(V). Two affine varieties are birationally equivalent if there are two rational functions between them which are inverse to each other in the regions where both are defined. Equivalently, they are birationally equivalent if their function fields are isomorphic. An affine variety is a rational variety if it is birationally equivalent to an affine space. This means that the variety admits a rational parameterization, that is, a parametrization with rational functions. For example, the circle of equation x 2 + y 2 − 1 = 0 {\displaystyle x^{2}+y^{2}-1=0} is a rational curve, as it has the parametric equation x = 2 t 1 + t 2 {\displaystyle x={\frac {2\,t}{1+t^{2}}}} y = 1 − t 2 1 + t 2 , {\displaystyle y={\frac {1-t^{2}}{1+t^{2}}}\,,} which may also be viewed as a rational map from the line to the circle. The problem of resolution of singularities is to know if every algebraic variety is birationally equivalent to a variety whose projective completion is nonsingular (see also smooth completion). It was solved in the affirmative in characteristic 0 by Heisuke Hironaka in 1964 and remains unsolved in finite characteristic.
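The parametrization of the circle given above can be verified symbolically; a short SymPy computation (an illustrative sketch) confirms that every parameter value is mapped onto the circle.

```python
# Verifying that the rational parametrization lands on x^2 + y^2 - 1 = 0.
from sympy import symbols, simplify

t = symbols('t')
x = 2*t / (1 + t**2)
y = (1 - t**2) / (1 + t**2)

print(simplify(x**2 + y**2 - 1))   # 0: the identity holds for all t
```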
=== Projective variety === Just as the formulas for the roots of second, third, and fourth degree polynomials suggest extending real numbers to the more algebraically complete setting of the complex numbers, many properties of algebraic varieties suggest extending affine space to a more geometrically complete projective space. Whereas the complex numbers are obtained by adding the number i, a root of the polynomial x2 + 1, projective space is obtained by adding in appropriate points "at infinity", points where parallel lines may meet. To see how this might come about, consider the variety V(y − x2). If we draw it, we get a parabola. As x goes to positive infinity, the slope of the line from the origin to the point (x, x2) also goes to positive infinity. As x goes to negative infinity, the slope of the same line goes to negative infinity. Compare this to the variety V(y − x3). This is a cubic curve. As x goes to positive infinity, the slope of the line from the origin to the point (x, x3) goes to positive infinity just as before. But unlike before, as x goes to negative infinity, the slope of the same line goes to positive infinity as well, the exact opposite of the parabola. So the behavior "at infinity" of V(y − x3) is different from the behavior "at infinity" of V(y − x2). The consideration of the projective completion of the two curves, which is their prolongation "at infinity" in the projective plane, allows us to quantify this difference: the point at infinity of the parabola is a regular point, whose tangent is the line at infinity, while the point at infinity of the cubic curve is a cusp. Also, both curves are rational, as they are parameterized by x, and the Riemann–Roch theorem implies that the cubic curve must have a singularity, which must be at infinity, as all its points in the affine space are regular. Thus many of the properties of algebraic varieties, including birational equivalence and all the topological properties, depend on the behavior "at infinity" and so it is natural to study the varieties in projective space. Furthermore, the introduction of projective techniques made many theorems in algebraic geometry simpler and sharper: For example, Bézout's theorem on the number of intersection points between two varieties can be stated in its sharpest form only in projective space. For these reasons, projective space plays a fundamental role in algebraic geometry. Nowadays, the projective space Pn of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension n + 1, or equivalently to the set of the vector lines in a vector space of dimension n + 1. When a coordinate system has been chosen in the space of dimension n + 1, all the points of a line have the same set of coordinates, up to the multiplication by an element of k. This defines the homogeneous coordinates of a point of Pn as a sequence of n + 1 elements of the base field k, defined up to the multiplication by a nonzero element of k (the same for the whole sequence). A polynomial in n + 1 variables vanishes at all points of a line passing through the origin if and only if it is homogeneous. In this case, one says that the polynomial vanishes at the corresponding point of Pn. This allows us to define a projective algebraic set in Pn as the set V(f1, ..., fk), where a finite set of homogeneous polynomials {f1, ..., fk} vanishes. Like for affine algebraic sets, there is a bijection between the projective algebraic sets and the reduced homogeneous ideals which define them. The projective varieties are the projective algebraic sets whose defining ideal is prime. In other words, a projective variety is a projective algebraic set, whose homogeneous coordinate ring is an integral domain, the projective coordinate ring being defined as the quotient of the graded ring of the polynomials in n + 1 variables by the homogeneous (reduced) ideal defining the variety. Every projective algebraic set may be uniquely decomposed into a finite union of projective varieties. The only regular functions which may be defined properly on a projective variety are the constant functions.
Thus this notion is not used in projective situations. On the other hand, the field of the rational functions or function field is a useful notion, which, similarly to the affine case, is defined as the set of the quotients of two homogeneous elements of the same degree in the homogeneous coordinate ring. == Real algebraic geometry == Real algebraic geometry is the study of real algebraic varieties. The fact that the field of the real numbers is an ordered field cannot be ignored in such a study. For example, the curve of equation x 2 + y 2 − a = 0 {\displaystyle x^{2}+y^{2}-a=0} is a circle if a > 0 {\displaystyle a>0} , but has no real points if a < 0 {\displaystyle a<0} . Real algebraic geometry also investigates, more broadly, semi-algebraic sets, which are the solutions of systems of polynomial inequalities. For example, neither branch of the hyperbola of equation x y − 1 = 0 {\displaystyle xy-1=0} is a real algebraic variety. However, the branch in the first quadrant is a semi-algebraic set defined by x y − 1 = 0 {\displaystyle xy-1=0} and x > 0 {\displaystyle x>0} . One open problem in real algebraic geometry is the following part of Hilbert's sixteenth problem: Decide which respective positions are possible for the ovals of a nonsingular plane curve of degree 8. == Computational algebraic geometry == One may date the origin of computational algebraic geometry to the meeting EUROSAM'79 (International Symposium on Symbolic and Algebraic Manipulation) held at Marseille, France, in June 1979. At this meeting, Dennis S. Arnon showed that George E. Collins's Cylindrical algebraic decomposition (CAD) allows the computation of the topology of semi-algebraic sets, Bruno Buchberger presented Gröbner bases and his algorithm to compute them, and Daniel Lazard presented a new algorithm for solving systems of homogeneous polynomial equations with a computational complexity which is essentially polynomial in the expected number of solutions and thus singly exponential in the number of the unknowns. This algorithm is strongly related to Macaulay's multivariate resultant. Since then, most results in this area are related to one or several of these items either by using or improving one of these algorithms, or by finding algorithms whose complexity is singly exponential in the number of the variables. A body of mathematical theory complementary to symbolic methods, called numerical algebraic geometry, has been developed over the last several decades. The main computational method is homotopy continuation. This supports, for example, a model of floating-point computation for solving problems of algebraic geometry. === Gröbner basis === A Gröbner basis is a system of generators of a polynomial ideal whose computation allows the deduction of many properties of the affine algebraic variety defined by the ideal. Given an ideal I defining an algebraic set V: V is empty (over an algebraically closed extension of the base field) if and only if the Gröbner basis for any monomial ordering is reduced to {1}. By means of the Hilbert series, one may compute the dimension and the degree of V from any Gröbner basis of I for a monomial ordering refining the total degree. If the dimension of V is 0, then one may compute the points (finite in number) of V from any Gröbner basis of I (see Systems of polynomial equations). A Gröbner basis computation allows one to remove from V all irreducible components which are contained in a given hypersurface.
As illustrated above, a Gröbner basis computation allows one to compute the Zariski closure of the image of V by the projection onto the first k coordinates, and the subset of the image where the projection is not proper. More generally, Gröbner basis computations allow one to compute the Zariski closure of the image and the critical points of a rational map from V into another affine variety. Gröbner basis computations do not allow one to compute directly the primary decomposition of I nor the prime ideals defining the irreducible components of V, but most algorithms for this involve Gröbner basis computation. The algorithms which are not based on Gröbner bases use regular chains but may need Gröbner bases in some exceptional situations. Gröbner bases are deemed to be difficult to compute. In fact they may contain, in the worst case, polynomials whose degree is doubly exponential in the number of variables, and a number of polynomials which is also doubly exponential. However, this is only a worst-case complexity, and the complexity bound of Lazard's 1979 algorithm frequently applies in practice. Faugère's F5 algorithm realizes this complexity, as it may be viewed as an improvement of Lazard's 1979 algorithm. It follows that the best implementations allow one to compute almost routinely with algebraic sets of degree more than 100. This means that, presently, the difficulty of computing a Gröbner basis is strongly related to the intrinsic difficulty of the problem. === Cylindrical algebraic decomposition (CAD) === CAD is an algorithm which was introduced in 1973 by G. Collins to implement, with an acceptable complexity, the Tarski–Seidenberg theorem on quantifier elimination over the real numbers. This theorem concerns the formulas of first-order logic whose atomic formulas are polynomial equalities or inequalities between polynomials with real coefficients. These formulas are thus the formulas which may be constructed from the atomic formulas by the logical operators and (∧), or (∨), not (¬), for all (∀), and there exists (∃). Tarski's theorem asserts that, from such a formula, one may compute an equivalent formula without quantifiers (∀, ∃). The complexity of CAD is doubly exponential in the number of variables. This means that CAD allows one, in theory, to solve every problem of real algebraic geometry which may be expressed by such a formula, which is almost every problem concerning explicitly given varieties and semi-algebraic sets. While Gröbner basis computation has doubly exponential complexity only in rare cases, CAD almost always has this high complexity. This implies that, unless most polynomials appearing in the input are linear, it may not solve problems with more than four variables. Since 1973, most of the research on this subject has been devoted either to improving CAD or to finding alternative algorithms in special cases of general interest. As an example of the state of the art, there are efficient algorithms to find at least one point in every connected component of a semi-algebraic set, and thus to test whether a semi-algebraic set is empty. On the other hand, CAD remains, in practice, the best algorithm to count the number of connected components. === Asymptotic complexity vs. practical efficiency === The basic general algorithms of computational algebraic geometry have a doubly exponential worst-case complexity.
More precisely, if d is the maximal degree of the input polynomials and n the number of variables, then their complexity is at most d^{2^{cn}} for some constant c and, for some inputs, the complexity is at least d^{2^{c′n}} for another constant c′. During the last 20 years of the 20th century, various algorithms were introduced to solve specific subproblems with a better complexity. Most of these algorithms have a complexity d^{O(n²)}. Among those algorithms which solve a subproblem of the problems solved by Gröbner bases, one may cite testing whether an affine variety is empty and solving nonhomogeneous polynomial systems which have a finite number of solutions. Such algorithms are rarely implemented because, on most inputs, Faugère's F4 and F5 algorithms have a better practical efficiency and probably a similar or better complexity (probably because the evaluation of the complexity of Gröbner basis algorithms on a particular class of inputs is a difficult task which has been done only in a few special cases). The main algorithms of real algebraic geometry which solve a problem solved by CAD are related to the topology of semi-algebraic sets. One may cite counting the number of connected components, testing whether two points are in the same component, and computing a Whitney stratification of a real algebraic set. They have a complexity of d^{O(n²)}, but the constant hidden in the O notation is so high that using them to solve any nontrivial problem effectively solved by CAD is impossible, even if one could use all the existing computing power in the world. Therefore, these algorithms have never been implemented, and it is an active research area to search for algorithms that combine good asymptotic complexity with good practical efficiency. == Abstract modern viewpoint == The modern approaches to algebraic geometry redefine and effectively extend the range of basic objects in various levels of generality to schemes, formal schemes, ind-schemes, algebraic spaces, algebraic stacks, and so on. The need for this arises already from the useful ideas within the theory of varieties; for example, the formal functions of Zariski can be accommodated by introducing nilpotent elements in structure rings; considering spaces of loops and arcs, constructing quotients by group actions, and developing formal grounds for natural intersection theory and deformation theory lead to some of the further extensions. Most remarkably, in the early 1960s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces forming a category that is antiequivalent to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Grothendieck topology.
Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc; nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can furthermore be generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. Sometimes, other algebraic sites replace the category of affine schemes. For example, Nikolai Durov has introduced commutative algebraic monads as a generalization of local objects in a generalized algebraic geometry. Versions of a tropical geometry, of an absolute geometry over a field of one element, and an algebraic analogue of Arakelov's geometry were realized in this setup. Another formal generalization is possible to universal algebraic geometry, in which every variety of algebras has its own algebraic geometry. The term variety of algebras should not be confused with algebraic variety. The language of schemes, stacks, and generalizations has proved to be a valuable way of dealing with geometric concepts and has become a cornerstone of modern algebraic geometry. Algebraic stacks can be further generalized, and for many practical questions like deformation theory and intersection theory, this is often the most natural approach. One can extend the Grothendieck site of affine schemes to a higher-categorical site of derived affine schemes, by replacing the commutative rings with an infinity category of differential graded commutative algebras, or of simplicial commutative rings, or a similar category with an appropriate variant of a Grothendieck topology. One can also replace presheaves of sets with presheaves of simplicial sets (or of infinity groupoids). Then, in the presence of appropriate homotopical machinery, one can develop a notion of derived stack as such a presheaf on the infinity category of derived affine schemes which satisfies certain infinite-categorical versions of the sheaf axioms (and, to be algebraic, inductively a sequence of representability conditions). Quillen model categories, Segal categories, and quasicategories are some of the most-used tools to formalize this, yielding derived algebraic geometry, introduced by the school of Carlos Simpson, including André Hirschowitz, Bertrand Toën, Gabriele Vezzosi, Michel Vaquié, and others, and developed further by Jacob Lurie, Bertrand Toën, and Gabriele Vezzosi. Another (noncommutative) version of derived algebraic geometry, using A-infinity categories, has been developed from the early 1990s by Maxim Kontsevich and followers. == History == === Before the 16th century === Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC. The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a²b for given sides a and b. Menaechmus (c. 350 BC) considered the problem geometrically by intersecting the pair of plane conics ay = x² and xy = ab (a construction verified symbolically in the sketch below). In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates.
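Menaechmus' intersection, mentioned above, can be checked with a few lines of computer algebra: eliminating y from the two conics forces x³ = a²b, so the cube of side x has exactly the volume of the box. A sketch assuming the SymPy library (the exact printed form of the root may vary):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

# Menaechmus' two conics: a*y = x**2 and x*y = a*b.
sol = sp.solve([a * y - x**2, x * y - a * b], [x, y], dict=True)[0]

# The intersection satisfies x**3 = a**2 * b, the duplicated-cube relation.
assert sp.simplify(sol[x]**3 - a**2 * b) == 0
print(sol[x])  # a**(2/3)*b**(1/3), i.e. the cube root of a**2*b
```

The positivity assumptions on the symbols restrict the solver to the real positive root, matching the geometric construction.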
In his Conics, Apollonius further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter, and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates by geometric methods. Medieval mathematicians, including Omar Khayyám, Leonardo of Pisa, Gersonides and Nicole Oresme, solved certain cubic and quadratic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) believed that there was a relationship between arithmetic, algebra, and geometry. This view has been criticized by Jeffrey Oaks, who claims that the study of curves by means of equations originated with Descartes in the seventeenth century. === Renaissance === Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" in their studies of the cubic equation. The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th- and 17th-century mathematicians, notably Blaise Pascal, who argued against the use of algebraic and analytical methods in geometry. The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes). During the same period, Blaise Pascal and Gérard Desargues approached geometry from a different perspective, developing the synthetic notions of projective geometry. Pascal and Desargues also studied curves, but from the purely geometrical point of view: the analog of the Greek ruler-and-compass construction. Ultimately, the analytic geometry of Descartes and Fermat won out, for it supplied the 18th-century mathematicians with the concrete quantitative tools needed to study physical problems using the new calculus of Newton and Leibniz. However, by the end of the 18th century, most of the algebraic character of coordinate geometry was subsumed by the calculus of infinitesimals of Lagrange and Euler. === 19th and early 20th century === It took the simultaneous 19th-century developments of non-Euclidean geometry and Abelian integrals to bring the old algebraic ideas back into the geometrical fold. The first of these new developments was seized upon by Edmond Laguerre and Arthur Cayley, who attempted to ascertain the generalized metric properties of projective space. Cayley introduced the idea of homogeneous polynomial forms, and more specifically quadratic forms, on projective space. Subsequently, Felix Klein studied projective geometry (along with other types of geometry) from the viewpoint that the geometry on a space is encoded in a certain class of transformations on the space.
By the end of the 19th century, projective geometers were studying more general kinds of transformations on figures in projective space. Rather than the projective linear transformations which were normally regarded as giving the fundamental Kleinian geometry on projective space, they concerned themselves also with the higher-degree birational transformations. This weaker notion of congruence would later lead members of the 20th-century Italian school of algebraic geometry to classify algebraic surfaces up to birational isomorphism. The second early-19th-century development, that of Abelian integrals, would lead Bernhard Riemann to the development of Riemann surfaces. The same period saw the beginning of the algebraization of algebraic geometry through commutative algebra. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computation implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century, until it was revived by singularity theory and computational algebraic geometry. === 20th century === B. L. van der Waerden, Oscar Zariski, and André Weil developed a foundation for algebraic geometry based on contemporary commutative algebra, including valuation theory and the theory of ideals. One of the goals was to give a rigorous framework for proving the results of the Italian school of algebraic geometry. In particular, this school systematically used the notion of a generic point without any precise definition, which was first given by these authors during the 1930s. In the 1950s and 1960s, Jean-Pierre Serre and Alexander Grothendieck recast the foundations making use of sheaf theory. Later, from about 1960, and largely led by Grothendieck, the idea of schemes was worked out, in conjunction with a very refined apparatus of homological techniques. After a decade of rapid development, the field stabilized in the 1970s, and new applications were made, both to number theory and to more classical geometric questions on algebraic varieties, singularities, moduli, and formal moduli. An important class of varieties, not easily understood directly from their defining equations, are the abelian varieties, which are the projective varieties whose points form an abelian group. The prototypical examples are the elliptic curves, which have a rich theory. They were instrumental in the proof of Fermat's Last Theorem and are also used in elliptic-curve cryptography. In parallel with the abstract trend of algebraic geometry, which is concerned with general statements about varieties, methods for effective computation with concretely given varieties have also been developed, leading to the new area of computational algebraic geometry. One of the founding methods of this area is the theory of Gröbner bases, introduced by Bruno Buchberger in 1965. Another founding method, more specifically devoted to real algebraic geometry, is the cylindrical algebraic decomposition, introduced by George E. Collins in 1973. See also: derived algebraic geometry. == Analytic geometry == An analytic variety over the field of real or complex numbers is defined locally as the set of common solutions of several equations involving analytic functions.
It is analogous to the concept of algebraic variety in that it carries a structure sheaf of analytic functions instead of regular functions. Any complex manifold is a complex analytic variety. Since analytic varieties may have singular points, not all complex analytic varieties are manifolds. Over a non-archimedean field, analytic geometry is studied via rigid analytic spaces. Modern analytic geometry over the field of complex numbers is closely related to complex algebraic geometry, as has been shown by Jean-Pierre Serre in his paper GAGA, the name of which is French for "algebraic geometry and analytic geometry". The GAGA results over the field of complex numbers may be extended to rigid analytic spaces over non-archimedean fields. == Applications == Algebraic geometry now finds applications in statistics, control theory, robotics, error-correcting codes, phylogenetics and geometric modelling. There are also connections to string theory, game theory, graph matchings, solitons and integer programming. == See also == == Notes == == References == === Sources === Kline, M. (1972). Mathematical Thought from Ancient to Modern Times. Vol. 1. Oxford University Press. ISBN 0195061357. == Further reading == Some classic textbooks that predate schemes: van der Waerden, B. L. (1945). Einfuehrung in die algebraische Geometrie. Dover. Hodge, W. V. D.; Pedoe, Daniel (1994). Methods of Algebraic Geometry, Volume 1. Cambridge University Press. ISBN 978-0-521-46900-5. Zbl 0796.14001. Hodge, W. V. D.; Pedoe, Daniel (1994). Methods of Algebraic Geometry, Volume 2. Cambridge University Press. ISBN 978-0-521-46901-2. Zbl 0796.14002. Hodge, W. V. D.; Pedoe, Daniel (1994). Methods of Algebraic Geometry, Volume 3. Cambridge University Press. ISBN 978-0-521-46775-9. Zbl 0796.14003. Modern textbooks that do not use the language of schemes: Garrity, Thomas; et al. (2013). Algebraic Geometry: A Problem Solving Approach. American Mathematical Society. ISBN 978-0-821-89396-8. Griffiths, Phillip; Harris, Joe (1994). Principles of Algebraic Geometry. Wiley-Interscience. ISBN 978-0-471-05059-9. Zbl 0836.14001. Harris, Joe (1995). Algebraic Geometry: A First Course. Springer-Verlag. ISBN 978-0-387-97716-4. Zbl 0779.14001. Mumford, David (1995). Algebraic Geometry I: Complex Projective Varieties (2nd ed.). Springer-Verlag. ISBN 978-3-540-58657-9. Zbl 0821.14001. Reid, Miles (1988). Undergraduate Algebraic Geometry. Cambridge University Press. ISBN 978-0-521-35662-6. Zbl 0701.14001. Shafarevich, Igor (1995). Basic Algebraic Geometry I: Varieties in Projective Space (2nd ed.). Springer-Verlag. ISBN 978-0-387-54812-8. Zbl 0797.14001. Textbooks in computational algebraic geometry: Cox, David A.; Little, John; O'Shea, Donal (1997). Ideals, Varieties, and Algorithms (2nd ed.). Springer-Verlag. ISBN 978-0-387-94680-1. Zbl 0861.13012. Schenck, Hal (2003). Computational Algebraic Geometry. Cambridge University Press. Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise (2006). Algorithms in Real Algebraic Geometry. Springer-Verlag. González-Vega, Laureano; Recio, Tomás (1996). Algorithms in Algebraic Geometry and Applications. Birkhäuser. Elkadi, Mohamed; Mourrain, Bernard; Piene, Ragni, eds. (2006). Algebraic Geometry and Geometric Modeling. Springer-Verlag. Dickenstein, Alicia; Schreyer, Frank-Olaf; Sommese, Andrew J., eds. (2008). Algorithms in Algebraic Geometry. The IMA Volumes in Mathematics and its Applications. Vol. 146. Springer. ISBN 9780387751559. LCCN 2007938208. Cox, David A.; Little, John B.; O'Shea, Donal (1998).
Using Algebraic Geometry. Springer-Verlag. Caviness, Bob F.; Johnson, Jeremy R. (1998). Quantifier Elimination and Cylindrical Algebraic Decomposition. Springer-Verlag. Textbooks and references for schemes: Eisenbud, David; Harris, Joe (1998). The Geometry of Schemes. Springer-Verlag. ISBN 978-0-387-98637-1. Zbl 0960.14002. Grothendieck, Alexander (1960). Éléments de géométrie algébrique. Publications Mathématiques de l'IHÉS. Zbl 0118.36206. Grothendieck, Alexander; Dieudonné, Jean Alexandre (1971). Éléments de géométrie algébrique. Vol. 1 (2nd ed.). Springer-Verlag. ISBN 978-3-540-05113-8. Zbl 0203.23301. Hartshorne, Robin (1977). Algebraic Geometry. Springer-Verlag. ISBN 978-0-387-90244-9. Zbl 0367.14001. Mumford, David (1999). The Red Book of Varieties and Schemes: Includes the Michigan Lectures on Curves and Their Jacobians (2nd ed.). Springer-Verlag. ISBN 978-3-540-63293-1. Zbl 0945.14001. Shafarevich, Igor (1995). Basic Algebraic Geometry II: Schemes and Complex Manifolds (2nd ed.). Springer-Verlag. ISBN 978-3-540-57554-2. Zbl 0797.14002. == External links == Foundations of Algebraic Geometry by Ravi Vakil, 808 pp. Algebraic geometry entry on PlanetMath. English translation of the van der Waerden textbook. Dieudonné, Jean (March 3, 1972). "The History of Algebraic Geometry". Talk at the Department of Mathematics of the University of Wisconsin–Milwaukee. Archived from the original on 2021-11-22 – via YouTube. The Stacks Project, an open source textbook and reference work on algebraic stacks and algebraic geometry. Adjectives Project, an online database for searching examples of schemes and morphisms based on their properties.
Wikipedia/Algebraic_geometry
In abstract algebra, a matrix ring is a set of matrices with entries in a ring R that form a ring under matrix addition and matrix multiplication. The set of all n × n matrices with entries in R is a matrix ring denoted Mn(R) (alternative notations: Matn(R) and Rn×n). Some sets of infinite matrices form infinite matrix rings. A subring of a matrix ring is again a matrix ring. Over a rng, one can form matrix rngs. When R is a commutative ring, the matrix ring Mn(R) is an associative algebra over R, and may be called a matrix algebra. In this setting, if M is a matrix and r is in R, then the matrix rM is the matrix M with each of its entries multiplied by r. == Examples == The set of all n × n square matrices over R, denoted Mn(R). This is sometimes called the "full ring of n-by-n matrices". The set of all upper triangular matrices over R. The set of all lower triangular matrices over R. The set of all diagonal matrices over R. This subalgebra of Mn(R) is isomorphic to the direct product of n copies of R. For any index set I, the ring of endomorphisms of the right R-module M = ⨁_{i∈I} R is isomorphic to the ring CFM_I(R) of column-finite matrices whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The ring of endomorphisms of M considered as a left R-module is isomorphic to the ring RFM_I(R) of row-finite matrices. If R is a Banach algebra, then the condition of row or column finiteness in the previous point can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring. This idea can be used to represent operators on Hilbert spaces, for example. The intersection of the row-finite and column-finite matrix rings forms a ring RCFM_I(R). If R is commutative, then Mn(R) has a structure of a *-algebra over R, where the involution * on Mn(R) is matrix transposition. If A is a C*-algebra, then Mn(A) is another C*-algebra. If A is non-unital, then Mn(A) is also non-unital. By the Gelfand–Naimark theorem, there exists a Hilbert space H and an isometric *-isomorphism from A to a norm-closed subalgebra of the algebra B(H) of continuous operators; this identifies Mn(A) with a subalgebra of B(H⊕n). For simplicity, if we further suppose that H is separable and A ⊆ B(H) is a unital C*-algebra, we can break up A into a matrix ring over a smaller C*-algebra. One can do so by fixing a projection p and hence the complementary projection 1 − p; one can identify A with the block matrix ring [pAp, pA(1−p); (1−p)Ap, (1−p)A(1−p)], where matrix multiplication works as intended because of the orthogonality of the projections. In order to identify A with a matrix ring over a C*-algebra, we require that p and 1 − p have the same "rank"; more precisely, we need that p and 1 − p are Murray–von Neumann equivalent, i.e., there exists a partial isometry u such that p = uu* and 1 − p = u*u. One can easily generalize this to matrices of larger sizes.
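The two-by-two block identification above can be made concrete numerically. The following NumPy sketch uses coordinate projections in M4(R) in place of abstract C*-algebra projections (the names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# A projection p and its complement 1 - p, as in the text.
p = np.diag([1.0, 1.0, 0.0, 0.0])
q = np.eye(4) - p

# A decomposes into the four blocks pAp, pA(1-p), (1-p)Ap, (1-p)A(1-p) ...
blocks = [p @ A @ p, p @ A @ q, q @ A @ p, q @ A @ q]
assert np.allclose(sum(blocks), A)

# ... and the decomposition is compatible with multiplication: the (1,1) block
# of A @ A equals pAp @ pAp + pAq @ qAp, which is exactly the rule for
# multiplying 2 x 2 matrices whose entries are themselves blocks.
lhs = p @ (A @ A) @ p
rhs = (p @ A @ p) @ (p @ A @ p) + (p @ A @ q) @ (q @ A @ p)
assert np.allclose(lhs, rhs)
```

The second assertion works because p + q is the identity and both projections are idempotent, mirroring the orthogonality argument in the text.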
Complex matrix algebras Mn(C) are, up to isomorphism, the only finite-dimensional simple associative algebras over the field C of complex numbers. Prior to the invention of matrix algebras, Hamilton in 1853 introduced a ring whose elements he called biquaternions and which modern authors would call tensors in C ⊗_R H; it was later shown to be isomorphic to M2(C). One basis of M2(C) consists of the four matrix units (matrices with one 1 and all other entries 0); another basis is given by the identity matrix and the three Pauli matrices. A matrix ring over a field is a Frobenius algebra, with Frobenius form given by the trace of the product: σ(A, B) = tr(AB). == Structure == The matrix ring Mn(R) can be identified with the ring of endomorphisms of the free right R-module of rank n; that is, Mn(R) ≅ EndR(Rn). Matrix multiplication corresponds to composition of endomorphisms. The ring Mn(D) over a division ring D is an Artinian simple ring, a special type of semisimple ring. The rings CFM_I(D) and RFM_I(D) are not simple and not Artinian if the set I is infinite, but they are still full linear rings. The Artin–Wedderburn theorem states that every semisimple ring is isomorphic to a finite direct product ∏_{i=1}^{r} M_{n_i}(D_i), for some nonnegative integer r, positive integers n_i, and division rings D_i. When we view Mn(C) as the ring of linear endomorphisms of Cn, those matrices which vanish on a given subspace V form a left ideal. Conversely, for a given left ideal I of Mn(C) the intersection of null spaces of all matrices in I gives a subspace of Cn. Under this construction, the left ideals of Mn(C) are in bijection with the subspaces of Cn. There is a bijection between the two-sided ideals of Mn(R) and the two-sided ideals of R. Namely, for each ideal I of R, the set of all n × n matrices with entries in I is an ideal of Mn(R), and each ideal of Mn(R) arises in this way. This implies that Mn(R) is simple if and only if R is simple. For n ≥ 2, not every left ideal or right ideal of Mn(R) arises by the previous construction from a left ideal or a right ideal in R. For example, the set of matrices whose columns with indices 2 through n are all zero forms a left ideal in Mn(R). The previous ideal correspondence actually arises from the fact that the rings R and Mn(R) are Morita equivalent. Roughly speaking, this means that the category of left R-modules and the category of left Mn(R)-modules are very similar. Because of this, there is a natural bijective correspondence between the isomorphism classes of left R-modules and left Mn(R)-modules, and between the isomorphism classes of left ideals of R and left ideals of Mn(R). Identical statements hold for right modules and right ideals. Through Morita equivalence, Mn(R) inherits any Morita-invariant properties of R, such as being simple, Artinian, Noetherian, or prime. == Properties == If S is a subring of R, then Mn(S) is a subring of Mn(R). For example, Mn(Z) is a subring of Mn(Q). The matrix ring Mn(R) is commutative if and only if n = 0, R = 0, or R is commutative and n = 1. In fact, this is true also for the subring of upper triangular matrices.
Here is an example showing two upper triangular 2 × 2 matrices that do not commute, assuming 1 ≠ 0 in R: [1 0; 0 0][1 1; 0 0] = [1 1; 0 0], while [1 1; 0 0][1 0; 0 0] = [1 0; 0 0]. For n ≥ 2, the matrix ring Mn(R) over a nonzero ring has zero divisors and nilpotent elements; the same holds for the ring of upper triangular matrices. An example in 2 × 2 matrices would be [0 1; 0 0][0 1; 0 0] = [0 0; 0 0]. The center of Mn(R) consists of the scalar multiples of the identity matrix In, in which the scalar belongs to the center of R. The unit group of Mn(R), consisting of the invertible matrices under multiplication, is denoted GLn(R). If F is a field, then for any two matrices A and B in Mn(F), the equality AB = In implies BA = In. This is not true for every ring R, though. A ring R whose matrix rings all have the mentioned property is known as a stably finite ring (Lam 1999, p. 5). == Matrix semiring == In fact, R need only be a semiring for Mn(R) to be defined. In this case, Mn(R) is a semiring, called the matrix semiring. Similarly, if R is a commutative semiring, then Mn(R) is a matrix semialgebra. For example, if R is the Boolean semiring (the two-element Boolean algebra R = {0, 1} with 1 + 1 = 1), then Mn(R) is the semiring of binary relations on an n-element set with union as addition, composition of relations as multiplication, the empty relation (zero matrix) as the zero, and the identity relation (identity matrix) as the unity. == See also == Central simple algebra Clifford algebra Hurwitz's theorem (normed division algebras) Generic matrix ring Sylvester's law of inertia == Citations == == References ==
Wikipedia/Matrix_algebra
In mathematics, general topology (or point set topology) is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. The fundamental concepts in point-set topology are continuity, compactness, and connectedness: continuous functions, intuitively, take nearby points to nearby points; compact sets are those that can be covered by finitely many sets of arbitrarily small size; and connected sets are sets that cannot be divided into two pieces that are far apart. The terms 'nearby', 'arbitrarily small', and 'far apart' can all be made precise by using the concept of open sets. If we change the definition of 'open set', we change what continuous functions, compact sets, and connected sets are. Each choice of definition for 'open set' is called a topology. A set with a topology is called a topological space. Metric spaces are an important class of topological spaces where a real, non-negative distance, also called a metric, can be defined on pairs of points in the set. Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces. == History == General topology grew out of a number of areas, most importantly the following: the detailed study of subsets of the real line (once known as the topology of point sets; this usage is now obsolete); the introduction of the manifold concept; and the study of metric spaces, especially normed linear spaces, in the early days of functional analysis. General topology assumed its present form around 1940. It captures, one might say, almost everything in the intuition of continuity, in a technically adequate form that can be applied in any area of mathematics. == A topology on a set == Let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: (1) both the empty set and X are elements of τ; (2) any union of elements of τ is an element of τ; and (3) any intersection of finitely many elements of τ is an element of τ. If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (i.e., its complement is open). A subset of X may be open, closed, both (a clopen set), or neither. The empty set and X itself are always both closed and open. === Basis for a topology === A base (or basis) B for a topological space X with topology T is a collection of open sets in T such that every open set in T can be written as a union of elements of B. We say that the base generates the topology T. Bases are useful because many properties of topologies can be reduced to statements about a base that generates that topology, and because many topologies are most easily defined in terms of a base that generates them. === Subspace and quotient === Every subset of a topological space can be given the subspace topology, in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets.
For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. A quotient space is defined as follows: if X is a topological space and Y is a set, and if f : X → Y is a surjective function, then the quotient topology on Y is the collection of subsets of Y that have open inverse images under f. In other words, the quotient topology is the finest topology on Y for which f is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space X. The map f is then the natural projection onto the set of equivalence classes. === Examples of topological spaces === A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. ==== Discrete and trivial topologies ==== Any set can be given the discrete topology, in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces, where limit points are unique. ==== Cofinite and cocountable topologies ==== Any set can be given the cofinite topology, in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set. Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. ==== Topologies on the real and complex numbers ==== There are many ways to define a topology on R, the set of real numbers. The standard topology on R is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of nonzero radius about every point in the set. More generally, the Euclidean spaces Rn can be given a topology. In the usual topology on Rn the basic open sets are the open balls. Similarly, C, the set of complex numbers, and Cn have a standard topology in which the basic open sets are open balls. The real line can also be given the lower limit topology. Here, the basic open sets are the half-open intervals [a, b). This topology on R is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. ==== The metric topology ==== Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. ==== Further examples ==== There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces, and for them the topology axioms can be checked directly, as in the sketch below.
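On a finite set, closure under arbitrary unions reduces to closure under pairwise unions, so the axioms can be verified by brute force. A minimal sketch in Python (the helper name is illustrative):

```python
from itertools import product

def is_topology(X, tau):
    """Check the topology axioms for a family tau of subsets of a finite set X."""
    tau = {frozenset(U) for U in tau}
    if frozenset() not in tau or frozenset(X) not in tau:
        return False  # axiom (1): the empty set and X must be open
    # For a finite family, arbitrary unions and finite intersections
    # reduce to pairwise unions and intersections.
    return all(U | V in tau and U & V in tau for U, V in product(tau, repeat=2))

X = {0, 1}
sierpinski = [set(), {0}, {0, 1}]     # the Sierpinski space mentioned below
print(is_topology(X, sierpinski))     # True
print(is_topology(X, [set(), {0}]))   # False: X itself is missing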
Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general. Every manifold has a natural topology, since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from Rn. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On Rn or Cn, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations. A linear graph has a natural topology that generalises many of the geometric aspects of graphs with vertices and edges. Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function. Any local field has a topology native to it, and this can be extended to vector spaces over that field. The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics. If Γ is an ordinal number, then the set Γ = [0, Γ) may be endowed with the order topology generated by the intervals (a, b), [0, b) and (a, Γ), where a and b are elements of Γ. == Continuous functions == Continuity is expressed in terms of neighborhoods: f is continuous at some point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V. This is equivalent to the condition that the preimages of the open (closed) sets in Y are open (closed) in X. In metric spaces, this definition is equivalent to the ε–δ definition that is often used in analysis. An extreme example: if a set X is given the discrete topology, all functions f : X → T to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose range is indiscrete is continuous. === Alternative definitions === Several equivalent definitions for a topological structure exist and thus there are several equivalent ways to define a continuous function. ==== Neighborhood definition ==== Definitions based on preimages are often difficult to use directly. The following criterion expresses continuity in terms of neighborhoods: f is continuous at some point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε–δ definition of continuity in the context of metric spaces. However, in general topological spaces, there is no notion of nearness or distance. Note, however, that if the target space is Hausdorff, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous. ==== Sequences and nets ==== In several contexts, the topology of a space is conveniently specified in terms of limit points.
In many instances, this is accomplished by specifying when a point is the limit of a sequence, but for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function f : X → Y is sequentially continuous if whenever a sequence (xn) in X converges to a limit x, the sequence (f(xn)) converges to f(x). Thus sequentially continuous functions "preserve sequential limits". Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve limits of nets, and in fact this property characterizes continuous functions. ==== Closure operator definition ==== Instead of specifying the open subsets of a topological space, the topology can also be determined by a closure operator (denoted cl), which assigns to any subset A ⊆ X its closure, or an interior operator (denoted int), which assigns to any subset A of X its interior. In these terms, a function f : (X, cl) → (X′, cl′) between topological spaces is continuous in the sense above if and only if for all subsets A of X, f(cl(A)) ⊆ cl′(f(A)). That is to say, given any element x of X that is in the closure of any subset A, f(x) belongs to the closure of f(A). This is equivalent to the requirement that for all subsets A′ of X′, f⁻¹(cl′(A′)) ⊇ cl(f⁻¹(A′)). Moreover, f : (X, int) → (X′, int′) is continuous if and only if f⁻¹(int′(A)) ⊆ int(f⁻¹(A)) for any subset A of X. === Properties === If f : X → Y and g : Y → Z are continuous, then so is the composition g ∘ f : X → Z. If f : X → Y is continuous, then: if X is compact, then f(X) is compact; if X is connected, then f(X) is connected; if X is path-connected, then f(X) is path-connected; if X is Lindelöf, then f(X) is Lindelöf; and if X is separable, then f(X) is separable. The possible topologies on a fixed set X are partially ordered: a topology τ1 is said to be coarser than another topology τ2 (notation: τ1 ⊆ τ2) if every open subset with respect to τ1 is also open with respect to τ2. Then, the identity map idX : (X, τ2) → (X, τ1) is continuous if and only if τ1 ⊆ τ2 (see also comparison of topologies); for finite spaces this criterion can be checked mechanically, as in the sketch below.
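Continuing the finite-space sketch from earlier, continuity can be tested through preimages of open sets, and the coarser/finer criterion for the identity map falls out as a special case. The helper names are again illustrative:

```python
def preimage(f, X, V):
    # The preimage of V under f, as a subset of X.
    return frozenset(x for x in X if f(x) in V)

def is_continuous(f, X, tau_X, tau_Y):
    # f : (X, tau_X) -> (Y, tau_Y) is continuous iff preimages of open sets are open.
    tau_X = {frozenset(U) for U in tau_X}
    return all(preimage(f, X, V) in tau_X for V in tau_Y)

X = {0, 1}
coarse = [frozenset(), frozenset(X)]                 # the trivial topology
fine = [frozenset(), frozenset({0}), frozenset(X)]   # the Sierpinski topology

identity = lambda x: x
# id : (X, fine) -> (X, coarse) is continuous, since coarse is coarser than fine;
# the reverse fails: the preimage of {0} is {0}, which is not open in coarse.
print(is_continuous(identity, X, fine, coarse))  # True
print(is_continuous(identity, X, coarse, fine))  # False
```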
More generally, a continuous function f : (X, τX) → (Y, τY) stays continuous if the topology τY is replaced by a coarser topology and/or τX is replaced by a finer topology. === Homeomorphisms === Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. In fact, if an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f⁻¹ need not be continuous. A bijective continuous function with continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. === Defining topologies via continuous functions === Given a function f : X → S, where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f⁻¹(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus the final topology can be characterized as the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S has a basis of open sets given by those sets of the form f⁻¹(U) where U is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus the initial topology can be characterized as the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. A topology on a set S is uniquely determined by the class of all continuous functions S → X into all topological spaces X. Dually, a similar idea can be applied to maps X → S. == Compact sets == Formally, a topological space X is called compact if each of its open covers has a finite subcover. Otherwise it is called non-compact. Explicitly, this means that for every arbitrary collection {Uα}α∈A of open subsets of X such that X = ⋃α∈A Uα, there is a finite subset J of A such that X = ⋃i∈J Ui. Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta. Every closed interval in R of finite length is compact. More is true: in Rn, a set is compact if and only if it is closed and bounded (see Heine–Borel theorem). Every continuous image of a compact space is compact. A compact subset of a Hausdorff space is closed. Every continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism.
Every sequence of points in a compact metric space has a convergent subsequence. Every compact finite-dimensional manifold can be embedded in some Euclidean space Rn. == Connected sets == A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice. For a topological space X the following conditions are equivalent: (1) X is connected; (2) X cannot be divided into two disjoint nonempty closed sets; (3) the only subsets of X that are both open and closed (clopen sets) are X and the empty set; (4) the only subsets of X with empty boundary are X and the empty set; (5) X cannot be written as the union of two nonempty separated sets; (6) the only continuous functions from X to {0, 1}, the two-point space endowed with the discrete topology, are constant. Every interval in R is connected. The continuous image of a connected space is connected. === Connected components === The maximal connected subsets (ordered by inclusion) of a nonempty topological space are called the connected components of the space. The components of any topological space X form a partition of X: they are disjoint, nonempty, and their union is the whole space. Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets, which are not open. Let Γx be the connected component of x in a topological space X, and Γ′x be the intersection of all open-closed sets containing x (called the quasi-component of x). Then Γx ⊂ Γ′x, where the equality holds if X is compact Hausdorff or locally connected. === Disconnected spaces === A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open neighborhoods U of x and V of y such that X is the union of U and V. Clearly any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers Q, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff. === Path-connected sets === A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0, 1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation which makes x equivalent to y if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is at most one path-component; that is, if there is a path joining any two points in X. Again, many authors exclude the empty space. Every path-connected space is connected.
The converse is not always true: examples of connected spaces that are not path-connected include the extended long line L* and the topologist's sine curve. However, subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals of R. Also, open subsets of Rn or Cn are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces. == Products of spaces == Given the Cartesian product X := ∏i∈I Xi of the topological spaces Xi, indexed by i ∈ I, and the canonical projections pi : X → Xi, the product topology on X is defined as the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology. The open sets in the product topology are unions (finite or infinite) of sets of the form ∏i∈I Ui, where each Ui is open in Xi and Ui ≠ Xi for only finitely many i. In particular, for a finite product (in particular, for the product of two topological spaces), the products of base elements of the Xi give a basis for the product ∏i∈I Xi. The product topology on X is the topology generated by sets of the form pi⁻¹(U), where i is in I and U is an open subset of Xi. In other words, the sets {pi⁻¹(U)} form a subbase for the topology on X. A subset of X is open if and only if it is a (possibly infinite) union of intersections of finitely many sets of the form pi⁻¹(U). The pi⁻¹(U) are sometimes called open cylinders, and their intersections are cylinder sets. In general, the product of the topologies of each Xi forms a basis for what is called the box topology on X. In general, the box topology is finer than the product topology, but for finite products they coincide. Related to compactness is Tychonoff's theorem: the (arbitrary) product of compact spaces is compact. == Separation axioms == Many of these names have alternative meanings in some of the mathematical literature, as explained on History of the separation axioms; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc. Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous. Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles. In all of the following definitions, X is again a topological space. X is T0, or Kolmogorov, if any two distinct points in X are topologically distinguishable. (It is a common theme among the separation axioms to have one version of an axiom that requires T0 and one version that doesn't.) X is T1, or accessible or Fréchet, if any two distinct points in X are separated. Thus, X is T1 if and only if it is both T0 and R0. (Though one may say such things as "T1 space", "Fréchet topology", and "suppose that the topological space X is Fréchet", avoid saying "Fréchet space" in this context, since there is another entirely different notion of Fréchet space in functional analysis.) X is Hausdorff, or T2 or separated, if any two distinct points in X are separated by neighbourhoods.
Thus, X is Hausdorff if and only if it is both T0 and R1. A Hausdorff space must also be T1. X is T2½, or Urysohn, if any two distinct points in X are separated by closed neighbourhoods. A T2½ space must also be Hausdorff. X is regular, or T3, if it is T0 and if given any point x and closed set F in X such that x does not belong to F, they are separated by neighbourhoods. (In fact, in a regular space, any such x and F are also separated by closed neighbourhoods.) X is Tychonoff, or T3½, completely T3, or completely regular, if it is T0 and if, given any point x and closed set F in X such that x does not belong to F, they are separated by a continuous function. X is normal, or T4, if it is Hausdorff and if any two disjoint closed subsets of X are separated by neighbourhoods. (In fact, a space is normal if and only if any two disjoint closed sets can be separated by a continuous function; this is Urysohn's lemma.) X is completely normal, or T5 or completely T4, if it is T1 and if any two separated sets are separated by neighbourhoods. A completely normal space must also be normal. X is perfectly normal, or T6 or perfectly T4, if it is T1 and if any two disjoint closed sets are precisely separated by a continuous function. A perfectly normal Hausdorff space must also be completely normal Hausdorff. The Tietze extension theorem: in a normal space, every continuous real-valued function defined on a closed subspace can be extended to a continuous map defined on the whole space. == Countability axioms == An axiom of countability is a property of certain mathematical objects (usually in a category) that requires the existence of a countable set with certain properties, while without it such sets might not exist. Important countability axioms for topological spaces: sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set; first-countable space: every point has a countable neighbourhood basis (local base); second-countable space: the topology has a countable base; separable space: there exists a countable dense subspace; Lindelöf space: every open cover has a countable subcover; σ-compact space: there exists a countable cover by compact spaces. Relations: every first-countable space is sequential; every second-countable space is first-countable, separable, and Lindelöf; every σ-compact space is Lindelöf; a metric space is first-countable; and for metric spaces, second-countability, separability, and the Lindelöf property are all equivalent. == Metric spaces == A metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function d : M × M → R such that for any x, y, z ∈ M, the following holds: d(x, y) ≥ 0 (non-negativity); d(x, y) = 0 if and only if x = y (identity of indiscernibles); d(x, y) = d(y, x) (symmetry); and d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality). The function d is also called a distance function or simply a distance. Often, d is omitted and one just writes M for a metric space if it is clear from the context what metric is used. Every metric space is paracompact and Hausdorff, and thus normal. The axioms can be checked mechanically on a finite sample of points, as in the sketch below.
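The four axioms translate directly into assertions that can be tested numerically on a finite sample. A small Python sketch (function and sample names are illustrative; a finite check is evidence, not a proof):

```python
import itertools
import math

def check_metric(points, d, tol=1e-12):
    # Check the four metric-space axioms for d on a finite sample of points.
    for p, q in itertools.product(points, repeat=2):
        assert d(p, q) >= -tol                     # non-negativity
        assert (d(p, q) <= tol) == (p == q)        # identity of indiscernibles
        assert abs(d(p, q) - d(q, p)) <= tol       # symmetry
    for p, q, r in itertools.product(points, repeat=3):
        assert d(p, r) <= d(p, q) + d(q, r) + tol  # triangle inequality

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 2.0), (-1.0, 1.5)]
check_metric(pts, math.dist)  # the Euclidean metric passes
# Squared Euclidean distance fails the triangle inequality on this sample:
# check_metric(pts, lambda p, q: math.dist(p, q) ** 2)  # raises AssertionError
```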
The metrization theorems provide necessary and sufficient conditions for a topology to come from a metric. == Baire category theorem == The Baire category theorem says: If X is a complete metric space or a locally compact Hausdorff space, then the interior of every union of countably many nowhere dense sets is empty. Any open subspace of a Baire space is itself a Baire space. == Main areas of research == === Continuum theory === A continuum (plural: continua) is a nonempty compact connected metric space, or, less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. These objects arise frequently in nearly all areas of topology and analysis, and their properties are strong enough to yield many 'geometric' features. === Dynamical systems === Topological dynamics concerns the behavior of a space and its subspaces over time when subjected to continuous change. Examples with applications to physics and other areas of mathematics include fluid dynamics, billiards, and flows on manifolds. The topological characteristics of fractals in fractal geometry, of Julia sets and the Mandelbrot set arising in complex dynamics, and of attractors in differential equations are often critical to understanding these systems. === Pointless topology === Pointless topology (also called point-free or pointfree topology) is an approach to topology that avoids mentioning points. The name 'pointless topology' is due to John von Neumann. The ideas of pointless topology are closely related to mereotopology, in which regions (sets) are treated as foundational without explicit reference to underlying point sets. === Dimension theory === Dimension theory is a branch of general topology dealing with dimensional invariants of topological spaces. === Topological algebras === A topological algebra A over a topological field K is a topological vector space together with a continuous multiplication ⋅ : A × A ⟶ A {\displaystyle \cdot :A\times A\longrightarrow A} ( a , b ) ⟼ a ⋅ b {\displaystyle (a,b)\longmapsto a\cdot b} that makes it an algebra over K. A unital associative topological algebra is a topological ring. The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931). === Metrizability theory === In topology and related areas of mathematics, a metrizable space is a topological space that is homeomorphic to a metric space. That is, a topological space ( X , τ ) {\displaystyle (X,\tau )} is said to be metrizable if there is a metric d : X × X → [ 0 , ∞ ) {\displaystyle d\colon X\times X\to [0,\infty )} such that the topology induced by d is τ {\displaystyle \tau } . Metrization theorems are theorems that give sufficient conditions for a topological space to be metrizable. === Set-theoretic topology === Set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC). A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
== See also ==
List of examples in general topology
Glossary of general topology, for detailed definitions
List of general topology topics, for related articles
Category of topological spaces
== References ==
== Further reading ==
Some standard books on general topology include:
Bourbaki, Topologie Générale (General Topology), ISBN 0-387-19374-X.
Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153.
Stephen Willard, General Topology, ISBN 0-486-43479-6.
James Munkres, Topology, ISBN 0-13-181629-2.
George F. Simmons, Introduction to Topology and Modern Analysis, ISBN 1-575-24238-9.
Paul L. Shick, Topology: Point-Set and Geometric, ISBN 0-470-09605-5.
Ryszard Engelking, General Topology, ISBN 3-88538-006-4.
Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
O.Ya. Viro, O.A. Ivanov, V.M. Kharlamov and N.Yu. Netsvetaev, Elementary Topology: Textbook in Problems, ISBN 978-0-8218-4506-6.
The arXiv subject code is math.GN.
== External links ==
Media related to General topology at Wikimedia Commons
Wikipedia/General_topology
In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs, links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics. == Definitions == Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures. === Graph === In one restricted but very common sense of the term, a graph is an ordered pair G = ( V , E ) {\displaystyle G=(V,E)} comprising: V {\displaystyle V} , a set of vertices (also called nodes or points); E ⊆ { { x , y } ∣ x , y ∈ V and x ≠ y } {\displaystyle E\subseteq \{\{x,y\}\mid x,y\in V\;{\textrm {and}}\;x\neq y\}} , a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with two distinct vertices). To avoid ambiguity, this type of object may be called an undirected simple graph. In the edge { x , y } {\displaystyle \{x,y\}} , the vertices x {\displaystyle x} and y {\displaystyle y} are called the endpoints of the edge. The edge is said to join x {\displaystyle x} and y {\displaystyle y} and to be incident on x {\displaystyle x} and on y {\displaystyle y} . A vertex may exist in a graph and not belong to an edge. Under this definition, multiple edges, in which two or more edges connect the same vertices, are not allowed. In one more general sense of the term allowing multiple edges, a graph is an ordered triple G = ( V , E , ϕ ) {\displaystyle G=(V,E,\phi )} comprising: V {\displaystyle V} , a set of vertices (also called nodes or points); E {\displaystyle E} , a set of edges (also called links or lines); ϕ : E → { { x , y } ∣ x , y ∈ V and x ≠ y } {\displaystyle \phi :E\to \{\{x,y\}\mid x,y\in V\;{\textrm {and}}\;x\neq y\}} , an incidence function mapping every edge to an unordered pair of vertices (that is, an edge is associated with two distinct vertices). To avoid ambiguity, this type of object may be called an undirected multigraph. A loop is an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x {\displaystyle x} to itself is the edge (for an undirected simple graph) or is incident on (for an undirected multigraph) { x , x } = { x } {\displaystyle \{x,x\}=\{x\}} which is not in { { x , y } ∣ x , y ∈ V and x ≠ y } {\displaystyle \{\{x,y\}\mid x,y\in V\;{\textrm {and}}\;x\neq y\}} . To allow loops, the definitions must be expanded. For undirected simple graphs, the definition of E {\displaystyle E} should be modified to E ⊆ { { x , y } ∣ x , y ∈ V } {\displaystyle E\subseteq \{\{x,y\}\mid x,y\in V\}} . For undirected multigraphs, the definition of ϕ {\displaystyle \phi } should be modified to ϕ : E → { { x , y } ∣ x , y ∈ V } {\displaystyle \phi :E\to \{\{x,y\}\mid x,y\in V\}} . To avoid ambiguity, these types of objects may be called undirected simple graph permitting loops and undirected multigraph permitting loops (sometimes also undirected pseudograph), respectively. 
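These set-theoretic definitions translate directly into code. Below is a small, hypothetical Python sketch (not from any particular library) of an undirected simple graph as a pair (V, E), with edges stored as 2-element frozensets so that the "no loops, no multiple edges" conditions of the first definition are enforced automatically:

```python
class SimpleGraph:
    """Undirected simple graph G = (V, E): edges are unordered pairs of
    distinct vertices, so loops and multiple edges are impossible."""

    def __init__(self):
        self.V = set()  # vertices
        self.E = set()  # edges, stored as 2-element frozensets

    def add_vertex(self, v):
        self.V.add(v)

    def add_edge(self, x, y):
        if x == y:
            raise ValueError("loops {x, x} are excluded by this definition")
        if x not in self.V or y not in self.V:
            raise ValueError("endpoints must be existing vertices")
        # Adding the same pair twice has no effect: no multiple edges.
        self.E.add(frozenset({x, y}))

G = SimpleGraph()
for v in "abcd":
    G.add_vertex(v)
G.add_edge("a", "b")
G.add_edge("b", "c")
G.add_edge("a", "b")       # duplicate, silently absorbed by the set
print(len(G.V), len(G.E))  # 4 2
```

A multigraph in the sense of the second definition would instead keep E as a set of abstract edge labels together with an incidence map from labels to vertex pairs, mirroring the triple (V, E, φ) above.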
V {\displaystyle V} and E {\displaystyle E} are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, V {\displaystyle V} is often assumed to be non-empty, but E {\displaystyle E} is allowed to be the empty set. The order of a graph is | V | {\displaystyle |V|} , its number of vertices. The size of a graph is | E | {\displaystyle |E|} , its number of edges. The degree or valency of a vertex is the number of edges that are incident to it, where a loop is counted twice. The degree of a graph is the maximum of the degrees of its vertices. In an undirected simple graph of order n, the maximum degree of each vertex is n − 1 and the maximum size of the graph is n(n − 1)/2. The edges of an undirected simple graph permitting loops G {\displaystyle G} induce a symmetric homogeneous relation ∼ {\displaystyle \sim } on the vertices of G {\displaystyle G} that is called the adjacency relation of G {\displaystyle G} . Specifically, for each edge { x , y } {\displaystyle \{x,y\}} , its endpoints x {\displaystyle x} and y {\displaystyle y} are said to be adjacent to one another, which is denoted x ∼ y {\displaystyle x\sim y} . === Directed graph === A directed graph or digraph is a graph in which edges have orientations. In one restricted but very common sense of the term, a directed graph is an ordered pair G = ( V , E ) {\displaystyle G=(V,E)} comprising: V {\displaystyle V} , a set of vertices (also called nodes or points); E ⊆ { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle E\subseteq \left\{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\right\}} , a set of edges (also called directed edges, directed links, directed lines, arrows or arcs) which are ordered pairs of vertices (that is, an edge is associated with two distinct vertices). To avoid ambiguity, this type of object may be called a directed simple graph. In set theory and graph theory, V n {\displaystyle V^{n}} denotes the set of n-tuples of elements of V , {\displaystyle V,} that is, ordered sequences of n {\displaystyle n} elements that are not necessarily distinct. In the edge ( x , y ) {\displaystyle (x,y)} directed from x {\displaystyle x} to y {\displaystyle y} , the vertices x {\displaystyle x} and y {\displaystyle y} are called the endpoints of the edge, x {\displaystyle x} the tail of the edge and y {\displaystyle y} the head of the edge. The edge is said to join x {\displaystyle x} and y {\displaystyle y} and to be incident on x {\displaystyle x} and on y {\displaystyle y} . A vertex may exist in a graph and not belong to an edge. The edge ( y , x ) {\displaystyle (y,x)} is called the inverted edge of ( x , y ) {\displaystyle (x,y)} . Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head. In one more general sense of the term allowing multiple edges, a directed graph is an ordered triple G = ( V , E , ϕ ) {\displaystyle G=(V,E,\phi )} comprising: V {\displaystyle V} , a set of vertices (also called nodes or points); E {\displaystyle E} , a set of edges (also called directed edges, directed links, directed lines, arrows or arcs); ϕ : E → { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle \phi :E\to \left\{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\right\}} , an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called a directed multigraph. A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x {\displaystyle x} to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph) ( x , x ) {\displaystyle (x,x)} which is not in { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle \left\{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\right\}} . So, to allow loops, the definitions must be expanded. For directed simple graphs, the definition of E {\displaystyle E} should be modified to E ⊆ { ( x , y ) ∣ ( x , y ) ∈ V 2 } {\displaystyle E\subseteq \left\{(x,y)\mid (x,y)\in V^{2}\right\}} . For directed multigraphs, the definition of ϕ {\displaystyle \phi } should be modified to ϕ : E → { ( x , y ) ∣ ( x , y ) ∈ V 2 } {\displaystyle \phi :E\to \left\{(x,y)\mid (x,y)\in V^{2}\right\}} . To avoid ambiguity, these types of objects may be called a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver), respectively. The edges of a directed simple graph permitting loops G {\displaystyle G} induce a homogeneous relation ~ on the vertices of G {\displaystyle G} that is called the adjacency relation of G {\displaystyle G} . Specifically, for each edge ( x , y ) {\displaystyle (x,y)} , its endpoints x {\displaystyle x} and y {\displaystyle y} are said to be adjacent to one another, which is denoted x {\displaystyle x} ~ y {\displaystyle y} . == Applications == Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is called network science. === Computer science === Within computer science, 'causal' and 'non-causal' linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data. === Linguistics === Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph.
More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs. Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has given rise to organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others. === Physics and chemistry === Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "the Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand." In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. Similarly, in computational neuroscience graphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in the electrical modeling of electrical networks: here, weights are associated with the resistance of the wire segments to obtain the electrical properties of network structures. Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Chemical graph theory uses the molecular graph as a means to model molecules. Graphs and networks are excellent models to study and understand phase transitions and critical phenomena. Removal of nodes or edges leads to a critical transition where the network breaks into small clusters; this breakdown is studied as a phase transition via percolation theory. === Social sciences === Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs. Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
=== Biology === Likewise, graph theory is useful in biology and conservation efforts, where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns, when tracking the spread of disease or parasites, and when studying how changes to movement can affect other species. Graphs are also commonly used in molecular biology and genomics to model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types in single-cell transcriptome analysis. Another use is to model genes or proteins in a pathway and study the relationships between them, such as metabolic pathways and gene regulatory networks. Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures. Graph theory is also used in connectomics; nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them. === Mathematics === In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. Algebraic graph theory has been applied to many areas including dynamic systems and complexity. === Other topics === A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used to program GPS systems and travel-planning search engines that compare flight times and costs. == History == The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huilier, and represents the beginning of the branch of mathematics known as topology. More than one century after Euler's paper on the bridges of Königsberg, and while Listing was introducing the concept of topology, Cayley was led by an interest in particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications for theoretical chemistry. The techniques he used mainly concern the enumeration of graphs with particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937. These were generalized by De Bruijn in 1959. Cayley linked his results on trees with contemporary studies of chemical composition. The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams: "[…] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. […] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. […]" (italics as in the original). The first textbook on graph theory was written by Dénes Kőnig, and published in 1936. Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject", and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize. One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and more specifically the results obtained by Turán in 1941, were at the origin of another branch of graph theory, extremal graph theory. The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers. A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch. The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas. The autonomous development of topology from 1860 to 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor in the common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who in 1845 published his circuit laws for calculating the voltage and current in electric circuits. The introduction of probabilistic methods in graph theory, especially in the study by Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results. == Representation == A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application.
The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph. === Visual: Graph drawing === Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow. A graph drawing should not be confused with the graph itself (the abstract, non-visual structure), as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges, not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain, some layouts may be better suited and easier to understand than others. The pioneering work of W. T. Tutte was very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings. Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied. There are other techniques to visualize a graph away from vertices and edges, including circle packings, intersection graphs, and other visualizations of the adjacency matrix. === Tabular: Graph data structures === The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures, but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation. List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: much like the edge list, each vertex has a list of the vertices it is adjacent to. Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph.
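As a concrete illustration of these list and matrix structures, here is a minimal Python sketch (illustrative names only) that builds an adjacency list and an adjacency matrix from an edge list for an undirected graph:

```python
edge_list = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
vertices = sorted({v for e in edge_list for v in e})
index = {v: i for i, v in enumerate(vertices)}

# Adjacency list: for each vertex, the list of its neighbors.
adj_list = {v: [] for v in vertices}
for x, y in edge_list:
    adj_list[x].append(y)
    adj_list[y].append(x)          # undirected: record both directions

# Adjacency matrix: rows and columns indexed by vertices,
# 1 for adjacent pairs, 0 for non-adjacent pairs.
n = len(vertices)
adj_matrix = [[0] * n for _ in range(n)]
for x, y in edge_list:
    adj_matrix[index[x]][index[y]] = 1
    adj_matrix[index[y]][index[x]] = 1

print(adj_list["c"])               # ['b', 'a', 'd']
for row in adj_matrix:
    print(row)
```

The list form uses memory proportional to |V| + |E| while the matrix form always uses |V|² entries, which is the sparse-versus-dense trade-off mentioned above.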
The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices. == Problems == === Enumeration === There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973). === Subgraphs, induced subgraphs, and minors === A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too. Finding maximal subgraphs of a certain kind is often an NP-complete problem. For example: Finding the largest complete subgraph is called the clique problem (NP-complete). One special case of subgraph isomorphism is the graph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time. A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example: Finding the largest edgeless induced subgraph or independent set is called the independent set problem (NP-complete). Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. For example, Wagner's Theorem states: A graph is planar if and only if it contains as a minor neither the complete bipartite graph K3,3 (see the Three-cottage problem) nor the complete graph K5. A similar problem, the subdivision containment problem, is to find a fixed graph as a subdivision of a given graph. A subdivision or homeomorphism of a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such as planarity. For example, Kuratowski's Theorem states: A graph is planar if and only if it contains as a subdivision neither the complete bipartite graph K3,3 nor the complete graph K5. Another problem in subdivision containment is the Kelmans–Seymour conjecture: Every 5-vertex-connected graph that is not planar contains a subdivision of the 5-vertex complete graph K5. Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs. For example: The reconstruction conjecture === Graph coloring === Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations.
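A simple way to produce such a proper vertex coloring, though generally not one with the minimum number of colors, is the greedy algorithm. A short Python sketch, assuming a graph given as an adjacency list:

```python
def greedy_coloring(adj):
    """Proper vertex coloring: scan vertices in some order and give each
    one the smallest color not used by an already-colored neighbor.
    Uses at most (maximum degree + 1) colors, not necessarily the minimum."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 4-cycle a-b-c-d-a: two colors suffice, and greedy finds them here.
cycle = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
coloring = greedy_coloring(cycle)
assert all(coloring[u] != coloring[v] for u in cycle for v in cycle[u])
print(coloring)  # {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```

Note that the number of colors the greedy method uses depends on the vertex order; finding the true chromatic number is NP-hard in general.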
Among the famous results and conjectures concerning graph coloring are the following:
Four-color theorem
Strong perfect graph theorem
Erdős–Faber–Lovász conjecture
Total coloring conjecture, also called Behzad's conjecture (unsolved)
List coloring conjecture (unsolved)
Hadwiger conjecture (graph theory) (unsolved)
=== Subsumption and unification === Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs, which are more specific and thus contain a greater amount of information, are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known. For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure. === Route problems ===
Hamiltonian path problem
Minimum spanning tree
Route inspection problem (also called the "Chinese postman problem")
Seven bridges of Königsberg
Shortest path problem
Steiner tree
Three-cottage problem
Traveling salesman problem (NP-hard)
=== Network flow === There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:
Max flow min cut theorem
=== Visibility problems ===
Museum guard problem
=== Covering problems === Covering problems in graphs may refer to various set cover problems on subsets of vertices/subgraphs. The dominating set problem is the special case of the set cover problem in which the sets are the closed neighborhoods of vertices. The vertex cover problem is the special case of the set cover problem in which the sets to cover are the edges of the graph. The original set cover problem, also called hitting set, can be described as a vertex cover in a hypergraph. === Decomposition problems === Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), gives rise to a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph Kn into n − 1 specified trees having, respectively, 1, 2, 3, ..., n − 1 edges. Some specific decomposition problems and similar problems that have been studied include:
Arboricity, a decomposition into as few forests as possible
Cycle double cover, a collection of cycles covering each edge exactly twice
Edge coloring, a decomposition into as few matchings as possible
Graph factorization, a decomposition of a regular graph into regular subgraphs of given degrees
=== Graph classes === Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
Enumerating the members of a class
Characterizing a class in terms of forbidden substructures
Ascertaining relationships among classes (e.g.
does one property of graphs imply another)
Finding efficient algorithms to decide membership in a class
Finding representations for members of a class
== See also ==
Gallery of named graphs
Glossary of graph theory
List of graph theory topics
List of unsolved problems in graph theory
Publications in graph theory
Graph algorithm
Graph theorists
=== Subareas ===
Algebraic graph theory
Geometric graph theory
Extremal graph theory
Probabilistic graph theory
Topological graph theory
Graph drawing
== Notes ==
== References ==
Lowell W. Beineke; Bjarne Toft; and Robin J. Wilson: Milestones in Graph Theory: A Century of Progress, AMS/MAA, (SPECTRUM, v.108), ISBN 978-1-4704-6431-8 (2025).
Bender, Edward A.; Williamson, S. Gill (2010). Lists, Decisions and Graphs. With an Introduction to Probability.
Berge, Claude (1958). Théorie des graphes et ses applications. Paris: Dunod. English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition, Dover, New York 2001.
Biggs, N.; Lloyd, E.; Wilson, R. (1986). Graph Theory, 1736–1936. Oxford University Press.
Bondy, J. A.; Murty, U. S. R. (2008). Graph Theory. Springer. ISBN 978-1-84628-969-9.
Bollobás, Béla; Riordan, O. M. (2003). Mathematical results on scale-free random graphs in "Handbook of Graphs and Networks" (S. Bornholdt and H.G. Schuster (eds)) (1st ed.). Weinheim: Wiley VCH.
Chartrand, Gary (1985). Introductory Graph Theory. Dover. ISBN 0-486-24775-9.
Deo, Narsingh (1974). Graph Theory with Applications to Engineering and Computer Science (PDF). Englewood, New Jersey: Prentice-Hall. ISBN 0-13-363473-6. Archived (PDF) from the original on 2019-05-17.
Gibbons, Alan (1985). Algorithmic Graph Theory. Cambridge University Press.
Golumbic, Martin (1980). Algorithmic Graph Theory and Perfect Graphs. Academic Press.
Harary, Frank (1969). Graph Theory. Reading, Massachusetts: Addison-Wesley.
Harary, Frank; Palmer, Edgar M. (1973). Graphical Enumeration. New York, New York: Academic Press.
Mahadev, N. V. R.; Peled, Uri N. (1995). Threshold Graphs and Related Topics. North-Holland.
Newman, Mark (2010). Networks: An Introduction. Oxford University Press.
Kepner, Jeremy; Gilbert, John (2011). Graph Algorithms in The Language of Linear Algebra. Philadelphia, Pennsylvania: SIAM. ISBN 978-0-89871-990-1.
== External links ==
"Graph theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Graph theory tutorial Archived 2012-01-16 at the Wayback Machine
A searchable database of small connected graphs
House of Graphs — searchable database of graphs with a drawing-based search feature.
Image gallery: graphs at the Wayback Machine (archived February 6, 2006)
Concise, annotated list of graph theory resources for researchers
rocs — a graph theory IDE
The Social Life of Routers — non-technical paper discussing graphs of people and computers
Graph Theory Software — tools to teach and learn graph theory
A list of graph algorithms Archived 2019-07-13 at the Wayback Machine with references and links to graph library implementations
=== Online textbooks ===
Phase Transitions in Combinatorial Optimization Problems, Section 3: Introduction to Graphs (2006) by Hartmann and Weigt
Digraphs: Theory Algorithms and Applications 2007 by Jorgen Bang-Jensen and Gregory Gutin
Graph Theory, by Reinhard Diestel
Wikipedia/Graph_theory
In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well-known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic (Czelakowski 2003). Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator (Czelakowski 2003). == Calculus of relations == A homogeneous binary relation is found in the power set of X × X for some set X, while a heterogeneous relation is found in the power set of X × Y, where X ≠ Y. Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and the lattice of these sets becomes an algebra through relative multiplication or composition of relations. "The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion." The conversion refers to the converse relation, which always exists, contrary to function theory. A given relation may be represented by a logical matrix; then the converse relation is represented by the transpose matrix. A relation obtained as the composition of two others is then represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic. === Example === An example of the calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements S and questions Q. There are two relations π and α from Q to S: q α a holds when a is a direct answer to question q. The other relation, q π p, holds when p is a presupposition of question q. The converse relation πT runs from S to Q, so that the composition πTα is a homogeneous relation on S. The art of putting the right question to elicit a sufficient answer is recognized in the Socratic method of dialogue. === Functions === The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation R that satisfies the formula R T R ⊆ I , {\displaystyle R^{T}R\subseteq I,} where I is the identity relation on the range of R. The injective property corresponds to univalence of R T {\displaystyle R^{T}} , or the formula R R T ⊆ I , {\displaystyle RR^{T}\subseteq I,} where this time I is the identity on the domain of R. But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is I ⊆ R R T . {\displaystyle I\subseteq RR^{T}.} Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation. The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using R ¯ {\displaystyle {\bar {R}}} for the complement of relation R. These equivalences provide alternative formulas for univalent relations ( R I ¯ ⊆ R ¯ {\displaystyle R{\bar {I}}\subseteq {\bar {R}}} ), and total relations ( R ¯ ⊆ R I ¯ {\displaystyle {\bar {R}}\subseteq R{\bar {I}}} ).
Therefore, mappings satisfy the formula R ¯ = R I ¯ . {\displaystyle {\bar {R}}=R{\bar {I}}.} Schmidt uses this principle as "slipping below negation from the left". For a mapping f, f A ¯ = f A ¯ . {\displaystyle f{\bar {A}}={\overline {fA}}.} === Abstraction === The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it. Then he asked if every algebra satisfying the axioms could be represented by a set relation. The negative answer opened the frontier of abstract algebraic logic. == Algebras as models of logics == Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory. In algebraic logic: Variables are tacitly universally quantified over some universe of discourse. There are no existentially quantified variables or open formulas; Terms are built up from variables using primitive and defined operations. There are no connectives; Formulas, built from terms in the usual way, can be equated if they are logically equivalent. To express a tautology, equate a formula with a truth value; The rules of proof are the substitution of equals for equals, and uniform replacement. Modus ponens remains valid, but is seldom employed. In the table below, the left column contains one or more logical or mathematical systems, and the algebraic structures that are their models are shown on the right in the same row:
Classical sentential logic: Boolean algebra
Intuitionistic propositional logic: Heyting algebra
Łukasiewicz logic: MV-algebra
Modal logic: Boolean algebra with operators (e.g., modal algebra)
First-order logic with equality: cylindric algebra, polyadic algebra
Set theory: combinatory logic, relation algebra
Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators." Algebraic formalisms going beyond first-order logic in at least some respects include: Combinatory logic, having the expressive power of set theory; Relation algebra, arguably the paradigmatic algebraic logic, which can express Peano arithmetic and most axiomatic set theories, including the canonical ZFC. == History == Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918. But nearly all of Leibniz's known work on algebraic logic was published only in 1903 after Louis Couturat discovered it in Leibniz's Nachlass. Parkinson (1966) and Loemker (1969) translated selections from Couturat's volume into English. Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole and Augustus De Morgan. In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his Principles of the Algebra of Logic in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic". Logic turned more algebraic when binary relations were combined with composition of relations. For sets A and B, a relation over A and B is represented as a member of the power set of A×B with properties described by Boolean algebra. The "calculus of relations" is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder. In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K.
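The calculus of relations sketched above lends itself to direct experimentation with logical matrices. The following Python sketch (illustrative, not from any particular library) composes relations by Boolean matrix multiplication, takes the converse as the transpose, and tests the univalence condition RᵀR ⊆ I from the Functions section:

```python
def compose(R, S):
    """Composition of relations as Boolean matrix multiplication.
    Assumes the inner dimensions agree: len(R[0]) == len(S)."""
    n, p = len(R), len(S[0])
    return [[int(any(R[i][k] and S[k][j] for k in range(len(S))))
             for j in range(p)] for i in range(n)]

def transpose(R):
    """Converse relation: the transpose of the logical matrix."""
    return [list(col) for col in zip(*R)]

def contained_in(R, S):
    """R ⊆ S, checked entrywise."""
    return all(r <= s for row_r, row_s in zip(R, S)
               for r, s in zip(row_r, row_s))

def is_univalent(R):
    """Univalent (partial-functional) relation: R^T R ⊆ I."""
    RtR = compose(transpose(R), R)
    n = len(RtR)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    return contained_in(RtR, I)

# A partial function on {0, 1, 2}: 0 ↦ 1 and 2 ↦ 2 (1 has no image).
R = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 1]]
print(is_univalent(R))  # True
```

Totality (I ⊆ RRᵀ) can be tested the same way, and a relation passing both tests is a mapping in the Loewner–Schmidt sense.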
In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions. The "Boole–Schröder algebra of logic" was developed at the University of California, Berkeley in a textbook by Clarence Lewis in 1918. He treated the logic of relations as derived from the propositional functions of two or more variables. Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy. Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of Principia Mathematica, and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations". According to Helena Rasiowa, "The years 1920-40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method. Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic." Brady (2000) discusses the rich historical connections between algebraic logic and model theory. The founders of model theory, Ernst Schröder and Leopold Löwenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set-theoretic model theory as a major branch of contemporary mathematical logic, also initiated abstract algebraic logic with relation algebras, invented cylindric algebra, and co-discovered the Lindenbaum–Tarski algebra. In the practice of the calculus of relations, Jacques Riguet used algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated rectangular relations by taking the outer product of logical vectors; these contribute to the non-enlargeable rectangles of formal concept analysis. Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in Lenzen (2004). To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see Zalta (2000).
== See also ==
Boolean algebra
Codd's theorem
Computer algebra
Universal algebra
== References ==
== Sources ==
Brady, Geraldine (2000). From Peirce to Skolem: A Neglected Chapter in the History of Logic. Studies in the History and Philosophy of Mathematics. Amsterdam, Netherlands: North-Holland/Elsevier Science BV. ISBN 9780080532028.
Czelakowski, Janusz (2003). "Review: Algebraic Methods in Philosophical Logic by J. Michael Dunn and Gary M. Hardegree". The Bulletin of Symbolic Logic. 9. Association for Symbolic Logic, Cambridge University Press. ISSN 1079-8986. JSTOR 3094793.
Lenzen, Wolfgang, 2004, "Leibniz's Logic" in Gabbay, D., and Woods, J., eds., Handbook of the History of Logic, Vol. 3: The Rise of Modern Logic from Leibniz to Frege. North-Holland: 1-84.
Loemker, Leroy (1969) [First edition 1956], Leibniz: Philosophical Papers and Letters (2nd ed.), Reidel.
Parkinson, G. H. R. (1966). Leibniz: Logical Papers.
Oxford University Press.
Zalta, E. N., 2000, "A (Leibnizian) Theory of Concepts," Philosophiegeschichte und logische Analyse / Logical Analysis and History of Philosophy 3: 137-183.
== Further reading ==
J. Michael Dunn; Gary M. Hardegree (2001). Algebraic Methods in Philosophical Logic. Oxford University Press. ISBN 978-0-19-853192-0. Good introduction for readers with prior exposure to non-classical logics but without much background in order theory and/or universal algebra; the book covers these prerequisites at length. This book however has been criticized for poor and sometimes incorrect presentation of AAL results. Review by Janusz Czelakowski
Hajnal Andréka, István Németi and Ildikó Sain (2001). "Algebraic logic". In Dov M. Gabbay, Franz Guenthner (ed.). Handbook of Philosophical Logic, vol 2 (2nd ed.). Springer. ISBN 978-0-7923-7126-7. Draft.
Ramon Jansana (2011), "Propositional Consequence Relations and Algebraic Logic". Stanford Encyclopedia of Philosophy. Mainly about abstract algebraic logic.
Stanley Burris (2015), "The Algebra of Logic Tradition". Stanford Encyclopedia of Philosophy.
Willard Quine, 1976, "Algebraic Logic and Predicate Functors", pages 283 to 307 in The Ways of Paradox, Harvard University Press.
Historical perspective:
Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots. Princeton University Press.
Irving Anellis & N. Houser (1991) "Nineteenth Century Roots of Algebraic Logic and Universal Algebra", pages 1–36 in Algebraic Logic, Colloquia Mathematica Societatis János Bolyai # 54, János Bolyai Mathematical Society & Elsevier ISBN 0444885439
== External links ==
Algebraic logic at PhilPapers
Wikipedia/Algebraic_logic
Linear algebra is the branch of mathematics concerning linear equations such as a 1 x 1 + ⋯ + a n x n = b , {\displaystyle a_{1}x_{1}+\cdots +a_{n}x_{n}=b,} linear maps such as ( x 1 , … , x n ) ↦ a 1 x 1 + ⋯ + a n x n , {\displaystyle (x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\cdots +a_{n}x_{n},} and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to function spaces. Linear algebra is also used in most sciences and fields of engineering because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point. == History == The procedure (using counting rods) for solving simultaneous linear equations now called Gaussian elimination appears in the ancient Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations. The first systematic methods for solving linear systems used determinants and were first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy. In 1844 Hermann Grassmann published his "Theory of Extension" which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. Linear algebra grew with ideas noted in the complex plane. For instance, two numbers w and z in C {\displaystyle \mathbb {C} } have a difference w – z, and the line segments wz and 0(w − z) are of the same length and direction. The segments are equipollent. The four-dimensional system H {\displaystyle \mathbb {H} } of quaternions was discovered by W.R. Hamilton in 1843. The term vector was introduced as v = xi + yj + zk representing a point in space. The quaternion difference p – q also produces a segment equipollent to pq. Other hypercomplex number systems also used the idea of a linear space with a basis. Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants". 
Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce extended the work later. The telegraph required an explanatory system, and the 1873 publication by James Clerk Maxwell of A Treatise on Electricity and Magnetism instituted a field theory of forces and required differential geometry for expression. Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations. The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modeling and simulations. == Vector spaces == Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through vector spaces is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract. A vector space over a field F (often the field of the real numbers or of the complex numbers) is a set V equipped with two binary operations. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The axioms that addition and scalar multiplication must satisfy are the following. (In the list below, u, v and w are arbitrary elements of V, and a and b are arbitrary scalars in the field F.)
Associativity of addition: u + (v + w) = (u + v) + w
Commutativity of addition: u + v = v + u
Identity element of addition: there exists an element 0 in V, called the zero vector, such that v + 0 = v for all v in V
Inverse elements of addition: for every v in V, there exists an element −v in V such that v + (−v) = 0
Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity of F
Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
The first four axioms mean that V is an abelian group under addition. The elements of a specific vector space may have various natures; for example, they could be tuples, sequences, functions, polynomials, or matrices. Linear algebra is concerned with the properties of such objects that are common to all vector spaces. === Linear maps === Linear maps are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear map (also called, in some contexts, linear transformation or linear mapping) is a map T : V → W {\displaystyle T:V\to W} that is compatible with addition and scalar multiplication, that is T ( u + v ) = T ( u ) + T ( v ) , T ( a v ) = a T ( v ) {\displaystyle T(\mathbf {u} +\mathbf {v} )=T(\mathbf {u} )+T(\mathbf {v} ),\quad T(a\mathbf {v} )=aT(\mathbf {v} )} for any vectors u, v in V and scalar a in F. An equivalent condition is that for any vectors u, v in V and scalars a, b in F, one has T ( a u + b v ) = a T ( u ) + b T ( v ) {\displaystyle T(a\mathbf {u} +b\mathbf {v} )=aT(\mathbf {u} )+bT(\mathbf {v} )} . When V = W are the same vector space, a linear map T : V → V is also known as a linear operator on V. A bijective linear map between two vector spaces (that is, every vector from the second space is associated with exactly one in the first) is an isomorphism.
Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.

=== Subspaces, span, and basis ===
The study of those subsets of vector spaces that are in themselves vector spaces under the induced operations is fundamental, as it is for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for every u, v in W, and every a in F. (These conditions suffice to imply that W is a vector space.) For example, given a linear map T : V → W, the image T(V) of V, and the inverse image $T^{-1}(0)$ of 0 (called kernel or null space), are linear subspaces of W and V, respectively. Another important way of forming a subspace is to consider linear combinations of a set S of vectors: the set of all sums $a_{1}\mathbf{v}_{1}+a_{2}\mathbf{v}_{2}+\cdots +a_{k}\mathbf{v}_{k}$, where v1, v2, ..., vk are in S, and a1, a2, ..., ak are in F, forms a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S. A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take zero for every coefficient ai. A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is, not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if one were to remove w from S. One may continue to remove elements of S until getting a linearly independent spanning set. Such a linearly independent set that spans a vector space V is called a basis of V. The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if S is a linearly independent set, and T is a spanning set such that S ⊆ T, then there is a basis B such that S ⊆ B ⊆ T. Any two bases of a vector space V have the same cardinality, which is called the dimension of V; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field F are isomorphic if and only if they have the same dimension. If any basis of V (and therefore every basis) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite-dimensional, the equality of the dimensions implies U = V. If U1 and U2 are subspaces of V, then $\dim(U_{1}+U_{2})=\dim U_{1}+\dim U_{2}-\dim(U_{1}\cap U_{2})$, where U1 + U2 denotes the span of U1 ∪ U2.
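The image and kernel discussed at the start of this section can be computed mechanically, and their dimensions are tied together by the standard rank–nullity theorem, dim im T + dim ker T = dim V. A minimal sketch, assuming NumPy and SciPy; the particular matrix is an illustrative choice whose third row is the sum of the first two.

```python
import numpy as np
from scipy.linalg import null_space

# A 3x4 matrix A represents a linear map T : R^4 -> R^3. Its third row is the
# sum of the first two, so the rank (dimension of the image) is 2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

rank = np.linalg.matrix_rank(A)     # dimension of the image (range) of T
kernel = null_space(A)              # orthonormal basis of the kernel, via the SVD
nullity = kernel.shape[1]

print(rank, nullity)                # 2 2
print(np.allclose(A @ kernel, 0.))  # True: kernel vectors are mapped to zero
print(rank + nullity == A.shape[1]) # True: the rank-nullity theorem
```

SciPy computes the kernel through the singular value decomposition rather than literal Gaussian elimination, but the two approaches answer the same questions.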
== Matrices ==
Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra. Let V be a finite-dimensional vector space over a field F, and (v1, v2, ..., vm) be a basis of V (thus m is the dimension of V). By definition of a basis, the map $F^{m}\to V$, $(a_{1},\ldots ,a_{m})\mapsto a_{1}\mathbf{v}_{1}+\cdots +a_{m}\mathbf{v}_{m}$, is a bijection from $F^{m}$, the set of the sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if $F^{m}$ is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component. This isomorphism allows representing a vector by its inverse image under this isomorphism, that is, by the coordinate vector (a1, ..., am) or by the column matrix $$\begin{bmatrix}a_{1}\\\vdots \\a_{m}\end{bmatrix}.$$ If W is another finite-dimensional vector space (possibly the same), with a basis (w1, ..., wn), a linear map f from W to V is well defined by its values on the basis elements, that is, (f(w1), ..., f(wn)). Thus, f is well represented by the list of the corresponding column matrices. That is, if $f(w_{j})=a_{1,j}v_{1}+\cdots +a_{m,j}v_{m}$ for j = 1, ..., n, then f is represented by the matrix $$\begin{bmatrix}a_{1,1}&\cdots &a_{1,n}\\\vdots &\ddots &\vdots \\a_{m,1}&\cdots &a_{m,n}\end{bmatrix},$$ with m rows and n columns. Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing the same concepts. Two matrices that encode the same linear transformation in different bases are called similar. Two matrices are said to be equivalent if one can be transformed into the other by elementary row and column operations; for a matrix representing a linear map from W to V, the row operations correspond to changes of basis in V and the column operations correspond to changes of basis in W. Every matrix is equivalent to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that a part of the basis of W is mapped bijectively on a part of the basis of V, and that the remaining basis elements of W, if any, are mapped to zero. Gaussian elimination is the basic algorithm for finding these elementary operations, and proving these results.

== Linear systems ==
A finite set of linear equations in a finite set of variables, for example, x1, x2, ..., xn, or x, y, ..., z, is called a system of linear equations or a linear system. Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory have been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems. For example, let (S) be the linear system $$\begin{aligned}2x+y-z&=8\\-3x-y+2z&=-11\\-2x+y+2z&=-3.\end{aligned}$$
To such a system, one may associate its matrix $$M=\begin{bmatrix}2&1&-1\\-3&-1&2\\-2&1&2\end{bmatrix}$$ and its right-hand side vector $$\mathbf{v}=\begin{bmatrix}8\\-11\\-3\end{bmatrix}.$$ Let T be the linear transformation associated with the matrix M. A solution of the system (S) is a vector $$\mathbf{X}=\begin{bmatrix}x\\y\\z\end{bmatrix}$$ such that $T(\mathbf{X})=\mathbf{v}$, that is, an element of the preimage of v by T. Let (S′) be the associated homogeneous system, where the right-hand sides of the equations are set to zero: $$\begin{aligned}2x+y-z&=0\\-3x-y+2z&=0\\-2x+y+2z&=0.\end{aligned}$$ The solutions of (S′) are exactly the elements of the kernel of T or, equivalently, M. Gaussian elimination consists of performing elementary row operations on the augmented matrix $$\left[\begin{array}{c|c}M&\mathbf{v}\end{array}\right]=\left[\begin{array}{ccc|c}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{array}\right]$$ to put it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is $$\left[\begin{array}{ccc|c}1&0&0&2\\0&1&0&3\\0&0&1&-1\end{array}\right],$$ showing that the system (S) has the unique solution $$x=2,\quad y=3,\quad z=-1.$$ It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, including the computation of ranks, kernels, and matrix inverses.

== Endomorphisms and square matrices ==
A linear endomorphism is a linear map that maps a vector space V to itself. If V has a basis of n elements, such an endomorphism is represented by a square matrix of size n. In contrast with general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra, used in many parts of mathematics, including geometric transformations, coordinate changes, and quadratic forms.

=== Determinant ===
The determinant of a square matrix A is defined to be $$\sum_{\sigma\in S_{n}}(-1)^{\sigma}a_{1\sigma(1)}\cdots a_{n\sigma(n)},$$ where $S_{n}$ is the group of all permutations of n elements, σ is a permutation, and $(-1)^{\sigma}$ the parity of the permutation. A matrix is invertible if and only if the determinant is invertible (i.e., nonzero if the scalars belong to a field). Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm. The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense since this determinant is independent of the choice of the basis.
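Both approaches can be tried on the system (S) above. The following Python sketch applies Cramer's rule and, for comparison, the LAPACK-backed LU factorization (a form of Gaussian elimination) behind NumPy's solver; as the text notes, Cramer's rule is shown only for illustration, not as a practical method.

```python
import numpy as np

# The system (S) from the text: M X = v, with det(M) != 0, so Cramer's rule applies.
M = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
v = np.array([8., -11., -3.])

d = np.linalg.det(M)                  # -1.0, nonzero, so (S) has a unique solution
X_cramer = np.empty(3)
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = v                      # replace column i by the right-hand side
    X_cramer[i] = np.linalg.det(Mi) / d

X_gauss = np.linalg.solve(M, v)       # LU factorization (Gaussian elimination) via LAPACK
print(X_cramer)                       # [ 2.  3. -1.]
print(np.allclose(X_cramer, X_gauss)) # True: both give x = 2, y = 3, z = -1
```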
=== Eigenvalues and eigenvectors ===
If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f. If the dimension of V is finite, and a basis has been chosen, f and v may be represented, respectively, by a square matrix M and a column matrix z; the equation defining eigenvectors and eigenvalues becomes $Mz=az.$ Using the identity matrix I, whose entries are all zero, except those of the main diagonal, which are equal to one, this may be rewritten $(M-aI)z=0.$ As z is required to be nonzero, this means that M − aI is a singular matrix, and thus that its determinant $\det(M-aI)$ equals zero. The eigenvalues are thus the roots of the polynomial $\det(xI-M).$ If V is of dimension n, this is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, n eigenvalues. If a basis exists that consists only of eigenvectors, the matrix of f on this basis has a very simple structure: it is a diagonal matrix such that the entries on the main diagonal are eigenvalues, and the other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable. A real symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being $$\begin{bmatrix}0&1\\0&0\end{bmatrix}$$ (it cannot be diagonalizable since its square is the zero matrix, and the square of a nonzero diagonal matrix is never zero). When an endomorphism is not diagonalizable, there are bases on which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not need to extend the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so as to contain all eigenvalues, and differs from the diagonal form only by some entries that are just above the main diagonal and are equal to 1.

== Duality ==
A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V* or V′. If v1, ..., vn is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, ..., n, a linear map vi* such that vi*(vi) = 1 and vi*(vj) = 0 if j ≠ i. These linear maps form a basis of V*, called the dual basis of v1, ..., vn. (If V is not finite-dimensional, the vi* may be defined similarly; they are linearly independent, but do not form a basis.) For v in V, the map $f\mapsto f(\mathbf{v})$ is a linear form on V*. This defines the canonical linear map from V into (V*)*, the dual of V*, called the double dual or bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.)
There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation $\langle f,\mathbf{x}\rangle$ for denoting f(x).

=== Dual map ===
Let $f:V\to W$ be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map $f^{*}:W^{*}\to V^{*}$ between the dual spaces, which is called the dual or the transpose of f. If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f* over the dual bases is the transpose $M^{\mathsf{T}}$ of M, obtained by exchanging rows and columns. If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by $\langle h^{\mathsf{T}},M\mathbf{v}\rangle=\langle h^{\mathsf{T}}M,\mathbf{v}\rangle.$ To highlight this symmetry, the two members of this equality are sometimes written $\langle h^{\mathsf{T}}\mid M\mid\mathbf{v}\rangle.$

=== Inner-product spaces ===
Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map $\langle\cdot,\cdot\rangle:V\times V\to F$ that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F. Conjugate symmetry: $\langle\mathbf{u},\mathbf{v}\rangle=\overline{\langle\mathbf{v},\mathbf{u}\rangle}$; over $\mathbb{R}$, the inner product is symmetric. Linearity in the first argument: $\langle a\mathbf{u},\mathbf{v}\rangle=a\langle\mathbf{u},\mathbf{v}\rangle$ and $\langle\mathbf{u}+\mathbf{v},\mathbf{w}\rangle=\langle\mathbf{u},\mathbf{w}\rangle+\langle\mathbf{v},\mathbf{w}\rangle.$ Positive-definiteness: $\langle\mathbf{v},\mathbf{v}\rangle\geq 0$, with equality only for v = 0. We can define the length of a vector v in V by $\|\mathbf{v}\|^{2}=\langle\mathbf{v},\mathbf{v}\rangle,$ and we can prove the Cauchy–Schwarz inequality: $|\langle\mathbf{u},\mathbf{v}\rangle|\leq\|\mathbf{u}\|\cdot\|\mathbf{v}\|.$ In particular, the quantity $\frac{|\langle\mathbf{u},\mathbf{v}\rangle|}{\|\mathbf{u}\|\cdot\|\mathbf{v}\|}$ is at most 1, and so it can be interpreted as the cosine of the angle between the two vectors. Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a1v1 + ⋯ + anvn, then $a_{i}=\langle\mathbf{v},\mathbf{v}_{i}\rangle.$ The inner product facilitates the construction of many useful concepts.
For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying $\langle T\mathbf{u},\mathbf{v}\rangle=\langle\mathbf{u},T^{*}\mathbf{v}\rangle.$ If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.

== Relationship with geometry ==
There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, lines and planes, are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra. Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections, transform lines into lines. It follows that they can be defined, specified, and studied in terms of linear maps. This is also the case for homographies and Möbius transformations when considered as transformations of a projective space. Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines, and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent. In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, allowing the consideration of geometry over arbitrary fields, including finite fields. Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at the elementary level, as a subfield of linear algebra.

== Usage and applications ==
Linear algebra is used in almost all areas of mathematics, thus making it relevant in almost all scientific domains that use mathematics. These applications may be divided into several wide categories.

=== Functional analysis ===
Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions) and Fourier analysis (orthogonal basis).

=== Scientific computation ===
Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best-known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, to adapt them to the specificities of the computer (cache size, number of available cores, ...). Since the 1960s there have been processors with specialized instructions for optimizing the operations of linear algebra, optional array processors under the control of a conventional processor, supercomputers designed for array processing, and conventional processors augmented with vector registers.
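High-level numerical environments reach these optimized kernels directly; NumPy, for example, delegates its dense routines to whatever BLAS/LAPACK build it is linked against. A minimal sketch that also ties back to the orthonormal bases discussed above: the QR factorization (computed by LAPACK with Householder reflections, a numerically stable relative of the Gram–Schmidt procedure) orthonormalizes the columns of an arbitrary matrix, after which coordinates are plain inner products.

```python
import numpy as np

# QR factorization orthonormalizes the columns of an arbitrary (here random) matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(4)))   # True: the columns of Q are orthonormal

# With an orthonormal basis, coordinates are inner products: a_i = <x, q_i>.
x = rng.standard_normal(4)
coeffs = Q.T @ x
print(np.allclose(Q @ coeffs, x))        # True: x = a_1 q_1 + ... + a_4 q_4
```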
Some contemporary processors, typically graphics processing units (GPU), are designed with a matrix structure, for optimizing the operations of linear algebra.

=== Geometry of ambient space ===
The modeling of ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy, for describing the shape of the Earth; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains. In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra.

=== Study of complex systems ===
Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are sought into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions; this is called a linear model or first-order approximation. Linear models are frequently used for complex nonlinear real-world systems because they make parametrization more manageable. In both cases, very large matrices are generally involved. Weather forecasting (or more specifically, parametrization for atmospheric modeling) is a typical example of a real-world application, where the whole Earth atmosphere is divided into cells, say, 100 km wide and 100 km high.

=== Fluid mechanics, fluid dynamics, and thermal energy systems ===
Linear algebra, a branch of mathematics dealing with vector spaces and linear mappings between these spaces, plays a critical role in various engineering disciplines, including fluid mechanics, fluid dynamics, and thermal energy systems. Its application in these fields is multifaceted and indispensable for solving complex problems. In fluid mechanics, linear algebra is integral to understanding and solving problems related to the behavior of fluids. It assists in the modeling and simulation of fluid flow, providing essential tools for the analysis of fluid dynamics problems. For instance, linear algebraic techniques are used to solve systems of differential equations that describe fluid motion. These equations, often complex and non-linear, can be linearized using linear algebra methods, allowing for simpler solutions and analyses. In the field of fluid dynamics, linear algebra finds its application in computational fluid dynamics (CFD), a branch that uses numerical analysis and data structures to solve and analyze problems involving fluid flows. CFD relies heavily on linear algebra for the computation of fluid flow and heat transfer in various applications. For example, the Navier–Stokes equations, fundamental in fluid dynamics, are often solved using techniques derived from linear algebra. This includes the use of matrices and vectors to represent and manipulate fluid flow fields. Furthermore, linear algebra plays a crucial role in thermal energy systems, particularly in power systems analysis. It is used to model and optimize the generation, transmission, and distribution of electric power. Linear algebraic concepts such as matrix operations and eigenvalue problems are employed to enhance the efficiency, reliability, and economic performance of power systems.
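The first-order approximation mentioned under the study of complex systems above can be sketched concretely. In the following Python example, the nonlinear map and the finite-difference step are illustrative assumptions, not a production method; the point is only that the linear model f(p) + J(p)(x − p), with J the Jacobian matrix, tracks the true function near p.

```python
import numpy as np

# First-order approximation: near a point p, a smooth map f is replaced by the
# linear model f(p) + J(p) @ (x - p), where J is the Jacobian matrix.
def f(x):
    return np.array([np.sin(x[0]) * x[1], x[0] ** 2 + np.exp(x[1])])

def jacobian_fd(f, p, h=1e-6):
    """Crude forward-difference Jacobian; adequate for a sketch."""
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        step = np.zeros_like(p)
        step[j] = h
        J[:, j] = (f(p + step) - f0) / h
    return J

p = np.array([0.5, 1.0])
J = jacobian_fd(f, p)
dx = np.array([1e-3, -2e-3])
print(f(p + dx))        # true value
print(f(p) + J @ dx)    # linear model: agrees to several decimal places
```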
In power systems analysis, these applications of linear algebra are vital for the design and operation of modern power systems, including renewable energy sources and smart grids. Overall, the application of linear algebra in fluid mechanics, fluid dynamics, and thermal energy systems is an example of the profound interconnection between mathematics and engineering. It provides engineers with the necessary tools to model, analyze, and solve complex problems in these domains, leading to advancements in technology and industry.

== Extensions and generalizations ==
This section presents several related topics that do not appear generally in elementary textbooks on linear algebra but are commonly considered, in advanced mathematics, as parts of linear algebra.

=== Module theory ===
The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring R, and this gives the structure called a module over R, or R-module. The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if R is not a field, there are modules that do not have any basis. The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring. Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is no such complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules. Modules over the integers can be identified with abelian groups, since multiplication by an integer may be identified with repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over a principal ideal domain. There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms generally have a computational complexity that is much higher than that of similar algorithms over a field. For more details, see Linear equation over a ring.

=== Multilinear algebra and tensors ===
In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of several different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f : V → F where F is the field of scalars. Multilinear maps $T:V^{n}\to F$ can be described via tensor products of elements of V*. If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).
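The module-theoretic point above about invertibility over a ring is easy to illustrate in the 2×2 case over the integers. The helper below is a plain-Python sketch written for this illustration only: over ℤ the units are 1 and −1, so an integer matrix has an inverse with integer entries exactly when its determinant is ±1 (such matrices are called unimodular).

```python
# Over the ring of integers Z, the units are 1 and -1, so an integer matrix has an
# inverse with integer entries exactly when its determinant is +1 or -1.
def inverse_over_Z(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    if det not in (1, -1):
        return None                  # det is not a unit in Z: no integer inverse
    # adjugate divided by the determinant; exact since det is +1 or -1
    return [[ d // det, -b // det],
            [-c // det,  a // det]]

print(inverse_over_Z([[2, 1], [1, 1]]))  # [[1, -1], [-1, 2]]  (det = 1)
print(inverse_over_Z([[2, 0], [0, 2]]))  # None: det = 4 is invertible over Q, not over Z
```

The second matrix shows the contrast with fields: viewed over the rationals it is perfectly invertible, but as a map of the free module ℤ² it has no module-homomorphism inverse.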
=== Topological vector spaces ===
Vector spaces that are not finite-dimensional often require additional structure to be tractable. A normed vector space is a vector space along with a function called a norm, which measures the "size" of elements. The norm induces a metric, which measures the distance between elements, and induces a topology, which allows for a definition of continuous maps. The metric also allows for a definition of limits and completeness; a normed vector space that is complete is known as a Banach space. A complete metric space along with the additional structure of an inner product (a conjugate symmetric sesquilinear form) is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are Lp spaces, which are Banach spaces, and especially the L2 space of square-integrable functions, which is the only Hilbert space among them. Functional analysis is of particular importance to quantum mechanics, the theory of partial differential equations, digital signal processing, and electrical engineering. It also provides the foundation and theoretical framework that underlies the Fourier transform and related methods.

== See also ==
Fundamental matrix (computer vision)
Geometric algebra
Linear programming
Linear regression, a statistical estimation method
Numerical linear algebra
Outline of linear algebra
Transformation matrix

== Further reading ==
=== History ===
Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra", American Mathematical Monthly 86 (1979), pp. 809–817.
Grassmann, Hermann (1844), Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert, Leipzig: O. Wigand

=== Introductory textbooks ===
Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
Bretscher, Otto (2004), Linear Algebra with Applications (3rd ed.), Prentice Hall, ISBN 978-0-13-145334-0
Farin, Gerald; Hansford, Dianne (2004), Practical Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-1-56881-234-2
Hefferon, Jim (2020). Linear Algebra (4th ed.). Ann Arbor, Michigan: Orthogonal Publishing. ISBN 978-1-944325-11-4. OCLC 1178900366. OL 30872051M.
Kolman, Bernard; Hill, David R. (2007), Elementary Linear Algebra with Applications (9th ed.), Prentice Hall, ISBN 978-0-13-229654-0
Lay, David C. (2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall, ISBN 978-0-13-185785-8
Murty, Katta G. (2014), Computational and Algorithmic Linear Algebra and n-Dimensional Geometry, World Scientific Publishing, ISBN 978-981-4366-62-5. Chapter 1: Systems of Simultaneous Linear Equations
Noble, B.; Daniel, J. W. (1977), Applied Linear Algebra (2nd ed.), Pearson Higher Education, ISBN 978-0130413437
Poole, David (2010), Linear Algebra: A Modern Introduction (3rd ed.), Cengage – Brooks/Cole, ISBN 978-0-538-73545-2
Ricardo, Henry (2010), A Modern Introduction To Linear Algebra (1st ed.), CRC Press, ISBN 978-1-4398-0040-9
Sadun, Lorenzo (2008), Applied Linear Algebra: the decoupling principle (2nd ed.), AMS, ISBN 978-0-8218-4441-0
Strang, Gilbert (2016), Introduction to Linear Algebra (5th ed.), Wellesley-Cambridge Press, ISBN 978-09802327-7-6
Takahashi, Shin; Inoue, Iroha; Trend-Pro Co., Ltd. (2012), The Manga Guide to Linear Algebra, ISBN 978-1-59327-413-9

=== Advanced textbooks ===
Bhatia, Rajendra (November 15, 1996), Matrix Analysis, Graduate Texts in Mathematics, Springer, ISBN 978-0-387-94846-1
Demmel, James W. (August 1, 1997), Applied Numerical Linear Algebra, SIAM, ISBN 978-0-89871-389-3
Dym, Harry (2007), Linear Algebra in Action, AMS, ISBN 978-0-8218-3813-6
Gantmacher, Felix R. (2005), Applications of the Theory of Matrices, Dover Publications, ISBN 978-0-486-44554-0
Gantmacher, Felix R. (1990), Matrix Theory Vol. 1 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-1376-8
Gantmacher, Felix R. (2000), Matrix Theory Vol. 2 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-2664-5
Gelfand, Israel M. (1989), Lectures on Linear Algebra, Dover Publications, ISBN 978-0-486-66082-0
Glazman, I. M.; Ljubic, Ju. I. (2006), Finite-Dimensional Linear Analysis, Dover Publications, ISBN 978-0-486-45332-3
Golan, Johnathan S. (January 2007), The Linear Algebra a Beginning Graduate Student Ought to Know (2nd ed.), Springer, ISBN 978-1-4020-5494-5
Golan, Johnathan S. (August 1995), Foundations of Linear Algebra, Kluwer, ISBN 0-7923-3614-3
Greub, Werner H. (October 16, 1981), Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer, ISBN 978-0-8018-5414-9
Hoffman, Kenneth; Kunze, Ray (1971), Linear algebra (2nd ed.), Englewood Cliffs, N.J.: Prentice-Hall, Inc., MR 0276251
Halmos, Paul R. (August 20, 1993), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-90093-3
Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (September 7, 2018), Linear Algebra (5th ed.), Pearson, ISBN 978-0-13-486024-4
Horn, Roger A.; Johnson, Charles R. (February 23, 1990), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
Horn, Roger A.; Johnson, Charles R. (June 24, 1994), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1
Lang, Serge (March 9, 2004), Linear Algebra, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-0-387-96412-6
Marcus, Marvin; Minc, Henryk (2010), A Survey of Matrix Theory and Matrix Inequalities, Dover Publications, ISBN 978-0-486-67102-4
Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on October 31, 2009
Mirsky, L. (1990), An Introduction to Linear Algebra, Dover Publications, ISBN 978-0-486-66434-7
Shafarevich, I. R.; Remizov, A. O (2012), Linear Algebra and Geometry, Springer, ISBN 978-3-642-30993-9
Shilov, Georgi E. (June 1, 1977), Linear algebra, Dover Publications, ISBN 978-0-486-63518-7
Shores, Thomas S. (December 6, 2006), Applied Linear Algebra and Matrix Analysis, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-33194-2
Smith, Larry (May 28, 1998), Linear Algebra, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-98455-1
Trefethen, Lloyd N.; Bau, David (1997), Numerical Linear Algebra, SIAM, ISBN 978-0-898-71361-9

=== Study guides and outlines ===
Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs Quick Review), Cliffs Notes, ISBN 978-0-8220-5331-6
Lipschutz, Seymour; Lipson, Marc (December 6, 2000), Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill, ISBN 978-0-07-136200-9
Lipschutz, Seymour (January 1, 1989), 3,000 Solved Problems in Linear Algebra, McGraw–Hill, ISBN 978-0-07-038023-3
McMahon, David (October 28, 2005), Linear Algebra Demystified, McGraw–Hill Professional, ISBN 978-0-07-146579-3
Zhang, Fuzhen (April 7, 2009), Linear Algebra: Challenging Problems for Students, The Johns Hopkins University Press, ISBN 978-0-8018-9125-0

== External links ==
=== Online resources ===
MIT Linear Algebra Video Lectures, a series of 34 recorded lectures by Professor Gilbert Strang (Spring 2010)
International Linear Algebra Society
"Linear algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Linear Algebra on MathWorld
Matrix and Linear Algebra Terms on Earliest Known Uses of Some of the Words of Mathematics
Earliest Uses of Symbols for Matrices and Vectors on Earliest Uses of Various Mathematical Symbols
Essence of linear algebra, a video presentation from 3Blue1Brown of the basics of linear algebra, with emphasis on the relationship between the geometric, the matrix and the abstract points of view

=== Online books ===
Beezer, Robert A. (2009) [2004]. A First Course in Linear Algebra. Gainesville, Florida: University Press of Florida. ISBN 9781616100049.
Connell, Edwin H. (2004) [1999]. Elements of Abstract and Linear Algebra. University of Miami, Coral Gables, Florida: Self-published.
Hefferon, Jim (2020). Linear Algebra (4th ed.). Ann Arbor, Michigan: Orthogonal Publishing. ISBN 978-1-944325-11-4. OCLC 1178900366. OL 30872051M.
Margalit, Dan; Rabinoff, Joseph (2019). Interactive Linear Algebra. Georgia Institute of Technology, Atlanta, Georgia: Self-published.
Matthews, Keith R. (2013) [1991]. Elementary Linear Algebra. University of Queensland, Brisbane, Australia: Self-published.
Mikaelian, Vahagn H. (2020) [2017]. Linear Algebra: Theory and Algorithms. Yerevan, Armenia: Self-published – via ResearchGate.
Sharipov, Ruslan, Course of linear algebra and multidimensional geometry
Treil, Sergei, Linear Algebra Done Wrong
Wikipedia/Linear_algebra
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common in mathematical models and scientific laws; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers, and many numerical methods have been developed to determine solutions with a given degree of accuracy. The theory of dynamical systems analyzes the qualitative aspects of solutions, such as their average behavior over a long time interval.

== History ==
Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Newton listed three kinds of differential equations: $$\frac{dy}{dx}=f(x),\qquad \frac{dy}{dx}=f(x,y),\qquad x_{1}\frac{\partial y}{\partial x_{1}}+x_{2}\frac{\partial y}{\partial x_{2}}=y.$$ In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function. He solves these examples and others using infinite series and discusses the non-uniqueness of solutions. Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form $y'+P(x)y=Q(x)y^{n},$ for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat.
This partial differential equation is now a common part of mathematical physics curriculum. == Example == In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly. An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity. == Types == Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts. === Ordinary differential equations === An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals. Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function). As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer. === Partial differential equations === A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. 
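Returning to the falling-ball example above, the model can be made concrete in a few lines. The sketch below integrates dv/dt = g − kv with Euler's method and compares the result against the closed-form solution; the drag coefficient and step size are illustrative assumptions, not values fixed by the text.

```python
import math

# Falling-ball model: dv/dt = g - k*v, gravity minus drag proportional to velocity.
g, k = 9.81, 0.5            # m/s^2 and an illustrative drag coefficient (1/s)
dt = 0.01                   # time step for Euler's method
v, t = 0.0, 0.0             # released from rest

for _ in range(1000):       # integrate out to t = 10 s
    v += dt * (g - k * v)   # Euler step
    t += dt

exact = (g / k) * (1.0 - math.exp(-k * t))  # closed-form solution for v(0) = 0
print(v, exact)             # both are close to the terminal velocity g/k = 19.62 m/s
```

Shrinking the step size dt brings the Euler approximation arbitrarily close to the exact curve, which is the basic idea behind the numerical methods mentioned throughout this article.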
Such seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.

=== Non-linear differential equations ===
A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs, are hard problems, and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution. Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.

=== Equation order and degree ===
The order of the differential equation is the highest order of derivative of the unknown function that appears in the differential equation. For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing the second-order derivative is a second-order differential equation, and so on. When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function, or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation $y'+y^{2}=0$ is of degree one for the first meaning but not for the second one. Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation.

=== Examples ===
In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones.
Heterogeneous first-order linear constant coefficient ordinary differential equation: $\frac{du}{dx}=cu+x^{2}.$
Homogeneous second-order linear ordinary differential equation: $\frac{d^{2}u}{dx^{2}}-x\frac{du}{dx}+u=0.$
Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: $\frac{d^{2}u}{dx^{2}}+\omega^{2}u=0.$
Heterogeneous first-order nonlinear ordinary differential equation: $\frac{du}{dx}=u^{2}+4.$
Second-order nonlinear (due to the sine function) ordinary differential equation describing the motion of a pendulum of length L: $L\frac{d^{2}u}{dx^{2}}+g\sin u=0.$
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
Homogeneous first-order linear partial differential equation: $\frac{\partial u}{\partial t}+t\frac{\partial u}{\partial x}=0.$
Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: $\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}=0.$
Homogeneous third-order non-linear partial differential equation, the KdV equation: $\frac{\partial u}{\partial t}=6u\frac{\partial u}{\partial x}-\frac{\partial^{3}u}{\partial x^{3}}.$

== Existence of solutions ==
Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest. For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point $(a,b)$ in the xy-plane, define some rectangular region $Z$ such that $Z=[l,m]\times[n,p]$ and $(a,b)$ is in the interior of $Z$. If we are given a differential equation $\frac{dy}{dx}=g(x,y)$ and the condition that $y=b$ when $x=a$, then there is locally a solution to this problem provided that $g(x,y)$ is continuous on $Z$. This solution exists on some interval with its center at $a$. The solution may not be unique; by the Picard–Lindelöf theorem, uniqueness holds if, in addition, $g$ is Lipschitz continuous in $y$ on $Z$ (for instance, if $\frac{\partial g}{\partial y}$ is continuous there). (See Ordinary differential equation for other results.) However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order: $$f_{n}(x)\frac{d^{n}y}{dx^{n}}+\cdots+f_{1}(x)\frac{dy}{dx}+f_{0}(x)y=g(x)$$ such that $$y(x_{0})=y_{0},\quad y'(x_{0})=y'_{0},\quad y''(x_{0})=y''_{0},\quad\ldots$$ If $f_{n}(x)$ is nonzero on some interval containing $x_{0}$ on which $\{f_{0},f_{1},\ldots\}$ and $g$ are also continuous, then $y$ exists and is unique on that interval.

== Related concepts ==
A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.
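The successive-approximation idea behind the existence results above can be made concrete before moving on. Picard iteration builds approximate solutions by repeatedly integrating, $y_{n+1}(x)=b+\int_{a}^{x}g(t,y_{n}(t))\,dt$. A minimal SymPy sketch for the illustrative problem y′ = y, y(0) = 1, whose exact solution is $e^{x}$:

```python
import sympy as sp

# Picard iteration for y' = y with y(0) = 1:
# y_{n+1}(x) = 1 + integral of y_n(t) from t = 0 to x.
x, t = sp.symbols('x t')
y = sp.Integer(1)                                  # y_0(x) = 1

for _ in range(5):
    y = 1 + sp.integrate(y.subs(x, t), (t, 0, x))  # one Picard step

print(sp.expand(y))
# x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1: the first terms of the series for exp(x)
```

Each iterate adds one more term of the exponential series, which is exactly the convergence that the Picard–Lindelöf theorem guarantees under its Lipschitz hypothesis.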
Integral equations may be viewed as the analog of differential equations where, instead of the equation involving derivatives, the equation contains integrals. An integro-differential equation (IDE) is an equation that combines aspects of a differential equation and an integral equation. A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations. A stochastic partial differential equation (SPDE) is an equation that generalizes SDEs to include space-time noise processes, with applications in quantum field theory and statistical mechanics. An ultrametric pseudo-differential equation is an equation which involves p-adic numbers in an ultrametric space. Mathematical models that involve ultrametric pseudo-differential equations use pseudo-differential operators instead of differential operators. A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form.

== Connection to difference equations ==
The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.

== Applications ==
The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations that arise in real-life problems are not necessarily directly solvable; that is, they need not have closed-form solutions. Instead, solutions can be approximated using numerical methods. Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, the mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water.
Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation. The number of differential equations that have received a name in various scientific areas testifies to the importance of the topic. See List of named differential equations.

== Software ==
Some CAS software can solve differential equations. These are the commands used in the leading programs:
Maple: dsolve
Mathematica: DSolve[]
Maxima: ode2(equation, y, x)
SageMath: desolve()
SymPy: sympy.solvers.ode.dsolve(equation)
Xcas: desolve(y'=k*y,y)

== Further reading ==
Abbott, P.; Neill, H. (2003). Teach Yourself Calculus. pp. 266–277.
Blanchard, P.; Devaney, R. L.; Hall, G. R. (2006). Differential Equations. Thompson.
Boyce, W.; DiPrima, R.; Meade, D. (2017). Elementary Differential Equations and Boundary Value Problems. Wiley.
Coddington, E. A.; Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill.
Ince, E. L. (1956). Ordinary Differential Equations. Dover.
Johnson, W. (1913). A Treatise on Ordinary and Partial Differential Equations. John Wiley and Sons. In University of Michigan Historical Math Collection.
Polyanin, A. D.; Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2.
Porter, R. I. (1978). "XIX Differential Equations". Further Elementary Analysis.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Zwillinger, Daniel (12 May 2014). Handbook of Differential Equations. Elsevier Science. ISBN 978-1-4832-6396-0.

== External links ==
Media related to Differential equations at Wikimedia Commons
Lectures on Differential Equations, MIT Open CourseWare Videos
Online Notes / Differential Equations, Paul Dawkins, Lamar University
Differential Equations, S.O.S. Mathematics
Introduction to modeling via differential equations, an introduction to modeling by means of differential equations, with critical remarks.
Mathematical Assistant on Web, a symbolic ODE tool using Maxima
Exact Solutions of Ordinary Differential Equations
Collection of ODE and DAE models of physical systems, Archived 2008-12-19 at the Wayback Machine, MATLAB models
Notes on Diffy Qs: Differential Equations for Engineers, an introductory textbook on differential equations by Jiri Lebl of UIUC
Khan Academy, video playlist on differential equations, topics covered in a first year course in differential equations.
MathDiscuss, video playlist on differential equations
Wikipedia/Differential_equation
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It lies at the intersection of electrical engineering, mathematics, statistics, computer science, neurobiology, and physics. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome of a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security. Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, the study of outer space, the dimensionality of space, and epistemology. == Overview == Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity that depends only on the statistics of the channel over which the messages are sent. Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers).
Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban. == Historical background == The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that it was "even more profound and more fundamental" than the transistor. Shannon came to be known as the "father of information theory". He outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation $W = K \log m$ (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as $H = \log S^n = n \log S$, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory. In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of: the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as the bit, a new way of seeing the most fundamental unit of information. == Quantities of information == Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits.
Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of the measure of information in a single random variable. Another useful concept is mutual information, defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm. In what follows, an expression of the form $p \log p$ is considered by convention to be equal to zero whenever $p = 0$. This is justified because $\lim_{p \to 0^+} p \log p = 0$ for any logarithmic base. === Entropy of an information source === Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by $H = -\sum_{i} p_i \log_2 (p_i)$, where $p_i$ is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base $2^8 = 256$ will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
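To make the role of the logarithm base concrete, here is a minimal sketch (plain Python; the function and variable names are arbitrary) that computes the entropy of a fair coin and a fair die in bits, nats, and hartleys:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution in the given base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

coin = [0.5, 0.5]          # fair coin: two equally likely outcomes
die = [1/6] * 6            # fair die: six equally likely outcomes

print(entropy(coin))            # 1.0 bit (shannon)
print(entropy(die))             # log2(6) ~ 2.585 bits
print(entropy(die, math.e))     # ~1.792 nats
print(entropy(die, 10))         # ~0.778 hartleys (decimal digits)
```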
Intuitively, the entropy $H_X$ of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is $N \cdot H$ bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than $N \cdot H$. If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If $\mathbb{X}$ is the set of all messages $\{x_1, \dots, x_n\}$ that X could be, and $p(x)$ is the probability of some $x \in \mathbb{X}$, then the entropy, H, of X is defined: $H(X) = \mathbb{E}_X[I(x)] = -\sum_{x \in \mathbb{X}} p(x) \log p(x)$. (Here, $I(x)$ is the self-information, which is the entropy contribution of an individual message, and $\mathbb{E}_X$ is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, $p(x) = 1/n$; i.e., most unpredictable, in which case $H(X) = \log n$. The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: $H_{\mathrm{b}}(p) = -p \log_2 p - (1-p) \log_2 (1-p)$. === Joint entropy === The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies. For example, if (X, Y) represents the position of a chess piece, X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece: $H(X,Y) = \mathbb{E}_{X,Y}[-\log p(x,y)] = -\sum_{x,y} p(x,y) \log p(x,y)$. Despite similar notation, joint entropy should not be confused with cross-entropy. === Conditional entropy (equivocation) === The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y: $H(X|Y) = \mathbb{E}_Y[H(X|y)] = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = -\sum_{x,y} p(x,y) \log p(x|y)$. Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that $H(X|Y) = H(X,Y) - H(Y)$. === Mutual information (transinformation) === Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication, where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by $I(X;Y) = \mathbb{E}_{X,Y}[SI(x,y)] = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}$, where SI (Specific mutual Information) is the pointwise mutual information. A basic property of the mutual information is that $I(X;Y) = H(X) - H(X|Y)$. That is, knowing Y, we can save an average of $I(X;Y)$ bits in encoding X compared to not knowing Y. Mutual information is symmetric: $I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y)$.
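These identities can be checked numerically. The sketch below (plain Python; the small joint distribution is an arbitrary example chosen for illustration) verifies $H(X|Y) = H(X,Y) - H(Y)$ and that both expressions for the mutual information agree:

```python
import math

# A small joint distribution p(x, y) chosen for illustration.
pxy = {(0, 0): 0.4, (0, 1): 0.1,
       (1, 0): 0.2, (1, 1): 0.3}

def H(dist):
    """Entropy in bits of a distribution given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

px = {x: sum(p for (a, _), p in pxy.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in pxy.items() if b == y) for y in (0, 1)}

# Conditional entropy computed directly from p(x|y) = p(x,y)/p(y).
H_x_given_y = -sum(p * math.log2(p / py[y]) for (_, y), p in pxy.items())

print(abs(H_x_given_y - (H(pxy) - H(py))) < 1e-12)   # H(X|Y) = H(X,Y) - H(Y)
print(H(px) - H_x_given_y)                           # I(X;Y) ~ 0.1245 bits
print(H(px) + H(py) - H(pxy))                        # same value, symmetric form
```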
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: $I(X;Y) = \mathbb{E}_{p(y)}[D_{\mathrm{KL}}(p(X|Y=y) \| p(X))]$. In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: $I(X;Y) = D_{\mathrm{KL}}(p(X,Y) \| p(X)p(Y))$. Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. === Kullback–Leibler divergence (information gain) === The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution $p(X)$, and an arbitrary probability distribution $q(X)$. If we compress data in a manner that assumes $q(X)$ is the distribution underlying some data, when, in reality, $p(X)$ is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined $D_{\mathrm{KL}}(p(X) \| q(X)) = \sum_{x \in X} -p(x) \log q(x) - \sum_{x \in X} -p(x) \log p(x) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}$. Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution $p(x)$. If Alice knows the true distribution $p(x)$, while Bob believes (has a prior) that the distribution is $q(x)$, then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
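A minimal sketch of this "extra bits" interpretation (plain Python; the two distributions are arbitrary examples), also showing that the divergence is not symmetric:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]   # "true" distribution
q = [1/3, 1/3, 1/3]   # assumed (uniform) coding distribution

print(kl(p, q))   # ~0.428 extra bits per symbol when coding p as if it were q
print(kl(q, p))   # ~0.468 bits: D_KL is not symmetric
```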
=== Directed Information === Directed information, $I(X^n \to Y^n)$, is an information theory measure that quantifies the information flow from the random process $X^n = \{X_1, X_2, \dots, X_n\}$ to the random process $Y^n = \{Y_1, Y_2, \dots, Y_n\}$. The term directed information was coined by James Massey and is defined as $I(X^n \to Y^n) \triangleq \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1})$, where $I(X^i; Y_i \mid Y^{i-1})$ is the conditional mutual information $I(X_1, X_2, \dots, X_i; Y_i \mid Y_1, Y_2, \dots, Y_{i-1})$. In contrast to mutual information, directed information is not symmetric. The quantity $I(X^n \to Y^n)$ measures the information bits that are transmitted causally from $X^n$ to $Y^n$. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, the capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information, real-time control communication settings, and statistical physics. === Other quantities === Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision. == Coding theory == Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Data compression (source coding): There are two formulations for the compression problem: lossless data compression, in which the data must be reconstructed exactly; and lossy data compression, which allocates the bits needed to reconstruct the data within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, which justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. === Source theory === Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
==== Rate ==== Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is $r = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, X_{n-3}, \ldots)$; that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is $r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \dots, X_n)$; that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. The information rate is defined as $r = \lim_{n \to \infty} \frac{1}{n} I(X_1, X_2, \dots, X_n; Y_1, Y_2, \dots, Y_n)$. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding. === Channel capacity === Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel. A simple model of the process is the chain: Message $W$ → Encoder $f_n$ → Encoded sequence $X^n$ → Channel $p(y|x)$ → Received sequence $Y^n$ → Decoder $g_n$ → Estimated message $\hat{W}$. Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let $p(y|x)$ be the conditional probability distribution function of Y given X. We will consider $p(y|x)$ to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of $f(x)$, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by $C = \max_{f} I(X;Y)$. This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.
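The maximization over input distributions can be illustrated numerically. The sketch below (plain Python; the grid search is a deliberately naive stand-in for the Blahut–Arimoto algorithm used in practice) computes the capacity of a binary symmetric channel with crossover probability p = 0.1 and checks it against the closed form 1 − Hb(p) quoted in the channel-model subsection below:

```python
import math

def hb(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(f, p):
    """I(X;Y) for input P(X=1) = f over a BSC with crossover probability p."""
    q = f * (1 - p) + (1 - f) * p      # output distribution P(Y=1)
    return hb(q) - hb(p)               # I(X;Y) = H(Y) - H(Y|X), H(Y|X) = Hb(p)

p = 0.1
# Naive grid search over input distributions f = P(X=1).
capacity = max(mutual_information(f / 1000, p) for f in range(1001))
print(capacity)          # ~0.531 bits per channel use, attained at f = 0.5
print(1 - hb(p))         # closed form 1 - Hb(0.1) ~ 0.531
```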
Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. ==== Capacity of particular channel models ==== A continuous-time analog communications channel subject to Gaussian noise: see the Shannon–Hartley theorem. A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of $1 - H_b(p)$ bits per channel use, where $H_b$ is the binary entropy function to the base-2 logarithm. A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is $1 - p$ bits per channel use. ==== Channels with memory and directed information ==== In practice many channels have memory. Namely, at time $i$ the channel is given by the conditional probability $P(y_i \mid x_i, x_{i-1}, x_{i-2}, \dots, x_1, y_{i-1}, y_{i-2}, \dots, y_1)$. It is often more convenient to use the notation $x^i = (x_i, x_{i-1}, x_{i-2}, \dots, x_1)$, so that the channel becomes $P(y_i \mid x^i, y^{i-1})$. In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not there is feedback (if there is no feedback, the directed information equals the mutual information). === Fungible information === Fungible information is the information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information. == Applications to other fields == === Intelligence uses and secrecy applications === Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications.
In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material. === Pseudorandom number generation === Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptographic uses. === Seismic exploration === One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. === Semiotics === Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. === Integrated process organization of neural information === Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, an information-theoretical measure is defined on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations; examples include functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)) and effective information (Tononi's integrated information theory (IIT) of consciousness). An alternative approach measures the minimization of free energy on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis).
=== Miscellaneous applications === Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling.
Wikipedia/Information_theory
In mathematics, the fundamental theorem of Galois theory is a result that describes the structure of certain types of field extensions in relation to groups. It was proved by Évariste Galois in his development of Galois theory. In its most basic form, the theorem asserts that given a field extension E/F that is finite and Galois, there is a one-to-one correspondence between its intermediate fields and the subgroups of its Galois group. (Intermediate fields are fields K satisfying F ⊆ K ⊆ E; they are also called subextensions of E/F.) == Explicit description of the correspondence == For finite extensions, the correspondence can be described explicitly as follows. For any subgroup H of Gal(E/F), the corresponding fixed field, denoted $E^H$, is the set of those elements of E which are fixed by every automorphism in H. For any intermediate field K of E/F, the corresponding subgroup is Aut(E/K), that is, the set of those automorphisms in Gal(E/F) which fix every element of K. The fundamental theorem says that this correspondence is a one-to-one correspondence if (and only if) E/F is a Galois extension. For example, the topmost field E corresponds to the trivial subgroup of Gal(E/F), and the base field F corresponds to the whole group Gal(E/F). The notation Gal(E/F) is only used for Galois extensions. If E/F is Galois, then Gal(E/F) = Aut(E/F). If E/F is not Galois, then the "correspondence" gives only an injective (but not surjective) map from the set of subgroups of Aut(E/F) to the set of subfields of E/F, and a surjective (but not injective) map in the reverse direction. In particular, if E/F is not Galois, then F is not the fixed field of any subgroup of Aut(E/F). == Properties of the correspondence == The correspondence has the following useful properties. It is inclusion-reversing: the inclusion of subgroups $H_1 \subseteq H_2$ holds if and only if the inclusion of fields $E^{H_1} \supseteq E^{H_2}$ holds. Degrees of extensions are related to orders of groups, in a manner consistent with the inclusion-reversing property. Specifically, if H is a subgroup of Gal(E/F), then $|H| = [E : E^H]$ and $|\mathrm{Gal}(E/F)|/|H| = [E^H : F]$. The field $E^H$ is a normal extension of F (or, equivalently, a Galois extension, since any subextension of a separable extension is separable) if and only if H is a normal subgroup of Gal(E/F). In this case, the restriction of the elements of Gal(E/F) to $E^H$ induces an isomorphism between $\mathrm{Gal}(E^H/F)$ and the quotient group $\mathrm{Gal}(E/F)/H$. == Example 1 == Consider the field $K = \mathbb{Q}(\sqrt{2}, \sqrt{3}) = [\mathbb{Q}(\sqrt{2})](\sqrt{3})$. Since K is constructed from the base field $\mathbb{Q}$ by adjoining √2, then √3, each element of K can be written as $(a + b\sqrt{2}) + (c + d\sqrt{2})\sqrt{3}$ with $a, b, c, d \in \mathbb{Q}$. Its Galois group $G = \mathrm{Gal}(K/\mathbb{Q})$ comprises the automorphisms of K which fix every element of $\mathbb{Q}$. Such automorphisms must send √2 to √2 or −√2, and send √3 to √3 or −√3, since they permute the roots of any irreducible polynomial.
Suppose that f exchanges √2 and −√2, so $f\big((a + b\sqrt{2}) + (c + d\sqrt{2})\sqrt{3}\big) = (a - b\sqrt{2}) + (c - d\sqrt{2})\sqrt{3} = a - b\sqrt{2} + c\sqrt{3} - d\sqrt{6}$, and g exchanges √3 and −√3, so $g\big((a + b\sqrt{2}) + (c + d\sqrt{2})\sqrt{3}\big) = (a + b\sqrt{2}) - (c + d\sqrt{2})\sqrt{3} = a + b\sqrt{2} - c\sqrt{3} - d\sqrt{6}$. These are clearly automorphisms of K, respecting its addition and multiplication. There is also the identity automorphism e which fixes each element, and the composition of f and g which changes the signs on both radicals: $(fg)\big((a + b\sqrt{2}) + (c + d\sqrt{2})\sqrt{3}\big) = (a - b\sqrt{2}) - (c - d\sqrt{2})\sqrt{3} = a - b\sqrt{2} - c\sqrt{3} + d\sqrt{6}$. Since the order of the Galois group is equal to the degree of the field extension, $|G| = [K : \mathbb{Q}] = 4$, there can be no further automorphisms: $G = \{1, f, g, fg\}$, which is isomorphic to the Klein four-group. Its five subgroups correspond to the fields intermediate between the base $\mathbb{Q}$ and the extension K. The trivial subgroup {1} corresponds to the entire extension field K. The entire group G corresponds to the base field $\mathbb{Q}$. The subgroup {1, f} corresponds to the subfield $\mathbb{Q}(\sqrt{3})$, since f fixes √3. The subgroup {1, g} corresponds to the subfield $\mathbb{Q}(\sqrt{2})$, since g fixes √2. The subgroup {1, fg} corresponds to the subfield $\mathbb{Q}(\sqrt{6})$, since fg fixes √6. (A computational check of this group structure is sketched below.)
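The following sketch (plain Python; the encoding of automorphisms as sign pairs is an illustrative choice, not part of the theorem) represents each automorphism by the signs it assigns to √2 and √3 and verifies that {1, f, g, fg} has the Klein four-group structure:

```python
from itertools import product

# An automorphism of Q(sqrt2, sqrt3) is determined by the signs it gives
# to sqrt2 and sqrt3; encode it as a pair (s2, s3) with s2, s3 in {+1, -1}.
e, f, g, fg = (1, 1), (-1, 1), (1, -1), (-1, -1)
G = [e, f, g, fg]

def compose(u, v):
    """Composition of two such automorphisms multiplies the signs."""
    return (u[0] * v[0], u[1] * v[1])

# Closure: composing any two elements of G stays in G.
assert all(compose(u, v) in G for u, v in product(G, G))
# Every element squares to the identity, as in the Klein four-group.
assert all(compose(u, u) == e for u in G)
# f and g commute, and their composition is fg.
assert compose(f, g) == compose(g, f) == fg
print("G = {1, f, g, fg} is the Klein four-group")
```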
== Example 2 == The following is the simplest case where the Galois group is not abelian. Consider the splitting field K of the irreducible polynomial $x^3 - 2$ over $\mathbb{Q}$; that is, $K = \mathbb{Q}(\theta, \omega)$ where θ is a cube root of 2, and ω is a cube root of 1 (but not 1 itself). If we consider K inside the complex numbers, we may take $\theta = \sqrt[3]{2}$, the real cube root of 2, and $\omega = -\tfrac{1}{2} + i\tfrac{\sqrt{3}}{2}$. Since ω has minimal polynomial $x^2 + x + 1$, the extension $\mathbb{Q} \subset K$ has degree $[K : \mathbb{Q}] = [K : \mathbb{Q}(\theta)] \cdot [\mathbb{Q}(\theta) : \mathbb{Q}] = 2 \cdot 3 = 6$, with $\mathbb{Q}$-basis $\{1, \theta, \theta^2, \omega, \omega\theta, \omega\theta^2\}$ as in the previous example. Therefore the Galois group $G = \mathrm{Gal}(K/\mathbb{Q})$ has six elements, determined by all permutations of the three roots of $x^3 - 2$: $\alpha_1 = \theta$, $\alpha_2 = \omega\theta$, $\alpha_3 = \omega^2\theta$. Since there are only 3! = 6 such permutations, G must be isomorphic to the symmetric group of all permutations of three objects. The group can be generated by two automorphisms f and g defined by $f(\theta) = \omega\theta$, $f(\omega) = \omega$ and $g(\theta) = \theta$, $g(\omega) = \omega^2$, giving $G = \{1, f, f^2, g, gf, gf^2\}$ and obeying the relations $f^3 = g^2 = (gf)^2 = 1$. Their effect as permutations of $\alpha_1, \alpha_2, \alpha_3$ is (in cycle notation): $f = (123)$, $g = (23)$. Also, g can be considered as the complex conjugation mapping. The subgroups of G and the corresponding subfields are as follows: As always, the trivial group {1} corresponds to the whole field K, while the entire group G corresponds to the base field $\mathbb{Q}$. The unique subgroup of order 3, $H = \{1, f, f^2\}$, corresponds to the subfield $\mathbb{Q}(\omega)$ of degree two, since the subgroup has index two in G: i.e. $[\mathbb{Q}(\omega) : \mathbb{Q}] = \tfrac{|G|}{|H|} = 2$. Also, this subgroup is normal, so the subfield is normal over $\mathbb{Q}$, being the splitting field of $x^2 + x + 1$. Its Galois group over the base field is the quotient group $G/H = \{[1], [g]\}$, where [g] denotes the coset of g modulo H; that is, its only non-trivial automorphism is the complex conjugation g. There are three subgroups of order 2, $\{1, g\}$, $\{1, gf\}$ and $\{1, gf^2\}$, corresponding respectively to the subfields $\mathbb{Q}(\theta)$, $\mathbb{Q}(\omega\theta)$, $\mathbb{Q}(\omega^2\theta)$. These subfields have degree 3 over $\mathbb{Q}$ since the subgroups have index 3 in G. The subgroups are not normal in G, so the subfields are not Galois or normal over $\mathbb{Q}$. In fact, each subfield contains only a single one of the roots $\alpha_1, \alpha_2, \alpha_3$, so none has any non-trivial automorphisms. == Example 3 == Let $E = \mathbb{Q}(\lambda)$ be the field of rational functions in the indeterminate λ, and consider the group of automorphisms $G = \left\{\lambda, \tfrac{1}{1-\lambda}, \tfrac{\lambda-1}{\lambda}, \tfrac{1}{\lambda}, \tfrac{\lambda}{\lambda-1}, 1-\lambda\right\} \subset \mathrm{Aut}(E)$; here we denote an automorphism $\phi : E \to E$ by its value $\phi(\lambda)$, so that $f(\lambda) \mapsto f(\phi(\lambda))$. This group is isomorphic to $S_3$ (see: six cross-ratios). Let $F$ be the fixed field of $G$, so that $\mathrm{Gal}(E/F) = G$. If $H$ is a subgroup of $G$, then the coefficients of the polynomial $P(T) := \prod_{h \in H}(T - h) \in E[T]$ generate the fixed field of $H$.
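As a quick illustration of this construction (a SymPy sketch; the choice H = {λ, 1 − λ} anticipates the example in the next paragraph), expanding P(T) shows that its non-constant coefficient λ − λ² = λ(1 − λ) is indeed invariant under the subgroup:

```python
import sympy as sp

T, lam = sp.symbols('T lambda')

# The subgroup H = {lambda, 1 - lambda}, each automorphism represented
# by its value on lambda.
H = [lam, 1 - lam]

# P(T) = (T - lambda)(T - (1 - lambda)); its coefficients lie in the fixed field.
P = sp.expand((T - H[0]) * (T - H[1]))
print(P)   # T**2 - T - lambda**2 + lambda

# The coefficients are unchanged by the substitution lambda -> 1 - lambda,
# as the fixed-field description predicts.
assert sp.simplify(P.subs(lam, 1 - lam) - P) == 0
```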
The Galois correspondence implies that every subfield of $E/F$ can be constructed this way. For example, for $H = \{\lambda, 1 - \lambda\}$, the fixed field is $\mathbb{Q}(\lambda(1-\lambda))$, and if $H = \{\lambda, \tfrac{1}{\lambda}\}$ then the fixed field is $\mathbb{Q}(\lambda + \tfrac{1}{\lambda})$. The fixed field of $G$ is the base field $F = \mathbb{Q}(j)$, where j is the j-invariant written in terms of the modular lambda function: $j = \frac{256(1 - \lambda(1-\lambda))^3}{(\lambda(1-\lambda))^2} = \frac{256(1 - \lambda + \lambda^2)^3}{\lambda^2(1-\lambda)^2}$. Similar examples can be constructed for each of the symmetry groups of the platonic solids, as these also have faithful actions on the projective line $\mathbb{P}^1(\mathbb{C})$ and hence on $\mathbb{C}(x)$. == Example 4 == Here we give an example of a finite extension $E/F$ which is not Galois, and with this we show that (the fundamental theorem of) Galois theory no longer works when $E/F$ is not Galois. Let $E = \mathbb{Q}(\sqrt[3]{2})$ and $F = \mathbb{Q}$. Then $E/F$ is a finite extension, but not a splitting field over $F$ (since the minimal polynomial of $\sqrt[3]{2}$ has two complex roots that do not lie in $E$). Any $f \in G = \mathrm{Gal}(E/F)$ is completely determined by $f(\sqrt[3]{2})$, and $2 = f(\sqrt[3]{2})^3$ forces $f = 1$, since the only cube root of 2 lying in E is $\sqrt[3]{2}$ itself. Thus $G = \{1\}$ is the trivial group. In particular, $|G| = 1 < 3 = [E : F]$. This shows that $E/F$ is not Galois. Now, $G$ has only one subgroup, namely itself, while $E/F$ has two intermediate fields, $F = \mathbb{Q}$ and $E = \mathbb{Q}(\sqrt[3]{2})$ itself (since $[E : F] = 3$ is prime, there are no others). It follows that the Galois correspondence fails. == Applications == The theorem classifies the intermediate fields of E/F in terms of group theory. This translation between intermediate fields and subgroups is key to showing that the general quintic equation is not solvable by radicals (see Abel–Ruffini theorem). One first determines the Galois groups of radical extensions (extensions of the form F(α) where α is an n-th root of some element of F), and then uses the fundamental theorem to show that solvable extensions correspond to solvable groups. Theories such as Kummer theory and class field theory are predicated on the fundamental theorem. == Infinite case == Given an infinite algebraic extension, we can still define it to be Galois if it is normal and separable. The problem that one encounters in the infinite case is that the bijection in the fundamental theorem does not hold, as we generally get too many subgroups. More precisely, if we just take every subgroup, we can in general find two different subgroups that fix the same intermediate field. Therefore we amend this by introducing a topology on the Galois group.
Let $E/F$ be a Galois extension (possibly infinite) and let $G = \mathrm{Gal}(E/F)$ be the Galois group of the extension. Let $\mathrm{Int}_F(E/F) = \{G_i = \mathrm{Gal}(L_i/F) \mid L_i/F \text{ is a finite Galois extension and } L_i \subseteq E\}$ be the set of the Galois groups of all finite intermediate Galois extensions. Note that for all $i \in I$ we can define the maps $\varphi_i : G \to G_i$ by $\sigma \mapsto \sigma|_{L_i}$. We then define the Krull topology on $G$ to be the weakest topology such that for all $i \in I$ the maps $\varphi_i : G \to G_i$ are continuous, where we endow each $G_i$ with the discrete topology. Stated differently, $G \cong \varprojlim G_i$ as an inverse limit of topological groups (where again each $G_i$ is endowed with the discrete topology). This makes $G$ a profinite group (in fact, every profinite group can be realised as the Galois group of some Galois extension). Note that when $E/F$ is finite, the Krull topology is the discrete topology. Now that we have defined a topology on the Galois group, we can restate the fundamental theorem for infinite Galois extensions. Let $\mathcal{F}(E/F)$ denote the set of all intermediate field extensions of $E/F$ and let $\mathcal{C}(G)$ denote the set of all closed subgroups of $G = \mathrm{Gal}(E/F)$ endowed with the Krull topology. Then there exists a bijection between $\mathcal{F}(E/F)$ and $\mathcal{C}(G)$ given by the map $\Phi : \mathcal{F}(E/F) \to \mathcal{C}(G)$ defined by $L \mapsto \mathrm{Gal}(E/L)$ and the map $\Gamma : \mathcal{C}(G) \to \mathcal{F}(E/F)$ defined by $N \mapsto \mathrm{Fix}_E(N) := \{a \in E \mid \sigma(a) = a \text{ for all } \sigma \in N\}$. One important thing to check is that $\Phi$ is a well-defined map, that is, that $\Phi(L)$ is a closed subgroup of $G$ for every intermediate field $L$. This is proved in Ribes–Zalesskii, Theorem 2.11.3. == See also == Galois connection
Wikipedia/Fundamental_theorem_of_Galois_theory
Order theory is a branch of mathematics that investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary. == Background and motivation == Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. "2 is less than 3", "10 is greater than 5", or "Does Tom have fewer cookies than Sally?". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general (although one usually is also interested in the actual difference of two numbers, which is not given by the order). Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g., "Pediatricians are physicians," and "Circles are merely special-case ellipses." Some orders, like "less-than" on the natural numbers and alphabetical order on words, have a special property: each element can be compared to any other element, i.e. it is smaller (earlier) than, larger (later) than, or identical to it. However, many other orders do not. Consider for example the subset order on a collection of sets: though the set of birds and the set of dogs are both subsets of the set of animals, neither the birds nor the dogs constitutes a subset of the other. Orders like the "subset-of" relation, for which there exist incomparable elements, are called partial orders; orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes much sense, because one can derive numerous theorems in the general setting, without focusing on the details of any particular order. These insights can then be readily transferred to many less abstract applications. Driven by the wide practical usage of orders, numerous special kinds of ordered sets have been defined, some of which have grown into mathematical fields of their own. In addition, order theory does not restrict itself to the various classes of ordering relations, but also considers appropriate functions between them. A simple example of an order theoretic property for functions comes from analysis, where monotone functions are frequently found. == Basic definitions == This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. === Partially ordered sets === Orders are special binary relations. Suppose that P is a set and that ≤ is a relation on P ('relation on a set' is taken to mean 'relation amongst its inhabitants', i.e. ≤ is a subset of the cartesian product P × P).
Then ≤ is a partial order if it is reflexive, antisymmetric, and transitive, that is, if for all a, b and c in P, we have that:
a ≤ a (reflexivity)
if a ≤ b and b ≤ a then a = b (antisymmetry)
if a ≤ b and b ≤ c then a ≤ c (transitivity).
A set with a partial order on it is called a partially ordered set, poset, or just ordered set if the intended meaning is clear. By checking these properties, one immediately sees that the well-known orders on natural numbers, integers, rational numbers and reals are all orders in the above sense. However, these examples have the additional property that any two elements are comparable, that is, for all a and b in P, we have that: a ≤ b or b ≤ a. A partial order with this property is called a total order. These orders can also be called linear orders or chains. While many familiar orders are linear, the subset order on sets provides an example where this is not the case. Another example is given by the divisibility (or "is-a-factor-of") relation |. For two natural numbers n and m, we write n|m if n divides m without remainder. One easily sees that this yields a partial order. For example, 3 does not divide 13 and 13 does not divide 3, so 3 and 13 are not comparable elements of the divisibility relation on the set of integers. The identity relation = on any set is also a partial order in which every two distinct elements are incomparable. It is also the only relation that is both a partial order and an equivalence relation, because it satisfies both the antisymmetry property of partial orders and the symmetry property of equivalence relations. Many advanced properties of posets are interesting mainly for non-linear orders. === Visualizing a poset === Hasse diagrams can visually represent the elements and relations of a partial ordering. These are graph drawings where the vertices are the elements of the poset and the ordering relation is indicated by both the edges and the relative positioning of the vertices. Orders are drawn bottom-up: if an element x is smaller than (precedes) y then there exists a path from x to y that is directed upwards. It is often necessary for the edges connecting elements to cross each other, but elements must never be located within an edge. An instructive exercise is to draw the Hasse diagram for the set of natural numbers that are smaller than or equal to 13, ordered by | (the divides relation). Even some infinite sets can be diagrammed by superimposing an ellipsis (...) on a finite sub-order. This works well for the natural numbers, but it fails for the reals, where there is no immediate successor above 0; however, quite often one can obtain an intuition related to diagrams of a similar kind. === Special elements within an order === In a partially ordered set there may be some elements that play a special role. The most basic example is given by the least element of a poset. For example, 1 is the least element of the positive integers and the empty set is the least set under the subset order. Formally, an element m is a least element if: m ≤ a, for all elements a of the order. The notation 0 is frequently found for the least element, even when no numbers are concerned. However, in orders on sets of numbers, this notation might be inappropriate or ambiguous, since the number 0 is not always least. An example is given by the above divisibility order |, where 1 is the least element since it divides all other numbers. In contrast, 0 is the number that is divided by all other numbers. Hence it is the greatest element of the order.
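A small sketch (plain Python; the finite ground set is an arbitrary truncation chosen for illustration) checking that divisibility is reflexive, antisymmetric, and transitive, that 3 and 13 are incomparable, and that 1 is the least element:

```python
from itertools import product

S = range(1, 31)                      # a finite slice of the natural numbers
leq = lambda a, b: b % a == 0         # a <= b here means "a divides b"

assert all(leq(a, a) for a in S)                                   # reflexive
assert all(not (leq(a, b) and leq(b, a)) or a == b
           for a, b in product(S, S))                              # antisymmetric
assert all(not (leq(a, b) and leq(b, c)) or leq(a, c)
           for a, b, c in product(S, S, S))                        # transitive

print(leq(3, 13) or leq(13, 3))       # False: 3 and 13 are incomparable
print(all(leq(1, a) for a in S))      # True: 1 divides everything, so it is least
```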
Other frequent terms for the least and greatest elements are bottom and top, or zero and unit. Least and greatest elements may fail to exist, as the example of the real numbers shows. But if they exist, they are always unique. In contrast, consider the divisibility relation | on the set {2,3,4,5,6}. Although this set has neither top nor bottom, the elements 2, 3, and 5 have no elements below them, while 4, 5 and 6 have none above. Such elements are called minimal and maximal, respectively. Formally, an element m is minimal if: a ≤ m implies a = m, for all elements a of the order. Exchanging ≤ with ≥ yields the definition of maximality. As the example shows, there can be many maximal elements and some elements may be both maximal and minimal (e.g. 5 above). However, if there is a least element, then it is the only minimal element of the order. Again, in infinite posets maximal elements do not always exist: the set of all finite subsets of a given infinite set, ordered by subset inclusion, provides one of many counterexamples. An important tool to ensure the existence of maximal elements under certain conditions is Zorn's Lemma. Subsets of partially ordered sets inherit the order. We already applied this by considering the subset {2,3,4,5,6} of the natural numbers with the induced divisibility ordering. Now there are also elements of a poset that are special with respect to some subset of the order. This leads to the definition of upper bounds. Given a subset S of some poset P, an upper bound of S is an element b of P that is above all elements of S. Formally, this means that s ≤ b, for all s in S. Lower bounds again are defined by inverting the order. For example, −5 is a lower bound of the natural numbers as a subset of the integers. Given a set of sets, an upper bound for these sets under the subset ordering is given by their union. In fact, this upper bound is quite special: it is the smallest set that contains all of the sets. Hence, we have found the least upper bound of a set of sets. This concept is also called supremum or join, and for a set S one writes sup(S) or $\bigvee S$ for its least upper bound. Conversely, the greatest lower bound is known as infimum or meet and denoted inf(S) or $\bigwedge S$. These concepts play an important role in many applications of order theory. For two elements x and y, one also writes $x \vee y$ and $x \wedge y$ for sup({x,y}) and inf({x,y}), respectively. For example, 1 is the infimum of the positive integers as a subset of integers. For another example, consider again the relation | on natural numbers. The least upper bound of two numbers is the smallest number that is divided by both of them, i.e. the least common multiple of the numbers. Greatest lower bounds in turn are given by the greatest common divisor.
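Continuing the divisibility example in code (plain Python; note that math.lcm requires Python 3.9 or later), joins are least common multiples, meets are greatest common divisors, and the minimal and maximal elements of {2,3,4,5,6} can be enumerated directly:

```python
import math

join, meet = math.lcm, math.gcd       # sup and inf under divisibility

print(join(4, 6), meet(4, 6))         # 12 and 2

S = {2, 3, 4, 5, 6}
divides = lambda a, b: b % a == 0
minimal = {m for m in S if all(not divides(a, m) or a == m for a in S)}
maximal = {m for m in S if all(not divides(m, a) or a == m for a in S)}
print(minimal)                        # {2, 3, 5}: nothing in S strictly below them
print(maximal)                        # {4, 5, 6}: nothing in S strictly above them
```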
Since all concepts are symmetric, this operation preserves the theorems of partial orders. For a given mathematical result, one can just invert the order and replace all definitions by their duals and one obtains another valid theorem. This is important and useful, since one obtains two theorems for the price of one. Some more details and examples can be found in the article on duality in order theory. === Constructing new orders === There are many ways to construct orders out of given orders. The dual order is one example. Another important construction is the cartesian product of two partially ordered sets, taken together with the product order on pairs of elements. The ordering is defined by (a, x) ≤ (b, y) if (and only if) a ≤ b and x ≤ y. (Notice carefully that there are three distinct meanings for the relation symbol ≤ in this definition.) The disjoint union of two posets is another typical example of order construction, where the order is just the (disjoint) union of the original orders. Every partial order ≤ gives rise to a so-called strict order <, by defining a < b if a ≤ b and not b ≤ a. This transformation can be inverted by setting a ≤ b if a < b or a = b. The two concepts are equivalent although in some circumstances one can be more convenient to work with than the other. == Functions between orders == It is reasonable to consider functions between partially ordered sets having certain additional properties that are related to the ordering relations of the two sets. The most fundamental condition that occurs in this context is monotonicity. A function f from a poset P to a poset Q is monotone, or order-preserving, if a ≤ b in P implies f(a) ≤ f(b) in Q (strictly speaking, the two relations here are different since they apply to different sets). The converse of this implication leads to functions that are order-reflecting, i.e. functions f as above for which f(a) ≤ f(b) implies a ≤ b. On the other hand, a function may also be order-reversing or antitone, if a ≤ b implies f(a) ≥ f(b). An order-embedding is a function f between orders that is both order-preserving and order-reflecting. Examples of these definitions are easily found. For instance, the function that maps a natural number to its successor is clearly monotone with respect to the natural order. Any function from a discrete order, i.e. from a set ordered by the identity order "=", is also monotone. Mapping each natural number to the corresponding real number gives an example for an order embedding. The set complement on a powerset is an example of an antitone function. An important question is when two orders are "essentially equal", i.e. when they are the same up to renaming of elements. Order isomorphisms are functions that define such a renaming. An order-isomorphism is a monotone bijective function that has a monotone inverse. This is equivalent to being a surjective order-embedding. Hence, the image f(P) of an order-embedding is always isomorphic to P, which justifies the term "embedding". A more elaborate type of functions is given by so-called Galois connections. Monotone Galois connections can be viewed as a generalization of order-isomorphisms, since they consist of a pair of functions in opposite directions, which are "not quite" inverse to each other, but that still have close relationships. Another special type of self-map on a poset is given by closure operators, which are not only monotonic, but also idempotent, i.e. f(x) = f(f(x)), and extensive (or inflationary), i.e. x ≤ f(x).
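For instance, the upper closure in a finite divisibility poset, viewed as a self-map on the powerset ordered by inclusion, is a closure operator in this sense; a brute-force sketch (with our own helper names) checks the three defining properties:

P = set(range(1, 31))
leq = lambda a, b: b % a == 0

def cl(S):                                # upper closure of S within P
    return {x for x in P if any(leq(y, x) for y in S)}

S, T = {4, 5}, {4, 5, 7}
print(S <= cl(S))                         # extensive: S ⊆ cl(S)
print(cl(S) == cl(cl(S)))                 # idempotent: cl(cl(S)) = cl(S)
print(cl(S) <= cl(T))                     # monotone: S ⊆ T implies cl(S) ⊆ cl(T)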
These have many applications in all kinds of "closures" that appear in mathematics. Besides being compatible with the mere order relations, functions between posets may also behave well with respect to special elements and constructions. For example, when talking about posets with least element, it may seem reasonable to consider only monotonic functions that preserve this element, i.e. which map least elements to least elements. If binary infima ∧ exist, then a reasonable property might be to require that f(x ∧ y) = f(x) ∧ f(y), for all x and y. All of these properties, and indeed many more, may be compiled under the label of limit-preserving functions. Finally, one can invert the view, switching from functions of orders to orders of functions. Indeed, the functions between two posets P and Q can be ordered via the pointwise order. For two functions f and g, we have f ≤ g if f(x) ≤ g(x) for all elements x of P. This occurs for example in domain theory, where function spaces play an important role. == Special types of orders == Many of the structures that are studied in order theory employ order relations with further properties. In fact, even some relations that are not partial orders are of special interest. Chief among these is the concept of a preorder. A preorder is a relation that is reflexive and transitive, but not necessarily antisymmetric. Each preorder induces an equivalence relation between elements, where a is equivalent to b, if a ≤ b and b ≤ a. Preorders can be turned into orders by identifying all elements that are equivalent with respect to this relation. Several types of orders can be defined from numerical data on the items of the order: a total order results from attaching distinct real numbers to each item and using the numerical comparisons to order the items; instead, if distinct items are allowed to have equal numerical scores, one obtains a strict weak ordering. Requiring two scores to be separated by a fixed threshold before they may be compared leads to the concept of a semiorder, while allowing the threshold to vary on a per-item basis produces an interval order. An additional simple but useful property leads to the so-called well-founded orders, for which all non-empty subsets have a minimal element. Generalizing well-orders from linear to partial orders, a set is well partially ordered if all its non-empty subsets have a finite number of minimal elements. Many other types of orders arise when the existence of infima and suprema of certain sets is guaranteed. Focusing on this aspect, usually referred to as completeness of orders, one obtains: Bounded posets, i.e. posets with a least and greatest element (which are just the supremum and infimum of the empty subset), Lattices, in which every non-empty finite set has a supremum and infimum, Complete lattices, where every set has a supremum and infimum, and Directed complete partial orders (dcpos), which guarantee the existence of suprema of all directed subsets and which are studied in domain theory. Partial orders with complements, or poc sets, are posets with a unique bottom element 0, as well as an order-reversing involution ∗ such that a ≤ a∗ ⟹ a = 0. However, one can go even further: if all finite non-empty infima exist, then ∧ can be viewed as a total binary operation in the sense of universal algebra.
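Concretely, on the divisors of 60 ordered by divisibility, the greatest common divisor and least common multiple are such total binary operations; a short check (a sketch, not a standard library feature) confirms the usual lattice identities:

from math import gcd
from itertools import product

D = [d for d in range(1, 61) if 60 % d == 0]      # divisors of 60
meet = gcd
join = lambda x, y: x * y // gcd(x, y)            # lcm

print(all(meet(x, join(x, y)) == x and join(x, meet(x, y)) == x
          for x, y in product(D, D)))                       # absorption laws
print(all(join(join(x, y), z) == join(x, join(y, z))
          for x, y, z in product(D, D, D)))                 # associativity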
Hence, in a lattice, two operations ∧ and ∨ are available, and one can define new properties by giving identities, such as x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z), for all x, y, and z. This condition is called distributivity and gives rise to distributive lattices. There are some other important distributivity laws which are discussed in the article on distributivity in order theory. Some additional order structures that are often specified via algebraic operations and defining identities are Heyting algebras and Boolean algebras, which both introduce a new operation ~ called negation. Both structures play a role in mathematical logic and especially Boolean algebras have major applications in computer science. Finally, various structures in mathematics combine orders with even more algebraic operations, as in the case of quantales, which allow for the definition of an addition operation. Many other important properties of posets exist. For example, a poset is locally finite if every closed interval [a, b] in it is finite. Locally finite posets give rise to incidence algebras which in turn can be used to define the Euler characteristic of finite bounded posets. == Subsets of ordered sets == In an ordered set, one can define many types of special subsets based on the given order. A simple example is given by upper sets, i.e. sets that contain every element lying above one of their members. Formally, the upper closure of a set S in a poset P is given by the set {x in P | there is some y in S with y ≤ x}. A set that is equal to its upper closure is called an upper set. Lower sets are defined dually. More complicated lower subsets are ideals, which have the additional property that any two of their elements have an upper bound within the ideal. Their duals are given by filters. A related concept is that of a directed subset, which like an ideal contains upper bounds of finite subsets, but does not have to be a lower set. Furthermore, it is often generalized to preordered sets. A subset which is – as a sub-poset – linearly ordered is called a chain. The opposite notion, the antichain, is a subset that contains no two comparable elements, i.e. one that is a discrete order. == Related mathematical areas == Although most mathematical areas use orders in one way or another, there are also a few theories that have relationships with order theory which go far beyond mere application. Together with their major points of contact with order theory, some of these are presented below. === Universal algebra === As already mentioned, the methods and formalisms of universal algebra are an important tool for many order theoretic considerations. Besides formalizing orders in terms of algebraic structures that satisfy certain identities, one can also establish other connections to algebra. An example is given by the correspondence between Boolean algebras and Boolean rings. Other issues are concerned with the existence of free constructions, such as free lattices based on a given set of generators. Furthermore, closure operators are important in the study of universal algebra. === Topology === In topology, orders play a very prominent role. In fact, the collection of open sets provides a classical example of a complete lattice, more precisely a complete Heyting algebra (or "frame" or "locale"). Filters and nets are notions closely related to order theory, and the closure operator of sets can be used to define a topology.
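For a finite poset this connection can be made completely concrete: the upper sets form the open sets of a topology (the Alexandrov topology discussed below). A brute-force sketch over the divisors of 30, with our own helper names:

from itertools import combinations

P = [1, 2, 3, 5, 6, 10, 15, 30]                   # divisors of 30
leq = lambda a, b: b % a == 0

def is_upper(S):
    return all(x in S for y in S for x in P if leq(y, x))

opens = {frozenset(S) for r in range(len(P) + 1)
         for S in combinations(P, r) if is_upper(S)}
print(len(opens))                                 # 20 upper sets
print(all(a | b in opens and a & b in opens
          for a in opens for b in opens))         # closed under union and intersection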
Beyond these relations, topology can be looked at solely in terms of the open set lattices, which leads to the study of pointless topology. Furthermore, a natural preorder of elements of the underlying set of a topology is given by the so-called specialization order, which is actually a partial order if the topology is T0. Conversely, in order theory, one often makes use of topological results. There are various ways to define subsets of an order which can be considered as open sets of a topology. Considering topologies on a poset (X, ≤) that in turn induce ≤ as their specialization order, the finest such topology is the Alexandrov topology, given by taking all upper sets as opens. Conversely, the coarsest topology that induces the specialization order is the upper topology, having the complements of principal ideals (i.e. sets of the form {y in X | y ≤ x} for some x) as a subbase. Additionally, a topology with specialization order ≤ may be order consistent, meaning that its open sets are "inaccessible by directed suprema" (with respect to ≤). The finest order consistent topology is the Scott topology, which is coarser than the Alexandrov topology. A third important topology in this spirit is the Lawson topology. There are close connections between these topologies and the concepts of order theory. For example, a function preserves directed suprema if and only if it is continuous with respect to the Scott topology (for this reason this order theoretic property is also called Scott-continuity). === Category theory === The visualization of orders with Hasse diagrams has a straightforward generalization: instead of displaying lesser elements below greater ones, the direction of the order can also be depicted by giving directions to the edges of a graph. In this way, each order is seen to be equivalent to a directed acyclic graph, where the nodes are the elements of the poset and there is a directed path from a to b if and only if a ≤ b. Dropping the requirement of being acyclic, one can also obtain all preorders. When equipped with all transitive edges, these graphs in turn are just special categories, where elements are objects and each set of morphisms between two elements is at most a singleton. Functions between orders become functors between categories. Many ideas of order theory are just concepts of category theory in the small. For example, an infimum is just a categorical product. More generally, one can capture infima and suprema under the abstract notion of a categorical limit (or colimit, respectively). Another place where categorical ideas occur is the concept of a (monotone) Galois connection, which is just the same as a pair of adjoint functors. But category theory also has its impact on order theory on a larger scale. Classes of posets with appropriate functions as discussed above form interesting categories. Often one can also state constructions of orders, like the product order, in terms of categories. Further insights result when categories of orders are found categorically equivalent to other categories, for example of topological spaces. This line of research leads to various representation theorems, often collected under the label of Stone duality. == History == As explained before, orders are ubiquitous in mathematics. However, the earliest explicit mentions of partial orders probably do not appear before the 19th century. In this context the works of George Boole are of great importance.
Moreover, works of Charles Sanders Peirce, Richard Dedekind, and Ernst Schröder also consider concepts of order theory. Contributors to ordered geometry were listed in a 1961 textbook: It was Pasch in 1882, who first pointed out that a geometry of order could be developed without reference to measurement. His system of axioms was gradually improved by Peano (1889), Hilbert (1899), and Veblen (1904). In 1901 Bertrand Russell wrote "On the Notion of Order" exploring the foundations of the idea through generation of series. He returned to the topic in part IV of The Principles of Mathematics (1903). Russell noted that a binary relation aRb has a sense proceeding from a to b, with the converse relation having an opposite sense, and sense "is the source of order and series." (p 95) He acknowledges that Immanuel Kant was "aware of the difference between logical opposition and the opposition of positive and negative". He wrote that Kant deserves credit as he "first called attention to the logical importance of asymmetric relations." The term poset as an abbreviation for partially ordered set is attributed to Garrett Birkhoff in the second edition of his influential book Lattice Theory. == See also == Causal sets Cyclic order Hierarchy (mathematics) Incidence algebra == Notes == == References == Birkhoff, Garrett (1940). Lattice Theory. Vol. 25 (3rd Revised ed.). American Mathematical Society. ISBN 978-0-8218-1025-5. Burris, S. N.; Sankappanavar, H. P. (1981). A Course in Universal Algebra. Springer. ISBN 978-0-387-90578-5. Davey, B. A.; Priestley, H. A. (2002). Introduction to Lattices and Order (2nd ed.). Cambridge University Press. ISBN 0-521-78451-4. Gierz, G.; Hofmann, K. H.; Keimel, K.; Mislove, M.; Scott, D. S. (2003). Continuous Lattices and Domains. Encyclopedia of Mathematics and its Applications. Vol. 93. Cambridge University Press. ISBN 978-0-521-80338-0. == External links == Orders at ProvenMath: partial order, linear order, well order, initial segment; formal definitions and proofs within the axioms of set theory. Nagel, Felix (2013). Set Theory and Topology. An Introduction to the Foundations of Analysis
Wikipedia/Order_theory
A non-associative algebra (or distributive algebra) is an algebra over a field where the binary multiplication operation is not assumed to be associative. That is, an algebraic structure A is a non-associative algebra over a field K if it is a vector space over K and is equipped with a K-bilinear binary multiplication operation A × A → A which may or may not be associative. Examples include Lie algebras, Jordan algebras, the octonions, and three-dimensional Euclidean space equipped with the cross product operation. Since it is not assumed that the multiplication is associative, using parentheses to indicate the order of multiplications is necessary. For example, the expressions (ab)(cd), (a(bc))d and a(b(cd)) may all yield different answers. While this use of non-associative means that associativity is not assumed, it does not mean that associativity is disallowed. In other words, "non-associative" means "not necessarily associative", just as "noncommutative" means "not necessarily commutative" for noncommutative rings. An algebra is unital or unitary if it has an identity element e with ex = x = xe for all x in the algebra. For example, the octonions are unital, but Lie algebras never are. The nonassociative algebra structure of A may be studied by associating it with other associative algebras which are subalgebras of the full algebra of K-endomorphisms of A as a K-vector space. Two such are the derivation algebra and the (associative) enveloping algebra, the latter being in a sense "the smallest associative algebra containing A". More generally, some authors consider the concept of a non-associative algebra over a commutative ring R: an R-module equipped with an R-bilinear binary multiplication operation. If a structure obeys all of the ring axioms apart from associativity (for example, any R-algebra), then it is naturally a ℤ-algebra, so some authors refer to non-associative ℤ-algebras as non-associative rings. == Algebras satisfying identities == Ring-like structures with two binary operations and no other restrictions are a broad class, one which is too general to study. For this reason, the best-known kinds of non-associative algebras satisfy identities, or properties, which simplify multiplication somewhat. These include the following. === Usual properties === Let x, y and z denote arbitrary elements of the algebra A over the field K. Let powers with positive (non-zero) integer exponents be defined recursively by x¹ ≝ x and either xⁿ⁺¹ ≝ xⁿx (right powers) or xⁿ⁺¹ ≝ xxⁿ (left powers), depending on authors. Unital: there exists an element e so that ex = x = xe; in that case we can define x⁰ ≝ e. Associative: (xy)z = x(yz). Commutative: xy = yx. Anticommutative: xy = −yx. Jacobi identity: (xy)z + (yz)x + (zx)y = 0 or x(yz) + y(zx) + z(xy) = 0 depending on authors. Jordan identity: (x²y)x = x²(yx) or (xy)x² = x(yx²) depending on authors. Alternative: (xx)y = x(xy) (left alternative) and (yx)x = y(xx) (right alternative). Flexible: (xy)x = x(yx). nth power associative with n ≥ 2: xⁿ⁻ᵏxᵏ = xⁿ for all integers k so that 0 < k < n. Third power associative: x²x = xx². Fourth power associative: x³x = x²x² = xx³ (compare with fourth power commutative below). Power associative: the subalgebra generated by any element is associative, i.e., nth power associative for all n ≥ 2. nth power commutative with n ≥ 2: xⁿ⁻ᵏxᵏ = xᵏxⁿ⁻ᵏ for all integers k so that 0 < k < n. Third power commutative: x²x = xx².
Fourth power commutative: x³x = xx³ (compare with fourth power associative above). Power commutative: the subalgebra generated by any element is commutative, i.e., nth power commutative for all n ≥ 2. Nilpotent of index n ≥ 2: the product of any n elements, in any association, vanishes, but not for some n−1 elements: x₁x₂…xₙ = 0 and there exist n−1 elements so that y₁y₂…yₙ₋₁ ≠ 0 for a specific association. Nil of index n ≥ 2: power associative and xⁿ = 0 and there exists an element y so that yⁿ⁻¹ ≠ 0. === Relations between properties === For K of any characteristic: Associative implies alternative. Any two out of the three properties left alternative, right alternative, and flexible, imply the third one. Thus, alternative implies flexible. Alternative implies Jordan identity. Commutative implies flexible. Anticommutative implies flexible. Alternative implies power associative. Flexible implies third power associative. Second power associative and second power commutative are always true. Third power associative and third power commutative are equivalent. nth power associative implies nth power commutative. Nil of index 2 implies anticommutative. Nil of index 2 implies Jordan identity. Nilpotent of index 3 implies Jacobi identity. Nilpotent of index n implies nil of index N with 2 ≤ N ≤ n. Unital and nil of index n are incompatible. If K ≠ GF(2) or dim(A) ≤ 3: Jordan identity and commutative together imply power associative. If char(K) ≠ 2: Right alternative implies power associative. Similarly, left alternative implies power associative. Unital and Jordan identity together imply flexible. Jordan identity and flexible together imply power associative. Commutative and anticommutative together imply nilpotent of index 2. Anticommutative implies nil of index 2. Unital and anticommutative are incompatible. If char(K) ≠ 3: Unital and Jacobi identity are incompatible. If char(K) ∉ {2,3,5}: Commutative and x⁴ = x²x² (one of the two identities defining fourth power associative) together imply power associative. If char(K) = 0: Third power associative and x⁴ = x²x² (one of the two identities defining fourth power associative) together imply power associative. If char(K) = 2: Commutative and anticommutative are equivalent. === Associator === The associator on A is the K-multilinear map [·,·,·] : A × A × A → A given by [x,y,z] = (xy)z − x(yz). It measures the degree of nonassociativity of A, and can be used to conveniently express some possible identities satisfied by A. Let x, y and z denote arbitrary elements of the algebra. Associative: [x,y,z] = 0. Alternative: [x,x,y] = 0 (left alternative) and [y,x,x] = 0 (right alternative). It implies that permuting any two terms changes the sign: [x,y,z] = −[x,z,y] = −[z,y,x] = −[y,x,z]; the converse holds only if char(K) ≠ 2. Flexible: [x,y,x] = 0. It implies that permuting the extremal terms changes the sign: [x,y,z] = −[z,y,x]; the converse holds only if char(K) ≠ 2. Jordan identity: [x²,y,x] = 0 or [x,y,x²] = 0 depending on authors. Third power associative: [x,x,x] = 0. The nucleus is the set of elements that associate with all others: that is, the n in A such that [n,A,A] = [A,n,A] = [A,A,n] = {0}. The nucleus is an associative subring of A.
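These identities are easy to test numerically in a concrete non-associative algebra. The sketch below (using NumPy; the function name assoc is ours) evaluates the associator for the cross product on R³, which is anticommutative, hence flexible, but not associative:

import numpy as np

def assoc(x, y, z):                       # [x,y,z] = (xy)z - x(yz)
    return np.cross(np.cross(x, y), z) - np.cross(x, np.cross(y, z))

x, y, z = np.array([1., 0, 0]), np.array([0., 1, 0]), np.array([1., 2, 3])
print(assoc(x, y, z))                     # nonzero: not associative
print(assoc(x, y, x))                     # zero vector: flexible, [x,y,x] = 0
print(np.cross(np.cross(x, y), z) + np.cross(np.cross(y, z), x)
      + np.cross(np.cross(z, x), y))      # zero vector: the Jacobi identity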
=== Center === The center of A is the set of elements that commute and associate with everything in A, that is the intersection of C(A) = {n ∈ A | nr = rn for all r ∈ A} with the nucleus. It turns out that for elements of C(A) it is enough that two of the sets [n,A,A], [A,n,A], [A,A,n] are {0} for the third to also be the zero set. == Examples == Euclidean space R³ with multiplication given by the vector cross product is an example of an algebra which is anticommutative and not associative. The cross product also satisfies the Jacobi identity. Lie algebras are algebras satisfying anticommutativity and the Jacobi identity. Algebras of vector fields on a differentiable manifold (if K is R or the complex numbers C) or an algebraic variety (for general K) are examples. Jordan algebras are algebras which satisfy the commutative law and the Jordan identity. Every associative algebra gives rise to a Lie algebra by using the commutator as Lie bracket. In fact every Lie algebra can either be constructed this way, or is a subalgebra of a Lie algebra so constructed. Every associative algebra over a field of characteristic other than 2 gives rise to a Jordan algebra by defining a new multiplication x*y = (xy+yx)/2. In contrast to the Lie algebra case, not every Jordan algebra can be constructed this way. Those that can are called special. Alternative algebras are algebras satisfying the alternative property. The most important examples of alternative algebras are the octonions (an algebra over the reals), and generalizations of the octonions over other fields. All associative algebras are alternative. Up to isomorphism, the only finite-dimensional alternative real division algebras (see below) are the reals, complexes, quaternions and octonions. Power-associative algebras are those algebras satisfying the power-associative identity. Examples include all associative algebras, all alternative algebras, Jordan algebras over a field other than GF(2) (see previous section), and the sedenions. The hyperbolic quaternion algebra over R, which was an experimental algebra before the adoption of Minkowski space for special relativity, is another example. More classes of algebras: Graded algebras. These include most of the algebras of interest to multilinear algebra, such as the tensor algebra, symmetric algebra, and exterior algebra over a given vector space. Graded algebras can be generalized to filtered algebras. Division algebras, in which multiplicative inverses exist. The finite-dimensional alternative division algebras over the field of real numbers have been classified. They are the real numbers (dimension 1), the complex numbers (dimension 2), the quaternions (dimension 4), and the octonions (dimension 8). The quaternions and octonions are not commutative. Of these algebras, all are associative except for the octonions. Quadratic algebras, which require that xx = re + sx, for some elements r and s in the ground field, and e a unit for the algebra. Examples include all finite-dimensional alternative algebras, and the algebra of real 2-by-2 matrices. Up to isomorphism the only alternative, quadratic real algebras without divisors of zero are the reals, complexes, quaternions, and octonions.
The Cayley–Dickson algebras (where K is R), which begin with: the complex numbers C (a commutative and associative algebra); the quaternions H (an associative algebra); the octonions O (an alternative algebra); the sedenions S; the trigintaduonions T; and the infinite sequence of further Cayley–Dickson algebras (power-associative algebras). Hypercomplex algebras are all the finite-dimensional unital R-algebras; they thus include the Cayley–Dickson algebras and many more. The Poisson algebras are considered in geometric quantization. They carry two multiplications, turning them into commutative algebras and Lie algebras in different ways. Genetic algebras are non-associative algebras used in mathematical genetics. Triple systems == Properties == There are several properties that may be familiar from ring theory, or from associative algebras, which are not always true for non-associative algebras. Unlike in the associative case, elements with a (two-sided) multiplicative inverse might also be zero divisors. For example, all non-zero elements of the sedenions have a two-sided inverse, but some of them are also zero divisors. == Free non-associative algebra == The free non-associative algebra on a set X over a field K is defined as the algebra with basis consisting of all non-associative monomials, finite formal products of elements of X retaining parentheses. The product of monomials u, v is just (u)(v). The algebra is unital if one takes the empty product as a monomial. Kurosh proved that every subalgebra of a free non-associative algebra is free. == Associated algebras == An algebra A over a field K is in particular a K-vector space and so one can consider the associative algebra EndK(A) of K-linear vector space endomorphisms of A. We can associate to the algebra structure on A two subalgebras of EndK(A), the derivation algebra and the (associative) enveloping algebra. === Derivation algebra === A derivation on A is a map D with the property D(x·y) = D(x)·y + x·D(y). The derivations on A form a subspace DerK(A) in EndK(A). The commutator of two derivations is again a derivation, so that the Lie bracket gives DerK(A) a structure of Lie algebra. === Enveloping algebra === There are linear maps L and R attached to each element a of an algebra A: L(a) : x ↦ ax; R(a) : x ↦ xa. Here each element L(a), R(a) is regarded as an element of EndK(A). The associative enveloping algebra or multiplication algebra of A is the sub-associative algebra of EndK(A) generated by the left and right linear maps L(a), R(a). The centroid of A is the centraliser of the enveloping algebra in the endomorphism algebra EndK(A). An algebra is central if its centroid consists of the K-scalar multiples of the identity. Some of the possible identities satisfied by non-associative algebras may be conveniently expressed in terms of the linear maps: Commutative: each L(a) is equal to the corresponding R(a); Associative: any L commutes with any R; Flexible: every L(a) commutes with the corresponding R(a); Jordan: every L(a) commutes with R(a²); Alternative: every L(a)² = L(a²) and similarly for the right. The quadratic representation Q is defined by Q(a) : x ↦ 2a·(a·x) − (a·a)·x, or equivalently, Q(a) = 2L²(a) − L(a²).
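A minimal sketch of this dictionary (our own construction, again for the cross-product algebra on R³): realize L(a) and R(a) as 3×3 matrices and check the matrix form of flexibility stated above.

import numpy as np

E = np.eye(3)
L = lambda a: np.column_stack([np.cross(a, e) for e in E])   # x -> a x
R = lambda a: np.column_stack([np.cross(e, a) for e in E])   # x -> x a

a = np.array([1., 2., 3.])
print(np.allclose(L(a) @ R(a), R(a) @ L(a)))   # True: L(a) commutes with R(a)
print(np.allclose(L(a), -R(a)))                # anticommutativity: L(a) = -R(a)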
The article on universal enveloping algebras describes the canonical construction of enveloping algebras, as well as the PBW-type theorems for them. For Lie algebras, such enveloping algebras have a universal property, which does not hold, in general, for non-associative algebras. The best-known example is perhaps the Albert algebra, an exceptional Jordan algebra that is not enveloped by the canonical construction of the enveloping algebra for Jordan algebras. == See also == List of algebras Commutative non-associative magmas, which give rise to non-associative algebras == Citations == == Notes == == References == Albert, A. Adrian (2003) [1939]. Structure of algebras. American Mathematical Society Colloquium Publ. Vol. 24 (Corrected reprint of the revised 1961 ed.). New York: American Mathematical Society. ISBN 0-8218-1024-3. Zbl 0023.19901. Albert, A. Adrian (1948a). "Power-associative rings". Transactions of the American Mathematical Society. 64: 552–593. doi:10.2307/1990399. ISSN 0002-9947. JSTOR 1990399. MR 0027750. Zbl 0033.15402. Albert, A. Adrian (1948b). "On right alternative algebras". Annals of Mathematics. 50: 318–328. doi:10.2307/1969457. JSTOR 1969457. Bremner, Murray; Murakami, Lúcia; Shestakov, Ivan (2013) [2006]. "Chapter 86: Nonassociative Algebras" (PDF). In Hogben, Leslie (ed.). Handbook of Linear Algebra (2nd ed.). CRC Press. ISBN 978-1-498-78560-0. Herstein, I. N., ed. (2011) [1965]. Some Aspects of Ring Theory: Lectures given at a Summer School of the Centro Internazionale Matematico Estivo (C.I.M.E.) held in Varenna (Como), Italy, August 23-31, 1965. C.I.M.E. Summer Schools. Vol. 37 (reprint ed.). Springer-Verlag. ISBN 3-6421-1036-3. Jacobson, Nathan (1968). Structure and representations of Jordan algebras. American Mathematical Society Colloquium Publications, Vol. XXXIX. Providence, R.I.: American Mathematical Society. ISBN 978-0-821-84640-7. MR 0251099. Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by J. Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. Zbl 0955.16001. Koecher, Max (1999). Krieg, Aloys; Walcher, Sebastian (eds.). The Minnesota notes on Jordan algebras and their applications. Lecture Notes in Mathematics. Vol. 1710. Berlin: Springer-Verlag. ISBN 3-540-66360-6. Zbl 1072.17513. Kokoris, Louis A. (1955). "Power-associative rings of characteristic two". Proceedings of the American Mathematical Society. 6 (5). American Mathematical Society: 705–710. doi:10.2307/2032920. Kurosh, A.G. (1947). "Non-associative algebras and free products of algebras". Mat. Sbornik. 20 (62). MR 0020986. Zbl 0041.16803. McCrimmon, Kevin (2004). A taste of Jordan algebras. Universitext. Berlin, New York: Springer-Verlag. doi:10.1007/b97489. ISBN 978-0-387-95447-9. MR 2014924. Zbl 1044.17001. Errata. Mikheev, I.M. (1976). "Right nilpotency in right alternative rings". Siberian Mathematical Journal. 17 (1): 178–180. doi:10.1007/BF00969304. Okubo, Susumu (2005) [1995]. Introduction to Octonion and Other Non-Associative Algebras in Physics. Montroll Memorial Lecture Series in Mathematical Physics. Vol. 2. Cambridge University Press. doi:10.1017/CBO9780511524479. ISBN 0-521-01792-0. Zbl 0841.17001. Rosenfeld, Boris (1997). Geometry of Lie groups. Mathematics and its Applications. Vol. 393. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-4390-5. Zbl 0867.53002. Rowen, Louis Halle (2008).
Graduate Algebra: Noncommutative View. Graduate studies in mathematics. American Mathematical Society. ISBN 0-8218-8408-5. Schafer, Richard D. (1995) [1966]. An Introduction to Nonassociative Algebras. Dover. ISBN 0-486-68813-5. Zbl 0145.25601. Zhevlakov, Konstantin A.; Slin'ko, Arkadii M.; Shestakov, Ivan P.; Shirshov, Anatoly I. (1982) [1978]. Rings that are nearly associative. Translated by Smith, Harry F. ISBN 0-12-779850-1.
Wikipedia/Non-associative_algebras
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations. The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach. In modern introductory texts on functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theories of measure, integration, and probability to infinite-dimensional spaces, also known as infinite dimensional analysis. == Normed vector spaces == The basic and historically first class of spaces studied in functional analysis are complete normed vector spaces over the real or complex numbers. Such spaces are called Banach spaces. An important example is a Hilbert space, where the norm arises from an inner product. These spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, machine learning, partial differential equations, and Fourier analysis. More generally, functional analysis includes the study of Fréchet spaces and other topological vector spaces not endowed with a norm. An important object of study in functional analysis are the continuous linear operators defined on Banach and Hilbert spaces. These lead naturally to the definition of C*-algebras and other operator algebras. === Hilbert spaces === Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to ℓ²(ℵ₀). Separability being important for applications, functional analysis of Hilbert spaces consequently mostly deals with this space. One of the open problems in functional analysis, the invariant subspace problem, asks whether every bounded linear operator on an infinite-dimensional separable complex Hilbert space has a non-trivial closed invariant subspace. Many special cases of this invariant subspace problem have already been proven. === Banach spaces === General Banach spaces are more complicated than Hilbert spaces, and cannot be classified in such a simple manner. In particular, many Banach spaces lack a notion analogous to an orthonormal basis.
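To see what an orthonormal basis buys in the Hilbert-space setting, here is a small finite-dimensional numerical sketch (using NumPy; nothing here is specific to the infinite-dimensional theory): coefficients against an orthonormal basis reconstruct the vector, and the norm satisfies Parseval's identity.

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))    # columns form an orthonormal basis
x = rng.standard_normal(5)
c = Q.T @ x                                         # coefficients <x, e_i>
print(np.allclose(Q @ c, x))                        # the expansion recovers x
print(np.isclose(np.linalg.norm(x)**2, np.sum(c**2)))   # Parseval's identity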
Examples of Banach spaces are Lᵖ-spaces for any real number p ≥ 1. Given also a measure μ on a set X, then Lᵖ(X), sometimes also denoted Lᵖ(X, μ) or Lᵖ(μ), has as its vectors equivalence classes [f] of measurable functions whose absolute value's p-th power has finite integral; that is, functions f for which one has ∫X |f(x)|ᵖ dμ(x) < ∞. If μ is the counting measure, then the integral may be replaced by a sum. That is, we require ∑x∈X |f(x)|ᵖ < ∞. Then it is not necessary to deal with equivalence classes, and the space is denoted ℓᵖ(X), written more simply ℓᵖ in the case when X is the set of non-negative integers. In Banach spaces, a large part of the study involves the dual space: the space of all continuous linear maps from the space into its underlying field, so-called functionals. A Banach space can be canonically identified with a subspace of its bidual, which is the dual of its dual space. The corresponding map is an isometry but in general not onto. A general Banach space and its bidual need not even be isometrically isomorphic in any way, contrary to the finite-dimensional situation. This is explained in the dual space article. Also, the notion of derivative can be extended to arbitrary functions between Banach spaces. See, for instance, the Fréchet derivative article. == Linear functional analysis == == Major and foundational results == There are four major theorems which are sometimes called the four pillars of functional analysis: the Hahn–Banach theorem the open mapping theorem the closed graph theorem the uniform boundedness principle, also known as the Banach–Steinhaus theorem. Important results of functional analysis include: === Uniform boundedness principle === The uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm. The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus but it was also proven independently by Hans Hahn. === Spectral theorem === There are many theorems known as the spectral theorem, but one in particular has many applications in functional analysis. This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure. There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now f may be complex-valued. === Hahn–Banach theorem === The Hahn–Banach theorem is a central tool in functional analysis.
It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". === Open mapping theorem === The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective then it is an open map. More precisely: if X and Y are Banach spaces and A : X → Y is a surjective continuous linear operator, then A maps open subsets of X to open subsets of Y. The proof uses the Baire category theorem, and completeness of both X and Y is essential to the theorem. The statement of the theorem is no longer true if either space is just assumed to be a normed space, but is true if X and Y are taken to be Fréchet spaces. === Closed graph theorem === The closed graph theorem states that a linear operator between Banach spaces is continuous if and only if its graph is closed as a subset of the product space. === Other topics === == Foundations of mathematics considerations == Most spaces considered in functional analysis have infinite dimension. To show the existence of a vector space basis for such spaces may require Zorn's lemma. However, a somewhat different concept, the Schauder basis, is usually more relevant in functional analysis. Many theorems require the Hahn–Banach theorem, usually proved using the axiom of choice, although the strictly weaker Boolean prime ideal theorem suffices. The Baire category theorem, needed to prove many important theorems, also requires a form of the axiom of choice. == Points of view == Functional analysis includes the following tendencies: Abstract analysis. An approach to analysis based on topological groups, topological rings, and topological vector spaces. Geometry of Banach spaces contains many topics. One is a combinatorial approach connected with Jean Bourgain; another is a characterization of Banach spaces in which various forms of the law of large numbers hold. Noncommutative geometry. Developed by Alain Connes, partly building on earlier notions, such as George Mackey's approach to ergodic theory. Connection with quantum mechanics. Either narrowly defined as in mathematical physics, or broadly interpreted by, for example, Israel Gelfand, to include most types of representation theory. == See also == List of functional analysis topics Spectral theory == References == == Further reading == Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis: A Hitchhiker's Guide, 3rd ed., Springer 2007, ISBN 978-3-540-32696-0. Online doi:10.1007/3-540-29587-9 (by subscription) Bachman, G., Narici, L.: Functional analysis, Academic Press, 1966. (reprint Dover Publications) Banach S. Theory of Linear Operations Archived 2021-10-28 at the Wayback Machine. Volume 38, North-Holland Mathematical Library, 1987, ISBN 0-444-70184-2 Brezis, H.: Analyse Fonctionnelle, Dunod ISBN 978-2-10-004314-9 or ISBN 978-2-10-049336-4 Conway, J. B.: A Course in Functional Analysis, 2nd edition, Springer-Verlag, 1994, ISBN 0-387-97245-5 Dunford, N. and Schwartz, J.T.: Linear Operators, General Theory, John Wiley & Sons, and other 3 volumes, includes visualization charts Edwards, R. E.: Functional Analysis, Theory and Applications, Holt, Rinehart and Winston, 1965. Eidelman, Yuli, Vitali Milman, and Antonis Tsolomitis: Functional Analysis: An Introduction, American Mathematical Society, 2004.
Friedman, A.: Foundations of Modern Analysis, Dover Publications, Paperback Edition, July 21, 2010 Giles, J.R.: Introduction to the Analysis of Normed Linear Spaces, Cambridge University Press, 2000 Hirsch, F., Lacombe, G.: Elements of Functional Analysis, Springer, 1999. Hutson, V., Pym, J.S., Cloud, M.J.: Applications of Functional Analysis and Operator Theory, 2nd edition, Elsevier Science, 2005, ISBN 0-444-51790-1 Kantorovitz, S.: Introduction to Modern Analysis, Oxford University Press, 2003; 2nd ed. 2006. Kolmogorov, A.N. and Fomin, S.V.: Elements of the Theory of Functions and Functional Analysis, Dover Publications, 1999 Kreyszig, E.: Introductory Functional Analysis with Applications, Wiley, 1989. Lax, P.: Functional Analysis, Wiley-Interscience, 2002, ISBN 0-471-55604-1 Lebedev, L.P. and Vorovich, I.I.: Functional Analysis in Mechanics, Springer-Verlag, 2002 Michel, Anthony N. and Charles J. Herget: Applied Algebra and Functional Analysis, Dover, 1993. Pietsch, Albrecht: History of Banach spaces and linear operators, Birkhäuser Boston Inc., 2007, ISBN 978-0-8176-4367-6 Reed, M., Simon, B.: Functional Analysis, Academic Press, 1980. Riesz, F. and Sz.-Nagy, B.: Functional Analysis, Dover Publications, 1990 Rudin, W.: Functional Analysis, McGraw-Hill Science, 1991 Saxe, Karen: Beginning Functional Analysis, Springer, 2001 Schechter, M.: Principles of Functional Analysis, AMS, 2nd edition, 2001 Shilov, Georgi E.: Elementary Functional Analysis, Dover, 1996. Sobolev, S.L.: Applications of Functional Analysis in Mathematical Physics, AMS, 1963 Vogt, D., Meise, R.: Introduction to Functional Analysis, Oxford University Press, 1997. Yosida, K.: Functional Analysis, Springer-Verlag, 6th edition, 1980 == External links == "Functional analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Topics in Real and Functional Analysis by Gerald Teschl, University of Vienna. Lecture Notes on Functional Analysis by Yevgeny Vilensky, New York University. Lecture videos on functional analysis by Greg Morrow Archived 2017-04-01 at the Wayback Machine from University of Colorado Colorado Springs
Wikipedia/Functional_analysis
In mathematics, a loop in a topological space X is a continuous function f from the unit interval I = [0,1] to X such that f(0) = f(1). In other words, it is a path whose initial point is equal to its terminal point. A loop may also be seen as a continuous map f from the pointed unit circle S1 into X, because S1 may be regarded as a quotient of I under the identification of 0 with 1. The set of all loops in X forms a space called the loop space of X. == See also == Free loop Loop group Loop space Loop algebra Fundamental group Quasigroup == References ==
Wikipedia/Loop_(topology)
In calculus and related areas of mathematics, a linear function from the real numbers to the real numbers is a function whose graph (in Cartesian coordinates) is a non-vertical line in the plane. The characteristic property of linear functions is that when the input variable is changed, the change in the output is proportional to the change in the input. Linear functions are related to linear equations. == Properties == A linear function is a polynomial function in which the variable x has degree at most one: f(x) = ax + b. Such a function is called linear because its graph, the set of all points (x, f(x)) in the Cartesian plane, is a line. The coefficient a is called the slope of the function and of the line (see below). If the slope is a = 0, this is a constant function f(x) = b defining a horizontal line, which some authors exclude from the class of linear functions. With this definition, the degree of a linear polynomial would be exactly one, and its graph would be a line that is neither vertical nor horizontal. However, in this article, a ≠ 0 is not required, so constant functions will be considered linear. If b = 0 then the linear function is said to be homogeneous. Such a function defines a line that passes through the origin of the coordinate system, that is, the point (x, y) = (0, 0). In advanced mathematics texts, the term linear function often denotes specifically homogeneous linear functions, while the term affine function is used for the general case, which includes b ≠ 0. The natural domain of a linear function f(x), the set of allowed input values for x, is the entire set of real numbers, x ∈ ℝ. One can also consider such functions with x in an arbitrary field, taking the coefficients a, b in that field. The graph y = f(x) = ax + b is a non-vertical line having exactly one intersection with the y-axis, its y-intercept point (x, y) = (0, b). The y-intercept value y = f(0) = b is also called the initial value of f(x). If a ≠ 0, the graph is a non-horizontal line having exactly one intersection with the x-axis, the x-intercept point (x, y) = (−b/a, 0). The x-intercept value x = −b/a, the solution of the equation f(x) = 0, is also called the root or zero of f(x). == Slope == The slope of a nonvertical line is a number that measures how steeply the line is slanted (rise-over-run). If the line is the graph of the linear function f(x) = ax + b, this slope is given by the constant a. The slope measures the constant rate of change of f(x) per unit change in x: whenever the input x is increased by one unit, the output changes by a units: f(x+1) = f(x) + a, and more generally f(x+Δx) = f(x) + aΔx for any number Δx.
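This defining property is easy to verify numerically; a tiny sketch (the names are ours):

a, b = 3.0, -2.0
f = lambda x: a * x + b

xs = [0.0, 1.5, 7.0]
print(all(abs(f(x + 1) - f(x) - a) < 1e-12 for x in xs))          # f(x+1) = f(x) + a
dx = 0.25
print(all(abs(f(x + dx) - f(x) - a * dx) < 1e-12 for x in xs))    # f(x+Δx) = f(x) + aΔx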
If the slope is positive, a > 0, then the function f(x) is increasing; if a < 0, then f(x) is decreasing. In calculus, the derivative of a general function measures its rate of change. A linear function f(x) = ax + b has a constant rate of change equal to its slope a, so its derivative is the constant function f′(x) = a. The fundamental idea of differential calculus is that any smooth function f(x) (not necessarily linear) can be closely approximated near a given point x = c by a unique linear function. The derivative f′(c) is the slope of this linear function, and the approximation is: f(x) ≈ f′(c)(x − c) + f(c) for x ≈ c. The graph of the linear approximation is the tangent line of the graph y = f(x) at the point (c, f(c)). The derivative slope f′(c) generally varies with the point c. Linear functions can be characterized as the only real functions whose derivative is constant: if f′(x) = a for all x, then f(x) = ax + b for b = f(0). == Slope-intercept, point-slope, and two-point forms == A given linear function f(x) can be written in several standard formulas displaying its various properties. The simplest is the slope-intercept form: f(x) = ax + b, from which one can immediately see the slope a and the initial value f(0) = b, which is the y-intercept of the graph y = f(x). Given a slope a and one known value f(x₀) = y₀, we write the point-slope form: f(x) = a(x − x₀) + y₀. In graphical terms, this gives the line y = f(x) with slope a passing through the point (x₀, y₀). The two-point form starts with two known values f(x₀) = y₀ and f(x₁) = y₁. One computes the slope a = (y₁ − y₀)/(x₁ − x₀) and inserts this into the point-slope form: f(x) = ((y₁ − y₀)/(x₁ − x₀))(x − x₀) + y₀. Its graph y = f(x) is the unique line passing through the points (x₀, y₀), (x₁, y₁). The equation y = f(x) may also be written to emphasize the constant slope: (y − y₀)/(x − x₀) = (y₁ − y₀)/(x₁ − x₀). == Relationship with linear equations == Linear functions commonly arise from practical problems involving variables x, y with a linear relationship, that is, obeying a linear equation Ax + By = C.
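The conversion carried out in the next paragraph can be phrased as a small routine (a sketch; the function name is ours), which also signals the vertical-line case handled below:

def to_function(A, B, C):
    # y = (-A/B) x + C/B, valid only when B != 0
    if B == 0:
        raise ValueError("B = 0 gives a vertical line, not a function of x")
    return lambda x: (-A / B) * x + C / B

f = to_function(6, 3, 12)      # the budget line of the example below
print(f(0), f(2))              # 4.0 0.0: y-intercept 4, x-intercept at x = 2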
If B ≠ 0, one can solve this equation for y, obtaining y = −(A/B)x + C/B = ax + b, where we denote a = −A/B and b = C/B. That is, one may consider y as a dependent variable (output) obtained from the independent variable (input) x via a linear function: y = f(x) = ax + b. In the xy-coordinate plane, the possible values of (x, y) form a line, the graph of the function f(x). If B = 0 in the original equation, the resulting line x = C/A is vertical, and cannot be written as y = f(x). The features of the graph y = f(x) = ax + b can be interpreted in terms of the variables x and y. The y-intercept is the initial value y = f(0) = b at x = 0. The slope a measures the rate of change of the output y per unit change in the input x. In the graph, moving one unit to the right (increasing x by 1) moves the y-value up by a: that is, f(x+1) = f(x) + a. Negative slope a indicates a decrease in y for each increase in x. For example, the linear function y = −2x + 4 has slope a = −2, y-intercept point (0, b) = (0, 4), and x-intercept point (2, 0). === Example === Suppose salami and sausage cost €6 and €3 per kilogram, and we wish to buy €12 worth. How much of each can we purchase? If x kilograms of salami and y kilograms of sausage cost a total of €12, then €6·x + €3·y = €12. Solving for y gives the slope-intercept form y = −2x + 4, as above. That is, if we first choose the amount of salami x, the amount of sausage can be computed as a function y = f(x) = −2x + 4. Since salami costs twice as much as sausage, adding one kilo of salami decreases the sausage by 2 kilos: f(x+1) = f(x) − 2, and the slope is −2. The y-intercept point (x, y) = (0, 4) corresponds to buying only 4 kg of sausage; while the x-intercept point (x, y) = (2, 0) corresponds to buying only 2 kg of salami. Note that the graph includes points with negative values of x or y, which have no meaning in terms of the original variables (unless we imagine selling meat to the butcher). Thus we should restrict our function f(x) to the domain 0 ≤ x ≤ 2. Also, we could choose y as the independent variable, and compute x by the inverse linear function: x = g(y) = −(1/2)y + 2 over the domain 0 ≤ y ≤ 4.
It means that when log(g(x)) is a linear function of x, the function g is exponential. With linear functions, increasing the input by one unit causes the output to increase by a fixed amount, which is the slope of the graph of the function. With exponential functions, increasing the input by one unit causes the output to increase by a fixed multiple, which is known as the base of the exponential function. If both arguments and values of a function are in the logarithmic scale (i.e., when log(y) is a linear function of log(x)), then the straight line represents a power law: log r ⁡ y = a log r ⁡ x + b ⇒ y = r b ⋅ x a {\displaystyle \log _{r}y=a\log _{r}x+b\quad \Rightarrow \quad y=r^{b}\cdot x^{a}} On the other hand, the graph of a linear function in terms of polar coordinates: r = f ( θ ) = a θ + b {\displaystyle r=f(\theta )=a\theta +b} is an Archimedean spiral if a ≠ 0 {\displaystyle a\neq 0} and a circle otherwise. == See also == Affine map, a generalization Arithmetic progression, a linear function of integer argument == Notes == == References == Stewart, James (2012), Calculus: Early Transcendentals (7E ed.), Brooks/Cole, ISBN 978-0-538-49790-9 Swokowski, Earl W. (1983), Calculus with analytic geometry (Alternate ed.), Boston: Prindle, Weber & Schmidt, ISBN 0871503417 == External links == https://web.archive.org/web/20130524101825/http://www.math.okstate.edu/~noell/ebsm/linear.html https://web.archive.org/web/20180722042342/https://corestandards.org/assets/CCSSI_Math%20Standards.pdf
Wikipedia/Linear_function_(calculus)
Mathematical physics is the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics. == Scope == There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. === Classical mechanics === Applying the techniques of mathematical physics to classical mechanics typically involves the rigorous, abstract, and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics (including both approaches in the presence of constraints). Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay between the notions of symmetry and conserved quantities during the dynamical evolution of mechanical systems, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory, and quantum field theory. Moreover, they have provided multiple examples and ideas in differential geometry (e.g., several notions in symplectic geometry and vector bundles). === Partial differential equations === Within mathematics proper, the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These fields were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. === Quantum theory === The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and, more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty. === Relativity and quantum relativistic theories === The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important. === Statistical mechanics === Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics (or its quantum version) and it is closely related to the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics.
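Several of these branches, classical mechanics especially, lend themselves to short computational illustrations. As a minimal sketch (our own example, not drawn from any particular text), the following Python fragment integrates Hamilton's equations for a unit-mass harmonic oscillator with the symplectic Euler scheme and checks that the energy, the conserved quantity tied to time symmetry by Noether's theorem, stays nearly constant:

# Hamiltonian H(q, p) = p**2/(2*m) + k*q**2/2 for a mass on a spring.
# Hamilton's equations: dq/dt = dH/dp = p/m, dp/dt = -dH/dq = -k*q.
# Symplectic Euler is chosen here because it approximately conserves energy.

m, k = 1.0, 1.0
dt, steps = 0.01, 10_000

def energy(q, p):
    return p * p / (2 * m) + k * q * q / 2

q, p = 1.0, 0.0               # start displaced by 1, at rest
e0 = energy(q, p)
for _ in range(steps):
    p -= dt * k * q           # update momentum from the force -k*q
    q += dt * p / m           # then update position from the new momentum
drift = abs(energy(q, p) - e0) / e0
print(f"relative energy drift after {steps} steps: {drift:.2e}")

The bounded energy drift, rather than a steady growth, is the numerical footprint of the symplectic structure that the Lagrangian and Hamiltonian formulations make explicit.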
== Usage == The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy", the scope at that time being "the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature". === Mathematical vs. theoretical physics === The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of a mathematical aspect with a theoretical physics aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes mathematical rigour of the type found in mathematics. On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, or approximate arguments. Such arguments are not considered rigorous by mathematicians. Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation). The effort to put physical theories on a mathematically rigorous footing has not only advanced physics but also influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory.
Epicycles consist of circles upon circles. According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence, known in Greek as aether ("pure air")—that was the pure substance beyond the sublunary sphere, and thus the pure composition of celestial entities. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion. An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having made use of experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, two central concepts of what today is known as classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object. René Descartes developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance helped bring about the demise of Aristotelian physics. Descartes used mathematical reasoning as a model for science, and developed analytic geometry, which in time allowed the plotting of locations in 3D space (Cartesian coordinates) and marking their progressions along the flow of time. Christiaan Huygens, a talented mathematician and physicist and older contemporary of Newton, was the first to successfully idealize a physical problem by a set of mathematical parameters in Horologium Oscillatorium (1673), and the first to fully mathematize a mechanistic explanation of an unobservable physical phenomenon in Traité de la Lumière (1690). He is thus considered a forerunner of theoretical physics and one of the founders of modern mathematical physics. === Newtonian physics and post Newtonian === The prevailing framework for science in the 16th and early 17th centuries was one borrowed from Ancient Greek mathematics, where geometrical shapes formed the building blocks to describe and think about space, and time was often thought of as a separate entity. With the introduction of algebra into geometry, and with it the idea of a coordinate system, time and space could now be thought of as axes belonging to the same plane. This essential mathematical framework underlies all modern physics and is used in all the mathematical frameworks developed in later centuries. By the middle of the 17th century, important concepts such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding maxima and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in mathematics and physics. He was extremely successful in his application of calculus and other methods to the study of motion.
Newton's theory of motion, culminating in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity. In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813), for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of Analytical Dynamics called Hamiltonian dynamics was also made by the Irish physicist, astronomer, and mathematician William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms. Into the early 19th century, the following mathematicians in France, Germany, and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism. A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Augustin-Jean Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. In the mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell equations. Initially, optics was found to be a consequence of Maxwell's field.
Later, radiation and what is today known as the electromagnetic spectrum were also found to be consequences of this electromagnetic field. The English physicist Lord Rayleigh [1842–1919] worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903) and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to the German mathematician Carl Gustav Jacobi (1804–1851), in particular regarding canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844–1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics. === Relativistic === By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928]. In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion into the aether prompted aether's shortening, too, as modeled in the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared. Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object.
Cartesian coordinates arbitrarily used rectilinear coordinates. Gauss, inspired by Descartes' work, introduced curved geometry, replacing rectilinear axes with curved ones. Gauss also introduced another key tool of modern physics, the curvature. Gauss's work was limited to two dimensions. Extending it to three or more dimensions introduced considerable complexity, requiring the (not yet invented) tensors. It was Bernhard Riemann who extended curved geometry to N dimensions. In 1908, Einstein's former mathematics professor Hermann Minkowski applied the curved geometry construction to model 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor. Newton's concept of gravity, "two masses attract each other", was replaced by the geometrical argument that mass transforms the curvature of spacetime and that free-falling particles with mass move along geodesic curves in spacetime, in the vicinity of either mass or energy. (Riemannian geometry already existed before the 1850s, developed by the mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry.) (Under special relativity—a special case of general relativity—even massless energy exerts gravitational effect by its mass equivalence locally "curving" the geometry of the four unified dimensions of space and time.) === Quantum === Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, mathematician Henri Poincaré published Sur la théorie des quanta. He introduced the first non-naïve definition of quantization in this paper. The development of early quantum physics followed a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Louis de Broglie (1892–1987), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite-dimensional vector space.
That is called Hilbert space (introduced by mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of generalization of Euclidean space and study of integral equations), and rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, where he built up a relevant part of modern functional analysis on Hilbert spaces, the spectral theory (introduced by David Hilbert who investigated quadratic forms with infinitely many variables. Many years later, it had been revealed that his spectral theory is associated with the spectrum of the hydrogen atom. He was surprised by this application.) in particular. Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron. === List of prominent contributors to mathematical physics in the 20th century === Prominent contributors to the 20th century's mathematical physics include (ordered by birth date): William Thomson (Lord Kelvin) (1824–1907) Oliver Heaviside (1850–1925) Jules Henri Poincaré (1854–1912) David Hilbert (1862–1943) Arnold Sommerfeld (1868–1951) Constantin Carathéodory (1873–1950) Albert Einstein (1879–1955) Emmy Noether (1882–1935) Max Born (1882–1970) George David Birkhoff (1884–1944) Hermann Weyl (1885–1955) Satyendra Nath Bose (1894–1974) Louis de Broglie (1892–1987) Norbert Wiener (1894–1964) John Lighton Synge (1897–1995) Mário Schenberg (1914–1990) Wolfgang Pauli (1900–1958) Paul Dirac (1902–1984) Eugene Wigner (1902–1995) Andrey Kolmogorov (1903–1987) Lars Onsager (1903–1976) John von Neumann (1903–1957) Sin-Itiro Tomonaga (1906–1979) Hideki Yukawa (1907–1981) Nikolay Nikolayevich Bogolyubov (1909–1992) Subrahmanyan Chandrasekhar (1910–1995) Mark Kac (1914–1984) Julian Schwinger (1918–1994) Richard Phillips Feynman (1918–1988) Irving Ezra Segal (1918–1998) Ryogo Kubo (1920–1995) Arthur Strong Wightman (1922–2013) Chen-Ning Yang (1922–) Rudolf Haag (1922–2016) Freeman John Dyson (1923–2020) Martin Gutzwiller (1925–2014) Abdus Salam (1926–1996) Jürgen Moser (1928–1999) Michael Francis Atiyah (1929–2019) Joel Louis Lebowitz (1930–) Roger Penrose (1931–) Elliott Hershel Lieb (1932–) Yakir Aharonov (1932–) Sheldon Glashow (1932–) Steven Weinberg (1933–2021) Ludvig Dmitrievich Faddeev (1934–2017) David Ruelle (1935–) Yakov Grigorevich Sinai (1935–) Vladimir Igorevich Arnold (1937–2010) Arthur Michael Jaffe (1937–) Roman Wladimir Jackiw (1939–) Leonard Susskind (1940–) Rodney James Baxter (1940–) Michael Victor Berry (1941–) Giovanni Gallavotti (1941–) Stephen William Hawking (1942–2018) Jerrold Eldon Marsden (1942–2010) Michael C. Reed (1942–) John Michael Kosterlitz (1943–) Israel Michael Sigal (1945–) Alexander Markovich Polyakov (1945–) Barry Simon (1946–) Herbert Spohn (1946–) John Lawrence Cardy (1947–) Giorgio Parisi (1948-) Abhay Ashtekar (1949-) Edward Witten (1951–) F. 
Duncan Haldane (1951–) Ashoke Sen (1956–) Juan Martín Maldacena (1968–) == See also == International Association of Mathematical Physics Notable publications in mathematical physics List of mathematical physics journals Gauge theory (mathematics) Relationship between mathematics and physics Theoretical, computational and philosophical physics == Notes == == References == Zaslow, Eric (2005), Physmatics, arXiv:physics/0506153, Bibcode:2005physics...6153Z == Further reading == === Generic works === Allen, Jont (2020), An Invitation to Mathematical Physics and its History, Springer, Bibcode:2020imph.book.....A, ISBN 978-3-030-53758-6 Courant, Richard; Hilbert, David (1989), Methods of Mathematical Physics, Vol 1–2, Interscience Publishers, Bibcode:1989mmp..book.....C Françoise, Jean P.; Naber, Gregory L.; Tsun, Tsou S. (2006), Encyclopedia of Mathematical Physics, Elsevier, ISBN 978-0-1251-2660-1 Joos, Georg; Freeman, Ira M. (1987), Theoretical Physics (3rd ed.), Dover Publications, ISBN 0-486-65227-0 Kato, Tosio (1995), Perturbation Theory for Linear Operators (2nd ed.), Springer-Verlag, ISBN 3-540-58661-X Margenau, Henry; Murphy, George M. (2009), The Mathematics of Physics and Chemistry (2nd ed.), Young Press, ISBN 978-1444627473 Masani, Pesi R. (1976–1986), Norbert Wiener: Collected Works with Commentaries, Vol 1–4, The MIT Press Morse, Philip M.; Feshbach, Herman (1999), Methods of Theoretical Physics, Vol 1–2, McGraw Hill, ISBN 0-07-043316-X Thirring, Walter E. (1978–1983), A Course in Mathematical Physics, Vol 1–4, Springer-Verlag Tikhomirov, Vladimir M. (1991–1993), Selected Works of A. N. Kolmogorov, Vol 1–3, Kluwer Academic Publishers Titchmarsh, Edward C. (1985), The Theory of Functions (2nd ed.), Oxford University Press === Textbooks for undergraduate studies === Arfken, George B.; Weber, Hans J.; Harris, Frank E. (2013), Mathematical Methods for Physicists: A Comprehensive Guide (7th ed.), Academic Press, ISBN 978-0-12-384654-9, (Mathematical Methods for Physicists, Solutions for Mathematical Methods for Physicists (7th ed.), archive.org) Bayın, Selçuk Ş. (2018), Mathematical Methods in Science and Engineering (2nd ed.), Wiley, ISBN 9781119425397 Boas, Mary L. (2006), Mathematical Methods in the Physical Sciences (3rd ed.), Wiley, ISBN 978-0-471-19826-0 Butkov, Eugene (1968), Mathematical Physics, Addison-Wesley Hassani, Sadri (2009), Mathematical Methods for Students of Physics and Related Fields, (2nd ed.), New York, Springer, eISBN 978-0-387-09504-2 Jeffreys, Harold; Swirles Jeffreys, Bertha (1956), Methods of Mathematical Physics (3rd ed.), Cambridge University Press Marsh, Adam (2018), "Mathematics for Physics: An Illustrated Handbook", Contemporary Physics, 59 (3), World Scientific: 329, Bibcode:2018ConPh..59..329N, doi:10.1080/00107514.2018.1501430, ISBN 978-981-3233-91-1 Mathews, Jon; Walker, Robert L. (1970), Mathematical Methods of Physics (2nd ed.), W. A. Benjamin, Bibcode:1970mmp..book.....M, ISBN 0-8053-7002-1 Menzel, Donald H. (1961), Mathematical Physics, Dover Publications, ISBN 0-486-60056-4 Riley, Ken F.; Hobson, Michael P.; Bence, Stephen J. (2006), Mathematical Methods for Physics and Engineering (3rd ed.), Cambridge University Press, ISBN 978-0-521-86153-3 Stakgold, Ivar (2000), Boundary Value Problems of Mathematical Physics, Vol 1-2., Society for Industrial and Applied Mathematics, ISBN 0-89871-456-7 Starkovich, Steven P.
(2021), The Structures of Mathematical Physics: An Introduction, Springer, Bibcode:2021smpa.book.....S, ISBN 978-3-030-73448-0 === Textbooks for graduate studies === Blanchard, Philippe; Brüning, Erwin (2015), Mathematical Methods in Physics: Distributions, Hilbert Space Operators, Variational Methods, and Applications in Quantum Physics (2nd ed.), Springer, Bibcode:2015mmpd.book.....B, ISBN 978-3-319-14044-5 Cahill, Kevin (2019), Physical Mathematics (2nd ed.), Cambridge University Press, ISBN 978-1-108-47003-2 Geroch, Robert (1985), Mathematical Physics, University of Chicago Press, ISBN 0-226-28862-5 Hassani, Sadri (2013), Mathematical Physics: A Modern Introduction to its Foundations (2nd ed.), Springer-Verlag, Bibcode:2013mpmi.book.....H, ISBN 978-3-319-01194-3 Marathe, Kishore (2010), Topics in Physical Mathematics, Springer-Verlag, ISBN 978-1-84882-938-1 Milstein, Grigori N.; Tretyakov, Michael V. (2021), Stochastic Numerics for Mathematical Physics (2nd ed.), Springer, ISBN 978-3-030-82039-8 Reed, Michael C.; Simon, Barry (1972–1981), Methods of Modern Mathematical Physics, Vol 1-4, Academic Press Richtmyer, Robert D. (1978–1981), Principles of Advanced Mathematical Physics, Vol 1-2., Springer-Verlag Rudolph, Gerd; Schmidt, Matthias (2013–2017), Differential Geometry and Mathematical Physics, Vol 1-2, Springer Serov, Valery (2017), Fourier Series, Fourier Transform and Their Applications to Mathematical Physics, Springer, ISBN 978-3-319-65261-0 Simon, Barry (2015), A Comprehensive Course in Analysis, Vol 1-5, American Mathematical Society Stakgold, Ivar; Holst, Michael (2011), Green's Functions and Boundary Value Problems (3rd ed.), Wiley, ISBN 978-0-470-60970-5 Stone, Michael; Goldbart, Paul (2009), "Mathematics for Physics: A Guided Tour for Graduate Students", Physics Today, 62 (10), Cambridge University Press: 57, Bibcode:2009PhT....62j..57S, doi:10.1063/1.3248483, ISBN 978-0-521-85403-0 Szekeres, Peter (2004), A Course in Modern Mathematical Physics: Groups, Hilbert Space and Differential Geometry, Cambridge University Press, ISBN 978-0-521-53645-5 Taylor, Michael E. (2011), Partial Differential Equations, Vol 1-3 (2nd ed.), Springer. Whittaker, Edmund T.; Watson, George N. (1950), A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions, with an Account of the Principal Transcendental Functions (4th ed.), Cambridge University Press === Specialized texts in classical physics === Abraham, Ralph; Marsden, Jerrold E. (2008), Foundations of Mechanics: A Mathematical Exposition of Classical Mechanics with an Introduction to the Qualitative Theory of Dynamical Systems (2nd ed.), AMS Chelsea Publishing, ISBN 978-0-8218-4438-0 Adam, John A. (2017), Rays, Waves, and Scattering: Topics in Classical Mathematical Physics, Princeton University Press., ISBN 978-0-691-14837-3 Arnold, Vladimir I. (1997), Mathematical Methods of Classical Mechanics (2nd ed.), Springer-Verlag, ISBN 0-387-96890-3 Bloom, Frederick (1993), Mathematical Problems of Classical Nonlinear Electromagnetic Theory, CRC Press, ISBN 0-582-21021-6 Boyer, Franck; Fabrie, Pierre (2013), Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models, Springer, ISBN 978-1-4614-5974-3 Colton, David; Kress, Rainer (2013), Integral Equation Methods in Scattering Theory, Society for Industrial and Applied Mathematics, ISBN 978-1-611973-15-0 Ciarlet, Philippe G. (1988–2000), Mathematical Elasticity, Vol 1–3, Elsevier Galdi, Giovanni P. 
(2011), An Introduction to the Mathematical Theory of the Navier-Stokes Equations: Steady-State Problems (2nd ed.), Springer, ISBN 978-0-387-09619-3 Hanson, George W.; Yakovlev, Alexander B. (2002), Operator Theory for Electromagnetics: An Introduction, Springer, ISBN 978-1-4419-2934-1 Kirsch, Andreas; Hettlich, Frank (2015), The Mathematical Theory of Time-Harmonic Maxwell's Equations: Expansion-, Integral-, and Variational Methods, Springer, Bibcode:2015mttm.book.....K, ISBN 978-3-319-11085-1 Knauf, Andreas (2018), Mathematical Physics: Classical Mechanics, Springer, Bibcode:2018mpcm.book.....K, ISBN 978-3-662-55772-3 Lechner, Kurt (2018), Classical Electrodynamics: A Modern Perspective, Springer, ISBN 978-3-319-91808-2 Marsden, Jerrold E.; Ratiu, Tudor S. (1999), Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems (2nd ed.), Springer, ISBN 978-1-4419-3143-6 Müller, Claus (1969), Foundations of the Mathematical Theory of Electromagnetic Waves, Springer-Verlag, ISBN 978-3-662-11775-0 Ramm, Alexander G. (2018), Scattering by Obstacles and Potentials, World Scientific, ISBN 9789813220966 Roach, Gary F.; Stratis, Ioannis G.; Yannacopoulos, Athanasios N. (2012), Mathematical Analysis of Deterministic and Stochastic Problems in Complex Media Electromagnetics, Princeton University Press, Bibcode:2012mads.book.....R, ISBN 978-0-691-14217-3 === Specialized texts in modern physics === Baez, John C.; Muniain, Javier P. (1994), Gauge Fields, Knots, and Gravity, World Scientific, ISBN 981-02-2034-0 Blank, Jiří; Exner, Pavel; Havlíček, Miloslav (2008), Hilbert Space Operators in Quantum Physics (2nd ed.), Springer, Bibcode:2008hsoq.book.....B, ISBN 978-1-4020-8869-8 Engel, Eberhard; Dreizler, Reiner M. (2011), Density Functional Theory: An Advanced Course, Springer-Verlag, ISBN 978-3-642-14089-1 Glimm, James; Jaffe, Arthur (1987), Quantum Physics: A Functional Integral Point of View (2nd ed.), Springer-Verlag, ISBN 0-387-96477-0 Haag, Rudolf (1996), Local Quantum Physics: Fields, Particles, Algebras (2nd ed.), Springer-Verlag, ISBN 3-540-61049-9 Hall, Brian C. (2013), Quantum Theory for Mathematicians, Springer, Bibcode:2013qtm..book.....H, ISBN 978-1-4614-7115-8 Hamilton, Mark J. D. (2017), Mathematical Gauge Theory: With Applications to the Standard Model of Particle Physics, Springer, Bibcode:2017mgta.book.....H, ISBN 978-3-319-68438-3 Hawking, Stephen W.; Ellis, George F. R. (1973), The Large Scale Structure of Space-Time, Cambridge University Press, ISBN 0-521-20016-4 Jackiw, Roman (1995), Diverse Topics in Theoretical and Mathematical Physics, World Scientific, ISBN 9810216963 Landsman, Klaas (2017), Foundations of Quantum Theory: From Classical Concepts to Operator Algebras, Springer, Bibcode:2017fqtf.book.....L, ISBN 978-3-319-51776-6 Moretti, Valter (2017), Spectral Theory and Quantum Mechanics: Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation, Unitext, vol. 
110 (2nd ed.), Springer, Bibcode:2017stqm.book.....M, doi:10.1007/978-3-319-70706-8, ISBN 978-3-319-70705-1, S2CID 125121522 Robert, Didier; Combescure, Monique (2021), Coherent States and Applications in Mathematical Physics (2nd ed.), Springer, Bibcode:2021csam.book.....R, ISBN 978-3-030-70844-3 Tasaki, Hal (2020), Physics and mathematics of quantum many-body systems, Springer, ISBN 978-3-030-41265-4, OCLC 1154567924 Teschl, Gerald (2009), Mathematical Methods in Quantum Mechanics: With Applications to Schrödinger Operators, American Mathematical Society, ISBN 978-0-8218-4660-5 Thirring, Walter E. (2002), Quantum Mathematical Physics: Atoms, Molecules and Large Systems (2nd ed.), Springer-Verlag, Bibcode:2002qmpa.book.....T, ISBN 978-3-642-07711-1 von Neumann, John (2018), Mathematical Foundations of Quantum Mechanics, Princeton University Press, ISBN 978-0-691-17856-1 Weyl, Hermann (2014), The Theory of Groups and Quantum Mechanics, Martino Fine Books, ISBN 978-1614275800 Ynduráin, Francisco J. (2006), The Theory of Quark and Gluon Interactions (4th ed.), Springer, Bibcode:2006tqgi.book.....Y, ISBN 978-3642069741 Zeidler, Eberhard (2006–2011), Quantum Field Theory: A Bridge Between Mathematicians and Physicists, Vol 1-3, Springer == External links == Media related to Mathematical physics at Wikimedia Commons
Wikipedia/Mathematical_physics
Elementary algebra, also known as high school algebra or college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values). This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations. == Algebraic operations == == Algebraic notation == Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression 3 x 2 − 2 x y + c {\displaystyle 3x^{2}-2xy+c} has the following components: A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a , b , c {\displaystyle a,b,c} ) are typically used to represent constants, and those toward the end of the alphabet (e.g. x , y {\displaystyle x,y} and z) are used to represent variables. They are usually printed in italics. Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x 2 {\displaystyle 3\times x^{2}} is written as 3 x 2 {\displaystyle 3x^{2}} , and 2 × x × y {\displaystyle 2\times x\times y} may be written 2 x y {\displaystyle 2xy} . Usually terms with the highest power (exponent) are written on the left, for example, x 2 {\displaystyle x^{2}} is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1 x 2 {\displaystyle 1x^{2}} is written x 2 {\displaystyle x^{2}} ). Likewise, when the exponent (power) is one, it is usually omitted (e.g. 3 x 1 {\displaystyle 3x^{1}} is written 3 x {\displaystyle 3x} ). When the exponent is zero, the result is always 1 (e.g. x 0 {\displaystyle x^{0}} is always rewritten to 1). However, 0 0 {\displaystyle 0^{0}} , being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents. === Alternative notation === Other types of notation are used in algebraic expressions when the required formatting is not available, or cannot be implied, such as where only letters and symbols are available.
As an illustration of this, while exponents are usually formatted using superscripts, e.g., x 2 {\displaystyle x^{2}} , in plain text, and in the TeX mark-up language, the caret symbol ^ represents exponentiation, so x 2 {\displaystyle x^{2}} is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x 2 {\displaystyle x^{2}} is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, 3 x {\displaystyle 3x} is written "3*x". == Concepts == === Variables === Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons. Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20 {\displaystyle C=P+20} . Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 {\displaystyle 60\times 5=300} seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m {\displaystyle s=60\times m} , where m is the number of minutes. Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by π = c / d {\displaystyle \pi =c/d} . Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as ( a + b ) = ( b + a ) {\displaystyle (a+b)=(b+a)} . === Simplifying expressions === Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example, Added terms are simplified using coefficients. For example, x + x + x {\displaystyle x+x+x} can be simplified as 3 x {\displaystyle 3x} (where 3 is a numerical coefficient). Multiplied terms are simplified using exponents. For example, x × x × x {\displaystyle x\times x\times x} is represented as x 3 {\displaystyle x^{3}} Like terms are added together, for example, 2 x 2 + 3 a b − x 2 + a b {\displaystyle 2x^{2}+3ab-x^{2}+ab} is written as x 2 + 4 a b {\displaystyle x^{2}+4ab} , because the terms containing x 2 {\displaystyle x^{2}} are added together, and the terms containing a b {\displaystyle ab} are added together. Brackets can be "multiplied out", using the distributive property. For example, x ( 2 x + 3 ) {\displaystyle x(2x+3)} can be written as ( x × 2 x ) + ( x × 3 ) {\displaystyle (x\times 2x)+(x\times 3)} which can be written as 2 x 2 + 3 x {\displaystyle 2x^{2}+3x} Expressions can be factored. For example, 6 x 5 + 3 x 2 {\displaystyle 6x^{5}+3x^{2}} , by dividing both terms by the common factor, 3 x 2 {\displaystyle 3x^{2}} can be written as 3 x 2 ( 2 x 3 + 1 ) {\displaystyle 3x^{2}(2x^{3}+1)} === Equations === An equation states that two expressions are equal using the symbol for equality, = (the equals sign). 
One of the best-known equations describes the Pythagorean theorem, relating the lengths of the sides of a right triangle: c 2 = a 2 + b 2 {\displaystyle c^{2}=a^{2}+b^{2}} This equation states that c 2 {\displaystyle c^{2}} , representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b. An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a {\displaystyle a+b=b+a} ); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x 2 − 1 = 8 {\displaystyle x^{2}-1=8} is true only for x = 3 {\displaystyle x=3} and x = − 3 {\displaystyle x=-3} . The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving. Another type of equation is an inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: a > b {\displaystyle a>b} where > {\displaystyle >} represents 'greater than', and a < b {\displaystyle a<b} where < {\displaystyle <} represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped. ==== Properties of equality ==== By definition, equality is an equivalence relation, meaning it is reflexive (i.e. b = b {\displaystyle b=b} ), symmetric (i.e. if a = b {\displaystyle a=b} then b = a {\displaystyle b=a} ), and transitive (i.e. if a = b {\displaystyle a=b} and b = c {\displaystyle b=c} then a = c {\displaystyle a=c} ). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties: if a = b {\displaystyle a=b} and c = d {\displaystyle c=d} then a + c = b + d {\displaystyle a+c=b+d} and a c = b d {\displaystyle ac=bd} ; if a = b {\displaystyle a=b} then a + c = b + c {\displaystyle a+c=b+c} and a c = b c {\displaystyle ac=bc} ; more generally, for any function f, if a = b {\displaystyle a=b} then f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} . ==== Properties of inequality ==== The relations less than < {\displaystyle <} and greater than > {\displaystyle >} have the property of transitivity: If a < b {\displaystyle a<b} and b < c {\displaystyle b<c} then a < c {\displaystyle a<c} ; If a < b {\displaystyle a<b} and c < d {\displaystyle c<d} then a + c < b + d {\displaystyle a+c<b+d} ; If a < b {\displaystyle a<b} and c > 0 {\displaystyle c>0} then a c < b c {\displaystyle ac<bc} ; If a < b {\displaystyle a<b} and c < 0 {\displaystyle c<0} then b c < a c {\displaystyle bc<ac} . By reversing the inequation, < {\displaystyle <} and > {\displaystyle >} can be swapped, for example: a < b {\displaystyle a<b} is equivalent to b > a {\displaystyle b>a} === Substitution === Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a*5 makes a new expression 3*5 with meaning 15. Substituting the terms of a statement makes a new statement.
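The mechanical character of substitution is easy to illustrate computationally; a minimal Python sketch (the example equations are our own choices):

# A conditional equation such as x**2 - 1 == 8 holds only for some substitutions.
def holds(x):
    return x ** 2 - 1 == 8

print([x for x in range(-10, 11) if holds(x)])   # prints [-3, 3]

# An identity such as a + b == b + a survives every substitution we try.
assert all(a + b == b + a for a in range(-5, 6) for b in range(-5, 6))

Testing finitely many substitutions can refute a claimed identity but never proves one; the gap between checking instances and proving a statement for all values is exactly the gap between substitution and algebraic proof.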
When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if a 2 := a × a {\displaystyle a^{2}:=a\times a} is meant as the definition of a 2 , {\displaystyle a^{2},} as the product of a with itself, substituting 3 for a informs the reader of this statement that 3 2 {\displaystyle 3^{2}} means 3 × 3 = 9. Often it is not known whether the statement is true independently of the values of the terms, and substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false; this implies that if x + 1 = 0 then x cannot be 1. If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Consider abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0. If the original fact were stated as "ab = 0 implies a = 0 or b = 0", then when saying "consider abc = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, one substitutes a for a and b for bc (and with bc = 0, substituting b for a and c for b). This shows that substituting for the terms in a statement is not always the same as letting the terms from the statement equal the substituted terms. In this situation it is clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0." == Solving algebraic equations == The following sections lay out examples of some of the types of algebraic equations that may be encountered. === Linear equations with one variable === Linear equations are so called because, when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider: Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child? Equivalent equation: 2 x + 4 = 12 {\displaystyle 2x+4=12} where x represents the child's age To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows: subtracting 4 from both sides gives 2 x = 8 {\displaystyle 2x=8} , and dividing both sides by 2 gives x = 4 {\displaystyle x=4} . In words: the child is 4 years old. The general form of a linear equation with one variable can be written as: a x + b = c {\displaystyle ax+b=c} Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = c − b a {\displaystyle x={\frac {c-b}{a}}} === Linear equations with two variables === A linear equation with two variables has many (i.e. an infinite number of) solutions. For example: Problem in words: A father is 22 years older than his son. How old are they?
Equivalent equation: y = x + 22 {\displaystyle y=x+22} where y is the father's age, x is the son's age. That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, which can be solved as described above. Solving a linear equation with two variables (unknowns) requires two related equations. For example, if it was also revealed that: Problem in words In 10 years, the father will be twice as old as his son. Equivalent equation y + 10 = 2 × ( x + 10 ) y = 2 × ( x + 10 ) − 10 Subtract 10 from both sides y = 2 x + 20 − 10 Multiply out brackets y = 2 x + 10 Simplify {\displaystyle {\begin{aligned}y+10&=2\times (x+10)\\y&=2\times (x+10)-10&&{\text{Subtract 10 from both sides}}\\y&=2x+20-10&&{\text{Multiply out brackets}}\\y&=2x+10&&{\text{Simplify}}\end{aligned}}} Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method): { y = x + 22 First equation y = 2 x + 10 Second equation {\displaystyle {\begin{cases}y=x+22&{\text{First equation}}\\y=2x+10&{\text{Second equation}}\end{cases}}} Subtract the first equation from ( y − y ) = ( 2 x − x ) + 10 − 22 the second in order to remove y 0 = x − 12 Simplify 12 = x Add 12 to both sides x = 12 Rearrange {\displaystyle {\begin{aligned}&&&{\text{Subtract the first equation from}}\\(y-y)&=(2x-x)+10-22&&{\text{the second in order to remove }}y\\0&=x-12&&{\text{Simplify}}\\12&=x&&{\text{Add 12 to both sides}}\\x&=12&&{\text{Rearrange}}\end{aligned}}} In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations. For other ways to solve this kind of equation, see System of linear equations below. === Quadratic equations === A quadratic equation is one which includes a term with an exponent of 2, for example, x 2 {\displaystyle x^{2}} , and no term with higher exponent. The name derives from the Latin quadrus, meaning square. In general, a quadratic equation can be expressed in the form a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} , where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this, a quadratic equation must contain the term a x 2 {\displaystyle ax^{2}} , which is known as the quadratic term. Hence a ≠ 0 {\displaystyle a\neq 0} , and so we may divide by a and rearrange the equation into the standard form x 2 + p x + q = 0 {\displaystyle x^{2}+px+q=0} where p = b a {\displaystyle p={\frac {b}{a}}} and q = c a {\displaystyle q={\frac {c}{a}}} . Solving this, by a process known as completing the square, leads to the quadratic formula x = − b ± b 2 − 4 a c 2 a , {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}},} where the symbol "±" indicates that both x = − b + b 2 − 4 a c 2 a and x = − b − b 2 − 4 a c 2 a {\displaystyle x={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\quad {\text{and}}\quad x={\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}} are solutions of the quadratic equation. Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring: x 2 + 3 x − 10 = 0 , {\displaystyle x^{2}+3x-10=0,} which is the same thing as ( x + 5 ) ( x − 2 ) = 0.
{\displaystyle (x+5)(x-2)=0.} It follows from the zero-product property that x = 2 {\displaystyle x=2} and x = − 5 {\displaystyle x=-5} are the solutions, since at least one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example, x 2 + 1 = 0 {\displaystyle x^{2}+1=0} has no real number solution since no real number squared equals −1. Sometimes a quadratic equation has a root of multiplicity 2, such as: ( x + 1 ) 2 = 0. {\displaystyle (x+1)^{2}=0.} For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as [ x − ( − 1 ) ] [ x − ( − 1 ) ] = 0. {\displaystyle [x-(-1)][x-(-1)]=0.} ==== Complex numbers ==== All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation x 2 + x + 1 = 0 {\displaystyle x^{2}+x+1=0} has solutions x = − 1 + − 3 2 and x = − 1 − − 3 2 . {\displaystyle x={\frac {-1+{\sqrt {-3}}}{2}}\quad \quad {\text{and}}\quad \quad x={\frac {-1-{\sqrt {-3}}}{2}}.} Since − 3 {\displaystyle {\sqrt {-3}}} is not any real number, both of these solutions for x are complex numbers. === Exponential and logarithmic equations === An exponential equation is one which has the form a x = b {\displaystyle a^{x}=b} for a > 0 {\displaystyle a>0} , which has solution x = log a ⁡ b = ln ⁡ b ln ⁡ a {\displaystyle x=\log _{a}b={\frac {\ln b}{\ln a}}} when b > 0 {\displaystyle b>0} . Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if 3 ⋅ 2 x − 1 + 1 = 10 {\displaystyle 3\cdot 2^{x-1}+1=10} then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3, we obtain 2 x − 1 = 3 {\displaystyle 2^{x-1}=3} whence x − 1 = log 2 ⁡ 3 {\displaystyle x-1=\log _{2}3} or x = log 2 ⁡ 3 + 1. {\displaystyle x=\log _{2}3+1.} A logarithmic equation is an equation of the form log a ⁡ ( x ) = b {\displaystyle \log _{a}(x)=b} for a > 0 {\displaystyle a>0} , which has solution x = a b . {\displaystyle x=a^{b}.} For example, if 4 log 5 ⁡ ( x − 3 ) − 2 = 6 {\displaystyle 4\log _{5}(x-3)-2=6} then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get log 5 ⁡ ( x − 3 ) = 2 {\displaystyle \log _{5}(x-3)=2} whence x − 3 = 5 2 = 25 {\displaystyle x-3=5^{2}=25} from which we obtain x = 28. {\displaystyle x=28.} === Radical equations === A radical equation is one that includes a radical sign, which includes square roots, x , {\displaystyle {\sqrt {x}},} cube roots, x 3 {\displaystyle {\sqrt[{3}]{x}}} , and nth roots, x n {\displaystyle {\sqrt[{n}]{x}}} . Recall that an nth root can be rewritten in exponential format, so that x n {\displaystyle {\sqrt[{n}]{x}}} is equivalent to x 1 n {\displaystyle x^{\frac {1}{n}}} . Combined with regular exponents (powers), then x 3 2 {\displaystyle {\sqrt[{2}]{x^{3}}}} (the square root of x cubed), can be rewritten as x 3 2 {\displaystyle x^{\frac {3}{2}}} . So a common form of a radical equation is x m n = a {\displaystyle {\sqrt[{n}]{x^{m}}}=a} (equivalent to x m n = a {\displaystyle x^{\frac {m}{n}}=a} ) where m and n are integers.
An equation of this form has real solution(s). For example, if

    (x + 5)^(2/3) = 4,

then

    x + 5 = ±(√4)³
    x + 5 = ±8
    x = −5 ± 8,

and thus

    x = 3    or    x = −13.

=== System of linear equations ===

There are different methods to solve a system of linear equations with two variables.

==== Elimination method ====

An example of solving a system of linear equations is by using the elimination method:

    4x + 2y = 14
    2x − y = 1.

Multiplying the terms in the second equation by 2:

    4x + 2y = 14
    4x − 2y = 2.

Adding the two equations together gives

    8x = 16,

which simplifies to

    x = 2.

Since x = 2 is known, it is then possible to deduce that y = 3 from either of the original two equations (by substituting 2 for x). The full solution to this problem is then

    x = 2, y = 3.

This is not the only way to solve this specific system; y could have been resolved before x.

==== Substitution method ====

Another way of solving the same system of linear equations is by substitution:

    4x + 2y = 14
    2x − y = 1.

An equivalent for y can be deduced by using one of the two equations. Using the second equation:

    2x − y = 1.

Subtracting 2x from each side of the equation:

    2x − 2x − y = 1 − 2x
    −y = 1 − 2x,

and multiplying by −1:

    y = 2x − 1.

Using this y value in the first equation of the original system:

    4x + 2(2x − 1) = 14
    4x + 4x − 2 = 14
    8x − 2 = 14.

Adding 2 on each side of the equation:

    8x − 2 + 2 = 14 + 2
    8x = 16,

which simplifies to

    x = 2.

Using this value in one of the equations, the same solution as in the previous method is obtained:

    x = 2, y = 3.

This is not the only way to solve this specific system; in this case as well, y could have been solved before x.

=== Other types of systems of linear equations ===

==== Inconsistent systems ====

In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is

    x + y = 1
    0x + 0y = 2.

As 0 ≠ 2, the second equation in the system has no solution. Therefore, the system has no solution. However, not all inconsistent systems are recognized at first sight. As an example, consider the system

    4x + 2y = 12
    −2x − y = −4.

Multiplying by 2 both sides of the second equation, and adding it to the first one, results in

    0x + 0y = 4,

which clearly has no solution.
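Both behaviors can be reproduced with a standard linear-algebra routine. The following is a minimal sketch, assuming NumPy is available; it is an illustration, not a method prescribed by the article:

    import numpy as np

    # Coefficient matrix and right-hand side for the system
    #   4x + 2y = 14
    #   2x -  y =  1
    A = np.array([[4.0, 2.0],
                  [2.0, -1.0]])
    b = np.array([14.0, 1.0])

    x, y = np.linalg.solve(A, b)   # works because det(A) = -8 is nonzero
    print(x, y)                    # 2.0 3.0

    # For the inconsistent system 4x + 2y = 12, -2x - y = -4 the
    # coefficient matrix is singular, and np.linalg.solve raises
    # numpy.linalg.LinAlgError instead of returning a solution.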
==== Undetermined systems ====

There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning a unique pair of values for x and y). For example:

    4x + 2y = 12
    −2x − y = −6.

Isolating y in the second equation:

    y = −2x + 6.

Using this value in the first equation of the system:

    4x + 2(−2x + 6) = 12
    4x − 4x + 12 = 12
    12 = 12.

The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y = −2x + 6. There are infinitely many solutions for this system.

==== Over- and underdetermined systems ====

Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is

    x + 2y = 10
    y − z = 2.

When trying to solve it, one is led to express some variables as functions of the other ones if any solutions exist, but cannot express all solutions numerically, because there are infinitely many of them if there are any (a computational sketch of the one-parameter solution family follows the external links below). A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others.

== See also ==

History of algebra
Binary operation
Gaussian elimination
Mathematics education
Number line
Polynomial
Cancelling out
Tarski's high school algebra problem

== References ==

Leonhard Euler, Elements of Algebra, 1770. English translation Tarquin Press, 2007, ISBN 978-1-899618-79-8; also online digitized editions 2006, 1822.
Charles Smith, A Treatise on Algebra, in Cornell University Library Historical Math Monographs.
Redden, John. Elementary Algebra. Flat World Knowledge, 2011.

== External links ==

Media related to Elementary algebra at Wikimedia Commons
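The underdetermined example above can also be explored symbolically. A minimal sketch, assuming SymPy is installed (illustrative only, not part of the references above):

    from sympy import symbols, Eq, solve

    x, y, z = symbols('x y z')

    # Two equations, three unknowns: an underdetermined system.
    solution = solve([Eq(x + 2*y, 10), Eq(y - z, 2)], [x, y], dict=True)
    print(solution)   # [{x: 6 - 2*z, y: z + 2}] -- one solution per value of z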
Wikipedia/Elementary_algebra
In mathematics, a composition algebra A over a field K is a not necessarily associative algebra over K together with a nondegenerate quadratic form N that satisfies

    N(xy) = N(x)N(y)

for all x and y in A. A composition algebra includes an involution called a conjugation: x ↦ x*. The quadratic form N(x) = xx* is called the norm of the algebra; a numerical check of the composition property follows the further reading list at the end of this entry. A composition algebra (A, ∗, N) is either a division algebra or a split algebra, depending on the existence of a non-zero v in A such that N(v) = 0, called a null vector. When x is not a null vector, the multiplicative inverse of x is x*/N(x). When there is a non-zero null vector, N is an isotropic quadratic form, and "the algebra splits".

== Structure theorem ==

Every unital composition algebra over a field K can be obtained by repeated application of the Cayley–Dickson construction starting from K (if the characteristic of K is different from 2) or a 2-dimensional composition subalgebra (if char(K) = 2). The possible dimensions of a composition algebra are 1, 2, 4, and 8.

1-dimensional composition algebras only exist when char(K) ≠ 2.
Composition algebras of dimension 1 and 2 are commutative and associative.
Composition algebras of dimension 2 are either quadratic field extensions of K or isomorphic to K ⊕ K.
Composition algebras of dimension 4 are called quaternion algebras. They are associative but not commutative.
Composition algebras of dimension 8 are called octonion algebras. They are neither associative nor commutative.

For consistent terminology, algebras of dimension 1 have been called unarion, and those of dimension 2 binarion. Every composition algebra is an alternative algebra. Using the doubled form (_ : _): A × A → K given by

    (a : b) = N(a + b) − N(a) − N(b),

the trace of a is given by (a : 1) and the conjugate by a* = (a : 1)e − a, where e is the basis element for 1. A series of exercises proves that a composition algebra is always an alternative algebra.

== Instances and usage ==

When the field K is taken to be the complex numbers C and the quadratic form z², then four composition algebras over C are C itself, the bicomplex numbers, the biquaternions (isomorphic to the 2×2 complex matrix ring M(2, C)), and the bioctonions C ⊗ O, which are also called complex octonions. The matrix ring M(2, C) has long been an object of interest, first as biquaternions by Hamilton (1853), later in the isomorphic matrix form, and especially as the Pauli algebra.

The squaring function N(x) = x² on the real number field forms the primordial composition algebra. When the field K is taken to be the real numbers R, then there are just six other real composition algebras.: 166  In two, four, and eight dimensions there are both a division algebra and a split algebra:

binarions: complex numbers with quadratic form x² + y², and split-complex numbers with quadratic form x² − y²,
quaternions and split-quaternions,
octonions and split-octonions.

Every composition algebra has an associated bilinear form B(x, y) constructed from the norm N and a polarization identity:

    B(x, y) = [N(x + y) − N(x) − N(y)]/2.

== History ==

The composition of sums of squares was noted by several early authors.
Diophantus was aware of the identity involving the sum of two squares, now called the Brahmagupta–Fibonacci identity, which is also articulated as a property of Euclidean norms of complex numbers when multiplied. Leonhard Euler discussed the four-square identity in 1748, and it led W. R. Hamilton to construct his four-dimensional algebra of quaternions.: 62  In 1848, tessarines were described, giving first light to bicomplex numbers. About 1818, Danish scholar Ferdinand Degen displayed the Degen's eight-square identity, which was later connected with norms of elements of the octonion algebra:

Historically, the first non-associative algebra, the Cayley numbers ... arose in the context of the number-theoretic problem of quadratic forms permitting composition … this number-theoretic question can be transformed into one concerning certain algebraic systems, the composition algebras...: 61 

In 1919, Leonard Dickson advanced the study of the Hurwitz problem with a survey of efforts to that date, and by exhibiting the method of doubling the quaternions to obtain the Cayley numbers. He introduced a new imaginary unit e, and for quaternions q and Q writes a Cayley number q + Qe. Denoting the quaternion conjugate by q′, the product of two Cayley numbers is

    (q + Qe)(r + Re) = (qr − R′Q) + (Rq + Qr′)e.

The conjugate of a Cayley number is q′ − Qe, and the quadratic form is qq′ + QQ′, obtained by multiplying the number by its conjugate. The doubling method has come to be called the Cayley–Dickson construction. In 1923, the case of real algebras with positive definite forms was delimited by Hurwitz's theorem (composition algebras). In 1931, Max Zorn introduced a gamma (γ) into the multiplication rule in the Dickson construction to generate split-octonions. Adrian Albert also used the gamma in 1942 when he showed that Dickson doubling could be applied to any field with the squaring function to construct binarion, quaternion, and octonion algebras with their quadratic forms. Nathan Jacobson described the automorphisms of composition algebras in 1958. The classical composition algebras over R and C are unital algebras. Composition algebras without a multiplicative identity were found by H. P. Petersson (Petersson algebras) and Susumu Okubo (Okubo algebras) and others.: 463–81 

== See also ==

Freudenthal magic square
Pfister form
Triality

== References ==

== Further reading ==

Faraut, Jacques; Korányi, Adam (1994). Analysis on Symmetric Cones. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York. pp. 81–86. ISBN 0-19-853477-9. MR 1446489.
Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. Zbl 1068.11023.
Harvey, F. Reese (1990). Spinors and Calibrations. Perspectives in Mathematics. Vol. 9. San Diego: Academic Press. ISBN 0-12-329650-1. Zbl 0694.53002.
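As promised above, the composition property N(xy) = N(x)N(y) can be checked numerically in the four-dimensional case. The following is a minimal sketch in plain Python, an illustration under stated assumptions rather than material drawn from the sources above; quat_mul implements the Hamilton product and norm is the sum of squared components:

    def quat_mul(p, q):
        # Hamilton product of quaternions written as 4-tuples
        # (a, b, c, d) standing for a + b*i + c*j + d*k.
        a, b, c, d = p
        e, f, g, h = q
        return (a*e - b*f - c*g - d*h,
                a*f + b*e + c*h - d*g,
                a*g - b*h + c*e + d*f,
                a*h + b*g - c*f + d*e)

    def norm(q):
        # N(q) = q q* = a**2 + b**2 + c**2 + d**2 for the quaternions.
        return sum(t * t for t in q)

    p, q = (1, 2, 3, 4), (5, -1, 0, 2)
    assert norm(quat_mul(p, q)) == norm(p) * norm(q)   # N(xy) == N(x)N(y)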
Wikipedia/Composition_algebra
Physics is the scientific study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. It is one of the most fundamental scientific disciplines. A scientist who specializes in the field of physics is called a physicist. Physics is one of the oldest academic disciplines. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century these natural sciences branched into separate research endeavors. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy. Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of technologies that have transformed modern society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus.

== History ==

The word physics comes from the Latin physica ('study of nature'), which itself is a borrowing of the Greek φυσική (phusikḗ 'natural science'), a term derived from φύσις (phúsis 'origin, nature, property').

=== Ancient astronomy ===

Astronomy is one of the oldest natural sciences. Early civilizations dating before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky, which, however, could not explain the positions of the planets. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere.

=== Natural philosophy ===

Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment; for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus.

=== Aristotle and Hellenistic physics ===

During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy developed along many lines of inquiry.
Aristotle (Greek: Ἀριστοτέλης, Aristotélēs) (384–322 BCE), a student of Plato, wrote on many subjects, including a substantial treatise on "Physics", in the 4th century BC. Aristotelian physics was influential for about two millennia. His approach mixed some limited observation with logical deductive arguments, but did not rely on experimental verification of deduced statements. Aristotle's foundational work in Physics, though very imperfect, formed a framework against which later thinkers further developed the field. His approach is entirely superseded today.

He explained ideas such as motion (and gravity) with the theory of four elements. Aristotle believed that each of the four classical elements (air, fire, water, earth) had its own natural place. Because of their differing densities, each element reverts to its own specific place in the atmosphere. So, because of their weights, fire would be at the top, air underneath fire, then water, then lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element will automatically go towards its own natural place. For example, if there is a fire on the ground, the flames go up into the air in an attempt to go back to their natural place. His laws of motion included: that heavier objects fall faster, with the speed being proportional to the weight, and that the speed at which an object falls depends inversely on the density of the medium it is falling through (e.g. the density of air). He also stated that, when it comes to violent motion (motion of an object when a force is applied to it by a second object), the speed at which that object moves will only be as great as the measure of force applied to it. The problem of motion and its causes was studied carefully, leading to the philosophical notion of a "prime mover" as the ultimate source of all motion in the world (Book 8 of his treatise Physics).

=== Medieval European and Islamic ===

The Western Roman Empire fell to invaders and internal decay in the fifth century, resulting in a decline in intellectual pursuits in western Europe. By contrast, the Eastern Roman Empire (usually known as the Byzantine Empire) resisted the attacks from invaders and continued to advance various fields of learning, including physics. In the sixth century, John Philoponus challenged the dominant Aristotelian approach to science, although much of his work was focused on Christian theology. In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest.

Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. The most notable innovations under Islamic scholarship were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented the alternative to the ancient Greek idea about vision.
He discussed his experiments with the camera obscura, showing that light moves in a straight line, and he encouraged readers to reproduce his experiments, making him one of the originators of the scientific method.

=== Scientific Revolution ===

Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Major developments in this period include the replacement of the geocentric model of the Solar System with the heliocentric Copernican model, the laws governing the motion of planetary bodies (determined by Johannes Kepler between 1609 and 1619), Galileo's pioneering work on telescopes and observational astronomy in the 16th and 17th centuries, and Isaac Newton's discovery and unification of the laws of motion and universal gravitation (which would come to bear his name). Newton, and separately Gottfried Wilhelm Leibniz, developed calculus, the mathematical study of continuous change, and Newton applied it to solve physical problems.

=== 19th century ===

The discovery of laws in thermodynamics, chemistry, and electromagnetics resulted from research efforts during the Industrial Revolution as energy needs increased. By the end of the 19th century, theories of thermodynamics, mechanics, and electromagnetics matched a wide variety of observations. Taken together, these theories became the basis for what would later be called classical physics.: 2  A few experimental results remained inexplicable. Classical electromagnetism presumed a medium, a luminiferous aether, to support the propagation of waves, but this medium could not be detected. The intensity of light from hot glowing blackbody objects did not match the predictions of thermodynamics and electromagnetism. The character of electron emission from illuminated metals differed from predictions. These failures, seemingly insignificant in the big picture, would upset the physics world in the first two decades of the 20th century.

=== 20th century ===

Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light. Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics improving on classical physics at very small scales. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived.
Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012, all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research. Areas of mathematics, such as the study of probabilities and groups, are important to this field.

== Core theories ==

Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature. These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity.

=== Distinction between classical and modern physics ===

In the first decades of the 20th century, physics was revolutionized by the discoveries of quantum mechanics and relativity. The changes were so fundamental that these new concepts became the foundation of "modern physics", with other topics becoming "classical physics". The majority of applications of physics are essentially classical.: xxxi  The laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light.: xxxii  Outside of this domain, observations do not match predictions provided by classical mechanics.: 6 

=== Classical theory ===

Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century—classical mechanics, thermodynamics, and electromagnetism.: 2  Classical mechanics is concerned with bodies acted on by forces and bodies in motion, and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter of which includes such branches as hydrostatics, hydrodynamics and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received. Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing; and electroacoustics, the manipulation of audible sound waves using electronics. Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current.
Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.

=== Modern theory ===

The discovery of relativity and of quantum mechanics in the first decades of the 20th century transformed the conceptual basis of physics without reducing the practical value of most of the physical theories developed up to that time. Consequently, the topics of physics have come to be divided into "classical physics" and "modern physics", with the latter category including effects related to quantum mechanics and relativity.: 2 

Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale, since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.

The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields, and the general theory of relativity with motion and its connection with gravitation. Both quantum theory and the theory of relativity find applications in many areas of modern physics. Fundamental concepts in modern physics include action, causality, covariance, particle, physical field, physical interaction, quantum, statistical ensemble, symmetry, and wave.

== Research ==

=== Scientific method ===

Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of a theory. A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation.

=== Theory and experiment ===

Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other.
Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment). Physicists who work at the interplay of theory and experiment are called phenomenologists; they study complex phenomena observed in experiment and work to relate them to a fundamental theory. Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way. Beyond the known universe, the field of theoretical physics also deals with hypothetical issues, such as parallel universes, a multiverse, and higher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions. Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas that have not been explored well by theorists.

=== Scope and aims ===

Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos, and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science". Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims both to connect the things humans can observe to root causes, and then to connect these causes together. For example, the ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects, such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see section Current research below for more information).

=== Current research ===

Research in physics is continually progressing on a large number of fronts. In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity. Many condensed matter experiments are aiming to fabricate workable spintronics and quantum computers. In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear.
Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing.

Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections. These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. In remarks from 1932, later recounted in the Annual Review of Fluid Mechanics, Horace Lamb said:

I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.

== Branches and fields ==

=== Fields ===

The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table. Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare. Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach.

==== Nuclear and particle ====

Particle physics is the study of the elementary constituents of matter and energy and the interactions between them. In addition, particle physicists design and develop the high-energy accelerators, detectors, and computer programs necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles. Currently, the interactions of elementary particles and fields are described by the Standard Model. The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces. Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively). The Standard Model also predicts a particle known as the Higgs boson.
In July 2012, CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson, an integral part of the Higgs mechanism. Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology.

==== Atomic, molecular, and optical ====

Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions on the scale of single atoms and molecules. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include classical, semi-classical and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view). Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions, low-temperature collision dynamics, and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics. Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm.

==== Condensed matter ====

Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms. More exotic condensed phases include the superfluid and the Bose–Einstein condensate found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials, and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices. Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967. In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics. Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering.

==== Astrophysics ====

Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology.
Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the Earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy.

Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang. The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter.

== Other aspects ==

=== Education ===

=== Careers ===

=== Philosophy ===

Physics, as with the rest of science, relies on the philosophy of science and its "scientific method" to advance knowledge of the physical world. The scientific method employs a priori and a posteriori reasoning as well as the use of Bayesian inference to measure the validity of a given theory. Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism, and realism. Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism, and Erwin Schrödinger, who wrote on quantum mechanics. The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking, a view Penrose discusses in his book, The Road to Reality. Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views.

Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras, Plato, Galileo, and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields. Physics uses mathematics to organise and formulate experimental results. From those results, precise or estimated solutions are obtained, or quantitative results, from which new predictions can be made and experimentally confirmed or negated. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation, have made computational physics an active area of research. Ontology is a prerequisite for physics, but not for mathematics.
This means that physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematical statements have to be only logically true, while predictions of physics statements must match observed and experimental data. The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical. The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for.

=== Fundamental vs. applied physics ===

Physics is a branch of fundamental science (also called basic science). Physics is also called "the fundamental science" because all branches of natural science, including chemistry, astronomy, geology, and biology, are constrained by laws of physics. Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Fundamental physics seeks to better explain and understand phenomena in all spheres, without a specific practical application as a goal, other than the deeper insight into the phenomena themselves.

Applied physics is a general term for physics research and development that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem. The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics. Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations. With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty.
For example, in the study of the origin of the Earth, a physicist can reasonably model Earth's mass, temperature, and rate of rotation as functions of time, allowing extrapolation forward or backward in time and so predicting future or prior events. It also allows for simulations in engineering that speed up the development of a new technology. There is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics).

== See also ==

Earth science – Fields of natural science related to Earth
Neurophysics – Branch of biophysics dealing with the development and use of physical methods to gain information about the nervous system
Psychophysics – Branch of knowledge relating physical stimuli and psychological perception
Relationship between mathematics and physics
Science tourism – Travel to notable science locations

=== Lists ===

List of important publications in physics
List of physicists
Lists of physics equations

== Notes ==

== References ==

== Sources ==

== External links ==

Physics at Quanta Magazine
Usenet Physics FAQ – FAQ compiled by sci.physics and other physics newsgroups
Website of the Nobel Prize in physics – Award for outstanding contributions to the subject
World of Physics – Online encyclopedic dictionary of physics
Nature Physics – Academic journal
Physics – Online magazine by the American Physical Society – Directory of physics related media
The Vega Science Trust – Science videos, including physics
HyperPhysics website – Physics and astronomy mind-map from Georgia State University
Physics at MIT OpenCourseWare – Online course material from Massachusetts Institute of Technology
The Feynman Lectures on Physics
Wikipedia/Physics
In mathematics, particularly abstract algebra, an algebraic closure of a field K is an algebraic extension of K that is algebraically closed. It is one of many closures in mathematics. Using Zorn's lemma or the weaker ultrafilter lemma, it can be shown that every field has an algebraic closure, and that the algebraic closure of a field K is unique up to an isomorphism that fixes every member of K. Because of this essential uniqueness, we often speak of the algebraic closure of K, rather than an algebraic closure of K.

The algebraic closure of a field K can be thought of as the largest algebraic extension of K. To see this, note that if L is any algebraic extension of K, then the algebraic closure of L is also an algebraic closure of K, and so L is contained within the algebraic closure of K. The algebraic closure of K is also the smallest algebraically closed field containing K, because if M is any algebraically closed field containing K, then the elements of M that are algebraic over K form an algebraic closure of K. The algebraic closure of a field K has the same cardinality as K if K is infinite, and is countably infinite if K is finite.

== Examples ==

The fundamental theorem of algebra states that the algebraic closure of the field of real numbers is the field of complex numbers.
The algebraic closure of the field of rational numbers is the field of algebraic numbers.
There are many countable algebraically closed fields within the complex numbers, and strictly containing the field of algebraic numbers; these are the algebraic closures of transcendental extensions of the rational numbers, e.g. the algebraic closure of Q(π).
For a finite field of prime power order q, the algebraic closure is a countably infinite field that contains a copy of the field of order q^n for each positive integer n (and is in fact the union of these copies).

== Existence of an algebraic closure and splitting fields ==

Let S = {f_λ | λ ∈ Λ} be the set of all monic irreducible polynomials in K[x]. For each f_λ ∈ S, introduce new variables u_{λ,1}, …, u_{λ,d}, where d = degree(f_λ). Let R be the polynomial ring over K generated by u_{λ,i} for all λ ∈ Λ and all i ≤ degree(f_λ). Write

    f_λ − ∏_{i=1}^{d} (x − u_{λ,i}) = ∑_{j=0}^{d−1} r_{λ,j} · x^j ∈ R[x]

with r_{λ,j} ∈ R. Let I be the ideal in R generated by the r_{λ,j}. Since I is strictly smaller than R, Zorn's lemma implies that there exists a maximal ideal M in R that contains I. The field K₁ = R/M has the property that every polynomial f_λ with coefficients in K splits as the product of the factors x − (u_{λ,i} + M), and hence has all roots in K₁. In the same way, an extension K₂ of K₁ can be constructed, and so on. The union of all these extensions is the algebraic closure of K, because any polynomial with coefficients in this new field has its coefficients in some K_n with sufficiently large n, and then its roots are in K_{n+1}, and hence in the union itself.
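The construction above is abstract, but over the rationals the algebraic closure can be worked with concretely in a computer algebra system. A minimal sketch, assuming SymPy is installed; the specific polynomial is just an illustrative choice:

    from sympy import CRootOf, symbols

    x = symbols('x')

    # x**5 - x + 1 is irreducible over Q and its roots are not expressible
    # in radicals, yet they are ordinary elements of the algebraic closure
    # of Q: CRootOf(p, i) denotes the i-th root of p as an exact algebraic
    # number (real roots are indexed first).
    r = CRootOf(x**5 - x + 1, 0)
    print(r.is_real)                  # True -- the unique real root
    print(r.evalf(20))                # approximately -1.1673039782614187
    print((r**5 - r + 1).evalf())     # approximately 0, as expected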
The same construction shows that, for any subset S of K[x], there exists a splitting field of S over K.

== Separable closure ==

An algebraic closure K^alg of K contains a unique separable extension K^sep of K containing all (algebraic) separable extensions of K within K^alg. This subextension is called a separable closure of K. Since a separable extension of a separable extension is again separable, there are no finite separable extensions of K^sep of degree greater than 1. Saying this another way, K is contained in a separably closed algebraic extension field. It is unique (up to isomorphism). The separable closure is the full algebraic closure if and only if K is a perfect field. For example, if K is a field of characteristic p and X is transcendental over K, then K(X)(X^(1/p)) ⊃ K(X) is a non-separable algebraic field extension. In general, the absolute Galois group of K is the Galois group of K^sep over K.

== See also ==

Algebraically closed field
Algebraic extension
Puiseux expansion
Complete field

== References ==

Kaplansky, Irving (1972). Fields and Rings. Chicago Lectures in Mathematics (2nd ed.). University of Chicago Press. ISBN 0-226-42451-0. Zbl 1001.16500.
McCarthy, Paul J. (1991). Algebraic Extensions of Fields (corrected reprint of the 2nd ed.). New York: Dover Publications. Zbl 0768.12001.
Wikipedia/Algebraic_closure
Crystallography is the branch of science devoted to the study of molecular and crystalline structure and properties. The word crystallography is derived from the Ancient Greek words κρύσταλλος (krústallos; "clear ice, rock-crystal") and γράφειν (gráphein; "to write"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming 2014 the International Year of Crystallography. Crystallography is a broad topic, and many of its subareas, such as X-ray crystallography, are themselves important scientific topics. Crystallography ranges from the fundamentals of crystal structure to the mathematics of crystal geometry, including structures that are not periodic, such as quasicrystals. At the atomic scale, it can involve the use of X-ray diffraction to produce experimental data that the tools of X-ray crystallography can convert into detailed positions of atoms, and sometimes electron density. At larger scales, it includes experimental tools such as orientational imaging to examine the relative orientations at grain boundaries in materials. Crystallography plays a key role in many areas of biology, chemistry, and physics, as well as in new developments in these fields.

== History and timeline ==

Before the 20th century, the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established.

The discovery of X-rays and electrons in the last decade of the 19th century enabled the determination of crystal structures on the atomic scale, which brought about the modern era of crystallography. The first X-ray diffraction experiment was conducted in 1912 by Max von Laue, while electron diffraction was first realized in 1927 in the Davisson–Germer experiment and in parallel work by George Paget Thomson and Alexander Reid. These developed into the two main branches of crystallography, X-ray crystallography and electron diffraction. The quality and throughput of solving crystal structures greatly improved in the second half of the 20th century, with the development of customized instruments and phasing algorithms. Nowadays, crystallography is an interdisciplinary field, supporting theoretical and experimental discoveries in various domains. Modern-day scientific instruments for crystallography vary from laboratory-sized equipment, such as diffractometers and electron microscopes, to dedicated large facilities, such as photoinjectors, synchrotron light sources and free-electron lasers.

== Methodology ==

Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray diffraction, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. X-rays interact with the spatial distribution of electrons in the sample.
Neutrons are scattered by the atomic nuclei through the strong nuclear force, but in addition, the magnetic moment of neutrons is non-zero, so they are also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels, which can sometimes be resolved by substituting deuterium for hydrogen. Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample.: Chpt 4  It is hard to focus X-rays or neutrons, but since electrons are charged they can be focused and are used in the electron microscope to produce magnified images. There are many ways that transmission electron microscopy and related techniques, such as scanning transmission electron microscopy and high-resolution electron microscopy, can be used to obtain images with, in many cases, atomic resolution, from which crystallographic information can be obtained. There are also other methods, such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction, which can be used to obtain crystallographic information about surfaces.

== Applications in various areas ==

=== Materials science ===

Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are poly-crystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which take diffraction patterns of samples with a large number of crystals, play an important role in structural determination.

Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms can be studied by crystallographic texture measurements. Crystallographic studies help elucidate the relationship between a material's structure and its properties, aiding in the development of new materials with tailored characteristics. This understanding is crucial in various fields, including metallurgy, geology, and materials science. Advancements in crystallographic techniques, such as electron diffraction and X-ray crystallography, continue to expand our understanding of material behavior at the atomic level.

In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure, unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs. Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms.
X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory. === Biology === X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly proteins and nucleic acids such as DNA and RNA. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium. Electron diffraction has been used to determine some protein structures, most notably membrane proteins and viral capsids. Macromolecular structures determined through X-ray crystallography (and other techniques) are housed in the Protein Data Bank (PDB), a freely accessible repository for the structures of proteins and other biological macromolecules. There are many molecular graphics codes available for visualising these structures. == Notation == Coordinates in square brackets such as [100] denote a direction vector (in real space). Coordinates in angle brackets or chevrons such as <100> denote a family of directions which are related by symmetry operations. In the cubic crystal system for example, <100> would mean [100], [010], [001] or the negative of any of those directions. Miller indices in parentheses such as (100) denote a plane of the crystal structure, and regular repetitions of that plane with a particular spacing. In the cubic system, the normal to the (hkl) plane is the direction [hkl], but in lower-symmetry cases, the normal to (hkl) is not parallel to [hkl]. Indices in curly brackets or braces such as {100} denote a family of planes and their normals. In cubic materials the symmetry makes them equivalent, just as angle brackets denote a family of directions. In non-cubic materials, <hkl> is not necessarily perpendicular to {hkl}. == Reference literature == The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that cover analysis methods and the mathematical procedures for determining organic structure through X-ray crystallography, electron diffraction, and neutron diffraction. The International Tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are: Vol A - Space Group Symmetry, Vol A1 - Symmetry Relations Between Space Groups, Vol B - Reciprocal Space, Vol C - Mathematical, Physical, and Chemical Tables, Vol D - Physical Properties of Crystals, Vol E - Subperiodic Groups, Vol F - Crystallography of Biological Macromolecules, and Vol G - Definition and Exchange of Crystallographic Data.
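Returning to the notation described above, the <100> family of directions and the cubic interplanar spacing formula d = a / sqrt(h² + k² + l²) can both be exercised in a short sketch (assuming NumPy is available; the lattice parameter is a hypothetical value chosen for illustration):

```python
from itertools import permutations
import numpy as np

def direction_family(h, k, l):
    """All directions related to [hkl] by cubic symmetry:
    permutations of the indices combined with sign changes."""
    family = set()
    for perm in permutations((h, k, l)):
        for signs in np.ndindex(2, 2, 2):
            family.add(tuple(int(p) * (-1) ** s for p, s in zip(perm, signs)))
    return sorted(family)

# <100>: [100], [010], [001] and their negatives -- six directions in all.
print(direction_family(1, 0, 0))

# Interplanar spacing for a cubic crystal: d = a / sqrt(h^2 + k^2 + l^2)
a = 3.6  # hypothetical lattice parameter in angstroms
for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    d = a / np.sqrt(sum(i**2 for i in hkl))
    print(hkl, f"d = {d:.3f} A")
```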
Wikipedia/Crystallography
The word 'algebra' is used for various branches and structures of mathematics. For an overview, see Algebra. The name comes from the famous 9th-century book Al-Jabr by Al-Khwarizmi. == The bare word "algebra" == The bare word "algebra" may refer to: Elementary algebra Abstract algebra Algebra over a field In universal algebra, algebra has an axiomatic definition, roughly as an instance of any of a number of algebraic structures, such as groups, rings, etc. == Branches of mathematics == Elementary algebra, i.e. "high-school algebra" Abstract algebra Linear algebra Relational algebra Universal algebra The term is also traditionally used for the field of: Computer algebra, dealing with software systems for symbolic mathematical computation, which often offer capabilities beyond what is normally understood to be "algebra" == Mathematical structures == === Vector space with multiplication === An "algebra", or more fully, an algebra over a field, is a vector space equipped with a bilinear vector product. Some notable algebras in this sense are: In ring theory and linear algebra: Algebra over a commutative ring, a module equipped with a bilinear product; a generalization of algebras over a field Associative algebra, a module equipped with an associative bilinear vector product Superalgebra, a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded algebra Lie algebras, Poisson algebras, and Jordan algebras, important examples of (potentially) nonassociative algebras In functional analysis: Banach algebra, an associative algebra A over the real or complex numbers which at the same time is also a Banach space Operator algebra, continuous linear operators on a topological vector space with multiplication given by composition *-algebra, an algebra with a notion of adjoints C*-algebra, a Banach algebra equipped with a unary involution operation Von Neumann algebra (or W*-algebra) See also coalgebra, the dual notion. === Other structures === A different class of "algebras" consists of objects which generalize logical connectives, sets, and lattices. In logic: Relational algebra, a set of finitary relations that is closed under certain operators Boolean algebra and Boolean algebra (structure) Heyting algebra In measure theory: Algebra over a set, a collection of sets closed under finite unions and complementation Sigma algebra, a collection of sets closed under countable unions and complementation "Algebra" can also describe more general structures: In category theory and computer science: F-algebra and F-coalgebra T-algebra == Other uses == Algebra Blessett, singer from the U.S., goes by the stage name Algebra == See also == Algebraic (disambiguation) List of all articles whose title begins with "algebra"
Wikipedia/Algebra_(disambiguation)
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects associated with a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix. Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors". Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor. == Definition == Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction. === As multidimensional arrays === A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted Tij , where i and j are indices running from 1 to n, or also by T ij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while Tij and T ij can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together. The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors. 
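The correspondence between tensor order and array dimensionality described above can be made concrete in code. In the following minimal sketch (assuming NumPy is available; the particular shapes are illustrative), the order of each tensor equals the number of axes of the array holding its components in a chosen basis:

```python
import numpy as np

# Order of a tensor = number of indices = number of array axes (ndim).
scalar = np.float64(3.0)            # order 0: no indices
vector = np.array([1.0, 2.0, 3.0])  # order 1: one index, v[i]
linop = np.eye(3)                   # order 2: two indices, T[i][j]
elastic = np.zeros((3, 3, 3, 3))    # order 4: e.g. an elasticity tensor c[i][j][k][l]

for name, t in [("scalar", scalar), ("vector", vector),
                ("linear operator", linop), ("elasticity", elastic)]:
    print(f"{name}: order {np.ndim(t)}, shape {np.shape(t)}")
```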
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors e ^ i {\displaystyle \mathbf {\hat {e}} _{i}} are expressed in terms of the old basis vectors e j {\displaystyle \mathbf {e} _{j}} as, e ^ i = ∑ j = 1 n e j R i j = e j R i j . {\displaystyle \mathbf {\hat {e}} _{i}=\sum _{j=1}^{n}\mathbf {e} _{j}R_{i}^{j}=\mathbf {e} _{j}R_{i}^{j}.} Here R ji are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components vi of a column vector v transform with the inverse of the matrix R, v ^ i = ( R − 1 ) j i v j , {\displaystyle {\hat {v}}^{i}=\left(R^{-1}\right)_{j}^{i}v^{j},} where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components, wi, of a covector (or row vector), w, transform with the matrix R itself, w ^ i = w j R i j . {\displaystyle {\hat {w}}_{i}=w_{j}R_{i}^{j}.} This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript). As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array T {\displaystyle T} that transforms under a change of basis matrix R = ( R i j ) {\displaystyle R=\left(R_{i}^{j}\right)} by T ^ = R − 1 T R {\displaystyle {\hat {T}}=R^{-1}TR} . For the individual matrix entries, this transformation law has the form T ^ j ′ i ′ = ( R − 1 ) i i ′ T j i R j ′ j {\displaystyle {\hat {T}}_{j'}^{i'}=\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}} so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1). Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. 
For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above: v = v ^ i e ^ i = ( ( R − 1 ) j i v j ) ( e k R i k ) = ( ( R − 1 ) j i R i k ) v j e k = δ j k v j e k = v k e k = v i e i {\displaystyle \mathbf {v} ={\hat {v}}^{i}\,\mathbf {\hat {e}} _{i}=\left(\left(R^{-1}\right)_{j}^{i}{v}^{j}\right)\left(\mathbf {e} _{k}R_{i}^{k}\right)=\left(\left(R^{-1}\right)_{j}^{i}R_{i}^{k}\right){v}^{j}\mathbf {e} _{k}=\delta _{j}^{k}{v}^{j}\mathbf {e} _{k}={v}^{k}\,\mathbf {e} _{k}={v}^{i}\,\mathbf {e} _{i}} , where δ j k {\displaystyle \delta _{j}^{k}} is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like v i e i {\displaystyle {v}^{i}\,\mathbf {e} _{i}} can immediately be seen to be geometrically identical in all coordinate systems. Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components ( T v ) i {\displaystyle (Tv)^{i}} are given by ( T v ) i = T j i v j {\displaystyle (Tv)^{i}=T_{j}^{i}v^{j}} . These components transform contravariantly, since ( T v ^ ) i ′ = T ^ j ′ i ′ v ^ j ′ = [ ( R − 1 ) i i ′ T j i R j ′ j ] [ ( R − 1 ) k j ′ v k ] = ( R − 1 ) i i ′ ( T v ) i . {\displaystyle \left({\widehat {Tv}}\right)^{i'}={\hat {T}}_{j'}^{i'}{\hat {v}}^{j'}=\left[\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}\right]\left[\left(R^{-1}\right)_{k}^{j'}v^{k}\right]=\left(R^{-1}\right)_{i}^{i'}(Tv)^{i}.} The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as, T ^ j 1 ′ , … , j q ′ i 1 ′ , … , i p ′ = ( R − 1 ) i 1 i 1 ′ ⋯ ( R − 1 ) i p i p ′ {\displaystyle {\hat {T}}_{j'_{1},\ldots ,j'_{q}}^{i'_{1},\ldots ,i'_{p}}=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}} T j 1 , … , j q i 1 , … , i p {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}} R j 1 ′ j 1 ⋯ R j q ′ j q . {\displaystyle R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.} Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short. 
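The cancellation of the transformation matrix and its inverse described above can be checked numerically. The following sketch (assuming NumPy is available; the random matrices are illustrative stand-ins for a change of basis) verifies the contravariant law for a vector and the type (1, 1) law for a linear operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

R = rng.normal(size=(n, n))   # change-of-basis matrix (invertible with probability 1)
Rinv = np.linalg.inv(R)

E_old = np.eye(n)             # old basis vectors as columns
E_new = E_old @ R             # new basis: e_hat_i = e_j R^j_i

v_old = rng.normal(size=n)    # contravariant components in the old basis
v_new = Rinv @ v_old          # components transform with the inverse of R

# The vector itself (components times basis vectors) is basis independent:
assert np.allclose(E_old @ v_old, E_new @ v_new)

T_old = rng.normal(size=(n, n))  # a (1,1)-tensor, e.g. a linear operator
T_new = Rinv @ T_old @ R         # one contravariant and one covariant index

# Applying the operator commutes with the change of basis:
assert np.allclose(Rinv @ (T_old @ v_old), T_new @ v_new)
print("transformation laws verified")
```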
This discussion motivates the following formal definition: Definition. A tensor of type (p, q) is an assignment of a multidimensional array T j 1 … j q i 1 … i p [ f ] {\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}[\mathbf {f} ]} to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis f ↦ f ⋅ R = ( e i R 1 i , … , e i R n i ) {\displaystyle \mathbf {f} \mapsto \mathbf {f} \cdot R=\left(\mathbf {e} _{i}R_{1}^{i},\dots ,\mathbf {e} _{i}R_{n}^{i}\right)} then the multidimensional array obeys the transformation law T j 1 ′ … j q ′ i 1 ′ … i p ′ [ f ⋅ R ] = ( R − 1 ) i 1 i 1 ′ ⋯ ( R − 1 ) i p i p ′ {\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}} T j 1 , … , j q i 1 , … , i p [ f ] {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]} R j 1 ′ j 1 ⋯ R j q ′ j q . {\displaystyle R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.} The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci. An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If f = ( f 1 , … , f n ) {\displaystyle \mathbf {f} =(\mathbf {f} _{1},\dots ,\mathbf {f} _{n})} is an ordered basis, and R = ( R j i ) {\displaystyle R=\left(R_{j}^{i}\right)} is an invertible n × n {\displaystyle n\times n} matrix, then the action is given by f R = ( f i R 1 i , … , f i R n i ) . {\displaystyle \mathbf {f} R=\left(\mathbf {f} _{i}R_{1}^{i},\dots ,\mathbf {f} _{i}R_{n}^{i}\right).} Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let ρ {\displaystyle \rho } be a representation of GL(n) on W (that is, a group homomorphism ρ : GL ( n ) → GL ( W ) {\displaystyle \rho :{\text{GL}}(n)\to {\text{GL}}(W)} ). Then a tensor of type ρ {\displaystyle \rho } is an equivariant map T : F → W {\displaystyle T:F\to W} . Equivariance here means that T ( F R ) = ρ ( R − 1 ) T ( F ) . {\displaystyle T(FR)=\rho \left(R^{-1}\right)T(F).} When ρ {\displaystyle \rho } is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups. === As multilinear maps === A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. 
In this approach, a type (p, q) tensor T is defined as a multilinear map, T : V ∗ × ⋯ × V ∗ ⏟ p copies × V × ⋯ × V ⏟ q copies → R , {\displaystyle T:\underbrace {V^{*}\times \dots \times V^{*}} _{p{\text{ copies}}}\times \underbrace {V\times \dots \times V} _{q{\text{ copies}}}\rightarrow \mathbf {R} ,} where V∗ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers, ⁠ R {\displaystyle \mathbb {R} } ⁠. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing ⁠ R {\displaystyle \mathbb {R} } ⁠ as the codomain of the multilinear maps. By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗, T j 1 … j q i 1 … i p ≡ T ( ε i 1 , … , ε i p , e j 1 , … , e j q ) , {\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\equiv T\left({\boldsymbol {\varepsilon }}^{i_{1}},\ldots ,{\boldsymbol {\varepsilon }}^{i_{p}},\mathbf {e} _{j_{1}},\ldots ,\mathbf {e} _{j_{q}}\right),} a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors. In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual. === Using tensor products === For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property as explained here and here. A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces, T ∈ V ⊗ ⋯ ⊗ V ⏟ p copies ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ q copies . {\displaystyle T\in \underbrace {V\otimes \dots \otimes V} _{p{\text{ copies}}}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{q{\text{ copies}}}.} A basis vi of V and basis wj of W naturally induce a basis vi ⊗ wj of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e. T = T j 1 … j q i 1 … i p e i 1 ⊗ ⋯ ⊗ e i p ⊗ ε j 1 ⊗ ⋯ ⊗ ε j q . {\displaystyle T=T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\;\mathbf {e} _{i_{1}}\otimes \cdots \otimes \mathbf {e} _{i_{p}}\otimes {\boldsymbol {\varepsilon }}^{j_{1}}\otimes \cdots \otimes {\boldsymbol {\varepsilon }}^{j_{q}}.} Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps. 
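As a concrete aside on the multilinear-map definition above: the components of a (0, 2)-tensor, that is, a bilinear form g, are obtained by evaluating g on pairs of basis vectors, and both indices then transform covariantly. A minimal numerical sketch (assuming NumPy is available; the matrices are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

G = rng.normal(size=(n, n))  # components g_ij of a bilinear map g(u, v) = u^T G v

def g(u, v):
    return u @ G @ v         # the multilinear map itself

E = np.eye(n)                # old basis vectors as columns
R = rng.normal(size=(n, n))  # change of basis; new basis vectors are E @ R
E_new = E @ R

# Components in the new basis come from evaluating g on new basis vectors:
G_new = np.array([[g(E_new[:, i], E_new[:, j]) for j in range(n)]
                  for i in range(n)])

# Both indices transform covariantly (with R itself): G_new = R^T G R
assert np.allclose(G_new, R.T @ G @ R)
print("covariant transformation of a (0,2)-tensor verified")
```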
This one-to-one correspondence between tensor products and multilinear maps can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual: U ⊗ V ≅ ( U ∗ ∗ ) ⊗ ( V ∗ ∗ ) ≅ ( U ∗ ⊗ V ∗ ) ∗ ≅ Hom 2 ⁡ ( U ∗ × V ∗ ; F ) {\displaystyle U\otimes V\cong \left(U^{**}\right)\otimes \left(V^{**}\right)\cong \left(U^{*}\otimes V^{*}\right)^{*}\cong \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} The last step uses the universal property of the tensor product: there is a one-to-one correspondence between Hom 2 ⁡ ( U ∗ × V ∗ ; F ) {\displaystyle \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} and Hom ⁡ ( U ∗ ⊗ V ∗ ; F ) {\displaystyle \operatorname {Hom} \left(U^{*}\otimes V^{*};\mathbb {F} \right)} . Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above. === Tensors in infinite dimensions === This discussion of tensors so far assumes finite dimensionality of the spaces involved, in which case the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories. === Tensor fields === In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor. In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions, x ¯ i ( x 1 , … , x n ) , {\displaystyle {\bar {x}}^{i}\left(x^{1},\ldots ,x^{n}\right),} defining a coordinate transformation, T ^ j 1 ′ … j q ′ i 1 ′ … i p ′ ( x ¯ 1 , … , x ¯ n ) = ∂ x ¯ i 1 ′ ∂ x i 1 ⋯ ∂ x ¯ i p ′ ∂ x i p ∂ x j 1 ∂ x ¯ j 1 ′ ⋯ ∂ x j q ∂ x ¯ j q ′ T j 1 … j q i 1 … i p ( x 1 , … , x n ) .
{\displaystyle {\hat {T}}_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}\left({\bar {x}}^{1},\ldots ,{\bar {x}}^{n}\right)={\frac {\partial {\bar {x}}^{i'_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial {\bar {x}}^{i'_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial {\bar {x}}^{j'_{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial {\bar {x}}^{j'_{q}}}}T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\left(x^{1},\ldots ,x^{n}\right).} == History == The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898. Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense. In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect, with Einstein writing: "I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot." Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product. From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic.
Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s. == Examples == An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors. The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol ε i j k {\displaystyle \varepsilon _{ijk}} nevertheless allows a convenient handling of the cross product in equally oriented three-dimensional coordinate systems. Important examples of tensors on vector spaces and tensor fields on manifolds can be arranged in a table according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of such a table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor. Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table. == Properties == Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows tensors to be defined as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this with the array representing ε i j k {\displaystyle \varepsilon _{ijk}} , which is not a tensor, because of its sign change under transformations changing the orientation. Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of vectors (n, contravariant indices) and dual vectors (m, covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determines the precise form of the transformation law. The order of a tensor is the sum of these two numbers.
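The dot product and the ε i j k symbol mentioned above can both be exercised numerically. A minimal sketch (assuming NumPy is available; np.einsum implements the repeated-index summation convention described in the Notation section below):

```python
import numpy as np

# Levi-Civita symbol eps[i,j,k]: +1 for even permutations of (0,1,2),
# -1 for odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

# Dot product as a (0,2)-tensor (here the identity metric): g_ij u^i v^j
g = np.eye(3)
dot = np.einsum('ij,i,j->', g, u, v)

# Cross product via the epsilon symbol: (u x v)_i = eps_ijk u^j v^k
cross = np.einsum('ijk,j,k->i', eps, u, v)

print(dot)    # 0.0
print(cross)  # [0. 0. 1.], matching np.cross(u, v)
assert np.allclose(cross, np.cross(u, v))
```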
The order (also degree or rank) of a tensor is the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. This way the tensor representing the scalar product, taking two vectors and resulting in a scalar, has order 2 + 0 = 2, the same as the stress tensor, taking one vector and returning another, 1 + 1 = 2. The ε i j k {\displaystyle \varepsilon _{ijk}} -symbol, mapping two vectors to one vector, would have order 2 + 1 = 3. The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this. == Notation == There are several notational systems that are used to describe tensors and perform calculations involving them. === Ricci calculus === Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives. === Einstein summation convention === The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way. === Penrose graphical notation === Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices. === Abstract index notation === The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation. === Component-free notation === A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces. == Operations == There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of a different type. === Tensor product === The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors.
When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e., ( S ⊗ T ) ( v 1 , … , v n , v n + 1 , … , v n + m ) = S ( v 1 , … , v n ) T ( v n + 1 , … , v n + m ) , {\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),} which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e., ( S ⊗ T ) j 1 … j k j k + 1 … j k + m i 1 … i l i l + 1 … i l + n = S j 1 … j k i 1 … i l T j k + 1 … j k + m i l + 1 … i l + n . {\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.} If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m). === Contraction === Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor T i j {\displaystyle T_{i}^{j}} can be contracted to a scalar through T i i {\displaystyle T_{i}^{i}} , where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace. The contraction is often used in conjunction with the tensor product to contract an index from each tensor. The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor T ∈ V ⊗ V ⊗ V ∗ {\displaystyle T\in V\otimes V\otimes V^{*}} can be written as a linear combination T = v 1 ⊗ w 1 ⊗ α 1 + v 2 ⊗ w 2 ⊗ α 2 + ⋯ + v N ⊗ w N ⊗ α N . {\displaystyle T=v_{1}\otimes w_{1}\otimes \alpha _{1}+v_{2}\otimes w_{2}\otimes \alpha _{2}+\cdots +v_{N}\otimes w_{N}\otimes \alpha _{N}.} The contraction of T on the first and last slots is then the vector α 1 ( v 1 ) w 1 + α 2 ( v 2 ) w 2 + ⋯ + α N ( v N ) w N . {\displaystyle \alpha _{1}(v_{1})w_{1}+\alpha _{2}(v_{2})w_{2}+\cdots +\alpha _{N}(v_{N})w_{N}.} In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor T i j {\displaystyle T^{ij}} can be contracted to a scalar through T i j g i j {\displaystyle T^{ij}g_{ij}} (yet again assuming the summation convention). === Raising or lowering an index === When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. 
This produces a new tensor with the same index structure as the previous tensor, but with lower index generally shown in the same position of the contracted upper index. This operation is quite graphically known as lowering an index. Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor. == Applications == === Continuum mechanics === Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed. If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point. === Other examples from physics === Common applications include: Electromagnetic tensor (or Faraday tensor) in electromagnetism Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics Permittivity and electric susceptibility are tensors in anisotropic media Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments Quantum mechanics and quantum computing utilize tensor products for combination of quantum states === Computer vision and optics === The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix. The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. 
To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities: P i ε 0 = ∑ j χ i j ( 1 ) E j + ∑ j k χ i j k ( 2 ) E j E k + ∑ j k ℓ χ i j k ℓ ( 3 ) E j E k E ℓ + ⋯ . {\displaystyle {\frac {P_{i}}{\varepsilon _{0}}}=\sum _{j}\chi _{ij}^{(1)}E_{j}+\sum _{jk}\chi _{ijk}^{(2)}E_{j}E_{k}+\sum _{jk\ell }\chi _{ijk\ell }^{(3)}E_{j}E_{k}E_{\ell }+\cdots .\!} Here χ ( 1 ) {\displaystyle \chi ^{(1)}} is the linear susceptibility, χ ( 2 ) {\displaystyle \chi ^{(2)}} gives the Pockels effect and second harmonic generation, and χ ( 3 ) {\displaystyle \chi ^{(3)}} gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter. === Machine learning === The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same. == Generalizations == === Tensor products of vector spaces === The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring. === Tensors in infinite dimensions === The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds. === Tensor densities === Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region: m = ∫ Ω ρ d x d y d z , {\displaystyle m=\int _{\Omega }\rho \,dx\,dy\,dz,} where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100: x ′ = 100 x , y ′ = 100 y , z ′ = 100 z . {\displaystyle x'=100x,\quad y'=100y,\quad z'=100z.} The numerical value of the density ρ must then also transform by 100−3 m3/cm3 to compensate, so that the numerical value of the mass in kg is still given by integral of ρ d x d y d z {\displaystyle \rho \,dx\,dy\,dz} . Thus ρ ′ = 100 − 3 ρ {\displaystyle \rho '=100^{-3}\rho } (in units of kg⋅cm−3). 
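The metres-to-centimetres rescaling just described can be checked with a few lines of arithmetic. A minimal sketch (plain Python, no libraries needed; the density and region size are arbitrary illustrative values):

```python
# A scalar density picks up a factor of 1/|det J| under a change of coordinates.
# Check mass invariance for the metres -> centimetres rescaling in the text.
rho_m = 1000.0            # density in kg / m^3
volume_m = 2.0            # a region of 2 m^3
mass = rho_m * volume_m   # 2000 kg

scale = 100.0                     # x' = 100 x, etc.; det of the Jacobian is 100^3
rho_cm = rho_m * scale**-3        # density transforms by the inverse determinant
volume_cm = volume_m * scale**3   # the same region measured in cm^3

# The numerical value of the mass is unchanged (up to floating-point error):
assert abs(rho_cm * volume_cm - mass) < 1e-9
print(rho_cm, "kg/cm^3 *", volume_cm, "cm^3 =", rho_cm * volume_cm, "kg")
```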
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold. A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition: T j 1 ′ … j q ′ i 1 ′ … i p ′ [ f ⋅ R ] = | det R | − w ( R − 1 ) i 1 i 1 ′ ⋯ ( R − 1 ) i p i p ′ T j 1 , … , j q i 1 , … , i p [ f ] R j 1 ′ j 1 ⋯ R j q ′ j q . {\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left|\det R\right|^{-w}\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.} Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism. Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations comes from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of an (x, y) ∈ R2 with the transformation law ( x , y ) ↦ ( x + y log ⁡ | det R | , y ) . {\displaystyle (x,y)\mapsto (x+y\log \left|\det R\right|,y).} === Geometric objects === The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles. === Spinors === When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.
A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant. Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well. == See also == Array data type, for tensor storage and manipulation Bitensor == References == This article incorporates material from tensor on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Tensor
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition.: 58  Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility. The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the set of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry. Many algebraic varieties are differentiable manifolds, but an algebraic variety may have singular points while a differentiable manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces. In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type. == Overview and definitions == An affine variety over an algebraically closed field is conceptually the easiest type of variety to define, which will be done in this section. Next, one can define projective and quasi-projective varieties in a similar way. The most general definition of a variety is obtained by patching together smaller quasi-projective varieties. It is not obvious that one can construct genuinely new examples of varieties in this way, but Nagata gave an example of such a new variety in the 1950s. === Affine varieties === For an algebraically closed field K and a natural number n, let An be an affine n-space over K, identified to K n {\displaystyle K^{n}} through the choice of an affine coordinate system. The polynomials  f  in the ring K[x1, ..., xn] can be viewed as K-valued functions on An by evaluating  f  at the points in An, i.e. by choosing values in K for each xi. For each set S of polynomials in K[x1, ..., xn], define the zero-locus Z(S) to be the set of points in An on which the functions in S simultaneously vanish, that is to say Z ( S ) = { x ∈ A n ∣ f ( x ) = 0 for all f ∈ S } . 
{\displaystyle Z(S)=\left\{x\in \mathbf {A} ^{n}\mid f(x)=0{\text{ for all }}f\in S\right\}.} A subset V of An is called an affine algebraic set if V = Z(S) for some S.: 2  A nonempty affine algebraic set V is called irreducible if it cannot be written as the union of two proper algebraic subsets.: 3  An irreducible affine algebraic set is also called an affine variety.: 3  (Some authors use the phrase affine variety to refer to any affine algebraic set, irreducible or not.) Affine varieties can be given a natural topology by declaring the closed sets to be precisely the affine algebraic sets. This topology is called the Zariski topology.: 2  Given a subset V of An, we define I(V) to be the ideal of all polynomial functions vanishing on V: I ( V ) = { f ∈ K [ x 1 , … , x n ] ∣ f ( x ) = 0 for all x ∈ V } . {\displaystyle I(V)=\left\{f\in K[x_{1},\ldots ,x_{n}]\mid f(x)=0{\text{ for all }}x\in V\right\}.} For any affine algebraic set V, the coordinate ring or structure ring of V is the quotient of the polynomial ring by this ideal.: 4  === Projective varieties and quasi-projective varieties === Let k be an algebraically closed field and let Pn be the projective n-space over k. Let  f  in k[x0, ..., xn] be a homogeneous polynomial of degree d. It is not well-defined to evaluate  f  on points in Pn in homogeneous coordinates. However, because  f  is homogeneous, meaning that  f  (λx0, ..., λxn) = λd f  (x0, ..., xn), it does make sense to ask whether  f  vanishes at a point [x0 : ... : xn]. For each set S of homogeneous polynomials, define the zero-locus of S to be the set of points in Pn on which the functions in S vanish: Z ( S ) = { x ∈ P n ∣ f ( x ) = 0 for all f ∈ S } . {\displaystyle Z(S)=\{x\in \mathbf {P} ^{n}\mid f(x)=0{\text{ for all }}f\in S\}.} A subset V of Pn is called a projective algebraic set if V = Z(S) for some S.: 9  An irreducible projective algebraic set is called a projective variety.: 10  Projective varieties are also equipped with the Zariski topology by declaring all algebraic sets to be closed. Given a subset V of Pn, let I(V) be the ideal generated by all homogeneous polynomials vanishing on V. For any projective algebraic set V, the coordinate ring of V is the quotient of the polynomial ring by this ideal.: 10  A quasi-projective variety is a Zariski open subset of a projective variety. Notice that every affine variety is quasi-projective. Notice also that the complement of an algebraic set in an affine variety is a quasi-projective variety; in the context of affine varieties, such a quasi-projective variety is usually not called a variety but a constructible set. === Abstract varieties === In classical algebraic geometry, all varieties were by definition quasi-projective varieties, meaning that they were open subvarieties of closed subvarieties of a projective space. For example, in Chapter 1 of Hartshorne a variety over an algebraically closed field is defined to be a quasi-projective variety,: 15  but from Chapter 2 onwards, the term variety (also called an abstract variety) refers to a more general object, which locally is a quasi-projective variety, but when viewed as a whole is not necessarily quasi-projective; i.e. it might not have an embedding into projective space.: 105  So classically the definition of an algebraic variety required an embedding into projective space, and this embedding was used to define the topology on the variety and the regular functions on the variety. 
The disadvantage of the classical, embedded definition is that not all varieties come with natural embeddings into projective space. For example, under this definition, the product P^1 × P^1 is not a variety until it is embedded into a larger projective space; this is usually done by the Segre embedding. Furthermore, any variety that admits one embedding into projective space admits many others, for example by composing the embedding with the Veronese embedding; thus many notions that should be intrinsic, such as that of a regular function, are not obviously so. The earliest successful attempt to define an algebraic variety abstractly, without an embedding, was made by André Weil in his Foundations of Algebraic Geometry, using valuations. Claude Chevalley made a definition of a scheme, which served a similar purpose, but was more general. However, Alexander Grothendieck's definition of a scheme is more general still and has received the most widespread acceptance. In Grothendieck's language, an abstract algebraic variety is usually defined to be an integral, separated scheme of finite type over an algebraically closed field, although some authors drop the irreducibility or the reducedness or the separatedness condition or allow the underlying field to be not algebraically closed. Classical algebraic varieties are the quasiprojective integral separated finite type schemes over an algebraically closed field. ==== Existence of non-quasiprojective abstract algebraic varieties ==== One of the earliest examples of a non-quasiprojective algebraic variety was given by Nagata. Nagata's example was not complete (the analog of compactness), but soon afterwards he found an algebraic surface that was complete and non-projective. Since then other examples have been found: for example, it is straightforward to construct toric varieties that are not quasi-projective but complete. == Examples == === Subvariety === A subvariety is a subset of a variety that is itself a variety (with respect to the topological structure induced by the ambient variety). For example, every open subset of a variety is a variety. See also closed immersion. Hilbert's Nullstellensatz says that closed subvarieties of an affine or projective variety are in one-to-one correspondence with the prime ideals or non-irrelevant homogeneous prime ideals of the coordinate ring of the variety. === Affine variety === ==== Example 1 ==== Let k = C, and A^2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A^2 by evaluating at the points in A^2. Let the subset S of C[x, y] consist of the single element f(x, y) = x + y − 1. The zero-locus of f(x, y) is the set of points in A^2 on which this function vanishes: it is the set of all pairs of complex numbers (x, y) such that y = 1 − x. This is called a line in the affine plane. (In the classical topology coming from the topology on the complex numbers, a complex line is a real manifold of dimension two.) This is the set Z(f) = { (x, 1 − x) ∈ C^2 }. Thus the subset V = Z(f) of A^2 is an algebraic set. The set V is not empty. It is irreducible, as it cannot be written as the union of two proper algebraic subsets. Thus it is an affine algebraic variety. ==== Example 2 ==== Let k = C, and A^2 be the two-dimensional affine space over C.
Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A^2 by evaluating at the points in A^2. Let the subset S of C[x, y] consist of the single element g(x, y) = x^2 + y^2 − 1. The zero-locus of g(x, y) is the set of points in A^2 on which this function vanishes, that is, the set of points (x, y) such that x^2 + y^2 = 1. As g(x, y) is an absolutely irreducible polynomial, this is an algebraic variety. The set of its real points (that is, the points for which x and y are real numbers) is known as the unit circle; this name is also often given to the whole variety. ==== Example 3 ==== The following example is neither a hypersurface, nor a linear space, nor a single point. Let A^3 be the three-dimensional affine space over C. The set of points (x, x^2, x^3) for x in C is an algebraic variety, and more precisely an algebraic curve that is not contained in any plane: the twisted cubic. It may be defined by the equations y − x^2 = 0 and z − x^3 = 0. The irreducibility of this algebraic set needs a proof. One approach in this case is to check that the projection (x, y, z) → (x, y) is injective on the set of the solutions and that its image is an irreducible plane curve. For more difficult examples, a similar proof may always be given, but may imply a difficult computation: first a Gröbner basis computation to compute the dimension, followed by a random linear change of variables (not always needed); then a Gröbner basis computation for another monomial ordering to compute the projection and to prove that it is generically injective and that its image is a hypersurface, and finally a polynomial factorization to prove the irreducibility of the image. ==== General linear group ==== The set of n-by-n matrices over the base field k can be identified with the affine n²-space A^{n²} with coordinates x_ij such that x_ij(A) is the (i, j)-th entry of the matrix A. The determinant det is then a polynomial in the x_ij and thus defines the hypersurface H = V(det) in A^{n²}. The complement of H is then an open subset of A^{n²} that consists of all the invertible n-by-n matrices, the general linear group GL_n(k). It is an affine variety, since, in general, the complement of a hypersurface in an affine variety is affine. Explicitly, consider A^{n²} × A^1, where the affine line is given the coordinate t. Then GL_n(k) amounts to the zero-locus in A^{n²} × A^1 of the polynomial t·det[x_ij] − 1 in the variables x_ij and t; i.e., the set of matrices A such that t·det(A) = 1 has a solution.
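This description of GL_n as a zero-locus can be checked numerically. The following sketch (illustrative only, with n = 2 and exact rational arithmetic) verifies that an invertible matrix A, together with t = 1/det(A), satisfies t·det − 1 = 0:

from fractions import Fraction

def det2(m):
    # determinant of a 2x2 matrix given as [[a, b], [c, d]]
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

A = [[Fraction(2), Fraction(1)], [Fraction(4), Fraction(3)]]
d = det2(A)             # det(A) = 2, so A is invertible
t = Fraction(1) / d     # t exists exactly when det(A) != 0
print(t * d - 1 == 0)   # True: the point (A, t) lies on Z(t*det - 1)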
The affineness of GL_n(k) is best seen algebraically: its coordinate ring is the localization k[x_ij ∣ 1 ≤ i, j ≤ n][det^{-1}], which can be identified with k[x_ij, t ∣ 1 ≤ i, j ≤ n]/(t·det − 1). The multiplicative group k* of the base field k is the same as GL_1(k) and thus is an affine variety. A finite product of copies of it, (k*)^r, is an algebraic torus, which is again an affine variety. A general linear group is an example of a linear algebraic group, an affine variety that has the structure of a group in such a way that the group operations are morphisms of varieties. ==== Characteristic variety ==== Let A be a not-necessarily-commutative algebra over a field k. Even if A is not commutative, it can still happen that A has a Z-filtration so that the associated graded ring gr A = ⊕_i A_i/A_{i−1} is commutative, reduced and finitely generated as a k-algebra; i.e., gr A is the coordinate ring of an affine (reducible) variety X. For example, if A is the universal enveloping algebra of a finite-dimensional Lie algebra 𝔤, then gr A is a polynomial ring (by the PBW theorem); more precisely, it is the coordinate ring of the dual vector space 𝔤*. Let M be a filtered module over A (i.e., A_i M_j ⊂ M_{i+j}). If gr M is finitely generated as a gr A-module, then the support of gr M in X, i.e. the locus where gr M does not vanish, is called the characteristic variety of M. The notion plays an important role in the theory of D-modules. === Projective variety === A projective variety is a closed subvariety of a projective space. That is, it is the zero locus of a set of homogeneous polynomials that generate a prime ideal. ==== Example 1 ==== A plane projective curve is the zero locus of an irreducible homogeneous polynomial in three indeterminates. The projective line P^1 is an example of a projective curve; it can be viewed as the curve in the projective plane P^2 = {[x, y, z]} defined by x = 0. For another example, first consider the affine cubic curve y^2 = x^3 − x in the 2-dimensional affine space (over a field of characteristic not two). It has the associated homogeneous cubic equation y^2 z = x^3 − x z^2, which defines a curve in P^2 called an elliptic curve. The curve has genus one (by the genus formula); in particular, it is not isomorphic to the projective line P^1, which has genus zero. Using genus to distinguish curves is very basic: in fact, the genus is the first invariant one uses to classify curves (see also the construction of moduli of algebraic curves). ==== Example 2: Grassmannian ==== Let V be a finite-dimensional vector space. The Grassmannian variety G_n(V) is the set of all n-dimensional subspaces of V.
It is a projective variety: it is embedded into a projective space via the Plücker embedding G_n(V) ↪ P(∧^n V), ⟨b_1, ..., b_n⟩ ↦ [b_1 ∧ ⋯ ∧ b_n], where the b_i are any set of linearly independent vectors in V, ∧^n V is the n-th exterior power of V, and the bracket [w] means the line spanned by the nonzero vector w. The Grassmannian variety comes with a natural vector bundle (or locally free sheaf in other terminology) called the tautological bundle, which is important in the study of characteristic classes such as Chern classes. ==== Jacobian variety and abelian variety ==== Let C be a smooth complete curve and let Pic(C) be its Picard group, i.e., the group of isomorphism classes of line bundles on C. Since C is smooth, Pic(C) can be identified with the divisor class group of C and thus there is the degree homomorphism deg : Pic(C) → Z. The Jacobian variety Jac(C) of C is the kernel of this degree map, i.e., the group of the divisor classes on C of degree zero. A Jacobian variety is an example of an abelian variety, a complete variety with a compatible abelian group structure on it (the name "abelian" is, however, not because it is an abelian group). An abelian variety turns out to be projective (in short, algebraic theta functions give an embedding into a projective space; see equations defining abelian varieties); thus, Jac(C) is a projective variety. The tangent space to Jac(C) at the identity element is naturally isomorphic to H^1(C, O_C); hence, the dimension of Jac(C) is the genus of C. Fix a point P_0 on C. For each integer n > 0, there is a natural morphism C^n → Jac(C), (P_1, ..., P_n) ↦ [P_1 + ⋯ + P_n − nP_0], where C^n is the product of n copies of C. For g = 1 (i.e., C is an elliptic curve), the above morphism for n = 1 turns out to be an isomorphism; in particular, an elliptic curve is an abelian variety. ==== Moduli varieties ==== Given an integer g ≥ 0, the set of isomorphism classes of smooth complete curves of genus g is called the moduli of curves of genus g and is denoted 𝔐_g. There are a few ways to show that this moduli has the structure of a possibly reducible algebraic variety; for example, one way is to use geometric invariant theory, which ensures that a set of isomorphism classes has a (reducible) quasi-projective variety structure. A moduli space such as the moduli of curves of fixed genus is typically not a projective variety; roughly, the reason is that a degeneration (limit) of a smooth curve tends to be non-smooth or reducible.
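The Plücker embedding above can be made concrete in coordinates. The following sketch (an illustration, not from the article) computes the Plücker coordinates of a 2-plane in K^4 as the 2×2 minors of a matrix of spanning rows, and checks the standard quadratic relation that cuts out G_2(K^4) inside P(∧²V):

from itertools import combinations

def pluecker(u, v):
    # The 2x2 minors p_ij (i < j) of the 2x4 matrix with rows u and v.
    return {(i, j): u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2)}

p = pluecker([1, 0, 2, -1], [0, 1, 3, 4])
# The image of the Pluecker embedding of G_2(K^4) satisfies the quadric
# p01*p23 - p02*p13 + p03*p12 = 0.
print(p[(0, 1)]*p[(2, 3)] - p[(0, 2)]*p[(1, 3)] + p[(0, 3)]*p[(1, 2)])  # 0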
This degeneration behavior leads to the notion of a stable curve of genus g ≥ 2, a not-necessarily-smooth complete curve with no terribly bad singularities and a not-so-large automorphism group. The moduli of stable curves of genus g ≥ 2, the set of isomorphism classes of such curves, denoted 𝔐̄_g, is then a projective variety which contains 𝔐_g as an open dense subset. Since 𝔐̄_g is obtained by adding boundary points to 𝔐_g, it is colloquially said to be a compactification of 𝔐_g. Historically, a paper of Deligne and Mumford introduced the notion of a stable curve to show that 𝔐_g is irreducible when g ≥ 2. The moduli of curves exemplifies a typical situation: moduli of nice objects tend not to be projective but only quasi-projective. Another case is a moduli of vector bundles on a curve. Here, there are the notions of stable and semistable vector bundles on a smooth complete curve C. The moduli of semistable vector bundles of a given rank n and a given degree d (the degree of the determinant of the bundle) is then a projective variety, denoted SU_C(n, d), which contains the set U_C(n, d) of isomorphism classes of stable vector bundles of rank n and degree d as an open subset. Since a line bundle is stable, such a moduli is a generalization of the Jacobian variety of C. In general, in contrast to the case of moduli of curves, a compactification of a moduli need not be unique and, in some cases, different non-equivalent compactifications are constructed using different methods and by different authors. An example over C is the problem of compactifying D/Γ, the quotient of a bounded symmetric domain D by an action of an arithmetic discrete group Γ. A basic example of D/Γ is when D = H_g, Siegel's upper half-space, and Γ is commensurable with Sp(2g, Z); in that case, D/Γ has an interpretation as the moduli A_g of principally polarized complex abelian varieties of dimension g (a principal polarization identifies an abelian variety with its dual). The theory of toric varieties (or torus embeddings) gives a way to compactify D/Γ, a toroidal compactification of it. But there are other ways to compactify D/Γ; for example, there is the minimal compactification of D/Γ due to Baily and Borel: it is the projective variety associated to the graded ring formed by modular forms (in the Siegel case, Siegel modular forms; see also Siegel modular variety).
The non-uniqueness of compactifications is due to the lack of moduli interpretations of those compactifications; i.e., they do not represent (in the category-theory sense) any natural moduli problem or, in the precise language, there is no natural moduli stack that would be an analog of the moduli stack of stable curves. === Non-affine and non-projective example === An algebraic variety can be neither affine nor projective. To give an example, let X = P^1 × A^1 and p : X → A^1 the projection. Here X is an algebraic variety since it is a product of varieties. It is not affine, since P^1 is a closed subvariety of X (as the zero locus of p), but an affine variety cannot contain a projective variety of positive dimension as a closed subvariety. It is not projective either, since there is a nonconstant regular function on X; namely, p. Another example of a non-affine non-projective variety is X = A^2 − (0, 0) (cf. Morphism of varieties § Examples). === Non-examples === Consider the affine line A^1 over C. The complement of the circle { z ∈ C : |z|^2 = 1 } in A^1 = C is not an algebraic variety (nor even an algebraic set). Note that |z|^2 − 1 is not a polynomial in z (although it is a polynomial in the real coordinates x, y). On the other hand, the complement of the origin in A^1 = C is an algebraic (affine) variety, since the origin is the zero-locus of z. This may be explained as follows: the affine line has dimension one, and so any subvariety of it other than itself must have strictly less dimension, namely zero; hence proper algebraic subsets of the line are finite sets of points, which rules out the (infinite) circle but not the single-point origin. For similar reasons, a unitary group (over the complex numbers) is not an algebraic variety, while the special linear group SL_n(C) is a closed subvariety of GL_n(C), the zero-locus of det − 1. (Over a different base field, a unitary group can however be given a structure of a variety.) == Basic results == An affine algebraic set V is a variety if and only if I(V) is a prime ideal; equivalently, V is a variety if and only if its coordinate ring is an integral domain. Every nonempty affine algebraic set may be written uniquely as a finite union of algebraic varieties (where none of the varieties in the decomposition is a subvariety of any other). The dimension of a variety may be defined in various equivalent ways. See Dimension of an algebraic variety for details. A product of finitely many algebraic varieties (over an algebraically closed field) is an algebraic variety. A finite product of affine varieties is affine and a finite product of projective varieties is projective. == Isomorphism of algebraic varieties == Let V1, V2 be algebraic varieties. We say V1 and V2 are isomorphic, and write V1 ≅ V2, if there are regular maps φ : V1 → V2 and ψ : V2 → V1 such that the compositions ψ ∘ φ and φ ∘ ψ are the identity maps on V1 and V2 respectively. == Discussion and generalizations == The basic definitions and facts above enable one to do classical algebraic geometry. To be able to do more — for example, to deal with varieties over fields that are not algebraically closed — some foundational changes are required.
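For hypersurfaces, the decomposition of an algebraic set into varieties stated above mirrors polynomial factorization: the components of Z(f) correspond to the irreducible factors of f. A short sketch, assuming the third-party SymPy library is available:

from sympy import symbols, factor_list

x, y = symbols('x y')
f = x**2 - y**2  # Z(f) is the union of the lines x = y and x = -y
coeff, factors = factor_list(f)
print(factors)   # [(x - y, 1), (x + y, 1)] (up to ordering)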
The modern notion of a variety is considerably more abstract than the one above, though equivalent in the case of varieties over algebraically closed fields. An abstract algebraic variety is a particular kind of scheme; the generalization to schemes on the geometric side enables an extension of the correspondence described above to a wider class of rings. A scheme is a locally ringed space such that every point has a neighbourhood that, as a locally ringed space, is isomorphic to a spectrum of a ring. Basically, a variety over k is a scheme whose structure sheaf is a sheaf of k-algebras with the property that the rings R that occur above are all integral domains and are all finitely generated k-algebras, that is to say, they are quotients of polynomial algebras by prime ideals. This definition works over any field k. It allows you to glue affine varieties (along common open sets) without worrying whether the resulting object can be put into some projective space. This also leads to difficulties since one can introduce somewhat pathological objects, e.g. an affine line with zero doubled. Such objects are usually not considered varieties, and are eliminated by requiring the schemes underlying a variety to be separated. (Strictly speaking, there is also a third condition, namely, that one needs only finitely many affine patches in the definition above.) Some modern researchers also remove the restriction on a variety having integral domain affine charts, and when speaking of a variety only require that the affine charts have trivial nilradical. A complete variety is a variety such that any map from an open subset of a nonsingular curve into it can be extended uniquely to the whole curve. Every projective variety is complete, but not vice versa. These varieties have been called "varieties in the sense of Serre", since Serre's foundational paper FAC on sheaf cohomology was written for them. They remain typical objects to start studying in algebraic geometry, even if more general objects are also used in an auxiliary way. One way that leads to generalizations is to allow reducible algebraic sets (and fields k that aren't algebraically closed), so the rings R may not be integral domains. A more significant modification is to allow nilpotents in the sheaf of rings, that is, rings which are not reduced. This is one of several generalizations of classical algebraic geometry that are built into Grothendieck's theory of schemes. Allowing nilpotent elements in rings is related to keeping track of "multiplicities" in algebraic geometry. For example, the closed subscheme of the affine line defined by x2 = 0 is different from the subscheme defined by x = 0 (the origin). More generally, the fiber of a morphism of schemes X → Y at a point of Y may be non-reduced, even if X and Y are reduced. Geometrically, this says that fibers of good mappings may have nontrivial "infinitesimal" structure. There are further generalizations called algebraic spaces and stacks. == Algebraic manifolds == An algebraic manifold is an algebraic variety that is also an m-dimensional manifold, and hence every sufficiently small local patch is isomorphic to km. Equivalently, the variety is smooth (free from singular points). When k is the real numbers, R, algebraic manifolds are called Nash manifolds. Algebraic manifolds can be defined as the zero set of a finite collection of analytic algebraic functions. Projective algebraic manifolds are an equivalent definition for projective varieties. The Riemann sphere is one example. 
== See also ==
Variety (disambiguation) — listing also several mathematical meanings
Function field of an algebraic variety
Birational geometry
Motive (algebraic geometry)
Analytic variety
Zariski–Riemann space
Semi-algebraic set
Fano variety
Mnëv's universality theorem
== Notes ==
== References ==
=== Sources ===
This article incorporates material from Isomorphism of varieties on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Algebraic_variety
In abstract algebra, a magma, binar, or, rarely, groupoid is a basic kind of algebraic structure. Specifically, a magma consists of a set equipped with a single binary operation that must be closed by definition. No other properties are imposed. == History and terminology == The term groupoid was introduced in 1927 by Heinrich Brandt describing his Brandt groupoid. The term was then appropriated by B. A. Hausmann and Øystein Ore (1937) in the sense (of a set with a binary operation) used in this article. In a couple of reviews of subsequent papers in Zentralblatt, Brandt strongly disagreed with this overloading of terminology. The Brandt groupoid is a groupoid in the sense used in category theory, but not in the sense used by Hausmann and Ore. Nevertheless, influential books in semigroup theory, including Clifford and Preston (1961) and Howie (1995), use groupoid in the sense of Hausmann and Ore. Hollings (2014) writes that the term groupoid is "perhaps most often used in modern mathematics" in the sense given to it in category theory. According to Bergman and Hausknecht (1996): "There is no generally accepted word for a set with a not necessarily associative binary operation. The word groupoid is used by many universal algebraists, but workers in category theory and related areas object strongly to this usage because they use the same word to mean 'category in which all morphisms are invertible'. The term magma was used by Serre [Lie Algebras and Lie Groups, 1965]." It also appears in Bourbaki's Éléments de mathématique, Algèbre, chapitres 1 à 3, 1970. == Definition == A magma is a set M equipped with an operation • that sends any two elements a, b ∈ M to another element, a • b ∈ M. The symbol • is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation (M, •) must satisfy the following requirement (known as the magma or closure property): for all a, b in M, the result of the operation a • b is also in M. In mathematical notation: a, b ∈ M ⟹ a • b ∈ M. If • is instead a partial operation, then (M, •) is called a partial magma or, more often, a partial groupoid. == Morphism of magmas == A morphism of magmas is a function f : M → N from a magma (M, •) to a magma (N, ∗) that preserves the binary operation: f(x • y) = f(x) ∗ f(y). For example, with M equal to the positive real numbers and • the geometric mean, N equal to the real number line, and ∗ the arithmetic mean, a logarithm f is a morphism of the magma (M, •) to (N, ∗). Proof: log √(xy) = (log x + log y)/2. Note that these commutative magmas are not associative; nor do they have an identity element. This morphism of magmas has been used in economics since 1863, when W. Stanley Jevons calculated the rate of inflation in 39 commodities in England in his A Serious Fall in the Value of Gold Ascertained, page 7. == Notation and combinatorics == The magma operation may be applied repeatedly, and in the general, non-associative case, the order matters, which is notated with parentheses. Also, the operation • is often omitted and notated by juxtaposition: (a • (b • c)) • d ≡ (a(bc))d. A shorthand is often used to reduce the number of parentheses, in which the innermost operations and pairs of parentheses are omitted, being replaced just with juxtaposition: xy • z ≡ (x • y) • z.
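The geometric-mean/arithmetic-mean morphism above is easy to check numerically, as is the non-associativity of both operations; a minimal sketch with arbitrarily chosen values:

import math

def gm(x, y):  # the operation on M: geometric mean
    return math.sqrt(x * y)

def am(x, y):  # the operation on N: arithmetic mean
    return (x + y) / 2

x, y = 3.0, 12.0
print(math.isclose(math.log(gm(x, y)), am(math.log(x), math.log(y))))  # True
# Neither operation is associative:
print(math.isclose(gm(gm(1, 4), 16), gm(1, gm(4, 16))))                # False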
For example, (a • (b • c)) • d is abbreviated to (a • bc)d, still containing parentheses. A way to avoid completely the use of parentheses is prefix notation, in which the same expression would be written ••a•bcd. Another way, familiar to programmers, is postfix notation (reverse Polish notation), in which the same expression would be written abc••d•, in which the order of execution is simply left-to-right (no currying). The set of all possible strings consisting of symbols denoting elements of the magma, and sets of balanced parentheses, is called the Dyck language. The total number of different ways of writing n applications of the magma operator is given by the Catalan number Cn. Thus, for example, C2 = 2, which is just the statement that (ab)c and a(bc) are the only two ways of pairing three elements of a magma with two operations. Less trivially, C3 = 5: ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), and a(b(cd)). There are n^(n²) magmas with n elements, so there are 1, 1, 16, 19683, 4294967296, ... (sequence A002489 in the OEIS) magmas with 0, 1, 2, 3, 4, ... elements. The corresponding numbers of non-isomorphic magmas are 1, 1, 10, 3330, 178981952, ... (sequence A001329 in the OEIS) and the numbers of simultaneously non-isomorphic and non-antiisomorphic magmas are 1, 1, 7, 1734, 89521056, ... (sequence A001424 in the OEIS). == Free magma == A free magma MX on a set X is the "most general possible" magma generated by X (i.e., there are no relations or axioms imposed on the generators; see free object). The binary operation on MX is formed by wrapping each of the two operands in parentheses and juxtaposing them in the same order. For example: a • b = (a)(b), a • (a • b) = (a)((a)(b)), (a • a) • b = ((a)(a))(b). MX can be described as the set of non-associative words on X with parentheses retained. It can also be viewed, in terms familiar in computer science, as the magma of full binary trees with leaves labelled by elements of X. The operation is that of joining trees at the root. A free magma has the universal property that if f : X → N is a function from X to any magma N, then there is a unique extension of f to a morphism of magmas f′ : MX → N. == Types of magma == Magmas are not often studied as such; instead there are several different kinds of magma, depending on what axioms the operation is required to satisfy. Commonly studied types of magma include:
Quasigroup: a magma where division is always possible
Loop: a quasigroup with an identity element
Semigroup: a magma where the operation is associative
Monoid: a semigroup with an identity element
Group: a magma with inverses, associativity, and an identity element
Note that each of divisibility and invertibility implies the cancellation property. Magmas with commutativity:
Commutative magma: a magma with commutativity
Commutative monoid: a monoid with commutativity
Abelian group: a group with commutativity
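Both counting facts above are easy to reproduce; a small sketch:

from math import comb

def catalan(n):
    return comb(2*n, n) // (n + 1)

print([catalan(n) for n in range(1, 5)])   # [1, 2, 5, 14]
# A magma structure on n elements is a choice of one of n values for each
# of the n^2 entries of its Cayley table, giving n**(n**2) magmas:
print([n**(n**2) for n in range(5)])       # [1, 1, 16, 19683, 4294967296]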
== Classification by properties ==
A magma (S, •), with x, y, u, z ∈ S, is called:
Medial: if it satisfies the identity xy • uz ≡ xu • yz
Left semimedial: if it satisfies the identity xx • yz ≡ xy • xz
Right semimedial: if it satisfies the identity yz • xx ≡ yx • zx
Semimedial: if it is both left and right semimedial
Left distributive: if it satisfies the identity x • yz ≡ xy • xz
Right distributive: if it satisfies the identity yz • x ≡ yx • zx
Autodistributive: if it is both left and right distributive
Commutative: if it satisfies the identity xy ≡ yx
Idempotent: if it satisfies the identity xx ≡ x
Unipotent: if it satisfies the identity xx ≡ yy
Zeropotent: if it satisfies the identities xx • y ≡ xx ≡ y • xx
Alternative: if it satisfies the identities xx • y ≡ x • xy and x • yy ≡ xy • y
Power-associative: if the submagma generated by any element is associative
Flexible: if it satisfies the identity xy • x ≡ x • yx
Associative: if it satisfies the identity x • yz ≡ xy • z; such a magma is called a semigroup
A left unar: if it satisfies the identity xy ≡ xz
A right unar: if it satisfies the identity yx ≡ zx
Semigroup with zero multiplication, or null semigroup: if it satisfies the identity xy ≡ uv
Unital: if it has an identity element
Left-cancellative: if, for all x, y, z, the relation xy = xz implies y = z
Right-cancellative: if, for all x, y, z, the relation yx = zx implies y = z
Cancellative: if it is both right-cancellative and left-cancellative
A semigroup with left zeros: if it is a semigroup and satisfies the identity xy ≡ x
A semigroup with right zeros: if it is a semigroup and satisfies the identity yx ≡ x
Trimedial: if any triple of (not necessarily distinct) elements generates a medial submagma
Entropic: if it is a homomorphic image of a medial cancellation magma
Central: if it satisfies the identity xy • yz ≡ y
== Number of magmas satisfying given properties ==
== Category of magmas ==
The category of magmas, denoted Mag, is the category whose objects are magmas and whose morphisms are magma homomorphisms. The category Mag has direct products, and there is an inclusion functor Set ↪ Mag sending a set to the trivial magma on it, with operation given by the right projection x • y = y. More generally, because Mag is algebraic, it is a complete category. An important property is that an injective endomorphism can be extended to an automorphism of a magma extension, just the colimit of the (constant sequence of the) endomorphism.
== See also ==
Universal algebra
Magma computer algebra system, named after the object of this article
Commutative magma
Algebraic structures whose axioms are all identities
Groupoid algebra
Hall set
== References ==
== Further reading ==
Bruck, Richard Hubert (1971), A survey of binary systems (3rd ed.), Springer, ISBN 978-0-387-03497-3
Wikipedia/Magma_(algebra)
In abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups. == History == Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry. Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory. The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups. == Main classes of groups == The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations. 
=== Permutation groups === The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation. In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals. === Matrix groups === The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under the products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G. === Transformation groups === Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure. The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous. === Abstract groups === Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements are ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations, G = ⟨ S | R ⟩ . {\displaystyle G=\langle S|R\rangle .} A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy. 
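Returning for a moment to permutation groups: the closure idea behind "a collection of bijections closed under compositions and inverses" can be sketched directly (an illustration, not part of the article):

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[i] for i in q)

def generate(gens):
    # Close a set of permutations under composition; for a finite set this
    # yields the generated group (identity and inverses appear automatically).
    group, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in list(group):
            for new in (compose(g, h), compose(h, g)):
                if new not in group:
                    group.add(new)
                    frontier.append(new)
    return group

# A transposition and a 3-cycle generate all of S_3:
print(len(generate({(1, 0, 2), (1, 2, 0)})))  # 6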
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of group with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school. === Groups with additional structure === An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the multiplication and inversion of the group are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group. The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients. == Branches of group theory == === Finite group theory === During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known. During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields. Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry. 
=== Representation of groups === Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism ρ : G → GL(V), where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any h in G. This definition can be understood in two directions, both of which give rise to whole new domains of mathematics. On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit. On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma). Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions. === Lie theory === A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3. Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations. === Combinatorial and geometric group theory === Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications g • h. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators {g_i}_{i∈I}, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by ⟨F ∣ D⟩.
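Before the examples, a small computational aside (illustrative, not from the article): elements of a free group on given generators can be represented as freely reduced words, where a lowercase letter stands for a generator, the matching uppercase letter for its inverse, and adjacent inverse pairs cancel:

def reduce_word(word):
    # Freely reduce a word by cancelling adjacent inverse pairs.
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

print(repr(reduce_word('abBA')))  # '' : a b b^-1 a^-1 is the identity
print(repr(reduce_word('abAB')))  # 'abAB' : the commutator does not reduce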
For example, the group presentation ⟨a, b ∣ aba⁻¹b⁻¹⟩ describes a group which is isomorphic to Z × Z. A string consisting of generator symbols and their inverses is called a word. Combinatorial group theory studies groups from the perspective of generators and relations. It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free. There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation ⟨x, y ∣ xyxyx = e⟩ is isomorphic to the additive group Z of integers, although this may not be immediately apparent. (Writing z = xy, one has G ≅ ⟨z, y ∣ z³ = y⟩ ≅ ⟨z⟩.) Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on. The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Švarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from a distance) to the space X. == Connection of groups and symmetry == Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example: If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called the isometry group of X. If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example. Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation x² − 3 = 0 has the two solutions √3 and −√3. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots. The axioms of a group formalize the essential aspects of symmetry.
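Before turning to symmetry, the Cayley graph and word metric just described can be computed directly for a small group; a sketch using breadth-first search on S_3 (the generators are chosen arbitrarily):

from collections import deque

def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

gens = [(1, 0, 2), (1, 2, 0)]        # a transposition and a 3-cycle in S_3
gens += [inverse(g) for g in gens]   # word length counts inverses too

identity = (0, 1, 2)
dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in gens:
        h = compose(g, s)            # edge g -> g*s: right multiplication
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(dist)  # word length of each of the 6 elements of S_3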
Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative. Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object. The saying of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question. == Applications of group theory == Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities. === Galois theory === Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group in 5 elements, is not solvable which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory. === Algebraic topology === Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces which are spaces with prescribed homotopy groups. Similarly algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory. === Algebraic geometry === Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. (For example the Hodge conjecture (in certain cases).) The one-dimensional case, namely elliptic curves is studied in particular detail. They are both theoretically and practically intriguing. In another direction, toric varieties are algebraic varieties acted on by a torus. 
Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities. === Algebraic number theory === Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula ∑_{n ≥ 1} 1/n^s = ∏_{p prime} 1/(1 − p^{−s}) captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem. === Harmonic analysis === Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under the translation in a Lie group, are used for pattern recognition and other image processing techniques. === Combinatorics === In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma. === Music === The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group. === Physics === In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group. Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution. === Chemistry and materials science === In chemistry and materials science, point groups are used to classify regular polyhedra and the symmetries of molecules, and space groups to classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals. Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
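A symmetry operation can be tested numerically by applying it to atomic coordinates and comparing the result with the original configuration; a sketch anticipating the water example below, using purely illustrative (not measured) coordinates with the rotation axis along z:

def c2_z(point):
    x, y, z = point
    return (-x, -y, z)  # rotation by 180 degrees about the z-axis

# Illustrative coordinates for a water-like molecule with the C2 axis along z:
water = {("O", (0.0, 0.0, 0.0)),
         ("H", (0.76, 0.0, 0.59)),
         ("H", (-0.76, 0.0, 0.59))}

rotated = {(atom, c2_z(pos)) for atom, pos in water}
print(rotated == water)  # True: the operation maps the molecule onto itself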
In chemistry, there are five important symmetry operations: the identity operation (E), rotation or proper rotation (Cn), reflection (σ), inversion (i) and rotation-reflection or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is; it is equivalent to any number of full rotations around any axis. It is a symmetry of every molecule, even one with no other symmetry; indeed, the symmetry group of a chiral molecule consists of only the identity operation. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle: a rotation through the angle 360°/n, where n is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started; in this case n = 2, since applying the rotation twice produces the identity operation. In molecules with more than one rotation axis, the Cn axis with the largest value of n is the highest-order rotation axis or principal axis. For example, in boron trifluoride (BF3) the highest-order rotation axis is C3, so the principal axis of rotation is C3. For the reflection operation (σ), many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is called σh (horizontal); other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd). Inversion (i) is a more complex operation: each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example, methane and other tetrahedral molecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation-reflection (Sn), requires a rotation of 360°/n followed by reflection through a plane perpendicular to the axis of rotation. === Cryptography === Very large groups of prime order constructed in elliptic curve cryptography serve for public-key cryptography. Cryptographic methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very simple) group operation. Most cryptographic schemes use groups in some way. In particular, Diffie–Hellman key exchange uses finite cyclic groups, as sketched below. The term group-based cryptography, by contrast, refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group.
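To make the role of finite cyclic groups concrete, the following is a minimal Python sketch of Diffie–Hellman key exchange over the multiplicative group of integers modulo a prime. The parameters p = 467 and g = 2 are toy values chosen for readability and are assumptions of this illustration; real deployments use standardised groups that are thousands of bits in size (or elliptic-curve groups), so this sketch is illustrative rather than secure.

```python
import secrets

# Toy public parameters: a small prime p and a base g generating a cyclic
# subgroup of the multiplicative group modulo p. Real systems use
# standardised, far larger parameters.
p = 467
g = 2

# Each party chooses a secret exponent and publishes g^secret mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

# Both parties arrive at the same group element g^(a*b) mod p. An
# eavesdropper sees only p, g, A and B, and recovering a or b from them
# is the discrete logarithm problem mentioned above.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print("shared secret:", shared_alice)
```

The exchange works in any finite cyclic group written multiplicatively; the group operation is the only structure it uses, which is why the hardness of the discrete logarithm in the chosen group is the decisive design consideration.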
== See also == List of group theory topics Examples of groups Bass–Serre theory == References == Borel, Armand (1991), Linear algebraic groups, Graduate Texts in Mathematics, vol. 126 (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-0941-6, ISBN 978-0-387-97370-8, MR 1102012 Carter, Nathan C. (2009), Visual group theory, Classroom Resource Materials Series, Mathematical Association of America, ISBN 978-0-88385-757-1, MR 2504193 Cannon, John J. (1969), "Computers in group theory: A survey", Communications of the ACM, 12: 3–12, doi:10.1145/362835.362837, MR 0290613, S2CID 18226463 Frucht, R. (1939), "Herstellung von Graphen mit vorgegebener abstrakter Gruppe", Compositio Mathematica, 6: 239–50, ISSN 0010-437X, archived from the original on 2008-12-01 Golubitsky, Martin; Stewart, Ian (2006), "Nonlinear dynamics of networks: the groupoid formalism", Bull. Amer. Math. Soc. (N.S.), 43 (3): 305–364, doi:10.1090/S0273-0979-06-01108-6, MR 2223010. Shows the advantage of generalising from group to groupoid. Judson, Thomas W. (1997), Abstract Algebra: Theory and Applications. An introductory undergraduate text in the spirit of texts by Gallian or Herstein, covering groups, rings, integral domains, fields and Galois theory. Free downloadable PDF with open-source GFDL license. Kleiner, Israel (1986), "The evolution of group theory: a brief survey", Mathematics Magazine, 59 (4): 195–215, doi:10.2307/2690312, ISSN 0025-570X, JSTOR 2690312, MR 0863090 La Harpe, Pierre de (2000), Topics in geometric group theory, University of Chicago Press, ISBN 978-0-226-31721-2 Livio, M. (2005), The Equation That Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry, Simon & Schuster, ISBN 0-7432-5820-7. Conveys the practical value of group theory by explaining how it points to symmetries in physics and other sciences. Mumford, David (1970), Abelian varieties, Oxford University Press, ISBN 978-0-19-560528-0, OCLC 138290 Ronan, Mark (2006), Symmetry and the Monster, Oxford University Press, ISBN 0-19-280722-6. For lay readers. Describes the quest to find the basic building blocks for finite groups. Rotman, Joseph (1994), An introduction to the theory of groups, New York: Springer-Verlag, ISBN 0-387-94285-8. A standard contemporary reference. Schupp, Paul E.; Lyndon, Roger C. (2001), Combinatorial group theory, Berlin, New York: Springer-Verlag, ISBN 978-3-540-41158-1 Scott, W. R. (1987) [1964], Group Theory, New York: Dover, ISBN 0-486-65377-3. Inexpensive and fairly readable, but somewhat dated in emphasis, style, and notation. Shatz, Stephen S. (1972), Profinite groups, arithmetic, and geometry, Princeton University Press, ISBN 978-0-691-08017-8, MR 0347778 Weibel, Charles A. (1994), An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge University Press, ISBN 978-0-521-55987-4, MR 1269324, OCLC 36131259 == External links == History of the abstract group concept Burnside, William (1911), "Groups, Theory of", in Chisholm, Hugh (ed.), Encyclopædia Britannica, vol. 12 (11th ed.), Cambridge University Press, pp. 626–636. This is a detailed exposition of contemporaneous understanding of Group Theory by an early researcher in the field.
Wikipedia/Group_theory
Science is a systematic discipline that builds and organises knowledge in the form of testable hypotheses and predictions about the universe. Modern science is typically divided into two or three major branches: the natural sciences (e.g., physics, chemistry, and biology), which study the physical world; and the social sciences (e.g., economics, psychology, and sociology), which study individuals and societies. Applied sciences are disciplines that use scientific knowledge for practical purposes, such as engineering and medicine. While sometimes referred to as the formal sciences, logic, mathematics, and theoretical computer science (which study formal systems governed by axioms and rules) are typically regarded as separate because they rely on deductive reasoning instead of the scientific method or empirical evidence as their main methodology. The history of science spans the majority of the historical record, with the earliest identifiable predecessors to modern science dating to the Bronze Age in Egypt and Mesopotamia (c. 3000–1200 BCE). Their contributions to mathematics, astronomy, and medicine entered and shaped the Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes, while further advancements, including the introduction of the Hindu–Arabic numeral system, were made during the Golden Age of India.: 12  Scientific research deteriorated in these regions after the fall of the Western Roman Empire during the Early Middle Ages (400–1000 CE), but in the Medieval renaissances (Carolingian Renaissance, Ottonian Renaissance and the Renaissance of the 12th century) scholarship flourished again. Some Greek manuscripts lost in Western Europe were preserved and expanded upon in the Middle East during the Islamic Golden Age. Later, Byzantine Greek scholars contributed to their transmission by bringing Greek manuscripts from the declining Byzantine Empire to Western Europe at the beginning of the Renaissance. The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th centuries revived natural philosophy, which was later transformed by the Scientific Revolution that began in the 16th century as new ideas and discoveries departed from previous Greek conceptions and traditions. The scientific method soon played a greater role in knowledge creation, and in the 19th century many of the institutional and professional features of science began to take shape, along with the changing of "natural philosophy" to "natural science". New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems. Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions, government agencies, and companies. The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritising the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection. == Etymology == The word science has been used in Middle English since the 14th century in the sense of "the state of knowing".
The word was borrowed from the Anglo-Norman language as the suffix -cience, which was borrowed from the Latin word scientia, meaning "knowledge, awareness, understanding", a noun derivative of sciens meaning "knowing", itself the present active participle of sciō, "to know". There are many hypotheses for science's ultimate word origin. According to Michiel de Vaan, Dutch linguist and Indo-Europeanist, sciō may have its origin in the Proto-Italic language as *skije- or *skijo- meaning "to know", which may originate from the Proto-Indo-European language as *skh1-ie, *skh1-io, meaning "to incise". The Lexikon der indogermanischen Verben proposed that sciō is a back-formation of nescīre, meaning "to not know, be unfamiliar with", which may derive from Proto-Indo-European *sekH- in Latin secāre, or *skh2-, from *sḱʰeh2(i)- meaning "to cut". In the past, science was a synonym for "knowledge" or "study", in keeping with its Latin origin. A person who conducted scientific research was called a "natural philosopher" or "man of science". In 1834, William Whewell introduced the term scientist in a review of Mary Somerville's book On the Connexion of the Physical Sciences, crediting it to "some ingenious gentleman" (possibly himself). == History == === Early history === Science has no single origin. Rather, scientific thinking emerged gradually over the course of tens of thousands of years, taking different forms around the world, and few details are known about the very earliest developments. Women likely played a central role in prehistoric science, as did religious rituals. Some scholars use the term "protoscience" to label activities in the past that resemble modern science in some but not all features; however, this label has also been criticised as denigrating, or too suggestive of presentism, thinking about those activities only in relation to modern categories. Direct evidence for scientific processes becomes clearer with the advent of writing systems in the Bronze Age civilisations of Ancient Egypt and Mesopotamia (c. 3000–1200 BCE), creating the earliest written records in the history of science.: 12–15  Although the words and concepts of "science" and "nature" were not part of the conceptual landscape at the time, the ancient Egyptians and Mesopotamians made contributions that would later find a place in Greek and medieval science: mathematics, astronomy, and medicine.: 12  From the 3rd millennium BCE, the ancient Egyptians developed a non-positional decimal numbering system, solved practical problems using geometry, and developed a calendar. Their healing therapies involved drug treatments and the supernatural, such as prayers, incantations, and rituals.: 9  The ancient Mesopotamians used knowledge about the properties of various natural chemicals for manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. They studied animal physiology, anatomy, behaviour, and astrology for divinatory purposes. The Mesopotamians had an intense interest in medicine, and the earliest medical prescriptions appeared in Sumerian during the Third Dynasty of Ur. They seem to have studied scientific subjects which had practical or religious applications and had little interest in satisfying curiosity. === Classical antiquity === In classical antiquity, there is no real ancient analogue of a modern scientist. Instead, well-educated, usually upper-class, and almost universally male individuals performed various investigations into nature whenever they could afford the time.
Before the invention or discovery of the concept of phusis or nature by the pre-Socratic philosophers, the same words tended to be used to describe the natural "way" in which a plant grows, and the "way" in which, for example, one tribe worships a particular god. For this reason, it is claimed that the pre-Socratics were the first philosophers in the strict sense and the first to clearly distinguish "nature" and "convention". The early Greek philosophers of the Milesian school, which was founded by Thales of Miletus and later continued by his successors Anaximander and Anaximenes, were the first to attempt to explain natural phenomena without relying on the supernatural. The Pythagoreans developed a complex philosophy of number: 467–468  and contributed significantly to the development of mathematical science.: 465  The theory of atoms was developed by the Greek philosopher Leucippus and his student Democritus. Later, Epicurus would develop a full natural cosmology based on atomism, and would adopt a "canon" (ruler, standard) which established physical criteria or standards of scientific truth. The Greek doctor Hippocrates established the tradition of systematic medical science and is known as "The Father of Medicine". A turning point in the history of early philosophical science was Socrates' example of applying philosophy to the study of human matters, including human nature, the nature of political communities, and human knowledge itself. The Socratic method as documented by Plato's dialogues is a dialectic method of hypothesis elimination: better hypotheses are found by steadily identifying and eliminating those that lead to contradictions. The Socratic method searches for general commonly held truths that shape beliefs and scrutinises them for consistency. Socrates criticised the older type of study of physics as too purely speculative and lacking in self-criticism. In the 4th century BCE, Aristotle created a systematic programme of teleological philosophy. In the 3rd century BCE, Greek astronomer Aristarchus of Samos was the first to propose a heliocentric model of the universe, with the Sun at the centre and all the planets orbiting it. Aristarchus's model was widely rejected because it was believed to violate the laws of physics, while Ptolemy's Almagest, which contains a geocentric description of the Solar System, was accepted through the early Renaissance instead. The inventor and mathematician Archimedes of Syracuse made major contributions to the beginnings of calculus. Pliny the Elder was a Roman writer and polymath, who wrote the seminal encyclopaedia Natural History. Positional notation for representing numbers likely emerged between the 3rd and 5th centuries CE along Indian trade routes. This numeral system made efficient arithmetic operations more accessible and would eventually become standard for mathematics worldwide. === Middle Ages === Due to the collapse of the Western Roman Empire, the 5th century saw an intellectual decline, with knowledge of classical Greek conceptions of the world deteriorating in Western Europe.: 194  Latin encyclopaedists of the period such as Isidore of Seville preserved the majority of general ancient knowledge.
In contrast, because the Byzantine Empire resisted attacks from invaders, it was able to preserve and improve prior learning.: 159  John Philoponus, a Byzantine scholar in the 6th century, started to question Aristotle's teaching of physics, introducing the theory of impetus.: 307, 311, 363, 402  His criticism served as an inspiration to medieval scholars and Galileo Galilei, who extensively cited his works ten centuries later.: 307–308  During late antiquity and the Early Middle Ages, natural phenomena were mainly examined via the Aristotelian approach. The approach includes Aristotle's four causes: material, formal, moving, and final cause. Many Greek classical texts were preserved by the Byzantine Empire, and Arabic translations were made by Christians, mainly Nestorians and Miaphysites. Under the Abbasids, these Arabic translations were later improved and developed by Arabic scientists. By the 6th and 7th centuries, the neighbouring Sasanian Empire established the medical Academy of Gondishapur, which was considered by Greek, Syriac, and Persian physicians to be the most important medical hub of the ancient world. Islamic study of Aristotelianism flourished in the House of Wisdom, established in the Abbasid capital of Baghdad, Iraq, and continued until the Mongol invasions in the 13th century. Ibn al-Haytham, better known as Alhazen, used controlled experiments in his optical study. Avicenna's compilation of The Canon of Medicine, a medical encyclopaedia, is considered to be one of the most important publications in medicine and was used until the 18th century. By the 11th century most of Europe had become Christian,: 204  and in 1088, the University of Bologna emerged as the first university in Europe. As such, demand for Latin translation of ancient and scientific texts grew,: 204  a major contributor to the Renaissance of the 12th century. Renaissance scholasticism in western Europe flourished, with experiments done by observing, describing, and classifying subjects in nature. In the 13th century, medical teachers and students at Bologna began opening human bodies, leading to the first anatomy textbook based on human dissection by Mondino de Luzzi. === Renaissance === New developments in optics played a role in the inception of the Renaissance, both by challenging long-held metaphysical ideas on perception, as well as by contributing to the improvement and development of technology such as the camera obscura and the telescope. At the start of the Renaissance, Roger Bacon, Vitello, and John Peckham each built up a scholastic ontology upon a causal chain beginning with sensation, perception, and finally apperception of the individual and universal forms of Aristotle.: Book I  A model of vision later known as perspectivism was exploited and studied by the artists of the Renaissance. This theory uses only three of Aristotle's four causes: formal, material, and final. In the 16th century, Nicolaus Copernicus formulated a heliocentric model of the Solar System, stating that the planets revolve around the Sun, instead of the geocentric model where the planets and the Sun revolve around the Earth. This was based on a theorem that the orbital periods of the planets are longer as their orbs are farther from the centre of motion, which he found not to agree with Ptolemy's model. Johannes Kepler and others challenged the notion that the only function of the eye is perception, and shifted the main focus in optics from the eye to the propagation of light.
Kepler is best known, however, for improving Copernicus' heliocentric model through the discovery of Kepler's laws of planetary motion. Kepler did not reject Aristotelian metaphysics and described his work as a search for the Harmony of the Spheres. Galileo made significant contributions to astronomy, physics and engineering. However, he was persecuted after Pope Urban VIII sentenced him for writing about the heliocentric model. The printing press was widely used to publish scholarly arguments, including some that disagreed widely with contemporary ideas of nature. Francis Bacon and René Descartes published philosophical arguments in favour of a new type of non-Aristotelian science. Bacon emphasised the importance of experiment over contemplation, questioned the Aristotelian concepts of formal and final cause, and promoted the idea that science should study the laws of nature and the improvement of all human life. Descartes emphasised individual thought and argued that mathematics rather than geometry should be used to study nature. === Age of Enlightenment === At the start of the Age of Enlightenment, Isaac Newton formed the foundation of classical mechanics with his Philosophiæ Naturalis Principia Mathematica, greatly influencing future physicists. Gottfried Wilhelm Leibniz incorporated terms from Aristotelian physics, now used in a new non-teleological way. This implied a shift in the view of objects: objects were now considered as having no innate goals. Leibniz assumed that different types of things all work according to the same general laws of nature, with no special formal or final causes. During this time the declared purpose and value of science became producing wealth and inventions that would improve human lives, in the materialistic sense of having more food, clothing, and other things. In Bacon's words, "the real and legitimate goal of sciences is the endowment of human life with new inventions and riches", and he discouraged scientists from pursuing intangible philosophical or spiritual ideas, which he believed contributed little to human happiness beyond "the fume of subtle, sublime or pleasing [speculation]". Science during the Enlightenment was dominated by scientific societies and academies, which had largely replaced universities as centres of scientific research and development. Societies and academies were the backbones of the maturation of the scientific profession. Another important development was the popularisation of science among an increasingly literate population. Enlightenment philosophers turned to a few of their scientific predecessors – Galileo, Kepler, Boyle, and Newton principally – as the guides to every physical and social field of the day. The 18th century saw significant advancements in the practice of medicine and physics; the development of biological taxonomy by Carl Linnaeus; a new understanding of magnetism and electricity; and the maturation of chemistry as a discipline. Ideas on human nature, society, and economics evolved during the Enlightenment. Hume and other Scottish Enlightenment thinkers developed a "science of man", which was expressed historically in works by authors including James Burnett, Adam Ferguson, John Millar and William Robertson, all of whom merged a scientific study of how humans behaved in ancient and primitive cultures with a strong awareness of the determining forces of modernity. Modern sociology largely originated from this movement.
In 1776, Adam Smith published The Wealth of Nations, which is often considered the first work on modern economics. === 19th century === During the 19th century, many distinguishing characteristics of contemporary modern science began to take shape. These included the transformation of the life and physical sciences; the frequent use of precision instruments; the emergence of terms such as "biologist", "physicist", and "scientist"; an increased professionalisation of those studying nature; scientists gaining cultural authority over many dimensions of society; the industrialisation of numerous countries; the thriving of popular science writings; and the emergence of science journals. During the late 19th century, psychology emerged as a separate discipline from philosophy when Wilhelm Wundt founded the first laboratory for psychological research in 1879. In 1858, Charles Darwin and Alfred Russel Wallace independently proposed the theory of evolution by natural selection, which explained how different plants and animals originated and evolved. Their theory was set out in detail in Darwin's book On the Origin of Species, published in 1859. Separately, Gregor Mendel presented his paper, "Experiments on Plant Hybridisation", in 1865, which outlined the principles of biological inheritance, serving as the basis for modern genetics. Early in the 19th century, John Dalton suggested the modern atomic theory, based on Democritus's original idea of indivisible particles called atoms. The laws of conservation of energy, conservation of momentum and conservation of mass suggested a highly stable universe where there could be little loss of resources. However, with the advent of the steam engine and the Industrial Revolution there was an increased understanding that not all forms of energy have the same energy quality, that is, the same ease of conversion to useful work or to another form of energy. This realisation led to the development of the laws of thermodynamics, in which the free energy of the universe is seen as constantly declining: the entropy of a closed universe increases over time. The electromagnetic theory was established in the 19th century by the works of Hans Christian Ørsted, André-Marie Ampère, Michael Faraday, James Clerk Maxwell, Oliver Heaviside, and Heinrich Hertz. The new theory raised questions that could not easily be answered using Newton's framework. The discovery of X-rays inspired the discovery of radioactivity by Henri Becquerel and Marie Curie in 1896. Marie Curie then became the first person to win two Nobel Prizes. In the next year came the discovery of the first subatomic particle, the electron. === 20th century === In the first half of the century, the development of antibiotics and artificial fertilisers improved human living standards globally. Harmful environmental issues such as ozone depletion, ocean acidification, eutrophication, and climate change came to the public's attention and caused the onset of environmental studies. During this period scientific experimentation became increasingly larger in scale and funding. The extensive technological innovation stimulated by World War I, World War II, and the Cold War led to competitions between global powers, such as the Space Race and the nuclear arms race. Substantial international collaborations were also made, despite armed conflicts.
In the late 20th century, active recruitment of women and the elimination of sex discrimination greatly increased the number of women scientists, but large gender disparities remained in some fields. The discovery of the cosmic microwave background in 1964 led to a rejection of the steady-state model of the universe in favour of the Big Bang theory of Georges Lemaître. The century saw fundamental changes within science disciplines. Evolution became a unified theory in the early 20th century when the modern synthesis reconciled Darwinian evolution with classical genetics. Albert Einstein's theory of relativity and the development of quantum mechanics complement classical mechanics in describing physics at extreme scales of length, time and gravity. Widespread use of integrated circuits in the last quarter of the 20th century combined with communications satellites led to a revolution in information technology and the rise of the global internet and mobile computing, including smartphones. The need for mass systematisation of long, intertwined causal chains and large amounts of data led to the rise of the fields of systems theory and computer-assisted scientific modelling. === 21st century === The Human Genome Project was completed in 2003 by identifying and mapping all of the genes of the human genome. The first induced pluripotent human stem cells were made in 2006, allowing adult cells to be transformed into stem cells that can turn into any cell type found in the body. With the confirmation of the Higgs boson discovery in 2013, the last particle predicted by the Standard Model of particle physics was found. In 2015, gravitational waves, predicted by general relativity a century before, were first observed. In 2019, the international collaboration Event Horizon Telescope presented the first direct image of a black hole's accretion disc. == Branches == Modern science is commonly divided into three major branches: natural science, social science, and formal science. Each of these branches comprises various specialised yet overlapping scientific disciplines that often possess their own nomenclature and expertise. Both natural and social sciences are empirical sciences, as their knowledge is based on empirical observations and is capable of being tested for its validity by other researchers working under the same conditions. === Natural === Natural science is the study of the physical world. It can be divided into two main branches: life science and physical science. These two branches may be further divided into more specialised disciplines. For example, physical science can be subdivided into physics, chemistry, astronomy, and earth science. Modern natural science is the successor to the natural philosophy that began in Ancient Greece. Galileo, Descartes, Bacon, and Newton debated the benefits of using approaches that were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and other biotic beings. Today, "natural history" suggests observational descriptions aimed at popular audiences. === Social === Social science is the study of human behaviour and the functioning of societies.
It has many disciplines that include, but are not limited to, anthropology, economics, history, human geography, political science, psychology, and sociology. In the social sciences, there are many competing theoretical perspectives, many of which are extended through competing research programmes such as the functionalists, conflict theorists, and interactionists in sociology. Due to the limitations of conducting controlled experiments involving large groups of individuals or complex situations, social scientists may adopt other research methods such as the historical method, case studies, and cross-cultural studies. Moreover, if quantitative information is available, social scientists may rely on statistical approaches to better understand social relationships and processes. === Formal === Formal science is an area of study that generates knowledge using formal systems. A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules; a toy example of such a system is sketched at the end of this section. It includes mathematics, systems theory, and theoretical computer science. The formal sciences share similarities with the other two branches by relying on objective, careful, and systematic study of an area of knowledge. They are, however, different from the empirical sciences as they rely exclusively on deductive reasoning, without the need for empirical evidence, to verify their abstract concepts. The formal sciences are therefore a priori disciplines, and because of this, there is disagreement on whether they constitute a science. Nevertheless, the formal sciences play an important role in the empirical sciences. Calculus, for example, was initially invented to understand motion in physics. Natural and social sciences that rely heavily on mathematical applications include mathematical physics, chemistry, biology, finance, and economics. === Applied === Applied science is the use of the scientific method and knowledge to attain practical goals and includes a broad range of disciplines such as engineering and medicine. Engineering is the use of scientific principles to invent, design and build machines, structures and technologies. Science may contribute to the development of new technologies. Medicine is the practice of caring for patients by maintaining and restoring health through the prevention, diagnosis, and treatment of injury or disease. === Basic === The applied sciences are often contrasted with the basic sciences, which are focused on advancing scientific theories and laws that explain and predict events in the natural world. === Blue skies === Blue-skies research is scientific inquiry driven by curiosity, carried out in domains where practical applications are not immediately apparent. === Computational === Computational science applies computer simulations to science, enabling a better understanding of scientific problems than formal mathematics alone can achieve. The use of machine learning and artificial intelligence is becoming a central feature of computational contributions to science, for example in agent-based computational economics, random forests, topic modeling and various forms of prediction. However, machines alone rarely advance knowledge, as they require human guidance and the capacity to reason, and they can introduce bias against certain social groups or sometimes underperform against humans. === Interdisciplinary === Interdisciplinary science involves the combination of two or more disciplines into one, such as bioinformatics, a combination of biology and computer science, or cognitive science. The concept has existed since the ancient Greek period and it became popular again in the 20th century.
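As an illustration of inferring theorems from axioms according to rules, here is a minimal Python sketch of a toy formal system, a fragment of Hofstadter's well-known MIU puzzle; the choice of system and the function names are assumptions made for this example only.

```python
# A toy formal system over the alphabet {M, I, U}: one axiom and three
# rewrite rules. The "theorems" are exactly the strings derivable from
# the axiom by finitely many rule applications.

AXIOM = "MI"

def apply_rules(s):
    """Yield every string obtainable from s by a single rule application."""
    if s.endswith("I"):              # Rule 1: xI -> xIU
        yield s + "U"
    if s.startswith("M"):            # Rule 2: Mx -> Mxx
        yield "M" + s[1:] * 2
    for i in range(len(s) - 2):      # Rule 3: xIIIy -> xUy
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]

def theorems(max_steps):
    """Derive, breadth-first, all theorems reachable within max_steps steps."""
    derived = {AXIOM}
    frontier = {AXIOM}
    for _ in range(max_steps):
        frontier = {t for s in frontier for t in apply_rules(s)} - derived
        derived |= frontier
    return derived

print(sorted(theorems(3), key=len))  # e.g. ['MI', 'MII', 'MIU', ...]
```

Derivation of this kind is mechanical and independent of empirical input, which is exactly the sense in which the formal sciences rely on deductive reasoning rather than observation.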
== Research == Scientific research can be labelled as either basic or applied research. Basic research is the search for knowledge, and applied research is the search for solutions to practical problems using this knowledge. Most understanding comes from basic research, though sometimes applied research targets specific practical problems. This leads to technological advances that were not previously imaginable. === Scientific method === Scientific research involves using the scientific method, which seeks to objectively explain the events of nature in a reproducible way. Scientists usually take for granted a set of basic assumptions that are needed to justify the scientific method: there is an objective reality shared by all rational observers; this objective reality is governed by natural laws; and these laws can be discovered by means of systematic observation and experimentation. Mathematics is essential in the formation of hypotheses, theories, and laws, because it is used extensively in quantitative modelling, observing, and collecting measurements. Statistics is used to summarise and analyse data, which allows scientists to assess the reliability of experimental results (a small illustration is sketched at the end of this section). In the scientific method, an explanatory thought experiment or hypothesis is put forward as an explanation using parsimony principles and is expected to seek consilience – fitting with other accepted facts related to an observation or scientific question. This tentative explanation is used to make falsifiable predictions, which are typically stated before being tested by experimentation. Disproof of a prediction is evidence of progress.: 4–5  Experimentation is especially important in science to help establish causal relationships to avoid the correlation fallacy, though in some sciences such as astronomy or geology, a predicted observation might be more appropriate. When a hypothesis proves unsatisfactory, it is modified or discarded. If the hypothesis survives testing, it may become adopted into the framework of a scientific theory, a validly reasoned, self-consistent model or framework for describing the behaviour of certain natural events. A theory typically describes the behaviour of much broader sets of observations than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus, a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses. Scientists may generate a model, an attempt to describe or depict an observation in terms of a logical, physical or mathematical representation, and to generate new hypotheses that can be tested by experimentation. While performing experiments to test hypotheses, scientists may have a preference for one outcome over another. Eliminating such bias can be achieved through transparency, careful experimental design, and a thorough peer review process of the experimental results and conclusions. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be. Taken in its entirety, the scientific method allows for highly creative problem solving while minimising the effects of subjective and confirmation bias.
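As a small illustration of the summarising role of statistics mentioned above, the sketch below reduces two sets of repeated measurements to means and standard errors, the kind of summary on which judgements about the reliability of experimental results are based. The data values are invented for the example, and only the Python standard library is used.

```python
import math
import statistics

# Invented repeated measurements from a control run and a treatment run.
control = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7]
treatment = [10.6, 10.4, 10.9, 10.5, 10.7, 10.3]

def summarise(sample):
    """Return the sample mean and the standard error of that mean."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean, sem

for name, sample in (("control", control), ("treatment", treatment)):
    mean, sem = summarise(sample)
    print(f"{name}: mean = {mean:.2f} +/- {sem:.2f}")
```

Whether the difference between the two means is large relative to the standard errors is the kind of question that formal significance tests then address.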
Intersubjective verifiability, the ability to reach a consensus and reproduce results, is fundamental to the creation of all scientific knowledge. === Literature === Scientific research is published in a range of literature. Scientific journals communicate and document the results of research carried out in universities and various other research institutions, serving as an archival record of science. The first scientific journals, Journal des sçavans followed by Philosophical Transactions, began publication in 1665. Since that time the total number of active periodicals has steadily increased. In 1981, one estimate for the number of scientific and technical journals in publication was 11,500. Most scientific journals cover a single scientific field and publish the research within that field; the research is normally expressed in the form of a scientific paper. Science has become so pervasive in modern societies that it is considered necessary to communicate the achievements, news, and ambitions of scientists to a wider population. === Challenges === The replication crisis is an ongoing methodological crisis that affects parts of the social and life sciences: in subsequent investigations, the results of many scientific studies have proven impossible to reproduce. The crisis has long-standing roots; the phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in metascience, which aims to improve the quality of all scientific research while reducing waste. An area of study or speculation that masquerades as science in an attempt to claim legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science, and their work at a glance looks like science, but they lack the honesty that allows their results to be rigorously evaluated. Various types of commercial advertising, ranging from hype to fraud, may fall into these categories. Science has been described as "the most important tool" for separating valid claims from invalid ones. There can also be an element of political bias or ideological bias on all sides of scientific debates. Sometimes, research may be characterised as "bad science", research that may be well-intended but is incorrect, obsolete, incomplete, or over-simplified in its exposition of scientific ideas. The term scientific misconduct refers to situations such as where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person. == Philosophy == There are different schools of thought in the philosophy of science. The most popular position is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalise observations. Empiricism generally encompasses inductivism, a position that explains how general theories can be made from the finite amount of empirical evidence available. Many versions of empiricism exist, with the predominant ones being Bayesianism and the hypothetico-deductive method. Empiricism has stood in contrast to rationalism, the position originally associated with Descartes, which holds that knowledge is created by the human intellect, not by observation. Critical rationalism is a contrasting 20th-century approach to science, first defined by Austrian-British philosopher Karl Popper.
Popper rejected the way that empiricism describes the connection between theory and observation. He claimed that theories are not generated by observation, but that observation is made in the light of theories, and that the only way a theory can be affected by observation is when it conflicts with an observation that a rival theory survives. Popper proposed replacing verifiability with falsifiability as the landmark of scientific theories, replacing induction with falsification as the empirical method. Popper further claimed that there is actually only one universal method, not specific to science: the negative method of criticism, trial and error, covering all products of the human mind, including science, mathematics, philosophy, and art. Another approach, instrumentalism, emphasises the utility of theories as instruments for explaining and predicting phenomena. It views scientific theories as black boxes, with only their input (initial conditions) and output (predictions) being relevant. Consequences, theoretical entities, and logical structure are claimed to be things that should be ignored. Close to instrumentalism is constructive empiricism, according to which the main criterion for the success of a scientific theory is whether what it says about observable entities is true. Thomas Kuhn argued that the process of observation and evaluation takes place within a paradigm, a logically consistent "portrait" of the world that is consistent with observations made from its framing. He characterised normal science as the process of observation and "puzzle solving", which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Each paradigm has its own distinct questions, aims, and interpretations. The choice between paradigms involves setting two or more "portraits" against the world and deciding which likeness is most promising. A paradigm shift occurs when a significant number of observational anomalies arise in the old paradigm and a new paradigm makes sense of them. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism. Another approach often cited in debates of scientific scepticism against controversial movements like "creation science" is methodological naturalism. Naturalists maintain that a distinction should be made between natural and supernatural, and that science should be restricted to natural explanations. Methodological naturalism maintains that science requires strict adherence to empirical study and independent verification. == Community == The scientific community is a network of interacting scientists who conduct scientific research. The community consists of smaller groups working in scientific fields. By having peer review, through discussion and debate within journals and conferences, scientists maintain the quality of research methodology and objectivity when interpreting results. === Scientists === Scientists are individuals who conduct scientific research to advance knowledge in an area of interest. Scientists may exhibit a strong curiosity about reality and a desire to apply scientific knowledge for the benefit of public health, nations, the environment, or industries; other motivations include recognition by peers and prestige.
In modern times, many scientists study within specific areas of science in academic institutions, often obtaining advanced degrees in the process. Many scientists pursue careers in various fields such as academia, industry, government, and nonprofit organisations. Science has historically been a male-dominated field, with notable exceptions. Women have faced considerable discrimination in science, much as they have in other areas of male-dominated societies. For example, women were frequently passed over for job opportunities and denied credit for their work. The achievements of women in science have been attributed to the defiance of their traditional role as labourers within the domestic sphere. === Learned societies === Learned societies for the communication and promotion of scientific thought and experimentation have existed since the Renaissance. Many scientists belong to a learned society that promotes their respective scientific discipline, profession, or group of related disciplines. Membership may be open to all, may require possession of scientific credentials, or may be conferred by election. Most scientific societies are nonprofit organisations, and many are professional associations. Their activities typically include holding regular conferences for the presentation and discussion of new research results and publishing or sponsoring academic journals in their discipline. Some societies act as professional bodies, regulating the activities of their members in the public interest, or the collective interest of the membership. The professionalisation of science, begun in the 19th century, was partly enabled by the creation of distinguished national academies of sciences such as the Italian Accademia dei Lincei in 1603, the British Royal Society in 1660, the French Academy of Sciences in 1666, the American National Academy of Sciences in 1863, the German Kaiser Wilhelm Society in 1911, and the Chinese Academy of Sciences in 1949. International scientific organisations, such as the International Science Council, are devoted to international cooperation for science advancement. === Awards === Science awards are usually given to individuals or organisations that have made significant contributions to a discipline. They are often given by prestigious institutions; thus, it is considered a great honour for a scientist to receive one. Since the early Renaissance, scientists have often been awarded medals, money, and titles. The Nobel Prize, widely regarded as a highly prestigious award, is awarded annually to those who have achieved scientific advances in the fields of medicine, physics, and chemistry. == Society == === Funding and policies === Funding of science is often through a competitive process in which potential research projects are evaluated and only the most promising receive funding. Such processes, which are run by government, corporations, or foundations, allocate scarce funds. Total research funding in most developed countries is between 1.5% and 3% of GDP. In the OECD, around two-thirds of research and development in scientific and technical fields is carried out by industry, and 20% and 10%, respectively, by universities and government. The government funding proportion in certain fields is higher, and it dominates research in social science and the humanities. In less developed nations, the government provides the bulk of the funds for their basic scientific research.
Many governments have dedicated agencies to support scientific research, such as the National Science Foundation in the United States, the National Scientific and Technical Research Council in Argentina, the Commonwealth Scientific and Industrial Research Organisation in Australia, the National Centre for Scientific Research in France, the Max Planck Society in Germany, and the National Research Council in Spain. In commercial research and development, all but the most research-orientated corporations focus more heavily on near-term commercialisation possibilities than on research driven by curiosity. Science policy is concerned with policies that affect the conduct of the scientific enterprise, including research funding, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care, and environmental monitoring. Science policy sometimes refers to the act of applying scientific knowledge and consensus to the development of public policies. Since public policy is concerned with the well-being of citizens, science policy's goal is to consider how science and technology can best serve the public. Public policy can directly affect the funding of capital equipment and intellectual infrastructure for industrial research by providing tax incentives to those organisations that fund research. === Education and awareness === Science education for the general public is embedded in the school curriculum, and is supplemented by online pedagogical content (for example, YouTube and Khan Academy), museums, and science magazines and blogs. Major organisations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, along with philosophy and history. Scientific literacy is chiefly concerned with an understanding of the scientific method, units and methods of measurement, empiricism, a basic understanding of statistics (correlations, qualitative versus quantitative observations, aggregate statistics), and a basic understanding of core scientific fields such as physics, chemistry, biology, ecology, geology, and computation. As a student advances into higher stages of formal education, the curriculum becomes more in-depth. Traditional subjects usually included in the curriculum are natural and formal sciences, although recent movements include social and applied science as well. The mass media face pressures that can prevent them from accurately depicting competing scientific claims in terms of their credibility within the scientific community as a whole. Determining how much weight to give different sides in a scientific debate may require considerable expertise regarding the matter. Few journalists have real scientific knowledge, and even beat reporters who are knowledgeable about certain scientific issues may be ignorant about other scientific issues that they are suddenly asked to cover. Science magazines such as New Scientist, Science & Vie, and Scientific American cater to the needs of a much wider readership and provide a non-technical summary of popular areas of research, including notable discoveries and advances in certain fields of research. The science fiction genre, primarily speculative fiction, can transmit the ideas and methods of science to the general public.
Recent efforts to intensify or develop links between science and non-scientific disciplines, such as literature or poetry, include the Creative Writing Science resource developed through the Royal Literary Fund. === Anti-science attitudes === While the scientific method is broadly accepted in the scientific community, some segments of society reject certain scientific positions or are sceptical about science. Examples are the common notion that COVID-19 is not a major health threat to the US (held by 39% of Americans in August 2021) or the belief that climate change is not a major threat to the US (held by 40% of Americans in late 2019 and early 2020). Psychologists have pointed to four factors driving rejection of scientific results: scientific authorities are sometimes seen as inexpert, untrustworthy, or biased; some marginalised social groups hold anti-science attitudes, in part because these groups have often been exploited in unethical experiments; messages from scientists may contradict deeply held existing beliefs or morals; and the delivery of a scientific message may not be appropriately targeted to a recipient's learning style. Anti-science attitudes often seem to be caused by fear of rejection in social groups. For instance, climate change is perceived as a threat by only 22% of Americans on the right side of the political spectrum, but by 85% on the left. That is, if someone on the left does not consider climate change a threat, they may face contempt and rejection in that social group. Indeed, people may prefer to deny a scientifically accepted fact rather than lose or jeopardise their social status. === Politics === Attitudes towards science are often determined by political opinions and goals. Government, business and advocacy groups have been known to use legal and economic pressure to influence scientific researchers. Many factors can act as facets of the politicisation of science, such as anti-intellectualism, perceived threats to religious beliefs, and fear for business interests. Politicisation of science is usually accomplished when scientific information is presented in a way that emphasises the uncertainty associated with the scientific evidence. Tactics such as shifting the conversation, failing to acknowledge facts, and capitalising on doubt of scientific consensus have been used to gain more attention for views that have been undermined by scientific evidence. Examples of issues that have involved the politicisation of science include the global warming controversy, health effects of pesticides, and health effects of tobacco. == See also == List of scientific occupations List of years in science Logology (science) Science (Wikiversity) Scientific integrity
Wikipedia/Empirical_sciences
Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures. For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study. == Basic idea == In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A. === Arity === An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments (also called infix notation), like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). One way of talking about an algebra, then, is by referring to it as an algebra of a certain type Ω, where Ω is an ordered sequence of natural numbers representing the arity of the operations of the algebra. However, some researchers also allow infinitary operations, such as $\bigwedge_{\alpha \in J} x_{\alpha}$ where J is an infinite index set, which is an operation in the algebraic theory of complete lattices. === Equations === After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y, and z of the set A; over a finite algebra, such an identity can be checked exhaustively, as in the sketch below. == Varieties == A collection of algebraic structures defined by identities is called a variety or equational class. Restricting one's study to varieties rules out: quantification, including universal quantification (∀) except before an equation, and existential quantification (∃); logical connectives other than conjunction (∧); and relations other than equality, in particular inequalities, both a ≠ b and order relations. The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only. Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope. The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses of elements are defined for all non-zero elements in a field, so inversion cannot be added to the type). One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that has finite products. For example, a topological group is just a group in the category of topological spaces.
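Because an identity must hold for all elements, it can be verified over a finite algebra by exhaustive checking. The following Python sketch, an illustration rather than any standard library interface, tests associativity and commutativity for a binary operation given by its operation table.

```python
from itertools import product

def is_associative(A, op):
    """Check x*(y*z) == (x*y)*z for all x, y, z in the carrier set A."""
    return all(op[x][op[y][z]] == op[op[x][y]][z]
               for x, y, z in product(A, repeat=3))

def is_commutative(A, op):
    """Check x*y == y*x for all x, y in the carrier set A."""
    return all(op[x][y] == op[y][x] for x, y in product(A, repeat=2))

# Carrier set {0, 1, 2, 3} with addition modulo 4 as the table op[x][y].
A = range(4)
add_mod4 = [[(x + y) % 4 for y in A] for x in A]

print(is_associative(A, add_mod4))  # True
print(is_commutative(A, add_mod4))  # True
```

The same pattern extends to any identity in any finite signature, which is why finite algebras are a convenient testing ground for equational reasoning.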
=== Examples === Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way, since the usual definitions often involve quantification or inequalities. ==== Groups ==== As an example, consider the definition of a group. Usually a group is defined in terms of a single binary operation ∗, subject to the axioms: Associativity (as in the previous section): x ∗ (y ∗ z) = (x ∗ y) ∗ z; formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z. Identity element: There exists an element e such that for each element x, one has e ∗ x = x = x ∗ e; formally: ∃e ∀x. e∗x=x=x∗e. Inverse element: The identity element is easily seen to be unique, and is usually denoted by e. Then for each x, there exists an element i such that x ∗ i = e = i ∗ x; formally: ∀x ∃i. x∗i=e=i∗x. (Some authors also use the "closure" axiom that x ∗ y belongs to A whenever x and y do, but here this is already implied by calling ∗ a binary operation.) This definition of a group does not immediately fit the point of view of universal algebra, because the axioms of the identity element and inversion are not stated purely in terms of equational laws which hold universally "for all ..." elements, but also involve the existential quantifier "there exists ...". The group axioms can be phrased as universally quantified equations by specifying, in addition to the binary operation ∗, a nullary operation e and a unary operation ~, with ~x usually written as x−1. The axioms become: Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z. Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e∗x=x=x∗e. Inverse element: x ∗ (~x) = e = (~x) ∗ x; formally: ∀x. x∗~x=e=~x∗x. To summarize, the usual definition has: a single binary operation (signature (2)); one equational law (associativity); and two quantified laws (identity and inverse). The universal algebra definition instead has: three operations, one binary, one unary, and one nullary (signature (2, 1, 0)); three equational laws (associativity, identity, and inverse); and no quantified laws (except the outermost universal quantifiers, which are allowed in varieties). A key point is that the extra operations do not add information, but follow uniquely from the usual definition of a group; a short computational check of this appears after the next subsection. Although the usual definition did not uniquely specify the identity element e, an easy exercise shows that it is unique, as is the inverse of each element. The universal algebra point of view is well adapted to category theory. For example, when defining a group object in category theory, where the object in question may not be a set, one must use equational laws (which make sense in general categories), rather than quantified laws (which refer to individual elements). Further, the inverse and identity are specified as morphisms in the category. For example, in a topological group, the inverse must not only exist element-wise, but must give a continuous mapping (a morphism). Some authors also require the identity map to be a closed inclusion (a cofibration). ==== Other examples ==== Most algebraic structures are examples of universal algebras: rings, semigroups, quasigroups, groupoids, magmas, loops, and others. Vector spaces over a fixed field and modules over a fixed ring are universal algebras. These have a binary addition and a family of unary scalar multiplication operators, one for each element of the field or ring. Examples of relational algebras include semilattices, lattices, and Boolean algebras.
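Returning to the group example, here is a small sketch (our own; the helper name derive_identity_and_inverse is hypothetical) of the claim that the nullary and unary operations follow uniquely from the binary one: given only the multiplication table of a finite group, the identity and the inverses can be recovered.

```python
def derive_identity_and_inverse(elements, mul):
    """Recover the nullary and unary group operations from the binary one."""
    # The identity is the unique e with e*x = x = x*e for all x.
    (e,) = [c for c in elements
            if all(mul(c, x) == x == mul(x, c) for x in elements)]
    # Each element then has a unique inverse i with x*i = e = i*x.
    inv = {x: next(i for i in elements if mul(x, i) == e == mul(i, x))
           for x in elements}
    return e, inv

elements = range(5)
e, inv = derive_identity_and_inverse(elements, lambda x, y: (x + y) % 5)
print(e, inv)  # 0 {0: 0, 1: 4, 2: 3, 3: 2, 4: 1}
```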
== Basic constructions == We assume that the type, Ω, has been fixed. Then there are three basic constructions in universal algebra: homomorphic image, subalgebra, and product. A homomorphism between two algebras A and B is a function h : A → B from the set A to the set B such that, for every operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)). (The subscripts on f are sometimes dropped when it is clear from context which algebra the function is from.) For example, if e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well as definitions of certain special kinds of homomorphisms, are listed under Homomorphism. In particular, we can take the homomorphic image of an algebra, h(A). A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic structures is the cartesian product of the sets with the operations defined coordinatewise. == Some basic theorems == Two basic theorems are the isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc., and Birkhoff's HSP theorem, which states that a class of algebras is a variety if and only if it is closed under homomorphic images, subalgebras, and arbitrary direct products. == Motivations and applications == In addition to its unifying approach, universal algebra also gives deep theorems and important examples and counterexamples. It provides a useful framework for those who intend to start the study of new classes of algebras. It can enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has also provided conceptual clarification; as J.D.H. Smith puts it, "What looks messy and complicated in a particular framework may turn out to be simple and obvious in the proper general one." In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these classes, but with universal algebra, they can be proven once and for all for every kind of algebraic system. The 1956 paper by Higgins referenced below has been widely followed up for its framework for a range of particular algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially defined, typical examples being categories and groupoids. This leads on to the subject of higher-dimensional algebra, which can be defined as the study of algebraic theories with partial operations whose domains are defined under geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids. === Constraint satisfaction problem === Universal algebra provides a natural language for the constraint satisfaction problem (CSP). CSP refers to an important class of computational problems where, given a relational algebra A and an existential sentence φ over this algebra, the question is to find out whether φ can be satisfied in A. The algebra A is often fixed, so that CSPA refers to the problem whose instance is only the existential sentence φ. It has been proved that every computational problem can be formulated as CSPA for some algebra A. For example, the n-coloring problem can be stated as the CSP of the algebra ({0, 1, ..., n−1}, ≠), i.e. an algebra with n elements and a single relation, inequality.
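A brute-force sketch of this fixed-template CSP (our own illustration; the function name csp_neq and the edge-list encoding are ours) treats an instance as a list of binary ≠-constraints, i.e. the edges of a graph to be n-colored.

```python
from itertools import product

def csp_neq(n, num_vars, edges):
    """Brute-force CSP over the template ({0, ..., n-1}, !=): n-coloring."""
    for assignment in product(range(n), repeat=num_vars):
        if all(assignment[u] != assignment[v] for u, v in edges):
            return assignment  # a satisfying assignment = a proper coloring
    return None                # the instance is unsatisfiable

# A 4-cycle is 2-colorable; a triangle is not.
print(csp_neq(2, 4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (0, 1, 0, 1)
print(csp_neq(2, 3, [(0, 1), (1, 2), (2, 0)]))          # None
```

Exhaustive search is of course exponential; the point of the algebraic language is precisely to explain when a fixed template admits something better.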
== Generalizations == Universal algebra has also been studied using the techniques of category theory. In this approach, instead of writing a list of operations and equations obeyed by those operations, one can describe an algebraic structure using categories of a special sort, known as Lawvere theories or more generally algebraic theories. Alternatively, one can describe algebraic structures using monads. The two approaches are closely related, each having its own advantages. In particular, every Lawvere theory gives a monad on the category of sets, while any "finitary" monad on the category of sets arises from a Lawvere theory. However, a monad describes algebraic structures within one particular category (for example the category of sets), while algebraic theories describe structure within any of a large class of categories (namely those having finite products). A more recent development in category theory is operad theory – an operad is a set of operations, similar to a universal algebra, but restricted in that equations are only allowed between expressions built from the variables, with no duplication or omission of variables allowed. Thus, rings can be described as the so-called "algebras" of some operad, but not groups, since the law gg−1 = 1 duplicates the variable g on the left side and omits it on the right side. At first this may seem to be a troublesome restriction, but the payoff is that operads have certain advantages: for example, one can hybridize the concepts of ring and vector space to obtain the concept of associative algebra, but one cannot form a similar hybrid of the concepts of group and vector space. Another development is partial algebra, where the operators can be partial functions. Certain partial functions can also be handled by a generalization of Lawvere theories known as "essentially algebraic theories". Another generalization of universal algebra is model theory, which is sometimes described as "universal algebra + logic". == History == In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself. At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." At the time George Boole's algebra of logic made a strong counterpoint to ordinary number algebra, so the term "universal" served to calm strained sensibilities. Whitehead's early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole's algebra of logic. Whitehead wrote in his book: "Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular.
The comparative study necessarily presupposes some previous separate study, comparison being impossible without knowledge." Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred Tarski, Andrzej Mostowski, and their students. In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff's papers, dealing with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly Maltsev in the 1940s went unnoticed because of the war. Tarski's lecture at the 1950 International Congress of Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others. In the late 1950s, Edward Marczewski emphasized the importance of free algebras, leading to the publication of more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski, Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others. Starting with William Lawvere's thesis in 1963, techniques from category theory have become important in universal algebra. == See also == Equational logic Graph algebra Term algebra Clone Universal algebraic geometry Simple algebra (universal algebra) == Footnotes == == References == == External links == Algebra Universalis—a journal dedicated to Universal Algebra.
Wikipedia/Universal_algebra
Differential geometry is a mathematical discipline that studies the geometry of smooth shapes and smooth spaces, otherwise known as smooth manifolds. It uses the techniques of single variable calculus, vector calculus, linear algebra and multilinear algebra. The field has its origins in the study of spherical geometry as far back as antiquity. It also relates to astronomy, the geodesy of the Earth, and later the study of hyperbolic geometry by Lobachevsky. The simplest examples of smooth spaces are the plane and space curves and surfaces in the three-dimensional Euclidean space, and the study of these shapes formed the basis for development of modern differential geometry during the 18th and 19th centuries. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. A geometric structure is one which defines some notion of size, distance, shape, volume, or other rigidifying structure. For example, in Riemannian geometry distances and angles are specified, in symplectic geometry volumes may be computed, in conformal geometry only angles are specified, and in gauge theory certain fields are given over the space. Differential geometry is closely related to, and is sometimes taken to include, differential topology, which concerns itself with properties of differentiable manifolds that do not rely on any additional geometric structure (see that article for more discussion on the distinction between the two subjects). Differential geometry is also related to the geometric aspects of the theory of differential equations, otherwise known as geometric analysis. Differential geometry finds applications throughout mathematics and the natural sciences. Most prominently the language of differential geometry was used by Albert Einstein in his theory of general relativity, and subsequently by physicists in the development of quantum field theory and the standard model of particle physics. Outside of physics, differential geometry finds applications in chemistry, economics, engineering, control theory, computer graphics and computer vision, and recently in machine learning. == History and development == The history and development of differential geometry as a subject begins at least as far back as classical antiquity. It is intimately linked to the development of geometry more generally, of the notion of space and shape, and of topology, especially the study of manifolds. In this section we focus primarily on the history of the application of infinitesimal methods to geometry, and later to the ideas of tangent spaces, and eventually the development of the modern formalism of the subject in terms of tensors and tensor fields. === Classical antiquity until the Renaissance (300 BC – 1600 AD) === The study of differential geometry, or at least the study of the geometry of smooth shapes, can be traced back at least to classical antiquity. In particular, much was known about the geometry of the Earth, a spherical geometry, in the time of the ancient Greek mathematicians. Famously, Eratosthenes calculated the circumference of the Earth around 200 BC, and around 150 AD Ptolemy in his Geography introduced the stereographic projection for the purposes of mapping the shape of the Earth. Implicitly throughout this time principles that form the foundation of differential geometry and calculus were used in geodesy, although in a much simplified form. 
Namely, as far back as Euclid's Elements it was understood that a straight line could be defined by its property of providing the shortest distance between two points, and applying this same principle to the surface of the Earth leads to the conclusion that great circles, which are only locally similar to straight lines in a flat plane, provide the shortest path between two points on the Earth's surface. Indeed, the measurements of distance along such geodesic paths by Eratosthenes and others can be considered a rudimentary measure of arclength of curves, a concept which did not see a rigorous definition in terms of calculus until the 1600s. Around this time there were only minimal overt applications of the theory of infinitesimals to the study of geometry, a precursor to the modern calculus-based study of the subject. In Euclid's Elements the notion of tangency of a line to a circle is discussed, and Archimedes applied the method of exhaustion to compute the areas of smooth shapes such as the circle, and the volumes of smooth three-dimensional solids such as the sphere, cones, and cylinders. There was little development in the theory of differential geometry between antiquity and the beginning of the Renaissance. Before the development of calculus by Newton and Leibniz, the most significant development in the understanding of differential geometry came from Gerardus Mercator's development of the Mercator projection as a way of mapping the Earth. Mercator had an understanding of the advantages and pitfalls of his map design, and in particular was aware of the conformal nature of his projection, as well as the difference between praga, the lines of shortest distance on the Earth, and the directio, the straight line paths on his map. Mercator noted that the praga were oblique curvatur in this projection. This fact reflects the lack of a metric-preserving map of the Earth's surface onto a flat plane, a consequence of the later Theorema Egregium of Gauss. === After calculus (1600–1800) === The first systematic or rigorous treatment of geometry using the theory of infinitesimals and notions from calculus began around the 1600s when calculus was first developed by Gottfried Leibniz and Isaac Newton. At this time, the recent work of René Descartes introducing analytic coordinates to geometry allowed geometric shapes of increasing complexity to be described rigorously. In particular around this time Pierre de Fermat, Newton, and Leibniz began the study of plane curves and the investigation of concepts such as points of inflection and circles of osculation, which aid in the measurement of curvature. Indeed, already in his first paper on the foundations of calculus, Leibniz notes that the infinitesimal condition d²y = 0 indicates the existence of an inflection point. Shortly after this time the Bernoulli brothers, Jacob and Johann, made important early contributions to the use of infinitesimals to study geometry. In lectures by Johann Bernoulli at the time, later collated by L'Hôpital into the first textbook on differential calculus, the tangents to plane curves of various types are computed using the condition dy = 0, and similarly points of inflection are calculated. At this same time the orthogonality between the osculating circles of a plane curve and the tangent directions is realised, and the first analytical formula for the radius of an osculating circle, essentially the first analytical formula for the notion of curvature, is written down.
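In modern notation these infinitesimal conditions are routine to compute. The sketch below (our own example; the curve y = x³ − 3x and all names are ours) finds the horizontal tangents via dy = 0, the inflection point via d²y = 0, and evaluates the standard curvature formula for a graph, whose reciprocal is the radius of the osculating circle.

```python
import sympy as sp

x = sp.symbols("x")
y = x**3 - 3*x  # an example plane curve

# dy = 0: points with horizontal tangent lines.
print(sp.solve(sp.diff(y, x), x))      # [-1, 1]

# d2y = 0: Leibniz's condition for an inflection point.
print(sp.solve(sp.diff(y, x, 2), x))   # [0]

# Curvature of the graph, kappa = |y''| / (1 + y'^2)^(3/2); the radius
# of the osculating circle is 1/kappa.
kappa = sp.Abs(sp.diff(y, x, 2)) / (1 + sp.diff(y, x)**2)**sp.Rational(3, 2)
print(kappa.subs(x, 2))                # exact value, 12/82**(3/2)
```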
In the wake of the development of analytic geometry and plane curves, Alexis Clairaut began the study of space curves at just the age of 16. In his book Clairaut introduced the notion of tangent and subtangent directions to space curves in relation to the directions which lie along a surface on which the space curve lies. Thus Clairaut demonstrated an implicit understanding of the tangent space of a surface and studied this idea using calculus for the first time. Importantly Clairaut introduced the terminology of curvature and double curvature, essentially the notion of principal curvatures later studied by Gauss and others. Around this same time, Leonhard Euler, originally a student of Johann Bernoulli, provided many significant contributions not just to the development of geometry, but to mathematics more broadly. In regard to differential geometry, Euler studied the notion of a geodesic on a surface, deriving the first analytical geodesic equation, and later introduced the first set of intrinsic coordinate systems on a surface, beginning the theory of intrinsic geometry upon which modern geometric ideas are based. Around this time Euler's study of mechanics in the Mechanica led to the realization that a mass traveling along a surface not under the effect of any force would traverse a geodesic path, an early precursor to the important foundational ideas of Einstein's general relativity, and also to the Euler–Lagrange equations and the first theory of the calculus of variations, which in modern differential geometry underpins many techniques in symplectic geometry and geometric analysis. This theory was used by Lagrange, a co-developer of the calculus of variations, to derive the first differential equation describing a minimal surface in terms of the Euler–Lagrange equation. In 1760 Euler proved a theorem expressing the curvature of a space curve on a surface in terms of the principal curvatures, known as Euler's theorem. Later in the 1700s, the new French school led by Gaspard Monge began to make contributions to differential geometry. Monge made important contributions to the theory of plane curves and surfaces, and studied surfaces of revolution and envelopes of plane curves and space curves. Several students of Monge made contributions to this same theory, and for example Charles Dupin provided a new interpretation of Euler's theorem in terms of the principal curvatures, which is the modern form of the equation. === Intrinsic geometry and non-Euclidean geometry (1800–1900) === The field of differential geometry became an area of study considered in its own right, distinct from the more broad idea of analytic geometry, in the 1800s, primarily through the foundational work of Carl Friedrich Gauss and Bernhard Riemann, and also in the important contributions of Nikolai Lobachevsky on hyperbolic geometry and non-Euclidean geometry and throughout the same period the development of projective geometry. Dubbed the single most important work in the history of differential geometry, in 1827 Gauss produced the Disquisitiones generales circa superficies curvas detailing the general theory of curved surfaces. In this work and his subsequent papers and unpublished notes on the theory of surfaces, Gauss has been dubbed the inventor of non-Euclidean geometry and the inventor of intrinsic differential geometry.
In his fundamental paper Gauss introduced the Gauss map, Gaussian curvature, first and second fundamental forms, proved the Theorema Egregium showing the intrinsic nature of the Gaussian curvature, and studied geodesics, computing the area of a geodesic triangle in various non-Euclidean geometries on surfaces. At this time Gauss was already of the opinion that the standard paradigm of Euclidean geometry should be discarded, and was in possession of private manuscripts on non-Euclidean geometry which informed his study of geodesic triangles. Around this same time János Bolyai and Lobachevsky independently discovered hyperbolic geometry and thus demonstrated the existence of consistent geometries outside Euclid's paradigm. Concrete models of hyperbolic geometry were produced by Eugenio Beltrami later in the 1860s, and Felix Klein coined the term non-Euclidean geometry in 1871, and through the Erlangen program put Euclidean and non-Euclidean geometries on the same footing. Implicitly, the spherical geometry of the Earth that had been studied since antiquity was a non-Euclidean geometry, an elliptic geometry. The development of intrinsic differential geometry in the language of Gauss was spurred on by his student, Bernhard Riemann in his Habilitationsschrift, On the hypotheses which lie at the foundation of geometry. In this work Riemann introduced the notion of a Riemannian metric and the Riemannian curvature tensor for the first time, and began the systematic study of differential geometry in higher dimensions. This intrinsic point of view in terms of the Riemannian metric, denoted by ds² by Riemann, was the development of an idea of Gauss's about the linear element ds of a surface. At this time Riemann began to introduce the systematic use of linear algebra and multilinear algebra into the subject, making great use of the theory of quadratic forms in his investigation of metrics and curvature. At this time Riemann did not yet develop the modern notion of a manifold, as even the notion of a topological space had not been encountered, but he did propose that it might be possible to investigate or measure the properties of the metric of spacetime through the analysis of masses within spacetime, linking with the earlier observation of Euler that masses under the effect of no forces would travel along geodesics on surfaces, and predicting Einstein's fundamental observation of the equivalence principle a full 60 years before it appeared in the scientific literature. In the wake of Riemann's new description, the focus of techniques used to study differential geometry shifted from the ad hoc and extrinsic methods of the study of curves and surfaces to a more systematic approach in terms of tensor calculus and Klein's Erlangen program, and progress increased in the field. The notion of groups of transformations was developed by Sophus Lie and Jean Gaston Darboux, leading to important results in the theory of Lie groups and symplectic geometry. The notion of differential calculus on curved spaces was studied by Elwin Christoffel, who introduced the Christoffel symbols which describe the covariant derivative in 1868, and by others including Eugenio Beltrami who studied many analytic questions on manifolds.
In 1899 Luigi Bianchi produced his Lectures on differential geometry which studied differential geometry from Riemann's perspective, and a year later Tullio Levi-Civita and Gregorio Ricci-Curbastro produced their textbook systematically developing the theory of absolute differential calculus and tensor calculus. It was in this language that differential geometry was used by Einstein in the development of general relativity and pseudo-Riemannian geometry. === Modern differential geometry (1900–2000) === The subject of modern differential geometry emerged from the early 1900s in response to the foundational contributions of many mathematicians, including importantly the work of Henri Poincaré on the foundations of topology. At the start of the 1900s there was a major movement within mathematics to formalise the foundational aspects of the subject to avoid crises of rigour and accuracy, known as Hilbert's program. As part of this broader movement, the notion of a topological space was distilled by Felix Hausdorff in 1914, and by 1942 there were many different notions of manifold of a combinatorial and differential-geometric nature. Interest in the subject was also focused by the emergence of Einstein's theory of general relativity and the importance of the Einstein Field equations. Einstein's theory popularised the tensor calculus of Ricci and Levi-Civita and introduced the notation g for a Riemannian metric, and Γ for the Christoffel symbols, both coming from G in Gravitation. Élie Cartan helped reformulate the foundations of the differential geometry of smooth manifolds in terms of exterior calculus and the theory of moving frames, leading in the world of physics to Einstein–Cartan theory. Following this early development, many mathematicians contributed to the development of the modern theory, including Jean-Louis Koszul who introduced connections on vector bundles, Shiing-Shen Chern who introduced characteristic classes to the subject and began the study of complex manifolds, Sir William Vallance Douglas Hodge and Georges de Rham who expanded understanding of differential forms, Charles Ehresmann who introduced the theory of fibre bundles and Ehresmann connections, and others. Of particular importance was Hermann Weyl who made important contributions to the foundations of general relativity, introduced the Weyl tensor providing insight into conformal geometry, and first defined the notion of a gauge leading to the development of gauge theory in physics and mathematics. In the middle and late 20th century differential geometry as a subject expanded in scope and developed links to other areas of mathematics and physics. The development of gauge theory and Yang–Mills theory in physics brought bundles and connections into focus, leading to developments in gauge theory. Many analytical results were investigated including the proof of the Atiyah–Singer index theorem. The development of complex geometry was spurred on by parallel results in algebraic geometry, and results in the geometry and global analysis of complex manifolds were proven by Shing-Tung Yau and others. In the latter half of the 20th century new analytic techniques were developed in regards to curvature flows such as the Ricci flow, which culminated in Grigori Perelman's proof of the Poincaré conjecture. During this same period primarily due to the influence of Michael Atiyah, new links between theoretical physics and differential geometry were formed.
Techniques from the study of the Yang–Mills equations and gauge theory were used by mathematicians to develop new invariants of smooth manifolds. Physicists such as Edward Witten, the only physicist to be awarded a Fields medal, made new impacts in mathematics by using topological quantum field theory and string theory to make predictions and provide frameworks for new rigorous mathematics, which has resulted for example in the conjectural mirror symmetry and the Seiberg–Witten invariants. == Branches == === Riemannian geometry === Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric. This is a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Riemannian geometry generalizes Euclidean geometry to spaces that are not necessarily flat, though they still resemble Euclidean space at each point infinitesimally, i.e. in the first order of approximation. Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended to the notion of a covariant derivative of a tensor. Many concepts of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. However, the Theorema Egregium of Carl Friedrich Gauss showed that for surfaces, the existence of a local isometry imposes that the Gaussian curvatures at the corresponding points must be the same. In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the "ordinary" plane and space considered in Euclidean and non-Euclidean geometry. === Pseudo-Riemannian geometry === Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. === Finsler geometry === Finsler geometry has Finsler manifolds as the main object of study. This is a differential manifold with a Finsler metric, that is, a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that: F(x, my) = mF(x, y) for all (x, y) in TM and all m ≥ 0; F is infinitely differentiable on TM ∖ {0}; and the vertical Hessian of F² is positive definite.
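These three conditions are easy to verify symbolically in the simplest case. The sketch below (our own example, not from the source) checks them for the Euclidean norm on R², the Finsler structure induced by the flat Riemannian metric, illustrating the remark that Riemannian manifolds are special cases of Finsler manifolds.

```python
import sympy as sp

y1, y2 = sp.symbols("y1 y2", real=True)
m = sp.symbols("m", positive=True)

# The Euclidean norm on R^2 as a candidate Finsler structure F(x, y) = |y|
# (it does not depend on the base point x in this flat example).
F = sp.sqrt(y1**2 + y2**2)

# Homogeneity: F(x, m*y) = m*F(x, y) for m >= 0.
print(sp.simplify(F.subs({y1: m*y1, y2: m*y2}) - m*F))  # 0

# The vertical Hessian of F**2 is the flat metric itself: positive definite.
print(sp.hessian(F**2, (y1, y2)) / 2)  # Matrix([[1, 0], [0, 1]])
```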
=== Symplectic geometry === Symplectic geometry is the study of symplectic manifolds. An almost symplectic manifold is a differentiable manifold equipped with a smoothly varying non-degenerate skew-symmetric bilinear form on each tangent space, i.e., a nondegenerate 2-form ω, called the symplectic form. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed: dω = 0. A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form and a symplectomorphism is an area-preserving diffeomorphism. The phase space of a mechanical system is a symplectic manifold, and symplectic manifolds made an implicit appearance already in the work of Joseph Louis Lagrange on analytical mechanics and later in Carl Gustav Jacobi's and William Rowan Hamilton's formulations of classical mechanics. By contrast with Riemannian geometry, where the curvature provides a local invariant of Riemannian manifolds, Darboux's theorem states that all symplectic manifolds are locally isomorphic. The only invariants of a symplectic manifold are global in nature, and topological aspects play a prominent role in symplectic geometry. The first result in symplectic topology is probably the Poincaré–Birkhoff theorem, conjectured by Henri Poincaré and then proved by G.D. Birkhoff in 1912. It claims that if an area preserving map of an annulus twists each boundary component in opposite directions, then the map has at least two fixed points. === Contact geometry === Contact geometry deals with certain manifolds of odd dimension. It is close to symplectic geometry and, like the latter, it originated in questions of classical mechanics. A contact structure on a (2n + 1)-dimensional manifold M is given by a smooth hyperplane field H in the tangent bundle that is as far as possible from being associated with the level sets of a differentiable function on M (the technical term is "completely nonintegrable tangent hyperplane distribution"). Near each point p, a hyperplane distribution is determined by a nowhere vanishing 1-form α, which is unique up to multiplication by a nowhere vanishing function: Hp = ker αp ⊂ TpM. A local 1-form on M is a contact form if the restriction of its exterior derivative to H is a non-degenerate two-form and thus induces a symplectic structure on Hp at each point. If the distribution H can be defined by a global one-form α, then this form is contact if and only if the top-dimensional form α ∧ (dα)ⁿ is a volume form on M, i.e. does not vanish anywhere. A contact analogue of the Darboux theorem holds: all contact structures on an odd-dimensional manifold are locally isomorphic and can be brought to a certain local normal form by a suitable choice of the coordinate system.
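In dimension 2 the symplectomorphism condition can be checked directly. The following sketch (our own linear toy example; all names are ours) verifies that the time-t flow of a harmonic oscillator — a rotation of the phase plane — preserves the standard symplectic form on R², and hence area, matching the dimension-2 remark above.

```python
import numpy as np

Om = np.array([[0.0, 1.0], [-1.0, 0.0]])   # matrix of the symplectic form

t = 0.7
R = np.array([[np.cos(t), np.sin(t)],       # flow map of the oscillator
              [-np.sin(t), np.cos(t)]])

print(np.allclose(R.T @ Om @ R, Om))        # True: the form is preserved
print(np.isclose(np.linalg.det(R), 1.0))    # True: area is preserved
```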
=== Complex and Kähler geometry === Complex differential geometry is the study of complex manifolds. An almost complex manifold is a real manifold M, endowed with a tensor of type (1, 1), i.e. a vector bundle endomorphism (called an almost complex structure) J : TM → TM, such that J² = −1. It follows from this definition that an almost complex manifold is even-dimensional. An almost complex manifold is called complex if NJ = 0, where NJ is a tensor of type (2, 1) related to J, called the Nijenhuis tensor (or sometimes the torsion). An almost complex manifold is complex if and only if it admits a holomorphic coordinate atlas. An almost Hermitian structure is given by an almost complex structure J, along with a Riemannian metric g, satisfying the compatibility condition g(JX, JY) = g(X, Y). An almost Hermitian structure defines naturally a differential two-form ωJ,g(X, Y) := g(JX, Y). The following two conditions are equivalent: (1) NJ = 0 and dω = 0; (2) ∇J = 0, where ∇ is the Levi-Civita connection of g. In this case, (J, g) is called a Kähler structure, and a Kähler manifold is a manifold endowed with a Kähler structure. In particular, a Kähler manifold is both a complex and a symplectic manifold. A large class of Kähler manifolds (the class of Hodge manifolds) is given by all the smooth complex projective varieties.
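In the flat, linear model these compatibility conditions can be checked directly. The sketch below (our own toy computation on R², not from the source) verifies J² = −1, the compatibility g(JX, JY) = g(X, Y), and the skew-symmetry of the associated two-form ω(X, Y) = g(JX, Y).

```python
import sympy as sp

J = sp.Matrix([[0, -1], [1, 0]])  # rotation by 90 degrees: J**2 = -1
g = sp.eye(2)                     # the flat metric

assert J**2 == -sp.eye(2)                       # almost complex structure

X = sp.Matrix(sp.symbols("x1 x2"))
Y = sp.Matrix(sp.symbols("y1 y2"))
compat = sp.expand((J*X).dot(g*(J*Y)) - X.dot(g*Y))
print(compat)                                   # 0: g(JX, JY) = g(X, Y)

omega = J.T * g                                 # matrix of w(X, Y) = g(JX, Y)
print(omega, omega.T == -omega)                 # skew-symmetric: True
```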
=== CR geometry === CR geometry is the study of the intrinsic geometry of boundaries of domains in complex manifolds. === Conformal geometry === Conformal geometry is the study of the set of angle-preserving (conformal) transformations on a space. === Differential topology === Differential topology is the study of global geometric invariants without a metric or symplectic form. Differential topology starts from the natural operations such as the Lie derivative of natural vector bundles and the de Rham differential of forms. Besides Lie algebroids, Courant algebroids also start playing a more important role. === Lie groups === A Lie group is a group in the category of smooth manifolds. Besides its algebraic properties, it therefore also enjoys differential geometric properties. The most obvious construction is that of a Lie algebra, which is the tangent space at the unit endowed with the Lie bracket between left-invariant vector fields. Besides the structure theory there is also the wide field of representation theory. === Geometric analysis === Geometric analysis is a mathematical discipline where tools from differential equations, especially elliptic partial differential equations, are used to establish new results in differential geometry and differential topology. === Gauge theory === Gauge theory is the study of connections on vector bundles and principal bundles, and arises out of problems in mathematical physics and physical gauge theories which underpin the standard model of particle physics. Gauge theory is concerned with the study of differential equations for connections on bundles, and the resulting geometric moduli spaces of solutions to these equations as well as the invariants that may be derived from them. These equations often arise as the Euler–Lagrange equations describing the equations of motion of certain physical systems in quantum field theory, and so their study is of considerable interest in physics. == Bundles and connections == The apparatus of vector bundles, principal bundles, and connections on bundles plays an extraordinarily important role in modern differential geometry. A smooth manifold always carries a natural vector bundle, the tangent bundle. Loosely speaking, this structure by itself is sufficient only for developing analysis on the manifold, while doing geometry requires, in addition, some way to relate the tangent spaces at different points, i.e. a notion of parallel transport. An important example is provided by affine connections. For a surface in R3, tangent planes at different points can be identified using a natural path-wise parallelism induced by the ambient Euclidean space, which has a well-known standard definition of metric and parallelism. In Riemannian geometry, the Levi-Civita connection serves a similar purpose. More generally, differential geometers consider spaces with a vector bundle and an arbitrary affine connection which is not defined in terms of a metric. In physics, the manifold may be spacetime and the bundles and connections are related to various physical fields. == Intrinsic versus extrinsic == From the beginning and through the middle of the 19th century, differential geometry was studied from the extrinsic point of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in an ambient space of three dimensions). The simplest results are those in the differential geometry of curves and differential geometry of surfaces. Starting with the work of Riemann, the intrinsic point of view was developed, in which one cannot speak of moving "outside" the geometric object because it is considered to be given in a free-standing way. The fundamental result here is Gauss's Theorema Egregium, to the effect that Gaussian curvature is an intrinsic invariant. The intrinsic point of view is more flexible. For example, it is useful in relativity where space-time cannot naturally be taken as extrinsic. However, there is a price to pay in technical complexity: the intrinsic definitions of curvature and connections become much less visually intuitive. These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one. (See the Nash embedding theorem.) In the formalism of geometric calculus both extrinsic and intrinsic geometry of a manifold can be characterized by a single bivector-valued one-form called the shape operator. == Applications == Below are some examples of how differential geometry is applied to other fields of science and mathematics. In physics, differential geometry has many applications, including: Differential geometry is the language in which Albert Einstein's general theory of relativity is expressed. According to the theory, the universe is a smooth manifold equipped with a pseudo-Riemannian metric, which describes the curvature of spacetime. Understanding this curvature is essential for the positioning of satellites into orbit around the Earth. Differential geometry is also indispensable in the study of gravitational lensing and black holes. Differential forms are used in the study of electromagnetism. Differential geometry has applications to both Lagrangian mechanics and Hamiltonian mechanics. Symplectic manifolds in particular can be used to study Hamiltonian systems. Riemannian geometry and contact geometry have been used to construct the formalism of geometrothermodynamics which has found applications in classical equilibrium thermodynamics. In chemistry and biophysics when modelling cell membrane structure under varying pressure. In economics, differential geometry has applications to the field of econometrics.
Geometric modeling (including computer graphics) and computer-aided geometric design draw on ideas from differential geometry. In engineering, differential geometry can be applied to solve problems in digital signal processing. In control theory, differential geometry can be used to analyze nonlinear controllers, particularly geometric control In probability, statistics, and information theory, one can interpret various structures as Riemannian manifolds, which yields the field of information geometry, particularly via the Fisher information metric. In structural geology, differential geometry is used to analyze and describe geologic structures. In computer vision, differential geometry is used to analyze shapes. In image processing, differential geometry is used to process and analyse data on non-flat surfaces. Grigori Perelman's proof of the Poincaré conjecture using the techniques of Ricci flows demonstrated the power of the differential-geometric approach to questions in topology and it highlighted the important role played by its analytic methods. In wireless communications, Grassmannian manifolds are used for beamforming techniques in multiple antenna systems. In geodesy, for calculating distances and angles on the mean sea level surface of the Earth, modelled by an ellipsoid of revolution. In neuroimaging and brain-computer interface, symmetric positive definite manifolds are used to model functional, structural, or electrophysiological connectivity matrices. == See also == == References == == Further reading == Ethan D. Bloch (27 June 2011). A First Course in Geometric Topology and Differential Geometry. Boston: Springer Science & Business Media. ISBN 978-0-8176-8122-7. OCLC 811474509. Burke, William L. (1997). Applied differential geometry. Cambridge University Press. ISBN 0-521-26929-6. OCLC 53249854. do Carmo, Manfredo Perdigão (1976). Differential geometry of curves and surfaces. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 978-0-13-212589-5. OCLC 1529515. Frankel, Theodore (2004). The geometry of physics : an introduction (2nd ed.). New York: Cambridge University Press. ISBN 978-0-521-53927-2. OCLC 51855212. Elsa Abbena; Simon Salamon; Alfred Gray (2017). Modern Differential Geometry of Curves and Surfaces with Mathematica (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN 978-1-351-99220-6. OCLC 1048919510. Kreyszig, Erwin (1991). Differential Geometry. New York: Dover Publications. ISBN 978-0-486-66721-8. OCLC 23384584. Kühnel, Wolfgang (2002). Differential Geometry: Curves – Surfaces – Manifolds (2nd ed.). Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-3988-1. OCLC 61500086. McCleary, John (1994). Geometry from a differentiable viewpoint. Cambridge University Press. ISBN 0-521-13311-4. OCLC 915912917. Spivak, Michael (1999). A Comprehensive Introduction to Differential Geometry (5 Volumes) (3rd ed.). Publish or Perish. ISBN 0-914098-72-1. OCLC 179192286. ter Haar Romeny, Bart M. (2003). Front-end vision and multi-scale image analysis : multi-scale computer vision theory and applications, written in Mathematica. Dordrecht: Kluwer Academic. ISBN 978-1-4020-1507-6. OCLC 52806205. == External links == "Differential geometry", Encyclopedia of Mathematics, EMS Press, 2001 [1994] B. Conrad. 
Differential Geometry handouts, Stanford University Michael Murray's online differential geometry course, 1996 Archived 2013-08-01 at the Wayback Machine A Modern Course on Curves and Surfaces, Richard S Palais, 2003 Archived 2019-04-09 at the Wayback Machine Richard Palais's 3DXM Surfaces Gallery Archived 2019-04-09 at the Wayback Machine Balázs Csikós's Notes on Differential Geometry Archived 2009-06-05 at the Wayback Machine N. J. Hicks, Notes on Differential Geometry, Van Nostrand. MIT OpenCourseWare: Differential Geometry, Fall 2008
Wikipedia/Differential_geometry
In mathematics, a subalgebra is a subset of an algebra, closed under all its operations, and carrying the induced operations. "Algebra", when referring to a structure, often means a vector space or module equipped with an additional bilinear operation. Algebras in universal algebra are far more general: they are a common generalisation of all algebraic structures. "Subalgebra" can refer to either case. == Subalgebras for algebras over a ring or field == A subalgebra of an algebra over a commutative ring or field is a vector subspace which is closed under the multiplication of vectors. The restriction of the algebra multiplication makes it an algebra over the same ring or field. This notion also applies to most specializations, where the multiplication must satisfy additional properties, e.g. to associative algebras or to Lie algebras. Only for unital algebras is there a stronger notion, of unital subalgebra, for which it is also required that the unit of the subalgebra be the unit of the bigger algebra. === Example === The 2×2-matrices over the reals R, with matrix multiplication, form a four-dimensional unital algebra M(2,R). The 2×2-matrices for which all entries are zero, except for the first one on the diagonal, form a subalgebra. It is also unital, but it is not a unital subalgebra. The identity element of M(2,R) is the identity matrix I, so the unital subalgebras contain the line of diagonal matrices {xI : x in R}. For two-dimensional subalgebras, consider a trace-zero matrix E with first row (a, c) and second row (b, −a). Squaring gives E² = (a² + bc)I = pI, where p = a² + bc. When p = 0, then E is nilpotent, and the subalgebra {xI + yE : x, y in R} is a copy of the dual number plane. When p is negative, take q = 1/√−p, so that (qE)² = −I, and the subalgebra {xI + y(qE) : x, y in R} is a copy of the complex plane. Finally, when p is positive, take q = 1/√p, so that (qE)² = I, and the subalgebra {xI + y(qE) : x, y in R} is a copy of the plane of split-complex numbers. By the law of trichotomy, these are the only planar subalgebras of M(2,R). In 1914 L. E. Dickson noted the "Equivalence of complex quaternion and complex matric algebras", meaning M(2,C), the 2×2 complex matrices. But he notes also, "the real quaternion and real matric sub-algebras are not [isomorphic]." The difference is evident as there are three isomorphism classes of planar subalgebras of M(2,R), while real quaternions have only one isomorphism class of planar subalgebras, as they are all isomorphic to C.
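The computation behind this classification is a short symbolic check. The sketch below (our own rendering of the calculation above) verifies E² = pI for a trace-zero matrix and exhibits the p < 0 case, where span{I, E} is a copy of the complex plane.

```python
import sympy as sp

a, b, c = sp.symbols("a b c")
E = sp.Matrix([[a, c], [b, -a]])  # a trace-zero 2x2 matrix

# E squared is a scalar multiple of the identity: (a**2 + b*c) * I.
print(sp.simplify(E**2))          # Matrix([[a**2 + b*c, 0], [0, a**2 + b*c]])

# The p < 0 case: with a = 0, b = 1, c = -1 we get p = -1, so E plays
# the role of the imaginary unit, and x*I + y*E multiplies like x + y*i.
E_c = sp.Matrix([[0, -1], [1, 0]])
print(E_c**2)                     # Matrix([[-1, 0], [0, -1]])
```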
== Subalgebras in universal algebra == In universal algebra, a subalgebra of an algebra A is a subset S of A that also has the structure of an algebra of the same type when the algebraic operations are restricted to S. If the axioms of a kind of algebraic structure are described by equational laws, as is typically the case in universal algebra, then the only thing that needs to be checked is that S is closed under the operations. Some authors consider algebras with partial functions. There are various ways of defining subalgebras for these. Another generalization of algebras is to allow relations. These more general algebras are usually called structures, and they are studied in model theory and in theoretical computer science. For structures with relations there are notions of weak and of induced substructures. === Example === For example, the standard signature for groups in universal algebra is (•, −1, 1). (Inversion and unit are needed to get the right notions of homomorphism and so that the group laws can be expressed as equations.) Therefore, a subgroup of a group G is a subset S of G such that: the identity e of G belongs to S (so that S is closed under the identity constant operation); whenever x belongs to S, so does x−1 (so that S is closed under the inverse operation); whenever x and y belong to S, so does x • y (so that S is closed under the group's multiplication operation). == References == Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64243-5 Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
Wikipedia/Subalgebra
In mathematics, a linear equation is an equation that may be put in the form a1x1 + … + anxn + b = 0, where x1, …, xn are the variables (or unknowns), and b, a1, …, an are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a1, …, an are required to not all be zero. Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken. The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true. In the case of just one variable, there is exactly one solution (provided that a1 ≠ 0). Often, the term linear equation refers implicitly to this particular case, in which the variable is sensibly called the unknown. In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations. == One variable == A linear equation in one variable x can be written as ax + b = 0, with a ≠ 0. The solution is x = −b/a.
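A one-line computational rendering of this case (our own sketch; the function name solve_linear is ours):

```python
def solve_linear(a: float, b: float) -> float:
    """Solve a*x + b = 0 for x, assuming a != 0 as required above."""
    if a == 0:
        raise ValueError("a must be nonzero for a linear equation in x")
    return -b / a

print(solve_linear(2.0, -6.0))  # 3.0
```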
== Two variables == A linear equation in two variables x and y can be written as ax + by + c = 0, where a and b are not both 0. If a and b are real numbers, it has infinitely many solutions. === Linear function === If b ≠ 0, the equation ax + by + c = 0 is a linear equation in the single variable y for every value of x. It therefore has a unique solution for y, which is given by y = −(a/b)x − c/b. This defines a function. The graph of this function is a line with slope −a/b and y-intercept −c/b. The functions whose graph is a line are generally called linear functions in the context of calculus. However, in linear algebra, a linear function is a function that maps a sum to the sum of the images of the summands. So, for this definition, the above function is linear only when c = 0, that is when the line passes through the origin. To avoid confusion, the functions whose graph is an arbitrary line are often called affine functions, and the linear functions such that c = 0 are often called linear maps. === Geometric interpretation === Each solution (x, y) of a linear equation ax + by + c = 0 may be viewed as the Cartesian coordinates of a point in the Euclidean plane. With this interpretation, all solutions of the equation form a line, provided that a and b are not both zero. Conversely, every line is the set of all solutions of a linear equation. The phrase "linear equation" takes its origin in this correspondence between lines and equations: a linear equation in two variables is an equation whose solutions form a line. If b ≠ 0, the line is the graph of the function of x that has been defined in the preceding section. If b = 0, the line is a vertical line (that is a line parallel to the y-axis) of equation x = −c/a, which is not the graph of a function of x. Similarly, if a ≠ 0, the line is the graph of a function of y, and, if a = 0, one has a horizontal line of equation y = −c/b. === Equation of a line === There are various ways of defining a line. In the following subsections, a linear equation of the line is given in each case. ==== Slope–intercept form or Gradient-intercept form ==== A non-vertical line can be defined by its slope m, and its y-intercept y0 (the y coordinate of its intersection with the y-axis). In this case, its linear equation can be written y = mx + y0. If, moreover, the line is not horizontal, it can be defined by its slope and its x-intercept x0. In this case, its equation can be written y = m(x − x0), or, equivalently, y = mx − mx0. These forms rely on the habit of considering a nonvertical line as the graph of a function. For a line given by an equation ax + by + c = 0, these forms can be easily deduced from the relations m = −a/b, x0 = −c/a, y0 = −c/b. ==== Point–slope form or Point-gradient form ==== A non-vertical line can be defined by its slope m, and the coordinates x1, y1 of any point of the line. In this case, a linear equation of the line is y = y1 + m(x − x1), or y = mx + y1 − mx1. This equation can also be written y − y1 = m(x − x1) to emphasize that the slope of a line can be computed from the coordinates of any two points. ==== Intercept form ==== A line that is not parallel to an axis and does not pass through the origin cuts the axes at two different points. The intercept values x0 and y0 of these two points are nonzero, and an equation of the line is x/x0 + y/y0 = 1. (It is easy to verify that the line defined by this equation has x0 and y0 as intercept values). ==== Two-point form ==== Given two different points (x1, y1) and (x2, y2), there is exactly one line that passes through them. There are several ways to write a linear equation of this line. If x1 ≠ x2, the slope of the line is (y2 − y1)/(x2 − x1). Thus, a point-slope form is y − y1 = ((y2 − y1)/(x2 − x1))(x − x1). By clearing denominators, one gets the equation (x2 − x1)(y − y1) − (y2 − y1)(x − x1) = 0, which is valid also when x1 = x2 (to verify this, it suffices to verify that the two given points satisfy the equation). This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms: (y1 − y2)x + (x2 − x1)y + (x1y2 − x2y1) = 0 (exchanging the two points changes the sign of the left-hand side of the equation).
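The symmetric form translates directly into code. The sketch below (our own; line_through is a hypothetical helper) returns the coefficients (a, b, c) of the symmetric form and cross-checks them against the determinant form given in the next subsection.

```python
import sympy as sp

def line_through(p1, p2):
    """Coefficients (a, b, c) of a*x + b*y + c = 0 through two points."""
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1*y2 - x2*y1)  # the symmetric form above

a, b, c = line_through((1, 2), (3, 6))
print(a, b, c)  # -4 2 0, i.e. -4x + 2y = 0, the line y = 2x

# Cross-check against the 3x3 determinant form of the next subsection:
x, y = sp.symbols("x y")
det = sp.Matrix([[x, y, 1], [1, 2, 1], [3, 6, 1]]).det()
print(sp.expand(det))  # -4*x + 2*y
```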
{\displaystyle {\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}.} Thus, a point-slope form is y − y 1 = y 2 − y 1 x 2 − x 1 ( x − x 1 ) . {\displaystyle y-y_{1}={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}(x-x_{1}).} By clearing denominators, one gets the equation ( x 2 − x 1 ) ( y − y 1 ) − ( y 2 − y 1 ) ( x − x 1 ) = 0 , {\displaystyle (x_{2}-x_{1})(y-y_{1})-(y_{2}-y_{1})(x-x_{1})=0,} which is valid also when x1 = x2 (to verify this, it suffices to verify that the two given points satisfy the equation). This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms: ( y 1 − y 2 ) x + ( x 2 − x 1 ) y + ( x 1 y 2 − x 2 y 1 ) = 0 {\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0} (exchanging the two points changes the sign of the left-hand side of the equation). ==== Determinant form ==== The two-point form of the equation of a line can be expressed simply in terms of a determinant. There are two common ways for that. The equation ( x 2 − x 1 ) ( y − y 1 ) − ( y 2 − y 1 ) ( x − x 1 ) = 0 {\displaystyle (x_{2}-x_{1})(y-y_{1})-(y_{2}-y_{1})(x-x_{1})=0} is the result of expanding the determinant in the equation | x − x 1 y − y 1 x 2 − x 1 y 2 − y 1 | = 0. {\displaystyle {\begin{vmatrix}x-x_{1}&y-y_{1}\\x_{2}-x_{1}&y_{2}-y_{1}\end{vmatrix}}=0.} The equation ( y 1 − y 2 ) x + ( x 2 − x 1 ) y + ( x 1 y 2 − x 2 y 1 ) = 0 {\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0} can be obtained by expanding with respect to its first row the determinant in the equation | x y 1 x 1 y 1 1 x 2 y 2 1 | = 0. {\displaystyle {\begin{vmatrix}x&y&1\\x_{1}&y_{1}&1\\x_{2}&y_{2}&1\end{vmatrix}}=0.} Besides being very simple and mnemonic, this form has the advantage of being a special case of the more general equation of a hyperplane passing through n points in a space of dimension n − 1. These equations rely on the condition of linear dependence of points in a projective space. == More than two variables == A linear equation with more than two variables may always be assumed to have the form a 1 x 1 + a 2 x 2 + ⋯ + a n x n + b = 0. {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}+b=0.} The coefficient b, often denoted a0 is called the constant term (sometimes the absolute term in old books). Depending on the context, the term coefficient can be reserved for the ai with i > 0. When dealing with n = 3 {\displaystyle n=3} variables, it is common to use x , y {\displaystyle x,\;y} and z {\displaystyle z} instead of indexed variables. A solution of such an equation is a n-tuple such that substituting each element of the tuple for the corresponding variable transforms the equation into a true equality. For an equation to be meaningful, the coefficient of at least one variable must be non-zero. If every variable has a zero coefficient, then, as mentioned for one variable, the equation is either inconsistent (for b ≠ 0) as having no solution, or all n-tuples are solutions. The n-tuples that are solutions of a linear equation in n variables are the Cartesian coordinates of the points of an (n − 1)-dimensional hyperplane in an n-dimensional Euclidean space (or affine space if the coefficients are complex numbers or belong to any field). In the case of three variables, this hyperplane is a plane. If a linear equation is given with aj ≠ 0, then the equation can be solved for xj, yielding x j = − b a j − ∑ i ∈ { 1 , … , n } , i ≠ j a i a j x i . 
{\displaystyle x_{j}=-{\frac {b}{a_{j}}}-\sum _{i\in \{1,\ldots ,n\},i\neq j}{\frac {a_{i}}{a_{j}}}x_{i}.} If the coefficients are real numbers, this defines a real-valued function of n real variables. == See also == Linear equation over a ring Algebraic equation Line coordinates Linear inequality Nonlinear equation == Notes == == References == == External links == "Linear equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Linear_equation
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. == History == The term "algebraic combinatorics" was introduced in the late 1970s. Through the early or mid-1990s, typical combinatorial objects of interest in algebraic combinatorics either admitted a lot of symmetries (association schemes, strongly regular graphs, posets with a group action) or possessed a rich algebraic structure, frequently of representation theoretic origin (symmetric functions, Young tableaux). This period is reflected in the area 05E, Algebraic combinatorics, of the AMS Mathematics Subject Classification, introduced in 1991. == Scope == Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group theory and representation theory, lattice theory and commutative algebra are commonly used. == Important topics == === Symmetric functions === The ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric groups. === Association schemes === An association scheme is a collection of binary relations satisfying certain compatibility conditions. Association schemes provide a unified approach to many topics, for example combinatorial designs and coding theory. In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups. === Strongly regular graphs === A strongly regular graph is defined as follows. Let G = (V,E) be a regular graph with v vertices and degree k. G is said to be strongly regular if there are also integers λ and μ such that: Every two adjacent vertices have λ common neighbours. Every two non-adjacent vertices have μ common neighbours. A graph of this kind is sometimes said to be a srg(v, k, λ, μ). Some authors exclude graphs which satisfy the definition trivially, namely those graphs which are the disjoint union of one or more equal-sized complete graphs, and their complements, the Turán graphs. === Young tableaux === A Young tableau (pl.: tableaux) is a combinatorial object useful in representation theory and Schubert calculus. It provides a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties. Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians, including Percy MacMahon, W. V. D. Hodge, G. de B. Robinson, Gian-Carlo Rota, Alain Lascoux, Marcel-Paul Schützenberger and Richard P. Stanley. 
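The defining conditions for strong regularity above are easy to check mechanically. The following sketch (plain Python, adjacency matrices as nested lists; the function name is illustrative) recovers the parameters, using the 5-cycle, the smallest nontrivial srg(5, 2, 0, 1), as a test case.

```python
import itertools

def srg_parameters(adj):
    """Return (v, k, lambda, mu) if the graph with adjacency matrix adj
    is strongly regular, else None."""
    v = len(adj)
    degrees = {sum(row) for row in adj}
    if len(degrees) != 1:
        return None  # not regular
    k = degrees.pop()
    common = lambda i, j: sum(adj[i][t] and adj[j][t] for t in range(v))
    pairs = list(itertools.combinations(range(v), 2))
    lams = {common(i, j) for i, j in pairs if adj[i][j]}      # adjacent pairs
    mus = {common(i, j) for i, j in pairs if not adj[i][j]}   # non-adjacent pairs
    if len(lams) <= 1 and len(mus) <= 1:
        return (v, k, lams.pop() if lams else 0, mus.pop() if mus else 0)
    return None

# The 5-cycle C5: vertices 0..4, edges between i and j when |i - j| is 1 or 4.
c5 = [[1 if abs(i - j) in (1, 4) else 0 for j in range(5)] for i in range(5)]
print(srg_parameters(c5))  # (5, 2, 0, 1)
```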
=== Matroids === A matroid is a structure that captures and generalizes the notion of linear independence in vector spaces. There are many equivalent ways to define a matroid, the most significant being in terms of independent sets, bases, circuits, closed sets or flats, closure operators, and rank functions. Matroid theory borrows extensively from the terminology of linear algebra and graph theory, largely because it is the abstraction of various notions of central importance in these fields. Matroids have found applications in geometry, topology, combinatorial optimization, network theory and coding theory. === Finite geometries === A finite geometry is any geometric system that has only a finite number of points. The familiar Euclidean geometry is not finite, because a Euclidean line contains infinitely many points. A geometry based on the graphics displayed on a computer screen, where the pixels are considered to be the points, would be a finite geometry. While there are many systems that could be called finite geometries, attention is mostly paid to the finite projective and affine spaces because of their regularity and simplicity. Other significant types of finite geometry are finite Möbius or inversive planes and Laguerre planes, which are examples of a general type called Benz planes, and their higher-dimensional analogs such as higher finite inversive geometries. Finite geometries may be constructed via linear algebra, starting from vector spaces over a finite field; the affine and projective planes so constructed are called Galois geometries. Finite geometries can also be defined purely axiomatically. Most common finite geometries are Galois geometries, since any finite projective space of dimension three or greater is isomorphic to a projective space over a finite field (that is, the projectivization of a vector space over a finite field). However, dimension two has affine and projective planes that are not isomorphic to Galois geometries, namely the non-Desarguesian planes. Similar results hold for other kinds of finite geometries. == See also == Algebraic graph theory Combinatorial commutative algebra Polyhedral combinatorics Algebraic Combinatorics (journal) Journal of Algebraic Combinatorics International Conference on Formal Power Series and Algebraic Combinatorics == Citations == == Works cited == == Further reading == == External links == Media related to Algebraic combinatorics at Wikimedia Commons
Wikipedia/Algebraic_combinatorics
In mathematics, an algebraic equation or polynomial equation is an equation of the form P = 0 {\displaystyle P=0} , where P is a polynomial with coefficients in some field, often the field of the rational numbers. For example, x 5 − 3 x + 1 = 0 {\displaystyle x^{5}-3x+1=0} is an algebraic equation with integer coefficients and y 4 + x y 2 − x 3 3 + x y 2 + y 2 + 1 7 = 0 {\displaystyle y^{4}+{\frac {xy}{2}}-{\frac {x^{3}}{3}}+xy^{2}+y^{2}+{\frac {1}{7}}=0} is a multivariate polynomial equation over the rationals. For many authors, the term algebraic equation refers only to the univariate case, that is polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables (the multivariate case), in which case the term polynomial equation is usually preferred. Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). == Terminology == The term "algebraic equation" dates from the time when the main problem of algebra was to solve univariate polynomial equations. This problem was completely solved during the 19th century; see Fundamental theorem of algebra, Abel–Ruffini theorem and Galois theory. Since then, the scope of algebra has been dramatically enlarged. In particular, it includes the study of equations that involve nth roots and, more generally, algebraic expressions. This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, specially when considering multivariate equations. == History == The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets). Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like x = 1 + 5 2 {\displaystyle x={\frac {1+{\sqrt {5}}}{2}}} for the positive solution of x 2 − x − 1 = 0 {\displaystyle x^{2}-x-1=0} . The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. 
Finally, Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of at least degree 5 cannot be solved by radicals at all, and gave criteria for deciding whether an equation is in fact solvable using radicals. == Areas of study == Algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations. Two equations are equivalent if they have the same set of solutions. In particular, the equation P = Q {\displaystyle P=Q} is equivalent to P − Q = 0 {\displaystyle P-Q=0} . It follows that the study of algebraic equations is equivalent to the study of polynomials. A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms in the first member, the previously mentioned polynomial equation y 4 + x y 2 = x 3 3 − x y 2 + y 2 − 1 7 {\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}} becomes 42 y 4 + 21 x y − 14 x 3 + 42 x y 2 − 42 y 2 + 6 = 0. {\displaystyle 42y^{4}+21xy-14x^{3}+42xy^{2}-42y^{2}+6=0.} Because sine, exponentiation, and 1/T are not polynomial functions, e T x 2 + 1 T x y + sin ⁡ ( T ) z − 2 = 0 {\displaystyle e^{T}x^{2}+{\frac {1}{T}}xy+\sin(T)z-2=0} is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T. == Theory == === Polynomials === Given an equation in unknown x ( E ) a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = 0 {\displaystyle (\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0} , with coefficients in a field K, one can equivalently say that the solutions of (E) in K are the roots in K of the polynomial P = a n X n + a n − 1 X n − 1 + ⋯ + a 1 X + a 0 ∈ K [ X ] {\displaystyle P=a_{n}X^{n}+a_{n-1}X^{n-1}+\dots +a_{1}X+a_{0}\quad \in K[X]} . It can be shown that a polynomial of degree n in a field has at most n roots. The equation (E) therefore has at most n solutions. If K' is a field extension of K, one may consider (E) to be an equation with coefficients in K, and the solutions of (E) in K are also solutions in K' (the converse does not hold in general). It is always possible to find a field extension of K, known as the rupture field of the polynomial P, in which (E) has at least one solution.
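The denominator-clearing step in the example above is mechanical. A minimal sketch using exact rational arithmetic (the function name is illustrative; requires Python 3.9+ for math.lcm):

```python
from fractions import Fraction
from math import lcm

def clear_denominators(coeffs):
    """Multiply rational coefficients by the lcm of their denominators,
    yielding the integer coefficients of an equivalent equation."""
    coeffs = [Fraction(c) for c in coeffs]
    m = lcm(*(c.denominator for c in coeffs))
    return [int(c * m) for c in coeffs]

# Term-by-term coefficients of y^4 + (1/2)xy - (1/3)x^3 + xy^2 - y^2 + 1/7 = 0
# (the example from the text, with all terms moved to the left-hand side):
print(clear_denominators([1, Fraction(1, 2), Fraction(-1, 3), 1, -1, Fraction(1, 7)]))
# -> [42, 21, -14, 42, -42, 6], i.e. 42y^4 + 21xy - 14x^3 + 42xy^2 - 42y^2 + 6 = 0
```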
=== Existence of solutions to real and complex equations === The fundamental theorem of algebra states that the field of the complex numbers is closed algebraically, that is, all polynomial equations with complex coefficients and degree at least one have a solution. It follows that all polynomial equations of degree 1 or more with real coefficients have a complex solution. On the other hand, an equation such as x 2 + 1 = 0 {\displaystyle x^{2}+1=0} does not have a solution in R {\displaystyle \mathbb {R} } (the solutions are the imaginary units i and −i). While the real solutions of real equations are intuitive (they are the x-coordinates of the points where the curve y = P(x) intersects the x-axis), the existence of complex solutions to real equations can be surprising and less easy to visualize. However, a monic polynomial of odd degree must necessarily have a real root. The associated polynomial function in x is continuous, and it approaches − ∞ {\displaystyle -\infty } as x approaches − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as x approaches + ∞ {\displaystyle +\infty } . By the intermediate value theorem, it must therefore assume the value zero at some real x, which is then a solution of the polynomial equation. === Connection to Galois theory === There exist formulas giving the solutions of real or complex polynomials of degree less than or equal to four as a function of their coefficients. Abel showed that it is not possible to find such a formula in general (using only the four arithmetic operations and taking roots) for equations of degree five or higher. Galois theory provides a criterion which allows one to determine whether the solution to a given polynomial equation can be expressed using radicals. == Explicit solution of numerical equations == === Approach === The explicit solution of a real or complex equation of degree 1 is trivial. Solving an equation of higher degree n reduces to factoring the associated polynomial, that is, rewriting (E) in the form a n ( x − z 1 ) … ( x − z n ) = 0 {\displaystyle a_{n}(x-z_{1})\dots (x-z_{n})=0} , where the solutions are then the z 1 , … , z n {\displaystyle z_{1},\dots ,z_{n}} . The problem is then to express the z i {\displaystyle z_{i}} in terms of the a i {\displaystyle a_{i}} . This approach applies more generally if the coefficients and solutions belong to an integral domain. === General techniques === ==== Factoring ==== If an equation P(x) = 0 of degree n has a rational root α, the associated polynomial can be factored to give the form P(X) = (X − α)Q(X) (by dividing P(X) by X − α, or by writing P(X) − P(α) as a linear combination of terms of the form X k − α k {\displaystyle X^{k}-\alpha ^{k}} and factoring out X − α). Solving P(x) = 0 thus reduces to solving the degree n − 1 equation Q(x) = 0. See for example the case n = 3. ==== Elimination of the sub-dominant term ==== To solve an equation of degree n, ( E ) a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = 0 {\displaystyle (\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0} , a common preliminary step is to eliminate the term of degree n − 1: by setting x = y − a n − 1 n a n {\displaystyle x=y-{\frac {a_{n-1}}{n\,a_{n}}}} , equation (E) becomes a n y n + b n − 2 y n − 2 + ⋯ + b 1 y + b 0 = 0 {\displaystyle a_{n}y^{n}+b_{n-2}y^{n-2}+\dots +b_{1}y+b_{0}=0} . Leonhard Euler developed this technique for the case n = 3, but it is also applicable to the case n = 4, for example.
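A sketch of this substitution, with exact rational arithmetic (names illustrative; coefficients given in descending order, so coeffs[0] is a_n):

```python
from fractions import Fraction
from math import comb

def depress(coeffs):
    """Substitute x = y - a_{n-1}/(n*a_n) into
    a_n x^n + a_{n-1} x^{n-1} + ... + a_0; the y^{n-1} coefficient vanishes."""
    n = len(coeffs) - 1
    s = Fraction(-coeffs[1], n * coeffs[0])      # the shift -a_{n-1}/(n a_n)
    out = [Fraction(0)] * (n + 1)
    for i, a in enumerate(coeffs):               # a multiplies x^(n-i)
        d = n - i
        for k in range(d + 1):                   # expand a*(y + s)^d binomially
            out[n - k] += a * comb(d, k) * s ** (d - k)
    return out

# x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); the shift x = y + 2 yields y^3 - y.
print(depress([1, -6, 11, -6]))
# -> [Fraction(1, 1), Fraction(0, 1), Fraction(-1, 1), Fraction(0, 1)]
```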
=== Quadratic equations === To solve a quadratic equation of the form a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} one calculates the discriminant Δ defined by Δ = b 2 − 4 a c {\displaystyle \Delta =b^{2}-4ac} . If the polynomial has real coefficients, it has: two distinct real roots if Δ > 0 {\displaystyle \Delta >0} ; one real double root if Δ = 0 {\displaystyle \Delta =0} ; no real root if Δ < 0 {\displaystyle \Delta <0} , but two complex conjugate roots. === Cubic equations === The best-known method for solving cubic equations, by writing roots in terms of radicals, is Cardano's formula. === Quartic equations === For detailed discussions of some solution methods see: Tschirnhaus transformation (general method, not guaranteed to succeed); Bézout method (general method, not guaranteed to succeed); Ferrari method (solutions for degree 4); Euler method (solutions for degree 4); Lagrange method (solutions for degree 4); Descartes method (solutions for degree 2 or 4); A quartic equation a x 4 + b x 3 + c x 2 + d x + e = 0 {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0} with a ≠ 0 {\displaystyle a\neq 0} may be reduced to a quadratic equation by a change of variable provided it is either biquadratic (b = d = 0) or quasi-palindromic (e = a, d = b). Some cubic and quartic equations can be solved using trigonometry or hyperbolic functions. === Higher-degree equations === Évariste Galois and Niels Henrik Abel showed independently that in general a polynomial of degree 5 or higher is not solvable using radicals. Some particular equations do have solutions, such as those associated with the cyclotomic polynomials of degrees 5 and 17. Charles Hermite, on the other hand, showed that polynomials of degree 5 are solvable using elliptic functions. Otherwise, one may find numerical approximations to the roots using root-finding algorithms, such as Newton's method. == See also == Algebraic function Algebraic number Root finding Linear equation (degree = 1) Quadratic equation (degree = 2) Cubic equation (degree = 3) Quartic equation (degree = 4) Quintic equation (degree = 5) Sextic equation (degree = 6) Septic equation (degree = 7) System of linear equations System of polynomial equations Linear Diophantine equation Linear equation over a ring Cramer's theorem (algebraic curves), on the number of points usually sufficient to determine a bivariate n-th degree curve == References == "Algebraic equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Algebraic Equation". MathWorld.
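Looking back at the quadratic section of this article, the discriminant casework translates directly into code. A sketch, assuming a ≠ 0 and real numeric coefficients:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 (a != 0), split by the discriminant."""
    disc = b * b - 4 * a * c
    if disc > 0:
        r = disc ** 0.5
        return "two distinct real roots", ((-b + r) / (2*a), (-b - r) / (2*a))
    if disc == 0:
        return "one real double root", (-b / (2*a),)
    r = cmath.sqrt(disc)  # negative discriminant: complex conjugate roots
    return "no real root", ((-b + r) / (2*a), (-b - r) / (2*a))

print(quadratic_roots(1, -3, 2))  # ('two distinct real roots', (2.0, 1.0))
print(quadratic_roots(1, 0, 1))   # ('no real root', (1j, -1j))
```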
Wikipedia/Polynomial_equation
In algebra, a septic equation is an equation of the form a x 7 + b x 6 + c x 5 + d x 4 + e x 3 + f x 2 + g x + h = 0 , {\displaystyle ax^{7}+bx^{6}+cx^{5}+dx^{4}+ex^{3}+fx^{2}+gx+h=0,\,} where a ≠ 0. A septic function is a function of the form f ( x ) = a x 7 + b x 6 + c x 5 + d x 4 + e x 3 + f x 2 + g x + h {\displaystyle f(x)=ax^{7}+bx^{6}+cx^{5}+dx^{4}+ex^{3}+fx^{2}+gx+h\,} where a ≠ 0. In other words, it is a polynomial of degree seven. If a = 0, then f is a sextic function (b ≠ 0), quintic function (b = 0, c ≠ 0), etc. The equation may be obtained from the function by setting f(x) = 0. The coefficients a, b, c, d, e, f, g, h may be either integers, rational numbers, real numbers, complex numbers or, more generally, members of any field. Because they have an odd degree, septic functions appear similar to quintic and cubic functions when graphed, except they may possess additional local maxima and local minima (up to three maxima and three minima). The derivative of a septic function is a sextic function. == Solvable septics == Some seventh-degree equations can be solved by factorizing into radicals, but other septics cannot. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. To give an example of an irreducible but solvable septic, one can generalize the solvable de Moivre quintic to get x 7 + 7 α x 5 + 14 α 2 x 3 + 7 α 3 x + β = 0 {\displaystyle x^{7}+7\alpha x^{5}+14\alpha ^{2}x^{3}+7\alpha ^{3}x+\beta =0\,} , where the auxiliary equation is y 2 + β y − α 7 = 0 {\displaystyle y^{2}+\beta y-\alpha ^{7}=0\,} . This means that the septic is obtained by eliminating u and v between x = u + v, uv + α = 0 and u7 + v7 + β = 0. It follows that the septic's seven roots are given by x k = ω k y 1 7 + ω k 6 y 2 7 {\displaystyle x_{k}=\omega _{k}{\sqrt[{7}]{y_{1}}}+\omega _{k}^{6}{\sqrt[{7}]{y_{2}}}} where ωk is any of the 7 seventh roots of unity. The Galois group of this septic is the maximal solvable group of order 42. This is easily generalized to any other degree k, not necessarily prime. Another solvable family is x 7 − 2 x 6 + ( α + 1 ) x 5 + ( α − 1 ) x 4 − α x 3 − ( α + 5 ) x 2 − 6 x − 4 = 0 {\displaystyle x^{7}-2x^{6}+(\alpha +1)x^{5}+(\alpha -1)x^{4}-\alpha x^{3}-(\alpha +5)x^{2}-6x-4=0\,} whose members appear in Klüners's database of number fields. Its discriminant is Δ = − 4 4 ( 4 α 3 + 99 α 2 − 34 α + 467 ) 3 {\displaystyle \Delta =-4^{4}\left(4\alpha ^{3}+99\alpha ^{2}-34\alpha +467\right)^{3}\,} The Galois group of these septics is the dihedral group of order 14. The general septic equation can be solved with the alternating or symmetric Galois groups A7 or S7. Such equations require hyperelliptic functions and associated theta functions of genus 3 for their solution. However, these equations were not studied specifically by the nineteenth-century mathematicians studying the solutions of algebraic equations, because the sextic equations' solutions were already at the limits of their computational abilities without computers. Septics are the lowest-order equations for which it is not obvious that their solutions may be obtained by composing continuous functions of two variables. Hilbert's 13th problem was the conjecture that this was not possible in the general case for seventh-degree equations. Vladimir Arnold solved this in 1957, demonstrating that this was always possible.
However, Arnold himself considered the genuine Hilbert problem to be whether for septics their solutions may be obtained by superimposing algebraic functions of two variables. As of 2023, the problem is still open. == Galois groups == There are seven Galois groups for septics: Septic equations solvable by radicals have a Galois group which is either the cyclic group of order 7, or the dihedral group of order 14, or a metacyclic group of order 21 or 42. The L(3, 2) Galois group (of order 168) is formed by the permutations of the 7 vertex labels which preserve the 7 "lines" in the Fano plane. Septic equations with this Galois group L(3, 2) require elliptic functions but not hyperelliptic functions for their solution. Otherwise the Galois group of a septic is either the alternating group of order 7 ! / 2 = 2520 {\displaystyle 7!/2=2520} or the symmetric group of order 7 ! = 5040. {\displaystyle 7!=5040.} == Septic equation for the squared area of a cyclic pentagon or hexagon == The square of the area of a cyclic pentagon is a root of a septic equation whose coefficients are symmetric functions of the sides of the pentagon. The same is true of the square of the area of a cyclic hexagon. == See also == Cubic function Quartic function Quintic function Sextic equation Labs septic == References ==
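The de Moivre-type construction in the solvable-septics section above is easy to check numerically: pick α and β, solve the auxiliary quadratic for u⁷, run through the seven 7th roots u, and set v = −α/u so that uv + α = 0 holds exactly. A sketch with illustrative parameter values:

```python
import cmath

alpha, beta = 1.0, 2.0   # illustrative parameters, chosen arbitrarily
septic = lambda x: x**7 + 7*alpha*x**5 + 14*alpha**2*x**3 + 7*alpha**3*x + beta

# Auxiliary quadratic y^2 + beta*y - alpha^7 = 0; its roots are u^7 and v^7.
y1 = (-beta + cmath.sqrt(beta**2 + 4 * alpha**7)) / 2

u0 = y1 ** (1 / 7)                                 # one seventh root of y1
for k in range(7):
    u = u0 * cmath.exp(2j * cmath.pi * k / 7)      # all 7th roots of y1 in turn
    v = -alpha / u                                 # enforces u*v + alpha = 0
    assert abs(septic(u + v)) < 1e-6               # u + v is a root of the septic
print("all seven radical expressions are roots")
```

Defining v as −α/u, rather than taking an independent 7th root of y2, keeps the two radicals on the correct branches; with principal roots chosen separately the pairing uv = −α can fail.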
Wikipedia/Septic_equation
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity. Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. == Computational problems == === Problem instances === A computational problem can be viewed as an infinite collection of instances together with a set (possibly empty) of solutions for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 14 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
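To make the instance/problem distinction concrete: primality testing is one abstract problem, and each number fed to a decider is one instance of it. A toy decider (trial division, fine only for tiny instances; the function name is illustrative):

```python
def is_prime(n: int) -> bool:
    """Decide the primality-testing problem for a single instance n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(15))  # False -- the instance 15 receives the answer "no"
```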
=== Representing problem instances === When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently. === Decision problems as formal languages === Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is either yes or no (alternatively, 1 or 0). A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. === Function problems === A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output is not just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples ( a , b , c ) {\displaystyle (a,b,c)} such that the relation a × b = c {\displaystyle a\times b=c} holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. === Measuring the size of an instance === To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. 
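The recasting of function problems as decision problems can be shown in a few lines. The sketch below uses an ordered variant of the triple language, "is a·b ≤ c?", rather than the exact-triple language from the text (an assumption of this sketch, made so that binary search over yes/no answers can recover the product):

```python
def le_language(a: int, b: int, c: int) -> bool:
    """Decision problem: is a*b <= c?  Answers are only yes/no."""
    return a * b <= c

def multiply_via_decisions(a: int, b: int) -> int:
    """Recover the function problem a*b (for a, b >= 0) from O(log(a*b))
    yes/no queries, by binary-searching the smallest c with a*b <= c."""
    lo, hi = 0, (a + 1) * (b + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if le_language(a, b, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(multiply_via_decisions(6, 7))  # 42
```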
For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2 n {\displaystyle 2n} vertices compared to the time taken for a graph with n {\displaystyle n} vertices? If the input size is n {\displaystyle n} , the time taken can be expressed as a function of n {\displaystyle n} . Since the time taken on different inputs of the same size can be different, the worst-case time complexity T ( n ) {\displaystyle T(n)} is defined to be the maximum time taken over all inputs of size n {\displaystyle n} . If T ( n ) {\displaystyle T(n)} is a polynomial in n {\displaystyle n} , then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm. == Machine models and complexity measures == === Turing machine === A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory. Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others. A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm. === Other machine models === Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. 
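A deterministic Turing machine of the kind just described is also easy to simulate directly, and the step count the simulator returns is exactly the time measure used in the complexity-measures discussion below. A minimal sketch (transition tables as Python dicts; names and the toy machine are illustrative):

```python
def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Simulate a deterministic single-tape Turing machine.
    transitions: (state, symbol) -> (new_state, new_symbol, move in {-1, +1}).
    Returns (accepted?, steps); gives up after max_steps."""
    tape = dict(enumerate(tape))
    head, steps = 0, 0
    while state not in ("accept", "reject") and steps < max_steps:
        sym = tape.get(head, blank)
        state, tape[head], move = transitions[(state, sym)]
        head += move
        steps += 1
    return state == "accept", steps

# A toy machine deciding whether the input string of 1s has even length.
T = {
    ("q0", "1"): ("q1", "1", +1),
    ("q1", "1"): ("q0", "1", +1),
    ("q0", "_"): ("accept", "_", +1),
    ("q1", "_"): ("reject", "_", +1),
}
print(run_tm(T, "1111"))  # (True, 5): accepted after 5 steps
```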
Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically. However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems. === Complexity measures === For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M {\displaystyle M} on input x {\displaystyle x} is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M {\displaystyle M} is said to operate within time f ( n ) {\displaystyle f(n)} if the time required by M {\displaystyle M} on each input of length n {\displaystyle n} is at most f ( n ) {\displaystyle f(n)} . A decision problem A {\displaystyle A} can be solved in time f ( n ) {\displaystyle f(n)} if there exists a Turing machine operating in time f ( n ) {\displaystyle f(n)} that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f ( n ) {\displaystyle f(n)} on a deterministic Turing machine is then denoted by DTIME( f ( n ) {\displaystyle f(n)} ). Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation. === Best, worst and average case complexity === The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n {\displaystyle n} may be faster to solve than others, we define the following complexities: Best-case complexity: This is the complexity of solving the problem for the best input of size n {\displaystyle n} . Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n {\displaystyle n} . Amortized analysis: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm. 
Worst-case complexity: This is the complexity of solving the problem for the worst input of size n {\displaystyle n} . The order from cheap to costly is: Best, average (of discrete uniform distribution), amortized, worst. For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst-case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O( n 2 {\displaystyle n^{2}} ). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O ( n log ⁡ n ) {\displaystyle O(n\log n)} . The best case occurs when each pivoting divides the list in half, also needing O ( n log ⁡ n ) {\displaystyle O(n\log n)} time. === Upper and lower bounds on the complexity of problems === To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T ( n ) {\displaystyle T(n)} on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T ( n ) {\displaystyle T(n)} . However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T ( n ) {\displaystyle T(n)} for a problem requires showing that no algorithm can have time complexity lower than T ( n ) {\displaystyle T(n)} . Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T ( n ) = 7 n 2 + 15 n + 40 {\displaystyle T(n)=7n^{2}+15n+40} , in big O notation one would write T ( n ) ∈ O ( n 2 ) {\displaystyle T(n)\in O(n^{2})} . == Complexity classes == === Defining complexity classes === A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors: The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc. The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc. The resource (or resources) that is being bounded and the bound: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc. Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following: The set of decision problems solvable by a deterministic Turing machine within time f ( n ) {\displaystyle f(n)} . (This complexity class is known as DTIME( f ( n ) {\displaystyle f(n)} ).) 
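The quicksort example above can be reproduced empirically by counting comparisons against the pivot. With a first-element pivot, already-sorted input triggers the worst case; this is a sketch for illustration, not a tuned implementation:

```python
import random

def quicksort(a, counter):
    """Quicksort with a first-element pivot; counter[0] accumulates the
    number of elements compared against a pivot."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 300
for name, data in [("sorted input (worst case)", list(range(n))),
                   ("shuffled input (typical case)", random.sample(range(n), n))]:
    c = [0]
    quicksort(data, c)
    print(name, c[0])
# Sorted input costs n(n-1)/2 = 44850 comparisons, the Theta(n^2) worst case;
# shuffled input costs about 2n ln n (roughly 3400), the Theta(n log n) average.
```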
But bounding the computation time above by some concrete function f ( n ) {\displaystyle f(n)} often yields complexity classes that depend on the chosen machine model. For instance, the language { x x ∣ x is any binary string } {\displaystyle \{xx\mid x{\text{ is any binary string}}\}} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham-Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP. === Important complexity classes === Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following: Logarithmic-space classes do not account for the space required to represent the problem. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems. === Hierarchy theorems === For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME( n {\displaystyle n} ) is contained in DTIME( n 2 {\displaystyle n^{2}} ), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved. More precisely, the time hierarchy theorem states that D T I M E ( o ( f ( n ) ) ) ⊊ D T I M E ( f ( n ) ⋅ log ⁡ ( f ( n ) ) ) {\displaystyle {\mathsf {DTIME}}{\big (}o(f(n)){\big )}\subsetneq {\mathsf {DTIME}}{\big (}f(n)\cdot \log(f(n)){\big )}} . The space hierarchy theorem states that D S P A C E ( o ( f ( n ) ) ) ⊊ D S P A C E ( f ( n ) ) {\displaystyle {\mathsf {DSPACE}}{\big (}o(f(n)){\big )}\subsetneq {\mathsf {DSPACE}}{\big (}f(n){\big )}} . The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE. === Reduction === Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. 
It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem X {\displaystyle X} can be solved using an algorithm for Y {\displaystyle Y} , X {\displaystyle X} is no more difficult than Y {\displaystyle Y} , and we say that X {\displaystyle X} reduces to Y {\displaystyle Y} . There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions. The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication. This motivates the concept of a problem being hard for a complexity class. A problem X {\displaystyle X} is hard for a class of problems C {\displaystyle C} if every problem in C {\displaystyle C} can be reduced to X {\displaystyle X} . Thus no problem in C {\displaystyle C} is harder than X {\displaystyle X} , since an algorithm for X {\displaystyle X} allows us to solve any problem in C {\displaystyle C} . The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems. If a problem X {\displaystyle X} is in C {\displaystyle C} and hard for C {\displaystyle C} , then X {\displaystyle X} is said to be complete for C {\displaystyle C} . This means that X {\displaystyle X} is the hardest problem in C {\displaystyle C} . (Since many problems could be equally hard, one might say that X {\displaystyle X} is one of the hardest problems in C {\displaystyle C} .) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π 2 {\displaystyle \Pi _{2}} , to another problem, Π 1 {\displaystyle \Pi _{1}} , would indicate that there is no known polynomial-time solution for Π 1 {\displaystyle \Pi _{1}} . This is because a polynomial-time solution to Π 1 {\displaystyle \Pi _{1}} would yield a polynomial-time solution to Π 2 {\displaystyle \Pi _{2}} . Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP. == Important open problems == === P versus NP problem === The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. 
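The squaring-to-multiplication reduction mentioned earlier is the one-line archetype of the idea (the multiply stand-in below represents any multiplication algorithm):

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square(a: int) -> int:
    """Squaring reduces to multiplication: one call to the multiplication
    algorithm, with the same value fed to both inputs."""
    return multiply(a, a)

print(square(12))  # 144
```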
Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem. === Problems in NP not known to be in P or NP-complete === It was shown by Ladner that if P ≠ NP {\displaystyle {\textsf {P}}\neq {\textsf {NP}}} then there exist problems in NP {\displaystyle {\textsf {NP}}} that are neither in P {\displaystyle {\textsf {P}}} nor NP {\displaystyle {\textsf {NP}}} -complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P {\displaystyle {\textsf {P}}} or to be NP {\displaystyle {\textsf {NP}}} -complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P {\displaystyle {\textsf {P}}} , NP {\displaystyle {\textsf {NP}}} -complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks has run time O ( 2 n log ⁡ n ) {\displaystyle O(2^{\sqrt {n\log n}})} for graphs with n {\displaystyle n} vertices, although some recent work by Babai offers some potentially new perspectives on this. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than k {\displaystyle k} . No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP {\displaystyle {\textsf {NP}}} and in co-NP {\displaystyle {\textsf {co-NP}}} (and even in UP and co-UP). If the problem is NP {\displaystyle {\textsf {NP}}} -complete, the polynomial time hierarchy will collapse to its first level (i.e., NP {\displaystyle {\textsf {NP}}} will equal co-NP {\displaystyle {\textsf {co-NP}}} ). The best known algorithm for integer factorization is the general number field sieve, which takes time O ( e ( 64 9 3 ) ( log ⁡ n ) 3 ( log ⁡ log ⁡ n ) 2 3 ) {\displaystyle O(e^{\left({\sqrt[{3}]{\frac {64}{9}}}\right){\sqrt[{3}]{(\log n)}}{\sqrt[{3}]{(\log \log n)^{2}}}})} to factor an odd integer n {\displaystyle n} . 
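The decision phrasing of factoring used above ("does n have a prime factor less than k?") can be sketched with trial division; its running time is exponential in the bit-length of n, consistent with the absence of any known efficient algorithm:

```python
def smallest_prime_factor(n: int) -> int:
    """Trial division, assuming n >= 2; returns n itself when n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def has_prime_factor_below(n: int, k: int) -> bool:
    """The decision version of integer factorization described above."""
    return smallest_prime_factor(n) < k

print(has_prime_factor_below(91, 10))  # True: 91 = 7 * 13
print(has_prime_factor_below(97, 97))  # False: 97 is prime
```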
However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes. === Separations between other complexity classes === Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE {\displaystyle {\textsf {P}}\subseteq {\textsf {NP}}\subseteq {\textsf {PP}}\subseteq {\textsf {PSPACE}}} , but it is possible that P = PSPACE {\displaystyle {\textsf {P}}={\textsf {PSPACE}}} . If P {\displaystyle {\textsf {P}}} is not equal to NP {\displaystyle {\textsf {NP}}} , then P {\displaystyle {\textsf {P}}} is not equal to PSPACE {\displaystyle {\textsf {PSPACE}}} either. Since there are many known complexity classes between P {\displaystyle {\textsf {P}}} and PSPACE {\displaystyle {\textsf {PSPACE}}} , such as RP {\displaystyle {\textsf {RP}}} , BPP {\displaystyle {\textsf {BPP}}} , PP {\displaystyle {\textsf {PP}}} , BQP {\displaystyle {\textsf {BQP}}} , MA {\displaystyle {\textsf {MA}}} , PH {\displaystyle {\textsf {PH}}} , etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory. Along the same lines, co-NP {\displaystyle {\textsf {co-NP}}} is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP {\displaystyle {\textsf {NP}}} problems. It is believed that NP {\displaystyle {\textsf {NP}}} is not equal to co-NP {\displaystyle {\textsf {co-NP}}} ; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then P {\displaystyle {\textsf {P}}} is not equal to NP {\displaystyle {\textsf {NP}}} , since P = co-P {\displaystyle {\textsf {P}}={\textsf {co-P}}} . Thus if P = N P {\displaystyle P=NP} we would have co-P = co-NP {\displaystyle {\textsf {co-P}}={\textsf {co-NP}}} whence NP = P = co-P = co-NP {\displaystyle {\textsf {NP}}={\textsf {P}}={\textsf {co-P}}={\textsf {co-NP}}} . Similarly, it is not known if L {\displaystyle {\textsf {L}}} (the set of all problems that can be solved in logarithmic space) is strictly contained in P {\displaystyle {\textsf {P}}} or equal to P {\displaystyle {\textsf {P}}} . Again, there are many complexity classes between the two, such as NL {\displaystyle {\textsf {NL}}} and NC {\displaystyle {\textsf {NC}}} , and it is not known if they are distinct or equal classes. It is suspected that P {\displaystyle {\textsf {P}}} and BPP {\displaystyle {\textsf {BPP}}} are equal. However, it is currently open if BPP = NEXP {\displaystyle {\textsf {BPP}}={\textsf {NEXP}}} . == Intractability == A problem that can theoretically be solved, but requires impractically large (though finite) resources (e.g., time) to do so, is known as an intractable problem. Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization. Tractable problems are frequently identified with problems that have polynomial-time solutions ( P {\displaystyle {\textsf {P}}} , PTIME {\displaystyle {\textsf {PTIME}}} ); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard.
If NP {\displaystyle {\textsf {NP}}} is not the same as P {\displaystyle {\textsf {P}}} , then NP-hard problems are also intractable in the sense of the Cobham–Edmonds thesis. However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for inputs of practical size; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or in the average case, and thus still be practical. Saying that a problem is not in P {\displaystyle {\textsf {P}}} does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P {\displaystyle {\textsf {P}}} , yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time, and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that performs 2 n {\displaystyle 2^{n}} operations before halting. For a modest input size, say n = 100 {\displaystyle n=100} , and assuming for the sake of example that the computer performs 10 12 {\displaystyle 10^{12}} operations each second, the program would run for about 4 × 10 10 {\displaystyle 4\times 10^{10}} years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001 n {\displaystyle 1.0001^{n}} operations is practical until n {\displaystyle n} gets relatively large. Similarly, a polynomial-time algorithm is not always practical. If its running time is, say, n 15 {\displaystyle n^{15}} , it is unreasonable to consider it efficient, and it is still useless except on small instances. Indeed, in practice even n 3 {\displaystyle n^{3}} or n 2 {\displaystyle n^{2}} algorithms are often impractical on realistic sizes of problems. == Continuous complexity theory == Continuous complexity theory can refer to the complexity theory of problems that involve continuous functions approximated by discretizations, as studied in numerical analysis. One approach to the complexity theory of numerical analysis is information-based complexity. Continuous complexity theory can also refer to the complexity theory of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation, and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems. == History == An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844. Before research explicitly devoted to the complexity of algorithmic problems began, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer. 
The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested considering a "good" algorithm to be one with running time bounded by a polynomial of the input size. Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure. As he remembers: However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited from switching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure". In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a major step forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph-theoretical problems, each infamous for its computational intractability, are NP-complete. == See also == == Works on complexity == Wuppuluri, Shyam; Doria, Francisco A., eds. (2020), Unravelling Complexity: The Life and Work of Gregory Chaitin, World Scientific, doi:10.1142/11270, ISBN 978-981-12-0006-9, S2CID 198790362 == References == === Citations === === Textbooks === Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge University Press, ISBN 978-0-521-42426-4, Zbl 1193.68112 Downey, Rod; Fellows, Michael (1999), Parameterized Complexity, Monographs in Computer Science, Berlin, New York: Springer-Verlag, ISBN 9780387948836 Du, Ding-Zhu; Ko, Ker-I (2000), Theory of Computational Complexity, John Wiley & Sons, ISBN 978-0-471-34506-0 Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Series of Books in the Mathematical Sciences (1st ed.). New York: W. H. Freeman and Company. ISBN 9780716710455. MR 0519066. OCLC 247570676. Goldreich, Oded (2008), Computational Complexity: A Conceptual Perspective, Cambridge University Press van Leeuwen, Jan, ed. (1990), Handbook of theoretical computer science (vol. 
A): algorithms and complexity, MIT Press, ISBN 978-0-444-88071-0 Papadimitriou, Christos (1994), Computational Complexity (1st ed.), Addison Wesley, ISBN 978-0-201-53082-7 Sipser, Michael (2006), Introduction to the Theory of Computation (2nd ed.), USA: Thomson Course Technology, ISBN 978-0-534-95097-2 === Surveys === Khalil, Hatem; Ulery, Dana (1976), "A review of current studies on complexity of algorithms for partial differential equations", Proceedings of the annual conference (ACM '76), pp. 197–201, doi:10.1145/800191.805573, ISBN 9781450374897, S2CID 15497394 Cook, Stephen (1983), "An overview of computational complexity", Communications of the ACM, 26 (6): 400–408, doi:10.1145/358141.358144, ISSN 0001-0782, S2CID 14323396 Fortnow, Lance; Homer, Steven (2003), "A Short History of Computational Complexity" (PDF), Bulletin of the EATCS, 80: 95–133 Mertens, Stephan (2002), "Computational Complexity for Physicists", Computing in Science & Engineering, 4 (3): 31–47, arXiv:cond-mat/0012185, Bibcode:2002CSE.....4c..31M, doi:10.1109/5992.998639, ISSN 1521-9615, S2CID 633346 == External links == The Complexity Zoo "Computational complexity classes", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Scott Aaronson: Why Philosophers Should Care About Computational Complexity
Wikipedia/Computational_complexity_theory
In mathematics, a quadratic equation (from Latin quadratus 'square') is an equation that can be rearranged in standard form as a x 2 + b x + c = 0 , {\displaystyle ax^{2}+bx+c=0\,,} where the variable x represents an unknown number, and a, b, and c represent known numbers, where a ≠ 0. (If a = 0 and b ≠ 0 then the equation is linear, not quadratic.) The numbers a, b, and c are the coefficients of the equation and may be distinguished by calling them, respectively, the quadratic coefficient, the linear coefficient, and the constant coefficient or free term. The values of x that satisfy the equation are called solutions of the equation, and the roots or zeros of the quadratic function on its left-hand side. A quadratic equation has at most two solutions. If there is only one solution, one says that it is a double root. If all the coefficients are real numbers, there are either two real solutions, or a single real double root, or two complex solutions that are complex conjugates of each other. A quadratic equation always has two roots, if complex roots are included and a double root is counted twice. A quadratic equation can be factored into an equivalent equation a x 2 + b x + c = a ( x − r ) ( x − s ) = 0 {\displaystyle ax^{2}+bx+c=a(x-r)(x-s)=0} where r and s are the solutions for x. The quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} expresses the solutions in terms of a, b, and c. Completing the square is one of several ways of deriving the formula. Solutions to problems that can be expressed in terms of quadratic equations were known as early as 2000 BC. Because the quadratic equation involves only one unknown, it is called "univariate". The quadratic equation contains only powers of x that are non-negative integers, and therefore it is a polynomial equation. In particular, it is a second-degree polynomial equation, since the greatest power is two. == Solving the quadratic equation == A quadratic equation whose coefficients are real numbers can have either zero, one, or two distinct real-valued solutions, also called roots. When there is only one distinct root, it can be interpreted as two roots with the same value, called a double root. When there are no real roots, the coefficients can be considered as complex numbers with zero imaginary part, and the quadratic equation still has two complex-valued roots, complex conjugates of each other with non-zero imaginary part. A quadratic equation whose coefficients are arbitrary complex numbers always has two complex-valued roots, which may or may not be distinct. The solutions of a quadratic equation can be found by several alternative methods. === Factoring by inspection === It may be possible to express a quadratic equation ax2 + bx + c = 0 as a product (px + q)(rx + s) = 0. In some cases, it is possible, by simple inspection, to determine values of p, q, r, and s that make the two forms equivalent to one another. If the quadratic equation is written in the second form, then the "Zero Factor Property" states that the quadratic equation is satisfied if px + q = 0 or rx + s = 0. Solving these two linear equations provides the roots of the quadratic. 
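In the monic integer case the "inspection" amounts to a finite search, since expanding (x + q)(x + s) gives x² + (q + s)x + qs. The following sketch (illustrative helper name, not a standard routine) automates that search:

```python
def factor_by_inspection(b: int, c: int):
    """For x**2 + b*x + c with integer coefficients, return integers (q, s)
    with q + s == b and q * s == c, so that the quadratic factors as
    (x + q)(x + s); return None if no such integer pair exists."""
    if c == 0:
        return 0, b  # x**2 + b*x = x * (x + b)
    for q in range(-abs(c), abs(c) + 1):
        if q != 0 and c % q == 0:
            s = c // q
            if q + s == b:
                return q, s
    return None


print(factor_by_inspection(5, 6))  # (2, 3): x^2 + 5x + 6 = (x + 2)(x + 3)
print(factor_by_inspection(1, 1))  # None: x^2 + x + 1 has no rational roots
```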
For most students, factoring by inspection is the first method of solving quadratic equations to which they are exposed.: 202–207  If one is given a quadratic equation in the form x2 + bx + c = 0, the sought factorization has the form (x + q)(x + s), and one has to find two numbers q and s that add up to b and whose product is c (this is sometimes called "Vieta's rule" and is related to Vieta's formulas). As an example, x2 + 5x + 6 factors as (x + 3)(x + 2). The more general case where a does not equal 1 can require considerable effort in trial-and-error guess-and-check, assuming that the quadratic can be factored at all by inspection. Except for special cases such as where b = 0 or c = 0, factoring by inspection only works for quadratic equations that have rational roots. This means that the great majority of quadratic equations that arise in practical applications cannot be solved by factoring by inspection.: 207  === Completing the square === The process of completing the square makes use of the algebraic identity x 2 + 2 h x + h 2 = ( x + h ) 2 , {\displaystyle x^{2}+2hx+h^{2}=(x+h)^{2},} which represents a well-defined algorithm that can be used to solve any quadratic equation.: 207  Starting with a quadratic equation in standard form, ax2 + bx + c = 0: Divide each side by a, the coefficient of the squared term. Subtract the constant term c/a from both sides. Add the square of one-half of b/a, the coefficient of x, to both sides. This "completes the square", converting the left side into a perfect square. Write the left side as a square and simplify the right side if necessary. Produce two linear equations by equating the square root of the left side with the positive and negative square roots of the right side. Solve each of the two linear equations. We illustrate the use of this algorithm by solving 2x2 + 4x − 4 = 0: 2 x 2 + 4 x − 4 = 0 {\displaystyle 2x^{2}+4x-4=0} x 2 + 2 x − 2 = 0 {\displaystyle \ x^{2}+2x-2=0} x 2 + 2 x = 2 {\displaystyle \ x^{2}+2x=2} x 2 + 2 x + 1 = 2 + 1 {\displaystyle \ x^{2}+2x+1=2+1} ( x + 1 ) 2 = 3 {\displaystyle \left(x+1\right)^{2}=3} x + 1 = ± 3 {\displaystyle \ x+1=\pm {\sqrt {3}}} x = − 1 ± 3 {\displaystyle \ x=-1\pm {\sqrt {3}}} The plus–minus symbol "±" indicates that both x = − 1 + 3 {\textstyle x=-1+{\sqrt {3}}} and x = − 1 − 3 {\textstyle x=-1-{\sqrt {3}}} are solutions of the quadratic equation. === Quadratic formula and its derivation === Completing the square can be used to derive a general formula for solving quadratic equations, called the quadratic formula. The mathematical proof will now be briefly summarized. It can easily be seen, by polynomial expansion, that the following equation is equivalent to the quadratic equation: ( x + b 2 a ) 2 = b 2 − 4 a c 4 a 2 . {\displaystyle \left(x+{\frac {b}{2a}}\right)^{2}={\frac {b^{2}-4ac}{4a^{2}}}.} Taking the square root of both sides, and isolating x, gives: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} Some sources, particularly older ones, use alternative parameterizations of the quadratic equation such as ax2 + 2bx + c = 0 or ax2 − 2bx + c = 0, where b has magnitude one half of the more common one, possibly with opposite sign. These result in slightly different forms for the solution, but are otherwise equivalent. A number of alternative derivations can be found in the literature. 
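The completing-the-square steps above translate directly into code. The following sketch (illustrative function name; it assumes real coefficients, a ≠ 0, and a nonnegative discriminant so that the square root in step 5 is real) reproduces the worked example:

```python
import math

def solve_by_completing_the_square(a: float, b: float, c: float):
    """Solve a*x**2 + b*x + c = 0 by the completing-the-square algorithm."""
    p = b / a                    # step 1: divide each side by a
    q = c / a
    h = p / 2                    # one half of the coefficient of x
    rhs = -q + h * h             # steps 2-3: move constant, add h**2
    root = math.sqrt(rhs)        # steps 4-5: (x + h)**2 = rhs, take square roots
    return -h + root, -h - root  # steps 6-7: solve the two linear equations


print(solve_by_completing_the_square(2, 4, -4))
# (0.7320508..., -2.7320508...), i.e. -1 + sqrt(3) and -1 - sqrt(3)
```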
Such alternative proofs are simpler than the standard completing-the-square method, represent interesting applications of other frequently used techniques in algebra, or offer insight into other areas of mathematics. A lesser-known quadratic formula, as used in Muller's method, provides the same roots via the equation x = 2 c − b ± b 2 − 4 a c . {\displaystyle x={\frac {2c}{-b\pm {\sqrt {b^{2}-4ac}}}}.} This can be deduced from the standard quadratic formula by Vieta's formulas, which assert that the product of the roots is c/a. It also follows from dividing the quadratic equation by x 2 {\displaystyle x^{2}} , giving c x − 2 + b x − 1 + a = 0 , {\displaystyle cx^{-2}+bx^{-1}+a=0,} solving this for x − 1 , {\displaystyle x^{-1},} and then inverting. One property of this form is that it yields one valid root when a = 0, while the other root contains division by zero, because when a = 0, the quadratic equation becomes a linear equation, which has one root. By contrast, in this case, the more common formula has a division by zero for one root and an indeterminate form 0/0 for the other root. On the other hand, when c = 0, the more common formula yields two correct roots whereas this form yields the zero root and an indeterminate form 0/0. When neither a nor c is zero, the equality between the standard quadratic formula and Muller's method, 2 c − b − b 2 − 4 a c = − b + b 2 − 4 a c 2 a , {\displaystyle {\frac {2c}{-b-{\sqrt {b^{2}-4ac}}}}={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\,,} can be verified by cross multiplication, and similarly for the other choice of signs. === Reduced quadratic equation === It is sometimes convenient to reduce a quadratic equation so that its leading coefficient is one. This is done by dividing both sides by a, which is always possible since a is non-zero. This produces the reduced quadratic equation: x 2 + p x + q = 0 , {\displaystyle x^{2}+px+q=0,} where p = b/a and q = c/a. This monic polynomial equation has the same solutions as the original. The quadratic formula for the solutions of the reduced quadratic equation, written in terms of its coefficients, is x = − p 2 ± ( p 2 ) 2 − q . {\displaystyle x=-{\frac {p}{2}}\pm {\sqrt {\left({\frac {p}{2}}\right)^{2}-q}}\,.} === Discriminant === In the quadratic formula, the expression underneath the square root sign is called the discriminant of the quadratic equation, and is often represented using an upper case D or an upper case Greek delta: Δ = b 2 − 4 a c . {\displaystyle \Delta =b^{2}-4ac.} A quadratic equation with real coefficients can have either one or two distinct real roots, or two distinct complex roots. In this case the discriminant determines the number and nature of the roots. There are three cases: If the discriminant is positive, then there are two distinct roots − b + Δ 2 a and − b − Δ 2 a , {\displaystyle {\frac {-b+{\sqrt {\Delta }}}{2a}}\quad {\text{and}}\quad {\frac {-b-{\sqrt {\Delta }}}{2a}},} both of which are real numbers. For quadratic equations with rational coefficients, if the discriminant is a square number, then the roots are rational; in other cases they may be quadratic irrationals. If the discriminant is zero, then there is exactly one real root − b 2 a , {\displaystyle -{\frac {b}{2a}},} sometimes called a repeated or double root or two equal roots. If the discriminant is negative, then there are no real roots. 
Rather, there are two distinct (non-real) complex roots − b 2 a + i − Δ 2 a and − b 2 a − i − Δ 2 a , {\displaystyle -{\frac {b}{2a}}+i{\frac {\sqrt {-\Delta }}{2a}}\quad {\text{and}}\quad -{\frac {b}{2a}}-i{\frac {\sqrt {-\Delta }}{2a}},} which are complex conjugates of each other. In these expressions i is the imaginary unit. Thus the roots are distinct if and only if the discriminant is non-zero, and the roots are real if and only if the discriminant is non-negative. === Geometric interpretation === The function f(x) = ax2 + bx + c is a quadratic function. The graph of any quadratic function has the same general shape, which is called a parabola. The location and size of the parabola, and how it opens, depend on the values of a, b, and c. If a > 0, the parabola has a minimum point and opens upward. If a < 0, the parabola has a maximum point and opens downward. The extreme point of the parabola, whether minimum or maximum, corresponds to its vertex. The x-coordinate of the vertex is located at x = − b 2 a {\displaystyle \scriptstyle x={\tfrac {-b}{2a}}} , and the y-coordinate of the vertex may be found by substituting this x-value into the function. The y-intercept is located at the point (0, c). The solutions of the quadratic equation ax2 + bx + c = 0 correspond to the roots of the function f(x) = ax2 + bx + c, since they are the values of x for which f(x) = 0. If a, b, and c are real numbers and the domain of f is the set of real numbers, then the roots of f are exactly the x-coordinates of the points where the graph meets the x-axis. If the discriminant is positive, the graph meets the x-axis at two points; if zero, the graph is tangent to the x-axis at one point; and if negative, the graph does not meet the x-axis. === Quadratic factorization === The term x − r {\displaystyle x-r} is a factor of the polynomial a x 2 + b x + c {\displaystyle ax^{2}+bx+c} if and only if r is a root of the quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} It follows from the quadratic formula that a x 2 + b x + c = a ( x − − b + b 2 − 4 a c 2 a ) ( x − − b − b 2 − 4 a c 2 a ) . {\displaystyle ax^{2}+bx+c=a\left(x-{\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\right)\left(x-{\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}\right).} In the special case b2 = 4ac where the quadratic has only one distinct root (i.e. the discriminant is zero), the quadratic polynomial can be factored as a x 2 + b x + c = a ( x + b 2 a ) 2 . {\displaystyle ax^{2}+bx+c=a\left(x+{\frac {b}{2a}}\right)^{2}.} === Graphical solution === The solutions of the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} may be deduced from the graph of the quadratic function f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} which is a parabola. If the parabola intersects the x-axis in two points, there are two real roots, which are the x-coordinates of these two points (also called x-intercepts). If the parabola is tangent to the x-axis, there is a double root, which is the x-coordinate of the contact point between the graph and the x-axis. If the parabola does not intersect the x-axis, there are two complex conjugate roots. Although these roots cannot be visualized on the graph, their real and imaginary parts can be. Let h and k be respectively the x-coordinate and the y-coordinate of the vertex of the parabola (that is, the point with maximal or minimal y-coordinate). The quadratic function may be rewritten y = a ( x − h ) 2 + k . 
{\displaystyle y=a(x-h)^{2}+k.} Let d be the distance between the point of y-coordinate 2k on the axis of the parabola, and a point on the parabola with the same y-coordinate (see the figure; there are two such points, which give the same distance, because of the symmetry of the parabola). Then the real part of the roots is h, and their imaginary parts are ±d. That is, the roots are h + i d and h − i d , {\displaystyle h+id\quad {\text{and}}\quad h-id,} or in the case of the example of the figure 5 + 3 i and 5 − 3 i . {\displaystyle 5+3i\quad {\text{and}}\quad 5-3i.} === Avoiding loss of significance === Although the quadratic formula provides an exact solution, the result is not exact if real numbers are approximated during the computation, as is usual in numerical analysis, where real numbers are approximated by floating-point numbers (called "reals" in many programming languages). In this context, the quadratic formula is not completely stable. This occurs when the roots have different orders of magnitude, or, equivalently, when b2 and b2 − 4ac are close in magnitude. In this case, the subtraction of two nearly equal numbers will cause loss of significance or catastrophic cancellation in the smaller root. To avoid this, the root that is smaller in magnitude, r, can be computed as ( c / a ) / R {\displaystyle (c/a)/R} where R is the root that is bigger in magnitude. This is equivalent to using the formula x = − 2 c b ± b 2 − 4 a c {\displaystyle x={\frac {-2c}{b\pm {\sqrt {b^{2}-4ac}}}}} using the plus sign if b > 0 {\displaystyle b>0} and the minus sign if b < 0. {\displaystyle b<0.} A second form of cancellation can occur between the terms b2 and 4ac of the discriminant, that is, when the two roots are very close. This can lead to loss of up to half of the correct significant figures in the roots. == Examples and applications == The golden ratio is found as the positive solution of the quadratic equation x 2 − x − 1 = 0. {\displaystyle x^{2}-x-1=0.} The equations of the circle and the other conic sections—ellipses, parabolas, and hyperbolas—are quadratic equations in two variables. Given the cosine or sine of an angle, finding the cosine or sine of the angle that is half as large involves solving a quadratic equation. The process of simplifying expressions involving the square root of an expression involving the square root of another expression involves finding the two solutions of a quadratic equation. Descartes' theorem states that for every four kissing (mutually tangent) circles, their radii satisfy a particular quadratic equation. The equation given by Fuss' theorem, giving the relation among the radius of a bicentric quadrilateral's inscribed circle, the radius of its circumscribed circle, and the distance between the centers of those circles, can be expressed as a quadratic equation for which the distance between the two circles' centers in terms of their radii is one of the solutions. The other solution of the same equation in terms of the relevant radii gives the distance between the circumscribed circle's center and the center of the excircle of an ex-tangential quadrilateral. Critical points of a cubic function and inflection points of a quartic function are found by solving a quadratic equation. 
In physics, for motion with constant acceleration a {\displaystyle a} , the displacement or position x {\displaystyle x} of a moving body can be expressed as a quadratic function of time t {\displaystyle t} given the initial position x 0 {\displaystyle x_{0}} and initial velocity v 0 {\displaystyle v_{0}} : x = x 0 + v 0 t + 1 2 a t 2 {\textstyle x=x_{0}+v_{0}t+{\frac {1}{2}}at^{2}} . In chemistry, the pH of a solution of a weak acid can be calculated from the negative base-10 logarithm of the positive root of a quadratic equation in terms of the acidity constant and the analytical concentration of the acid. == History == Babylonian mathematicians, as early as 2000 BC (as displayed on Old Babylonian clay tablets), could solve problems relating the areas and sides of rectangles. There is evidence dating this algorithm as far back as the Third Dynasty of Ur. In modern notation, the problems typically involved solving a pair of simultaneous equations of the form: x + y = p , x y = q , {\displaystyle x+y=p,\ \ xy=q,} which is equivalent to the statement that x and y are the roots of the equation:: 86  z 2 + q = p z . {\displaystyle z^{2}+q=pz.} The steps given by Babylonian scribes for solving the above rectangle problem, in terms of x and y, were as follows: Compute half of p. Square the result. Subtract q. Find the (positive) square root using a table of squares. Add together the results of steps (1) and (4) to give x. In modern notation this means calculating x = p 2 + ( p 2 ) 2 − q {\displaystyle x={\frac {p}{2}}+{\sqrt {\left({\frac {p}{2}}\right)^{2}-q}}} , which is equivalent to the modern-day quadratic formula for the larger real root (if any) x = − b + b 2 − 4 a c 2 a {\displaystyle x={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}} with a = 1, b = −p, and c = q. Geometric methods were used to solve quadratic equations in Babylonia, Egypt, Greece, China, and India. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation. Babylonian mathematicians from circa 400 BC and Chinese mathematicians from circa 200 BC used geometric methods of dissection to solve quadratic equations with positive roots. Rules for quadratic equations were given in The Nine Chapters on the Mathematical Art, a Chinese treatise on mathematics. These early geometric methods do not appear to have had a general formula. Euclid, the Greek mathematician, produced a more abstract geometrical method around 300 BC. With a purely geometric approach, Pythagoras and Euclid created a general procedure to find solutions of the quadratic equation. In his work Arithmetica, the Greek mathematician Diophantus solved the quadratic equation, but gave only one root, even when both roots were positive. In 628 AD, Brahmagupta, an Indian mathematician, gave in his book Brāhmasphuṭasiddhānta the first explicit (although still not completely general) solution of the quadratic equation ax2 + bx = c as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." This is equivalent to x = 4 a c + b 2 − b 2 a . 
{\displaystyle x={\frac {{\sqrt {4ac+b^{2}}}-b}{2a}}.} The Bakhshali Manuscript, written in India in the 7th century AD, contained an algebraic formula for solving quadratic equations, as well as linear indeterminate equations (originally of type ax/c = y). Muhammad ibn Musa al-Khwarizmi (9th century) developed a set of formulas that worked for positive solutions. Al-Khwarizmi goes further in providing a full solution to the general quadratic equation, accepting one or two numerical answers for every quadratic equation, while providing geometric proofs in the process. He also described the method of completing the square and recognized that the discriminant must be positive,: 230  which was proven by his contemporary 'Abd al-Hamīd ibn Turk (Central Asia, 9th century), who gave geometric figures to prove that if the discriminant is negative, a quadratic equation has no solution.: 234  While al-Khwarizmi himself did not accept negative solutions, later Islamic mathematicians accepted negative solutions,: 191  as well as irrational numbers as solutions. Abū Kāmil Shujā ibn Aslam (Egypt, 10th century) in particular was the first to accept irrational numbers (often in the form of a square root, cube root or fourth root) as solutions to quadratic equations or as coefficients in an equation. The 9th-century Indian mathematician Sridhara wrote down rules for solving quadratic equations. The Jewish mathematician Abraham bar Hiyya Ha-Nasi (12th century, Spain) authored the first European book to include the full solution to the general quadratic equation. His solution was largely based on Al-Khwarizmi's work. The writing of the Chinese mathematician Yang Hui (1238–1298 AD) is the first known one in which quadratic equations with negative coefficients of x appear, although he attributes this to the earlier Liu Yi. By 1545, Gerolamo Cardano had compiled the works related to quadratic equations. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published La Géométrie, containing the quadratic formula in the form we know today. == Advanced topics == === Alternative methods of root calculation === ==== Vieta's formulas ==== Vieta's formulas (named after François Viète) are the relations x 1 + x 2 = − b a , x 1 x 2 = c a {\displaystyle x_{1}+x_{2}=-{\frac {b}{a}},\quad x_{1}x_{2}={\frac {c}{a}}} between the roots of a quadratic polynomial and its coefficients. They result from comparing term by term the relation ( x − x 1 ) ( x − x 2 ) = x 2 − ( x 1 + x 2 ) x + x 1 x 2 = 0 {\displaystyle \left(x-x_{1}\right)\left(x-x_{2}\right)=x^{2}-\left(x_{1}+x_{2}\right)x+x_{1}x_{2}=0} with the equation x 2 + b a x + c a = 0. {\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {c}{a}}=0.} Vieta's first formula is useful for graphing a quadratic function. Since the graph is symmetric with respect to a vertical line through the vertex, the vertex's x-coordinate is located at the average of the roots (or intercepts). Thus the x-coordinate of the vertex is x V = x 1 + x 2 2 = − b 2 a . {\displaystyle x_{V}={\frac {x_{1}+x_{2}}{2}}=-{\frac {b}{2a}}.} The y-coordinate can be obtained by substituting the above result into the given quadratic equation, giving y V = − b 2 4 a + c = − b 2 − 4 a c 4 a . {\displaystyle y_{V}=-{\frac {b^{2}}{4a}}+c=-{\frac {b^{2}-4ac}{4a}}.} Also, these formulas for the vertex can be deduced directly from the formula (see Completing the square) a x 2 + b x + c = a ( x + b 2 a ) 2 − b 2 − 4 a c 4 a . 
{\displaystyle ax^{2}+bx+c=a\left(x+{\frac {b}{2a}}\right)^{2}-{\frac {b^{2}-4ac}{4a}}.} For numerical computation, Vieta's formulas provide a useful method for finding the roots of a quadratic equation in the case where one root is much smaller than the other. If |x2| << |x1|, then x1 + x2 ≈ x1, and we have the estimate: x 1 ≈ − b a . {\displaystyle x_{1}\approx -{\frac {b}{a}}.} Vieta's second formula then provides: x 2 = c a x 1 ≈ − c b . {\displaystyle x_{2}={\frac {c}{ax_{1}}}\approx -{\frac {c}{b}}.} These formulas are much easier to evaluate than the quadratic formula under the condition of one large and one small root, because the quadratic formula evaluates the small root as the difference of two very nearly equal numbers (the case of large b), which causes round-off error in a numerical evaluation. The figure shows the difference between (i) a direct evaluation using the quadratic formula (accurate when the roots are near each other in value) and (ii) an evaluation based upon the above approximation of Vieta's formulas (accurate when the roots are widely spaced). As the linear coefficient b increases, initially the quadratic formula is accurate, and the approximate formula improves in accuracy, leading to a smaller difference between the methods as b increases. However, at some point the quadratic formula begins to lose accuracy because of round-off error, while the approximate method continues to improve. Consequently, the difference between the methods begins to increase as the quadratic formula becomes worse and worse. This situation arises commonly in amplifier design, where widely separated roots are desired to ensure stable operation (see Step response). ==== Trigonometric solution ==== In the days before calculators, people would use mathematical tables—lists of numbers showing the results of calculation with varying arguments—to simplify and speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks. Specialized tables were published for applications such as astronomy, celestial navigation and statistics. Methods of numerical approximation existed, called prosthaphaeresis, that offered shortcuts around time-consuming operations such as multiplication and taking powers and roots. Astronomers, especially, were concerned with methods that could speed up the long series of computations involved in celestial mechanics calculations. It is within this context that we may understand the development of means of solving quadratic equations with the aid of trigonometric substitution. Consider the following alternate form of the quadratic equation, a x 2 + b x ± c = 0 , {\displaystyle ax^{2}+bx\pm c=0,} [1] where the sign of the ± symbol is chosen so that a and c may both be positive. By substituting x = c a tan ⁡ θ {\displaystyle x={\sqrt {\tfrac {c}{a}}}\tan \theta } [2] and then multiplying through by cos2(θ) / c, we obtain sin 2 ⁡ θ + b a c sin ⁡ θ cos ⁡ θ ± cos 2 ⁡ θ = 0. {\displaystyle \sin ^{2}\theta +{\frac {b}{\sqrt {ac}}}\sin \theta \cos \theta \pm \cos ^{2}\theta =0.} [3] Introducing functions of 2θ and rearranging, we obtain tan ⁡ 2 θ n = + 2 a c b {\displaystyle \tan 2\theta _{n}=+{\frac {2{\sqrt {ac}}}{b}}} [4] and sin ⁡ 2 θ p = − 2 a c b , {\displaystyle \sin 2\theta _{p}=-{\frac {2{\sqrt {ac}}}{b}},} [5] where the subscripts n and p correspond, respectively, to the use of a negative or positive sign in equation [1]. Substituting the two values of θn or θp found from equations [4] or [5] into [2] gives the required roots of [1]. Complex roots occur in the solution based on equation [5] if the absolute value of sin 2θp exceeds unity. The amount of effort involved in solving quadratic equations using this mixed trigonometric and logarithmic table look-up strategy was two-thirds the effort using logarithmic tables alone. Calculating complex roots would require using a different trigonometric form. 
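The numerical point made in § Avoiding loss of significance and in the Vieta approximation above can be demonstrated directly in floating-point arithmetic. The following sketch (illustrative names roots_naive and roots_stable; real coefficients and a nonnegative discriminant are assumed) computes the larger-magnitude root without cancellation and then recovers the smaller one from Vieta's product formula x1·x2 = c/a:

```python
import math

def roots_naive(a, b, c):
    """Textbook quadratic formula; suffers cancellation in the small root."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    """Cancellation-free: pick the sign that avoids subtracting near-equals,
    then get the other root from the product of the roots, c / a."""
    d = math.sqrt(b * b - 4 * a * c)
    big = (-b - d) / (2 * a) if b > 0 else (-b + d) / (2 * a)
    return c / (a * big), big

a, b, c = 1.0, 1e8, 1.0  # widely separated roots, about -1e-8 and -1e8
print(roots_naive(a, b, c))   # small root is visibly corrupted by round-off
print(roots_stable(a, b, c))  # small root close to -1e-8, as Vieta predicts
```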
To illustrate the table-based trigonometric method, let us assume we had available seven-place logarithm and trigonometric tables, and wished to solve the following to six-significant-figure accuracy: 4.16130 x 2 + 9.15933 x − 11.4207 = 0 {\displaystyle 4.16130x^{2}+9.15933x-11.4207=0} A seven-place lookup table might have only 100,000 entries, and computing intermediate results to seven places would generally require interpolation between adjacent entries. log ⁡ a = 0.6192290 , log ⁡ b = 0.9618637 , log ⁡ c = 1.0576927 {\displaystyle \log a=0.6192290,\log b=0.9618637,\log c=1.0576927} 2 a c / b = 2 × 10 ( 0.6192290 + 1.0576927 ) / 2 − 0.9618637 = 1.505314 {\displaystyle 2{\sqrt {ac}}/b=2\times 10^{(0.6192290+1.0576927)/2-0.9618637}=1.505314} θ = ( tan − 1 ⁡ 1.505314 ) / 2 = 28.20169 ∘ or − 61.79831 ∘ {\displaystyle \theta =(\tan ^{-1}1.505314)/2=28.20169^{\circ }{\text{ or }}-61.79831^{\circ }} log ⁡ | tan ⁡ θ | = − 0.2706462 or 0.2706462 {\displaystyle \log |\tan \theta |=-0.2706462{\text{ or }}0.2706462} log ⁡ c / a = ( 1.0576927 − 0.6192290 ) / 2 = 0.2192318 {\displaystyle \log {\textstyle {\sqrt {c/a}}}=(1.0576927-0.6192290)/2=0.2192318} x 1 = 10 0.2192318 − 0.2706462 = 0.888353 {\displaystyle x_{1}=10^{0.2192318-0.2706462}=0.888353} (rounded to six significant figures) x 2 = − 10 0.2192318 + 0.2706462 = − 3.08943 {\displaystyle x_{2}=-10^{0.2192318+0.2706462}=-3.08943} ==== Solution for complex roots in polar coordinates ==== If the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} with real coefficients has two complex roots—the case where b 2 − 4 a c < 0 , {\displaystyle b^{2}-4ac<0,} requiring a and c to have the same sign as each other—then the solutions for the roots can be expressed in polar form as x 1 , x 2 = r ( cos ⁡ θ ± i sin ⁡ θ ) , {\displaystyle x_{1},\,x_{2}=r(\cos \theta \pm i\sin \theta ),} where r = c a {\displaystyle r={\sqrt {\tfrac {c}{a}}}} and θ = cos − 1 ⁡ ( − b 2 a c ) . {\displaystyle \theta =\cos ^{-1}\left({\tfrac {-b}{2{\sqrt {ac}}}}\right).} ==== Geometric solution ==== The quadratic equation may be solved geometrically in a number of ways. One way is via Lill's method. The three coefficients a, b, c are drawn with right angles between them as in SA, AB, and BC in Figure 6. A circle is drawn with the start and end point SC as a diameter. If this cuts the middle line AB of the three then the equation has a solution, and the solutions are given by the negative of the distance along this line from A divided by the first coefficient a or SA. If a is 1, the solutions may be read off directly. Thus the solutions in the diagram are −AX1/SA and −AX2/SA. The Carlyle circle, named after Thomas Carlyle, has the property that the solutions of the quadratic equation are the horizontal coordinates of the intersections of the circle with the horizontal axis. Carlyle circles have been used to develop ruler-and-compass constructions of regular polygons. === Generalization of quadratic equation === The formula and its derivation remain correct if the coefficients a, b and c are complex numbers, or more generally members of any field whose characteristic is not 2. (In a field of characteristic 2, the element 2a is zero and it is impossible to divide by it.) The symbol ± b 2 − 4 a c {\displaystyle \pm {\sqrt {b^{2}-4ac}}} in the formula should be understood as "either of the two elements whose square is b2 − 4ac, if such elements exist". In some fields, some elements have no square roots and some have two; only zero has just one square root, except in fields of characteristic 2. 
Even if a field does not contain a square root of some number, there is always a quadratic extension field which does, so the quadratic formula will always make sense as a formula in that extension field. ==== Characteristic 2 ==== In a field of characteristic 2, the quadratic formula, which relies on 2 being a unit, does not hold. Consider the monic quadratic polynomial x 2 + b x + c {\displaystyle x^{2}+bx+c} over a field of characteristic 2. If b = 0, then the solution reduces to extracting a square root, so the solution is x = c {\displaystyle x={\sqrt {c}}} and there is only one root since − c = − c + 2 c = c . {\displaystyle -{\sqrt {c}}=-{\sqrt {c}}+2{\sqrt {c}}={\sqrt {c}}.} In summary, x 2 + c = ( x + c ) 2 . {\displaystyle \displaystyle x^{2}+c=(x+{\sqrt {c}})^{2}.} See quadratic residue for more information about extracting square roots in finite fields. In the case that b ≠ 0, there are two distinct roots, but if the polynomial is irreducible, they cannot be expressed in terms of square roots of numbers in the coefficient field. Instead, define the 2-root R(c) of c to be a root of the polynomial x2 + x + c, an element of the splitting field of that polynomial. One verifies that R(c) + 1 is also a root. In terms of the 2-root operation, the two roots of the (non-monic) quadratic ax2 + bx + c are b a R ( a c b 2 ) {\displaystyle {\frac {b}{a}}R\left({\frac {ac}{b^{2}}}\right)} and b a ( R ( a c b 2 ) + 1 ) . {\displaystyle {\frac {b}{a}}\left(R\left({\frac {ac}{b^{2}}}\right)+1\right).} For example, let a denote a multiplicative generator of the group of units of F4, the Galois field of order four (thus a and a + 1 are roots of x2 + x + 1 over F4). Because (a + 1)2 = a, a + 1 is the unique solution of the quadratic equation x2 + a = 0. On the other hand, the polynomial x2 + ax + 1 is irreducible over F4, but it splits over F16, where it has the two roots ab and ab + a, where b is a root of x2 + x + a in F16. This is a special case of Artin–Schreier theory. == See also == Solving quadratic equations with continued fractions Linear equation Cubic function Quartic equation Quintic equation Fundamental theorem of algebra == References == == External links == "Quadratic equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Quadratic equations". MathWorld. 101 uses of a quadratic equation Archived 2007-11-10 at the Wayback Machine 101 uses of a quadratic equation: Part II Archived 2007-10-22 at the Wayback Machine
Wikipedia/Quadratic_equation
In mathematics, homotopy theory is a systematic study of situations in which maps can come with homotopies between them. It originated as a topic in algebraic topology, but nowadays is studied as an independent discipline. == Applications to other fields of mathematics == Besides algebraic topology, the theory has also been used in other areas of mathematics such as: Algebraic geometry (e.g., A1 homotopy theory) Category theory (specifically the study of higher categories) == Concepts == === Spaces and maps === In homotopy theory and algebraic topology, the word "space" denotes a topological space. In order to avoid pathologies, one rarely works with arbitrary spaces; instead, one requires spaces to meet extra constraints, such as being compactly generated weak Hausdorff or a CW complex. In the same vein as above, a "map" is a continuous function, possibly with some extra constraints. Often, one works with a pointed space—that is, a space with a "distinguished point", called a basepoint. A pointed map is then a map which preserves basepoints; that is, it sends the basepoint of the domain to that of the codomain. In contrast, a free map is one which need not preserve basepoints. The Cartesian product of two pointed spaces X , Y {\displaystyle X,Y} is not naturally pointed. A substitute is the smash product X ∧ Y {\displaystyle X\wedge Y} which is characterized by the adjoint relation Map ⁡ ( X ∧ Y , Z ) = Map ⁡ ( X , Map ⁡ ( Y , Z ) ) {\displaystyle \operatorname {Map} (X\wedge Y,Z)=\operatorname {Map} (X,\operatorname {Map} (Y,Z))} ; that is, the smash product is an analog of the tensor product in abstract algebra (see tensor-hom adjunction). Explicitly, X ∧ Y {\displaystyle X\wedge Y} is the quotient of X × Y {\displaystyle X\times Y} by the wedge sum X ∨ Y {\displaystyle X\vee Y} . === Homotopy === Let I denote the unit interval [ 0 , 1 ] {\displaystyle [0,1]} . A map h : X × I → Y {\displaystyle h:X\times I\to Y} is called a homotopy from the map h 0 {\displaystyle h_{0}} to the map h 1 {\displaystyle h_{1}} , where h t ( x ) = h ( x , t ) {\displaystyle h_{t}(x)=h(x,t)} . Intuitively, we may think of h {\displaystyle h} as a path from the map h 0 {\displaystyle h_{0}} to the map h 1 {\displaystyle h_{1}} . Indeed, the relation of being homotopic can be shown to be an equivalence relation on maps. When X, Y are pointed spaces, the maps h t {\displaystyle h_{t}} are required to preserve the basepoint, and the homotopy h {\displaystyle h} is called a based homotopy. A based homotopy is the same as a (based) map X ∧ I + → Y {\displaystyle X\wedge I_{+}\to Y} where I + {\displaystyle I_{+}} is I {\displaystyle I} together with a disjoint basepoint. Given a pointed space X and an integer n ≥ 0 {\displaystyle n\geq 0} , let π n X = [ S n , X ] {\displaystyle \pi _{n}X=[S^{n},X]} be the set of homotopy classes of based maps S n → X {\displaystyle S^{n}\to X} from a (pointed) n-sphere S n {\displaystyle S^{n}} to X. As it turns out, for n ≥ 1 {\displaystyle n\geq 1} , the π n X {\displaystyle \pi _{n}X} are groups called homotopy groups; in particular, π 1 X {\displaystyle \pi _{1}X} is called the fundamental group of X; for n ≥ 2 {\displaystyle n\geq 2} , the π n X {\displaystyle \pi _{n}X} are abelian groups by the Eckmann–Hilton argument; and π 0 X {\displaystyle \pi _{0}X} can be identified with the set of path-connected components in X {\displaystyle X} . Every group is the fundamental group of some space. 
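For instance (standard computations quoted here for illustration; they are not derived in this article), the homotopy groups of spheres in low degrees include:

```latex
\pi_1(S^1) \cong \mathbb{Z}, \qquad \pi_n(S^1) = 0 \quad (n \geq 2), \qquad
\pi_n(S^n) \cong \mathbb{Z}, \qquad \pi_k(S^n) = 0 \quad (k < n).
```

Already for k > n the groups π_k(S^n) become highly nontrivial, which is one motivation for the stable theory mentioned later.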
A map f {\displaystyle f} is called a homotopy equivalence if there is another map g {\displaystyle g} such that f ∘ g {\displaystyle f\circ g} and g ∘ f {\displaystyle g\circ f} are both homotopic to the identities. Two spaces are said to be homotopy equivalent if there is a homotopy equivalence between them. A homotopy equivalence class of spaces is then called a homotopy type. There is a weaker notion: a map f : X → Y {\displaystyle f:X\to Y} is said to be a weak homotopy equivalence if f ∗ : π n ( X ) → π n ( Y ) {\displaystyle f_{*}:\pi _{n}(X)\to \pi _{n}(Y)} is an isomorphism for each n ≥ 0 {\displaystyle n\geq 0} and each choice of a base point. A homotopy equivalence is a weak homotopy equivalence, but the converse need not be true. Through the adjunction Map ⁡ ( X × I , Y ) = Map ⁡ ( X , Map ⁡ ( I , Y ) ) , h ↦ ( x ↦ h ( x , ⋅ ) ) {\displaystyle \operatorname {Map} (X\times I,Y)=\operatorname {Map} (X,\operatorname {Map} (I,Y)),\,\,h\mapsto (x\mapsto h(x,\cdot ))} , a homotopy h : X × I → Y {\displaystyle h:X\times I\to Y} is sometimes viewed as a map X → Y I = Map ⁡ ( I , Y ) {\displaystyle X\to Y^{I}=\operatorname {Map} (I,Y)} . === CW complex === A CW complex is a space that has a filtration X ⊃ ⋯ ⊃ X n ⊃ X n − 1 ⊃ ⋯ ⊃ X 0 {\displaystyle X\supset \cdots \supset X^{n}\supset X^{n-1}\supset \cdots \supset X^{0}} whose union is X {\displaystyle X} and such that X 0 {\displaystyle X^{0}} is a discrete space, called the set of 0-cells (vertices) in X {\displaystyle X} . Each X n {\displaystyle X^{n}} is obtained by attaching several n-disks, or n-cells, to X n − 1 {\displaystyle X^{n-1}} via maps S n − 1 → X n − 1 {\displaystyle S^{n-1}\to X^{n-1}} ; i.e., the boundary of an n-disk is identified with the image of S n − 1 {\displaystyle S^{n-1}} in X n − 1 {\displaystyle X^{n-1}} . A subset U {\displaystyle U} is open if and only if U ∩ X n {\displaystyle U\cap X^{n}} is open for each n {\displaystyle n} . For example, a sphere S n {\displaystyle S^{n}} has two cells: one 0-cell and one n {\displaystyle n} -cell, since S n {\displaystyle S^{n}} can be obtained by collapsing the boundary S n − 1 {\displaystyle S^{n-1}} of the n-disk to a point. In general, every manifold has the homotopy type of a CW complex; in fact, Morse theory implies that a compact manifold has the homotopy type of a finite CW complex. Remarkably, Whitehead's theorem says that for CW complexes, a weak homotopy equivalence and a homotopy equivalence are the same thing. Another important result is the CW approximation theorem. First, the homotopy category of spaces is the category where an object is a space but a morphism is the homotopy class of a map. Then the theorem states that every space admits a CW approximation: a CW complex together with a weak homotopy equivalence from it to the given space, and this assignment can be made functorial on the homotopy category. Explicitly, the approximation functor can be defined as the composition of the singular simplicial set functor S ∗ {\displaystyle S_{*}} followed by the geometric realization functor; see § Simplicial set. The above theorem justifies a common habit of working only with CW complexes. For example, given a space X {\displaystyle X} , one can simply define the homology of X {\displaystyle X} to be the homology of a CW approximation of X {\displaystyle X} (the cell structure of a CW complex determines a natural homology theory, cellular homology, which can be taken to be the homology of the complex). 
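As a small illustration of how a cell structure determines homology, the following sketch (illustrative helper names, rational coefficients, and boundary matrices supplied by hand; it is not a general-purpose cellular homology implementation) computes Betti numbers from the ranks of cellular boundary matrices, using b_k = (number of k-cells) − rank(∂_k) − rank(∂_{k+1}):

```python
import numpy as np

def rank(m: np.ndarray) -> int:
    """Rank of a matrix, with the convention that an empty matrix has rank 0."""
    return int(np.linalg.matrix_rank(m)) if m.size else 0

def betti(dims, d):
    """dims[k] = number of k-cells; d[k] = matrix of the boundary map from
    k-cells to (k-1)-cells (d[0] is unused)."""
    out = []
    for k, n in enumerate(dims):
        r_in = rank(d[k]) if k >= 1 else 0
        r_out = rank(d[k + 1]) if k + 1 < len(dims) else 0
        out.append(n - r_in - r_out)
    return out

# The sphere S^2 with one 0-cell and one 2-cell: all boundary maps vanish.
print(betti([1, 0, 1], [None, np.zeros((1, 0)), np.zeros((0, 1))]))  # [1, 0, 1]

# The torus: one 0-cell, two 1-cells, one 2-cell; both boundary maps are zero
# (the attaching word a b a^-1 b^-1 cancels in homology).
print(betti([1, 2, 1], [None, np.zeros((1, 2)), np.zeros((2, 1))]))  # [1, 2, 1]
```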
=== Cofibration and fibration === A map f : A → X {\displaystyle f:A\to X} is called a cofibration if given: A map h 0 : X → Z {\displaystyle h_{0}:X\to Z} , and A homotopy g t : A → Z {\displaystyle g_{t}:A\to Z} such that h 0 ∘ f = g 0 {\displaystyle h_{0}\circ f=g_{0}} , there exists a homotopy h t : X → Z {\displaystyle h_{t}:X\to Z} that extends h 0 {\displaystyle h_{0}} and such that h t ∘ f = g t {\displaystyle h_{t}\circ f=g_{t}} . An example is a neighborhood deformation retract; that is, X {\displaystyle X} contains a mapping cylinder neighborhood of a closed subspace A {\displaystyle A} and f {\displaystyle f} is the inclusion (e.g., a tubular neighborhood of a closed submanifold). In fact, a cofibration can be characterized as a neighborhood deformation retract pair. Another basic example is a CW pair ( X , A ) {\displaystyle (X,A)} ; many authors work only with CW complexes, and the notion of a cofibration there is then often implicit. A fibration in the sense of Hurewicz is the dual notion of a cofibration: that is, a map p : X → B {\displaystyle p:X\to B} is a fibration if given (1) a map h 0 : Z → X {\displaystyle h_{0}:Z\to X} and (2) a homotopy g t : Z → B {\displaystyle g_{t}:Z\to B} such that p ∘ h 0 = g 0 {\displaystyle p\circ h_{0}=g_{0}} , there exists a homotopy h t : Z → X {\displaystyle h_{t}:Z\to X} that extends h 0 {\displaystyle h_{0}} and such that p ∘ h t = g t {\displaystyle p\circ h_{t}=g_{t}} . While a cofibration is characterized by the existence of a retract, a fibration is characterized by the existence of a section called the path lifting, as follows. Let p ′ : N p → B I {\displaystyle p':Np\to B^{I}} be the pull-back of a map p : E → B {\displaystyle p:E\to B} along χ ↦ χ ( 0 ) : B I → B {\displaystyle \chi \mapsto \chi (0):B^{I}\to B} , called the mapping path space of p {\displaystyle p} . Viewing p ′ {\displaystyle p'} as a homotopy N p × I → B {\displaystyle Np\times I\to B} (see § Homotopy), if p {\displaystyle p} is a fibration, then p ′ {\displaystyle p'} gives a homotopy s : N p → E I {\displaystyle s:Np\to E^{I}} such that s ( e , χ ) ( 0 ) = e , ( p I ∘ s ) ( e , χ ) = χ {\displaystyle s(e,\chi )(0)=e,\,(p^{I}\circ s)(e,\chi )=\chi } where p I : E I → B I {\displaystyle p^{I}:E^{I}\to B^{I}} is given by p {\displaystyle p} . This s {\displaystyle s} is called the path lifting associated to p {\displaystyle p} . Conversely, if there is a path lifting s {\displaystyle s} , then p {\displaystyle p} is a fibration, as the required homotopy is obtained via s {\displaystyle s} . A basic example of a fibration is a covering map, as it comes with a unique path lifting. If E {\displaystyle E} is a principal G-bundle over a paracompact space, that is, a space with a free (topological) action of a (topological) group with orbit space the base, then the projection map p : E → X {\displaystyle p:E\to X} is a fibration, because a Hurewicz fibration can be checked locally on a paracompact space. While a cofibration is injective with closed image, a fibration need not be surjective. There are also based versions of cofibrations and fibrations (namely, the maps are required to be based). === Lifting property === A pair of maps i : A → X {\displaystyle i:A\to X} and p : E → B {\displaystyle p:E\to B} is said to satisfy the lifting property if for each commutative square diagram there is a map λ {\displaystyle \lambda } that makes the above diagram still commute. (The notion originates in the theory of model categories.) 
Let c {\displaystyle {\mathfrak {c}}} be a class of maps. Then a map p : E → B {\displaystyle p:E\to B} is said to satisfy the right lifting property or the RLP if p {\displaystyle p} satisfies the above lifting property for each i {\displaystyle i} in c {\displaystyle {\mathfrak {c}}} . Similarly, a map i : A → X {\displaystyle i:A\to X} is said to satisfy the left lifting property or the LLP if it satisfies the lifting property for each p {\displaystyle p} in c {\displaystyle {\mathfrak {c}}} . For example, a Hurewicz fibration is exactly a map p : E → B {\displaystyle p:E\to B} that satisfies the RLP for the inclusions i 0 : A → A × I {\displaystyle i_{0}:A\to A\times I} . A Serre fibration is a map satisfying the RLP for the inclusions i : S n − 1 → D n {\displaystyle i:S^{n-1}\to D^{n}} where S − 1 {\displaystyle S^{-1}} is the empty set. A Hurewicz fibration is a Serre fibration, and the converse holds for CW complexes. On the other hand, a cofibration is exactly a map satisfying the LLP for the evaluation maps p : B I → B {\displaystyle p:B^{I}\to B} at 0 {\displaystyle 0} . === Loop and suspension === On the category of pointed spaces, there are two important functors: the loop functor Ω {\displaystyle \Omega } and the (reduced) suspension functor Σ {\displaystyle \Sigma } , which are in an adjoint relation. Precisely, they are defined as Ω X = Map ⁡ ( S 1 , X ) {\displaystyle \Omega X=\operatorname {Map} (S^{1},X)} , and Σ X = X ∧ S 1 {\displaystyle \Sigma X=X\wedge S^{1}} . Because of the adjoint relation between a smash product and a mapping space, we have: Map ⁡ ( Σ X , Y ) = Map ⁡ ( X , Ω Y ) . {\displaystyle \operatorname {Map} (\Sigma X,Y)=\operatorname {Map} (X,\Omega Y).} These functors are used to construct fiber sequences and cofiber sequences. Namely, if f : X → Y {\displaystyle f:X\to Y} is a map, the fiber sequence generated by f {\displaystyle f} is the exact sequence ⋯ → Ω 2 F f → Ω 2 X → Ω 2 Y → Ω F f → Ω X → Ω Y → F f → X → Y {\displaystyle \cdots \to \Omega ^{2}Ff\to \Omega ^{2}X\to \Omega ^{2}Y\to \Omega Ff\to \Omega X\to \Omega Y\to Ff\to X\to Y} where F f {\displaystyle Ff} is the homotopy fiber of f {\displaystyle f} ; i.e., a fiber obtained after replacing f {\displaystyle f} by a (based) fibration. The cofibration sequence generated by f {\displaystyle f} is X → Y → C f → Σ X → ⋯ , {\displaystyle X\to Y\to Cf\to \Sigma X\to \cdots ,} where C f {\displaystyle Cf} is the homotopy cofiber of f {\displaystyle f} , constructed like a homotopy fiber (using a quotient instead of a fiber). The functors Ω , Σ {\displaystyle \Omega ,\Sigma } restrict to the category of CW complexes in the following weak sense: a theorem of Milnor says that if X {\displaystyle X} has the homotopy type of a CW complex, then so does its loop space Ω X {\displaystyle \Omega X} . === Classifying spaces and homotopy operations === Given a topological group G, the classifying space for principal G-bundles (unique up to equivalence) is a space B G {\displaystyle BG} such that, for each space X, [ X , B G ] = {\displaystyle [X,BG]=} {principal G-bundle on X} / ~ , [ f ] ↦ [ f ∗ E G ] {\displaystyle ,\,\,[f]\mapsto [f^{*}EG]} where the left-hand side is the set of homotopy classes of maps X → B G {\displaystyle X\to BG} , ~ denotes isomorphism of bundles, and the bijection is given by pulling back the distinguished bundle E G {\displaystyle EG} on B G {\displaystyle BG} (called the universal bundle) along a map X → B G {\displaystyle X\to BG} . 
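A standard example (a well-known fact stated here for illustration, not taken from the surrounding text): for the circle group G = U(1), the classifying space may be taken to be the infinite complex projective space, which is an Eilenberg–MacLane space K(Z, 2), so for a CW complex X the classification reduces to ordinary cohomology:

```latex
[X, B\mathrm{U}(1)] \;=\; [X, \mathbb{CP}^{\infty}] \;=\; [X, K(\mathbb{Z}, 2)]
\;\cong\; H^{2}(X; \mathbb{Z}).
```

Concretely, isomorphism classes of principal U(1)-bundles (equivalently, complex line bundles) on X are classified by their first Chern class.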
Brown's representability theorem guarantees the existence of classifying spaces. === Spectrum and generalized cohomology === The idea that a classifying space classifies principal bundles can be pushed further. For example, one might try to classify cohomology classes: given an abelian group A (such as Z {\displaystyle \mathbb {Z} } ), [ X , K ( A , n ) ] = H n ⁡ ( X ; A ) {\displaystyle [X,K(A,n)]=\operatorname {H} ^{n}(X;A)} where K ( A , n ) {\displaystyle K(A,n)} is the Eilenberg–MacLane space. The above equation leads to the notion of a generalized cohomology theory; i.e., a contravariant functor from the category of spaces to the category of abelian groups that satisfies the axioms generalizing ordinary cohomology theory. As it turns out, such a functor may not be representable by a space, but it can always be represented by a sequence of (pointed) spaces with structure maps, called a spectrum. In other words, to give a generalized cohomology theory is to give a spectrum. Topological K-theory is an example of a generalized cohomology theory. A basic example of a spectrum is the sphere spectrum: S 0 → S 1 → S 2 → ⋯ {\displaystyle S^{0}\to S^{1}\to S^{2}\to \cdots } === Ring spectrum and module spectrum === == Homotopy colimit and limit == == Key theorems == Seifert–van Kampen theorem Homotopy excision theorem Freudenthal suspension theorem (a corollary of the excision theorem) Landweber exact functor theorem Dold–Kan correspondence Eckmann–Hilton argument (this shows, for instance, that higher homotopy groups are abelian) Universal coefficient theorem Dold–Thom theorem == Obstruction theory and characteristic class == See also: Characteristic class, Postnikov tower, Whitehead torsion == Localization and completion of a space == == Specific theories == There are several specific theories: simple homotopy theory, stable homotopy theory, chromatic homotopy theory, rational homotopy theory, p-adic homotopy theory, equivariant homotopy theory, and simplicial homotopy theory. == Homotopy hypothesis == One of the basic questions in the foundations of homotopy theory is the nature of a space. The homotopy hypothesis asks whether a space is something fundamentally algebraic. If one prefers to work with a space instead of a pointed space, there is the notion of a fundamental groupoid (and higher variants): by definition, the fundamental groupoid of a space X is the category where the objects are the points of X and the morphisms are paths. == Abstract homotopy theory == Abstract homotopy theory is an axiomatic approach to homotopy theory. Such axiomatization is useful for non-traditional applications of homotopy theory. One approach to axiomatization is by Quillen's model categories. A model category is a category with a choice of three classes of maps called weak equivalences, cofibrations and fibrations, subject to axioms that are reminiscent of facts in algebraic topology. For example, the category of (reasonable) topological spaces has the structure of a model category where a weak equivalence is a weak homotopy equivalence, a cofibration a certain retract, and a fibration a Serre fibration. Another example is the category of non-negatively graded chain complexes over a fixed base ring. === Simplicial set === A simplicial set is an abstract generalization of a simplicial complex and can play the role of a "space" in some sense. Despite the name, it is not a set but a sequence of sets together with certain maps (face and degeneracy maps) between those sets. 
For example, given a space X {\displaystyle X} , for each integer n ≥ 0 {\displaystyle n\geq 0} , let S n X {\displaystyle S_{n}X} be the set of all maps from the n-simplex to X {\displaystyle X} . Then the sequence S n X {\displaystyle S_{n}X} of sets is a simplicial set. Each simplicial set K = { K n } n ≥ 0 {\displaystyle K=\{K_{n}\}_{n\geq 0}} has a naturally associated chain complex, and the homology of that chain complex is the homology of K {\displaystyle K} . The singular homology of X {\displaystyle X} is precisely the homology of the simplicial set S ∗ X {\displaystyle S_{*}X} . Also, the geometric realization | ⋅ | {\displaystyle |\cdot |} of a simplicial set is a CW complex, and the composite X ↦ | S ∗ X | {\displaystyle X\mapsto |S_{*}X|} is precisely the CW approximation functor. Another important example is the nerve of a category, which is a simplicial set. In fact, a simplicial set is the nerve of some category if and only if it satisfies the Segal conditions (a theorem of Grothendieck). Each category is completely determined by its nerve. In this way, a category can be viewed as a special kind of simplicial set, and this observation is used to generalize the notion of a category. Namely, an ∞ {\displaystyle \infty } -category or an ∞ {\displaystyle \infty } -groupoid is defined as a particular kind of simplicial set. Since simplicial sets are abstract spaces of a sort (though not topological spaces), it is possible to develop homotopy theory on them; this is called simplicial homotopy theory. == See also ==
Highly structured ring spectrum
Homotopy type theory
Pursuing Stacks
Shape theory
Moduli stack of formal group laws
Crossed module
Milnor's theorem on Kan complexes
Fibration of simplicial sets
== References == Bott, Raoul; Tu, Loring W. (1995). Differential Forms in Algebraic Topology. Springer. ISBN 978-038790613-3. May, J. Peter. "A Concise Course in Algebraic Topology" (PDF). University of Chicago. May, J. Peter; Ponto, Kate. More Concise Algebraic Topology: Localization, completion, and model categories (PDF). University of Chicago Press. p. 215. ISBN 978-022651178-8 – via University of Edinburgh. Whitehead, George William (1978). Elements of homotopy theory. Graduate Texts in Mathematics. Vol. 61 (3rd ed.). New York–Berlin: Springer-Verlag. pp. xxi+744. ISBN 978-0-387-90336-1. MR 0516508. Retrieved September 6, 2011. Brown, Ronald (2006). Topology and groupoids. Booksurge LLC. ISBN 1-4196-2722-8. "Homotopical algebra". nLab. Dwyer, W.G.; Spalinski, J. (18 July 1995). "Homotopy Theories and Model Categories". In James, I.M. (ed.). Handbook of Algebraic Topology. Elsevier. ISBN 0-444-81779-4. Hatcher, Allen. "Algebraic Topology". Milnor, John (1959). "On spaces having the homotopy type of CW-complex". Transactions of the American Mathematical Society. 90 (2): 272–280. doi:10.1090/S0002-9947-1959-0100267-4. ISSN 0002-9947. S2CID 123048606. Spanier, Edwin (1989). Algebraic topology. Springer. ISBN 978-0-387-94426-5. Sullivan, Dennis (July 1974). "Genetics of homotopy theory and the Adams conjecture" (PDF). Annals of Mathematics. 2. 100 (1): 1–79. doi:10.2307/1970841. JSTOR 1970841 – via Math – Côte d'Azur University. == Further reading == Cisinski, Denis-Charles (March 2015). "Higher Categories and Topos Theory" (PDF) (in French). Math – University of Toulouse. Porter, Timothy (February 12, 2010). "Abstract Homotopy Theory: The Interaction of Category Theory and Homotopy Theory: A Revised Version of the 2001 Article" (PDF). nLab.
"Math 527 - Homotopy Theory Spring 2013, Section F1". University of Illinois Urbana-Champaign – via University of Regina., lectures by Martin Frankland Quillen, D. (1967). Homotopical algebra. Lectures Notes in Math. Vol. 43. Springer Verlag. ISBN 978-3-540-03914-3. == External links == "Homotopy theory". ncatlab.org.
Wikipedia/Homotopy_theory
Category theory is a general theory of mathematical structures and their relations. It was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality. Many areas of computer science also rely on category theory, such as functional programming and semantics. A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. Metaphorically, a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one. Morphism composition has properties similar to those of function composition (associativity and existence of an identity morphism for each object). Morphisms are often some sort of functions, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid. The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories C 1 {\displaystyle {\mathcal {C}}_{1}} and C 2 {\displaystyle {\mathcal {C}}_{2}} : it maps objects of C 1 {\displaystyle {\mathcal {C}}_{1}} to objects of C 2 {\displaystyle {\mathcal {C}}_{2}} and morphisms of C 1 {\displaystyle {\mathcal {C}}_{1}} to morphisms of C 2 {\displaystyle {\mathcal {C}}_{2}} in such a way that sources are mapped to sources, and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors. == Categories, objects, and morphisms == === Categories === A category C {\displaystyle {\mathcal {C}}} consists of the following three mathematical entities: A class ob ( C ) {\displaystyle {\text{ob}}({\mathcal {C}})} , whose elements are called objects; A class hom ( C ) {\displaystyle {\text{hom}}({\mathcal {C}})} , whose elements are called morphisms or maps or arrows. Each morphism f {\displaystyle f} has a source object a {\displaystyle a} and target object b {\displaystyle b} . The expression f : a → b {\displaystyle f:a\rightarrow b} would be verbally stated as " f {\displaystyle f} is a morphism from a to b". The expression hom ( a , b ) {\displaystyle {\text{hom}}(a,b)} – alternatively expressed as hom C ( a , b ) {\displaystyle {\text{hom}}_{\mathcal {C}}(a,b)} , mor ( a , b ) {\displaystyle {\text{mor}}(a,b)} , or C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} – denotes the hom-class of all morphisms from a {\displaystyle a} to b {\displaystyle b} .
A binary operation ∘ {\displaystyle \circ } , called composition of morphisms, such that for any three objects a, b, and c, we have ∘ : hom ( b , c ) × hom ( a , b ) → hom ( a , c ) {\displaystyle \circ :{\text{hom}}(b,c)\times {\text{hom}}(a,b)\to {\text{hom}}(a,c)} The composition of f : a → b {\displaystyle f:a\rightarrow b} and g : b → c {\displaystyle g:b\rightarrow c} is written as g ∘ f {\displaystyle g\circ f} or g f {\displaystyle gf} , governed by two axioms: Associativity: If f : a → b {\displaystyle f:a\rightarrow b} , g : b → c {\displaystyle g:b\rightarrow c} , and h : c → d {\displaystyle h:c\rightarrow d} then h ∘ ( g ∘ f ) = ( h ∘ g ) ∘ f {\displaystyle h\circ (g\circ f)=(h\circ g)\circ f} Identity: For every object x, there exists a morphism 1 x : x → x {\displaystyle 1_{x}:x\rightarrow x} (also denoted as id x {\displaystyle {\text{id}}_{x}} ) called the identity morphism for x, such that for every morphism f : a → b {\displaystyle f:a\rightarrow b} , we have 1 b ∘ f = f = f ∘ 1 a {\displaystyle 1_{b}\circ f=f=f\circ 1_{a}} From the axioms, it can be proved that there is exactly one identity morphism for every object. ==== Examples ==== The category Set: As the class of objects ob ( Set ) {\displaystyle {\text{ob}}({\text{Set}})} , we choose the class of all sets. As the class of morphisms hom ( Set ) {\displaystyle {\text{hom}}({\text{Set}})} , we choose the class of all functions. Therefore, for two objects A and B, i.e. sets, we have hom ( A , B ) {\displaystyle {\text{hom}}(A,B)} to be the class of all functions f {\displaystyle f} such that f : A → B {\displaystyle f:A\rightarrow B} . The composition of morphisms ∘ {\displaystyle \circ } is simply the usual function composition, i.e. for two morphisms f : A → B {\displaystyle f:A\rightarrow B} and g : B → C {\displaystyle g:B\rightarrow C} , we have g ∘ f : A → C {\displaystyle g\circ f:A\rightarrow C} , ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)=g(f(x))} , which is obviously associative. Furthermore, for every object A we have the identity morphism id A {\displaystyle {\text{id}}_{A}} to be the identity map id A : A → A {\displaystyle {\text{id}}_{A}:A\rightarrow A} , id A ( x ) = x {\displaystyle {\text{id}}_{A}(x)=x} on A. === Morphisms === Relations among morphisms (such as fg = h) are often depicted using commutative diagrams, with "points" (corners) representing objects and "arrows" representing morphisms. Morphisms can have any of the following properties. A morphism f : a → b is:
a monomorphism (or monic) if f ∘ g1 = f ∘ g2 implies g1 = g2 for all morphisms g1, g2 : x → a.
an epimorphism (or epic) if g1 ∘ f = g2 ∘ f implies g1 = g2 for all morphisms g1, g2 : b → x.
a bimorphism if f is both epic and monic.
an isomorphism if there exists a morphism g : b → a such that f ∘ g = 1b and g ∘ f = 1a.
an endomorphism if a = b; end(a) denotes the class of endomorphisms of a.
an automorphism if f is both an endomorphism and an isomorphism; aut(a) denotes the class of automorphisms of a.
a retraction if a right inverse of f exists, i.e. if there exists a morphism g : b → a with f ∘ g = 1b.
a section if a left inverse of f exists, i.e. if there exists a morphism g : b → a with g ∘ f = 1a.
Every retraction is an epimorphism, and every section is a monomorphism. Furthermore, the following three statements are equivalent:
f is a monomorphism and a retraction;
f is an epimorphism and a section;
f is an isomorphism.
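The axioms above can be checked mechanically on small finite examples. The following is a minimal sketch, not from the article: it models finite sets and functions as Python dicts (the helper names compose and identity are ours) and verifies associativity and the identity laws in the category Set.

# Morphisms of (finite) Set as dicts: f : A -> B sends x to f[x].
def compose(g, f):
    """Composition g after f: first apply f, then g."""
    return {x: g[f[x]] for x in f}

def identity(A):
    """Identity morphism on a finite set A."""
    return {x: x for x in A}

A, B, C, D = {1, 2}, {'a', 'b'}, {10, 20}, {True}
f = {1: 'a', 2: 'b'}      # f : A -> B
g = {'a': 10, 'b': 20}    # g : B -> C
h = {10: True, 20: True}  # h : C -> D

# Associativity: h (g f) = (h g) f
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
# Identity: 1_B f = f = f 1_A
assert compose(identity(B), f) == f == compose(f, identity(A))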
== Functors == Functors are structure-preserving maps between categories. They can be thought of as morphisms in the category of all (small) categories. A (covariant) functor F from a category C to a category D, written F : C → D, consists of: for each object x in C, an object F(x) in D; and for each morphism f : x → y in C, a morphism F(f) : F(x) → F(y) in D, such that the following two properties hold: For every object x in C, F(1x) = 1F(x); For all morphisms f : x → y and g : y → z, F(g ∘ f) = F(g) ∘ F(f). A contravariant functor F : C → D is like a covariant functor, except that it "turns morphisms around" ("reverses all the arrows"). More specifically, every morphism f : x → y in C must be assigned to a morphism F(f) : F(y) → F(x) in D. In other words, a contravariant functor acts as a covariant functor from the opposite category Cop to D.
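The two functor laws can be verified on a concrete functor in the same way. A sketch under the same finite-set conventions as above (ours, not the article's; powerset and P are hypothetical names): the covariant powerset functor sends a set to its set of subsets and a function f : X → Y to the direct-image map P(f).

from itertools import combinations

def powerset(X):
    """All subsets of a finite set X, as frozensets."""
    X = list(X)
    return {frozenset(c) for n in range(len(X) + 1) for c in combinations(X, n)}

def P(f):
    """On morphisms: P(f)(S) = {f(x) for x in S} (direct image)."""
    return lambda S: frozenset(f[x] for x in S)

X, Y, Z = {1, 2}, {'a'}, {0}
f = {1: 'a', 2: 'a'}            # f : X -> Y
g = {'a': 0}                    # g : Y -> Z
gf = {x: g[f[x]] for x in f}    # the composite g f

for S in powerset(X):
    assert P(gf)(S) == P(g)(P(f)(S))      # P(g f) = P(g) P(f)
    assert P({x: x for x in X})(S) == S   # P(1_X) = 1_P(X)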
== Natural transformations == A natural transformation is a relation between two functors. Functors often describe "natural constructions" and natural transformations then describe "natural homomorphisms" between two such constructions. Sometimes two quite different constructions yield "the same" result; this is expressed by a natural isomorphism between the two functors. If F and G are (covariant) functors between the categories C and D, then a natural transformation η from F to G associates to every object X in C a morphism ηX : F(X) → G(X) in D such that for every morphism f : X → Y in C, we have ηY ∘ F(f) = G(f) ∘ ηX; this means that the corresponding naturality square is commutative. The two functors F and G are called naturally isomorphic if there exists a natural transformation η from F to G such that ηX is an isomorphism for every object X in C. == Other concepts == === Universal constructions, limits, and colimits === Using the language of category theory, many areas of mathematical study can be categorized. Such categories include those of sets, of groups, and of topological spaces. Each category is distinguished by properties that all its objects have in common, such as the empty set or the product of two topologies, yet in the definition of a category, objects are considered atomic, i.e., we do not know whether an object A is a set, a topology, or any other abstract concept. Hence, the challenge is to define special objects without referring to the internal structure of those objects. To define the empty set without referring to elements, or the product topology without referring to open sets, one can characterize these objects in terms of their relations to other objects, as given by the morphisms of the respective categories. Thus, the task is to find universal properties that uniquely determine the objects of interest. Numerous important constructions can be described in a purely categorical way if the category limit can be developed and dualized to yield the notion of a colimit. === Equivalent categories === It is a natural question to ask: under which conditions can two categories be considered essentially the same, in the sense that theorems about one category can readily be transformed into theorems about the other category? The major tool one employs to describe such a situation is called equivalence of categories, which is given by appropriate functors between two categories. Categorical equivalence has found numerous applications in mathematics. === Further concepts and results === The definitions of categories and functors provide only the very basics of categorical algebra; additional important topics are listed below. Although there are strong interrelations between all of these topics, the given order can be considered as a guideline for further reading.
The functor category DC has as objects the functors from C to D and as morphisms the natural transformations of such functors.
The Yoneda lemma is one of the most famous basic results of category theory; it describes representable functors in functor categories.
Duality: Every statement, theorem, or definition in category theory has a dual which is essentially obtained by "reversing all the arrows". If one statement is true in a category C then its dual is true in the dual category Cop. This duality, which is transparent at the level of category theory, is often obscured in applications and can lead to surprising relationships.
Adjoint functors: A functor can be left (or right) adjoint to another functor that maps in the opposite direction. Such a pair of adjoint functors typically arises from a construction defined by a universal property; this can be seen as a more abstract and powerful view on universal properties.
=== Higher-dimensional categories === Many of the above concepts, especially equivalence of categories, adjoint functor pairs, and functor categories, can be situated in the context of higher-dimensional categories. Briefly, if we consider a morphism between two objects as a "process taking us from one object to another", then higher-dimensional categories allow us to profitably generalize this by considering "higher-dimensional processes". For example, a (strict) 2-category is a category together with "morphisms between morphisms", i.e., processes which allow us to transform one morphism into another. We can then "compose" these "bimorphisms" both horizontally and vertically, and we require a 2-dimensional "exchange law" to hold, relating the two composition laws. In this context, the standard example is Cat, the 2-category of all (small) categories, and in this example, the morphisms between morphisms are simply natural transformations in the usual sense. Another basic example is to consider a 2-category with a single object; these are essentially monoidal categories. Bicategories are a weaker notion of 2-dimensional categories in which the composition of morphisms is not strictly associative, but only associative "up to" an isomorphism. This process can be extended to all natural numbers n, and these are called n-categories. There is even a notion of ω-category corresponding to the ordinal number ω. Higher-dimensional categories are part of the broader mathematical field of higher-dimensional algebra, a concept introduced by Ronald Brown. For a conversational introduction to these ideas, see John Baez, 'A Tale of n-categories' (1996). == Historical notes == In the words of Eilenberg and Mac Lane: "It should be observed first that the whole concept of a category is essentially an auxiliary one; our basic concepts are essentially those of a functor and of a natural transformation [...]" Whilst specific examples of functors and natural transformations had been given by Samuel Eilenberg and Saunders Mac Lane in a 1942 paper on group theory, these concepts were introduced in a more general sense, together with the additional notion of categories, in a 1945 paper by the same authors (who discussed applications of category theory to the field of algebraic topology).
Their work was an important part of the transition from intuitive and geometric homology to homological algebra; Eilenberg and Mac Lane later wrote that their goal was to understand natural transformations, which first required the definition of functors, then categories. Stanislaw Ulam, and some writing on his behalf, have claimed that related ideas were current in the late 1930s in Poland. Eilenberg was Polish, and studied mathematics in Poland in the 1930s. Category theory is also, in some sense, a continuation of the work of Emmy Noether (one of Mac Lane's teachers) in formalizing abstract processes; Noether realized that understanding a type of mathematical structure requires understanding the processes that preserve that structure (homomorphisms). Eilenberg and Mac Lane introduced categories for understanding and formalizing the processes (functors) that relate topological structures to algebraic structures (topological invariants) that characterize them. Category theory was originally introduced for the needs of homological algebra, and widely extended for the needs of modern algebraic geometry (scheme theory). Category theory may be viewed as an extension of universal algebra, as the latter studies algebraic structures, and the former applies to any kind of mathematical structure and studies also the relationships between structures of different nature. For this reason, it is used throughout mathematics. Applications to mathematical logic and semantics (categorical abstract machine) came later. Certain categories called topoi (singular: topos) can even serve as an alternative to axiomatic set theory as a foundation of mathematics. A topos can also be considered as a specific type of category with two additional topos axioms. These foundational applications of category theory have been worked out in fair detail as a basis for, and justification of, constructive mathematics. Topos theory is a form of abstract sheaf theory, with geometric origins, and leads to ideas such as pointless topology. Categorical logic is now a well-defined field based on type theory for intuitionistic logics, with applications in functional programming and domain theory, where a cartesian closed category is taken as a non-syntactic description of a lambda calculus. At the very least, category theoretic language clarifies what exactly these related areas have in common (in some abstract sense). Category theory has been applied in other fields as well, see applied category theory. For example, John Baez has shown a link between Feynman diagrams in physics and monoidal categories. Another application of category theory, more specifically topos theory, has been made in mathematical music theory, see for example the book The Topos of Music, Geometric Logic of Concepts, Theory, and Performance by Guerino Mazzola. More recent efforts to introduce undergraduates to categories as a foundation for mathematics include those of William Lawvere and Rosebrugh (2003), Lawvere and Stephen Schanuel (1997), and Mirroslav Yotov (2012).
Wikipedia/Category_theory
In algebra, a cubic equation in one variable is an equation of the form a x 3 + b x 2 + c x + d = 0 {\displaystyle ax^{3}+bx^{2}+cx+d=0} in which a is not zero. The solutions of this equation are called roots of the cubic function defined by the left-hand side of the equation. If all of the coefficients a, b, c, and d of the cubic equation are real numbers, then it has at least one real root (this is true for all odd-degree polynomial functions). All of the roots of the cubic equation can be found by the following means:
algebraically: more precisely, they can be expressed by a cubic formula involving the four coefficients, the four basic arithmetic operations, square roots, and cube roots (this is also true of quadratic (second-degree) and quartic (fourth-degree) equations, but not of higher-degree equations, by the Abel–Ruffini theorem);
trigonometrically;
numerically: approximations of the roots can be found using root-finding algorithms such as Newton's method.
The coefficients do not need to be real numbers. Much of what is covered below is valid for coefficients in any field of characteristic other than 2 or 3. The solutions of the cubic equation do not necessarily belong to the same field as the coefficients. For example, some cubic equations with rational coefficients have roots that are irrational real numbers, or even non-real complex numbers. == History == Cubic equations were known to the ancient Babylonians, Greeks, Chinese, Indians, and Egyptians. Babylonian (20th to 16th centuries BC) cuneiform tablets have been found with tables for calculating cubes and cube roots. The Babylonians could have used the tables to solve cubic equations, but no evidence exists to confirm that they did. The problem of doubling the cube involves the simplest and oldest studied cubic equation, and one for which the ancient Egyptians did not believe a solution existed. In the 5th century BC, Hippocrates reduced this problem to that of finding two mean proportionals between one line and another of twice its length, but could not solve this with a compass and straightedge construction, a task which is now known to be impossible. Methods for solving cubic equations appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BC and commented on by Liu Hui in the 3rd century. In the 3rd century AD, the Greek mathematician Diophantus found integer or rational solutions for some bivariate cubic equations (Diophantine equations). Hippocrates, Menaechmus and Archimedes are believed to have come close to solving the problem of doubling the cube using intersecting conic sections, though historians such as Reviel Netz dispute whether the Greeks were thinking about cubic equations or just problems that can lead to cubic equations. Some others like T. L. Heath, who translated all of Archimedes's works, disagree, putting forward evidence that Archimedes really solved cubic equations using intersections of two conics, but also discussed the conditions where the roots are 0, 1 or 2. In the 7th century, the Tang dynasty astronomer and mathematician Wang Xiaotong in his mathematical treatise titled Jigu Suanjing systematically established and solved numerically 25 cubic equations of the form x³ + px² + qx = N, 23 of them with p, q ≠ 0, and two of them with q = 0. In the 11th century, the Persian poet-mathematician Omar Khayyam (1048–1131) made significant progress in the theory of cubic equations.
In an early paper, he discovered that a cubic equation can have more than one solution and stated that it cannot be solved using compass and straightedge constructions. He also found a geometric solution. In his later work, the Treatise on Demonstration of Problems of Algebra, he wrote a complete classification of cubic equations with general geometric solutions found by means of intersecting conic sections. Khayyam attempted to come up with an algebraic formula for the roots of cubics. He wrote: “We have tried to express these roots by algebra but have failed. It may be, however, that men who come after us will succeed.” In the 12th century, the Indian mathematician Bhaskara II attempted the solution of cubic equations without general success. However, he gave one example of a cubic equation: x³ + 12x = 6x² + 35. In the 12th century, another Persian mathematician, Sharaf al-Dīn al-Tūsī (1135–1213), wrote the Al-Muʿādalāt (Treatise on Equations), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the Horner–Ruffini method to numerically approximate the root of a cubic equation. He also used the concepts of maxima and minima of curves in order to solve cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation to find algebraic solutions to certain types of cubic equations. In his book Flos, Leonardo de Pisa, also known as Fibonacci (1170–1250), was able to closely approximate the positive solution to the cubic equation x³ + 2x² + 10x = 20. Writing in Babylonian numerals he gave the result as 1,22,7,42,33,4,40 (equivalent to 1 + 22/60 + 7/60² + 42/60³ + 33/60⁴ + 4/60⁵ + 40/60⁶), which has a relative error of about 10⁻⁹. In the early 16th century, the Italian mathematician Scipione del Ferro (1465–1526) found a method for solving a class of cubic equations, namely those of the form x³ + mx = n. In fact, all cubic equations can be reduced to this form if one allows m and n to be negative, but negative numbers were not known to him at that time. Del Ferro kept his achievement secret until just before his death, when he told his student Antonio Fior about it. In 1535, Niccolò Tartaglia (1500–1557) received two problems in cubic equations from Zuanne da Coi and announced that he could solve them. He was soon challenged by Fior, which led to a famous contest between the two. Each contestant had to put up a certain amount of money and to propose a number of problems for his rival to solve. Whoever solved more problems within 30 days would get all the money. Tartaglia received questions in the form x³ + mx = n, for which he had worked out a general method. Fior received questions in the form x³ + mx² = n, which proved to be too difficult for him to solve, and Tartaglia won the contest. Later, Tartaglia was persuaded by Gerolamo Cardano (1501–1576) to reveal his secret for solving cubic equations. In 1539, Tartaglia did so only on the condition that Cardano would never reveal it and that if he did write a book about cubics, he would give Tartaglia time to publish. Some years later, Cardano learned about del Ferro's prior work and published del Ferro's method in his book Ars Magna in 1545, meaning Cardano gave Tartaglia six years to publish his results (with credit given to Tartaglia for an independent solution).
Cardano's promise to Tartaglia said that he would not publish Tartaglia's work, and Cardano felt he was publishing del Ferro's, so as to get around the promise. Nevertheless, this led to a challenge to Cardano from Tartaglia, which Cardano denied. The challenge was eventually accepted by Cardano's student Lodovico Ferrari (1522–1565). Ferrari did better than Tartaglia in the competition, and Tartaglia lost both his prestige and his income. Cardano noticed that Tartaglia's method sometimes required him to extract the square root of a negative number. He even included a calculation with these complex numbers in Ars Magna, but he did not really understand it. Rafael Bombelli studied this issue in detail and is therefore often considered as the discoverer of complex numbers. François Viète (1540–1603) independently derived the trigonometric solution for the cubic with three real roots, and René Descartes (1596–1650) extended the work of Viète. == Factorization == If the coefficients of a cubic equation are rational numbers, one can obtain an equivalent equation with integer coefficients, by multiplying all coefficients by a common multiple of their denominators. Such an equation a x 3 + b x 2 + c x + d = 0 , {\displaystyle ax^{3}+bx^{2}+cx+d=0,} with integer coefficients, is said to be reducible if the polynomial on the left-hand side is the product of polynomials of lower degrees. By Gauss's lemma, if the equation is reducible, one can suppose that the factors have integer coefficients. Finding the roots of a reducible cubic equation is easier than solving the general case. In fact, if the equation is reducible, one of the factors must have degree one, and thus have the form q x − p , {\displaystyle qx-p,} with q and p being coprime integers. The rational root test allows finding q and p by examining a finite number of cases (because q must be a divisor of a, and p must be a divisor of d). Thus, one root is x 1 = p q , {\displaystyle \textstyle x_{1}={\frac {p}{q}},} and the other roots are the roots of the other factor, which can be found by polynomial long division. This other factor is a q x 2 + b q + a p q 2 x + c q 2 + b p q + a p 2 q 3 . {\displaystyle {\frac {a}{q}}\,x^{2}+{\frac {bq+ap}{q^{2}}}\,x+{\frac {cq^{2}+bpq+ap^{2}}{q^{3}}}.} (The coefficients seem not to be integers, but must be integers if p / q {\displaystyle p/q} is a root.) Then, the other roots are the roots of this quadratic polynomial and can be found by using the quadratic formula. == Depressed cubic == Cubics of the form t 3 + p t + q {\displaystyle t^{3}+pt+q} are said to be depressed. They are much simpler than general cubics, but are fundamental, because the study of any cubic may be reduced by a simple change of variable to that of a depressed cubic. Let a x 3 + b x 2 + c x + d = 0 {\displaystyle ax^{3}+bx^{2}+cx+d=0} be a cubic equation. The change of variable x = t − b 3 a {\displaystyle x=t-{\frac {b}{3a}}} gives a cubic (in t) that has no term in t². After dividing by a one gets the depressed cubic equation t 3 + p t + q = 0 , {\displaystyle t^{3}+pt+q=0,} with t = x + b 3 a p = 3 a c − b 2 3 a 2 q = 2 b 3 − 9 a b c + 27 a 2 d 27 a 3 .
{\displaystyle {\begin{aligned}t={}&x+{\frac {b}{3a}}\\p={}&{\frac {3ac-b^{2}}{3a^{2}}}\\q={}&{\frac {2b^{3}-9abc+27a^{2}d}{27a^{3}}}.\end{aligned}}} The roots x 1 , x 2 , x 3 {\displaystyle x_{1},x_{2},x_{3}} of the original equation are related to the roots t 1 , t 2 , t 3 {\displaystyle t_{1},t_{2},t_{3}} of the depressed equation by the relations x i = t i − b 3 a , {\displaystyle x_{i}=t_{i}-{\frac {b}{3a}},} for i = 1 , 2 , 3 {\displaystyle i=1,2,3} . == Discriminant and nature of the roots == The nature (real or not, distinct or not) of the roots of a cubic can be determined without computing them explicitly, by using the discriminant. === Discriminant === The discriminant of a polynomial is a function of its coefficients that is zero if and only if the polynomial has a multiple root or, equivalently, if it is divisible by the square of a non-constant polynomial. In other words, the discriminant is nonzero if and only if the polynomial is square-free. If r1, r2, r3 are the three roots (not necessarily distinct nor real) of the cubic a x 3 + b x 2 + c x + d , {\displaystyle ax^{3}+bx^{2}+cx+d,} then the discriminant is a 4 ( r 1 − r 2 ) 2 ( r 1 − r 3 ) 2 ( r 2 − r 3 ) 2 . {\displaystyle a^{4}(r_{1}-r_{2})^{2}(r_{1}-r_{3})^{2}(r_{2}-r_{3})^{2}.} The discriminant of the depressed cubic t 3 + p t + q {\displaystyle t^{3}+pt+q} is − ( 4 p 3 + 27 q 2 ) . {\displaystyle -\left(4\,p^{3}+27\,q^{2}\right).} The discriminant of the general cubic a x 3 + b x 2 + c x + d {\displaystyle ax^{3}+bx^{2}+cx+d} is 18 a b c d − 4 b 3 d + b 2 c 2 − 4 a c 3 − 27 a 2 d 2 . {\displaystyle 18\,abcd-4\,b^{3}d+b^{2}c^{2}-4\,ac^{3}-27\,a^{2}d^{2}.} It is the product of a 4 {\displaystyle a^{4}} and the discriminant of the corresponding depressed cubic. Using the formula relating the general cubic and the associated depressed cubic, this implies that the discriminant of the general cubic can be written as 4 ( b 2 − 3 a c ) 3 − ( 2 b 3 − 9 a b c + 27 a 2 d ) 2 27 a 2 . {\displaystyle {\frac {4(b^{2}-3ac)^{3}-(2b^{3}-9abc+27a^{2}d)^{2}}{27a^{2}}}.} It follows that one of these two discriminants is zero if and only if the other is also zero, and, if the coefficients are real, the two discriminants have the same sign. In summary, the same information can be deduced from either one of these two discriminants. To prove the preceding formulas, one can use Vieta's formulas to express everything as polynomials in r1, r2, r3, and a. The proof then reduces to the verification of the equality of two polynomials.
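As a numeric illustration (our sketch, not part of the article; depress and disc_general are hypothetical helper names), one can reduce a sample cubic to its depressed form and check that the general discriminant is a⁴ times the depressed one, as stated above.

def depress(a, b, c, d):
    """p and q of the depressed form t^3 + p t + q obtained via x = t - b/(3a)."""
    p = (3*a*c - b*b) / (3*a*a)
    q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
    return p, q

def disc_general(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b*b*c*c - 4*a*c**3 - 27*a*a*d*d

a, b, c, d = 2.0, 3.0, -11.0, -6.0   # 2x^3 + 3x^2 - 11x - 6 = (x - 2)(2x + 1)(x + 3)
p, q = depress(a, b, c, d)
disc_depressed = -(4*p**3 + 27*q*q)
assert abs(disc_general(a, b, c, d) - a**4 * disc_depressed) < 1e-9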
=== Nature of the roots === If the coefficients of a polynomial are real numbers, and its discriminant Δ {\displaystyle \Delta } is not zero, there are two cases: If Δ > 0 , {\displaystyle \Delta >0,} the cubic has three distinct real roots. If Δ < 0 , {\displaystyle \Delta <0,} the cubic has one real root and two non-real complex conjugate roots. This can be proved as follows. First, if r is a root of a polynomial with real coefficients, then its complex conjugate is also a root. So the non-real roots, if any, occur as pairs of complex conjugate roots. As a cubic polynomial has three roots (not necessarily distinct) by the fundamental theorem of algebra, at least one root must be real. As stated above, if r1, r2, r3 are the three roots of the cubic a x 3 + b x 2 + c x + d {\displaystyle ax^{3}+bx^{2}+cx+d} , then the discriminant is Δ = a 4 ( r 1 − r 2 ) 2 ( r 1 − r 3 ) 2 ( r 2 − r 3 ) 2 {\displaystyle \Delta =a^{4}(r_{1}-r_{2})^{2}(r_{1}-r_{3})^{2}(r_{2}-r_{3})^{2}} If the three roots are real and distinct, the discriminant is a product of positive reals, that is Δ > 0. {\displaystyle \Delta >0.} If only one root, say r1, is real, then r2 and r3 are complex conjugates, which implies that r2 − r3 is a purely imaginary number, and thus that (r2 − r3)² is real and negative. On the other hand, r1 − r2 and r1 − r3 are complex conjugates, and their product is real and positive. Thus the discriminant is the product of a single negative number and several positive ones. That is Δ < 0. {\displaystyle \Delta <0.} === Multiple root === If the discriminant of a cubic is zero, the cubic has a multiple root. If furthermore its coefficients are real, then all of its roots are real. The discriminant of the depressed cubic t 3 + p t + q {\displaystyle t^{3}+pt+q} is zero if 4 p 3 + 27 q 2 = 0. {\displaystyle 4p^{3}+27q^{2}=0.} If p is also zero, then p = q = 0 , and 0 is a triple root of the cubic. If 4 p 3 + 27 q 2 = 0 , {\displaystyle 4p^{3}+27q^{2}=0,} and p ≠ 0 , then the cubic has a simple root t 1 = 3 q p {\displaystyle t_{1}={\frac {3q}{p}}} and a double root t 2 = t 3 = − 3 q 2 p . {\displaystyle t_{2}=t_{3}=-{\frac {3q}{2p}}.} In other words, t 3 + p t + q = ( t − 3 q p ) ( t + 3 q 2 p ) 2 . {\displaystyle t^{3}+pt+q=\left(t-{\frac {3q}{p}}\right)\left(t+{\frac {3q}{2p}}\right)^{2}.} This result can be proved by expanding the latter product or retrieved by solving the rather simple system of equations resulting from Vieta's formulas. By using the reduction of a depressed cubic, these results can be extended to the general cubic. This gives: If the discriminant of the cubic a x 3 + b x 2 + c x + d {\displaystyle ax^{3}+bx^{2}+cx+d} is zero, then either, if b 2 = 3 a c , {\displaystyle b^{2}=3ac,} the cubic has a triple root x 1 = x 2 = x 3 = − b 3 a , {\displaystyle x_{1}=x_{2}=x_{3}=-{\frac {b}{3a}},} and a x 3 + b x 2 + c x + d = a ( x + b 3 a ) 3 {\displaystyle ax^{3}+bx^{2}+cx+d=a\left(x+{\frac {b}{3a}}\right)^{3}} or, if b 2 ≠ 3 a c , {\displaystyle b^{2}\neq 3ac,} the cubic has a double root x 2 = x 3 = 9 a d − b c 2 ( b 2 − 3 a c ) , {\displaystyle x_{2}=x_{3}={\frac {9ad-bc}{2(b^{2}-3ac)}},} and a simple root, x 1 = 4 a b c − 9 a 2 d − b 3 a ( b 2 − 3 a c ) . {\displaystyle x_{1}={\frac {4abc-9a^{2}d-b^{3}}{a(b^{2}-3ac)}}.} and thus a x 3 + b x 2 + c x + d = a ( x − x 1 ) ( x − x 2 ) 2 . {\displaystyle ax^{3}+bx^{2}+cx+d=a(x-x_{1})(x-x_{2})^{2}.}
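A quick worked check of these formulas (our example, not from the article): for t³ − 3t + 2 = (t + 2)(t − 1)², one has p = −3 and q = 2, so the discriminant −(4p³ + 27q²) vanishes.

p, q = -3, 2
assert 4*p**3 + 27*q*q == 0    # zero discriminant: a multiple root exists

t_simple = 3*q / p             # simple root 3q/p = -2
t_double = -3*q / (2*p)        # double root -3q/(2p) = 1

cubic = lambda t: t**3 + p*t + q
assert cubic(t_simple) == 0 and cubic(t_double) == 0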
=== Characteristic 2 and 3 === The above results are valid when the coefficients belong to a field of characteristic other than 2 or 3, but must be modified for characteristic 2 or 3, because of the divisions by 2 and 3 involved. The reduction to a depressed cubic works for characteristic 2, but not for characteristic 3. However, in both cases, it is simpler to establish and state the results for the general cubic. The main tool for that is the fact that a multiple root is a common root of the polynomial and its formal derivative. In these characteristics, if the derivative is not a constant, it is a linear polynomial in characteristic 3, and is the square of a linear polynomial in characteristic 2. Therefore, for either characteristic 2 or 3, the derivative has only one root. This allows computing the multiple root, and the third root can be deduced from the sum of the roots, which is provided by Vieta's formulas. A difference with other characteristics is that, in characteristic 2, the formula for a double root involves a square root, and, in characteristic 3, the formula for a triple root involves a cube root. == Cardano's formula == Gerolamo Cardano is credited with publishing the first formula for solving cubic equations, attributing it to Scipione del Ferro and Niccolo Fontana Tartaglia. The formula applies to depressed cubics, but, as shown in § Depressed cubic, it allows solving all cubic equations. Cardano's result is that if t 3 + p t + q = 0 {\displaystyle t^{3}+pt+q=0} is a cubic equation where p and q are real numbers such that q 2 4 + p 3 27 {\displaystyle {\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}} is positive (this implies that the discriminant of the equation is negative), then the equation has the real root u 1 3 + u 2 3 , {\displaystyle {\sqrt[{3}]{u_{1}}}+{\sqrt[{3}]{u_{2}}},} where u 1 {\displaystyle u_{1}} and u 2 {\displaystyle u_{2}} are the two numbers − q 2 + q 2 4 + p 3 27 {\displaystyle -{\frac {q}{2}}+{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}} and − q 2 − q 2 4 + p 3 27 . {\displaystyle -{\frac {q}{2}}-{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}.} See § Derivation of the roots, below, for several methods for getting this result. As shown in § Nature of the roots, the two other roots are non-real complex conjugate numbers, in this case. It was later shown (Cardano did not know complex numbers) that the two other roots are obtained by multiplying one of the cube roots by the primitive cube root of unity ε 1 = − 1 + i 3 2 , {\displaystyle \varepsilon _{1}={\frac {-1+i{\sqrt {3}}}{2}},} and the other cube root by the other primitive cube root of unity ε 2 = ε 1 2 = − 1 − i 3 2 . {\displaystyle \varepsilon _{2}=\varepsilon _{1}^{2}={\frac {-1-i{\sqrt {3}}}{2}}.} That is, the other roots of the equation are ε 1 u 1 3 + ε 2 u 2 3 {\displaystyle \varepsilon _{1}{\sqrt[{3}]{u_{1}}}+\varepsilon _{2}{\sqrt[{3}]{u_{2}}}} and ε 2 u 1 3 + ε 1 u 2 3 . {\displaystyle \varepsilon _{2}{\sqrt[{3}]{u_{1}}}+\varepsilon _{1}{\sqrt[{3}]{u_{2}}}.} If 4 p 3 + 27 q 2 < 0 , {\displaystyle 4p^{3}+27q^{2}<0,} there are three real roots, but Galois theory allows proving that, if there is no rational root, the roots cannot be expressed by an algebraic expression involving only real numbers. Therefore, the equation cannot be solved in this case with the knowledge of Cardano's time. This case has thus been called casus irreducibilis, meaning irreducible case in Latin. In casus irreducibilis, Cardano's formula can still be used, but some care is needed in the use of cube roots. A first method is to define the symbols {\displaystyle {\sqrt {{~}^{~}}}} and 3 {\displaystyle {\sqrt[{3}]{{~}^{~}}}} as representing the principal values of the root function (that is the root that has the largest real part). With this convention Cardano's formula for the three roots remains valid, but is not purely algebraic, as the definition of the principal value is not purely algebraic, since it involves inequalities for comparing real parts. Also, the use of the principal cube root may give a wrong result if the coefficients are non-real complex numbers. Moreover, if the coefficients belong to another field, the principal cube root is not defined in general.
The second way of making Cardano's formula always correct is to remark that the product of the two cube roots must be −p / 3. It follows that a root of the equation is C − p 3 C with C = − q 2 + q 2 4 + p 3 27 3 . {\displaystyle C-{\frac {p}{3C}}\quad {\text{with}}\quad C={\sqrt[{3}]{-{\frac {q}{2}}+{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}}}.} In this formula, the symbols {\displaystyle {\sqrt {{~}^{~}}}} and 3 {\displaystyle {\sqrt[{3}]{{~}^{~}}}} denote any square root and any cube root. The other roots of the equation are obtained either by changing the choice of the cube root or, equivalently, by multiplying the cube root by a primitive cube root of unity, that is − 1 ± − 3 2 . {\displaystyle \textstyle {\frac {-1\pm {\sqrt {-3}}}{2}}.} This formula for the roots is always correct except when p = q = 0, with the proviso that if p = 0, the square root is chosen so that C ≠ 0. However, Cardano's formula is useless if p = 0 , {\displaystyle p=0,} as the roots are the cube roots of − q . {\displaystyle -q.} Similarly, the formula is also useless in the cases where no cube root is needed, that is when the cubic polynomial is not irreducible; this includes the case 4 p 3 + 27 q 2 = 0. {\displaystyle 4p^{3}+27q^{2}=0.} This formula is also correct when p and q belong to any field of characteristic other than 2 or 3. == General cubic formula == A cubic formula for the roots of the general cubic equation (with a ≠ 0) a x 3 + b x 2 + c x + d = 0 {\displaystyle ax^{3}+bx^{2}+cx+d=0} can be deduced from every variant of Cardano's formula by reduction to a depressed cubic. The variant that is presented here is valid not only for complex coefficients, but also for coefficients a, b, c, d belonging to any algebraically closed field of characteristic other than 2 or 3. If the coefficients are real numbers, the formula covers all complex solutions, not just real ones. The formula being rather complicated, it is worth splitting it in smaller formulas. Let Δ 0 = b 2 − 3 a c , Δ 1 = 2 b 3 − 9 a b c + 27 a 2 d . {\displaystyle {\begin{aligned}\Delta _{0}&=b^{2}-3ac,\\\Delta _{1}&=2b^{3}-9abc+27a^{2}d.\end{aligned}}} (Both Δ 0 {\displaystyle \Delta _{0}} and Δ 1 {\displaystyle \Delta _{1}} can be expressed as resultants of the cubic and its derivatives: Δ 1 {\displaystyle \Delta _{1}} is ⁠−1/8a⁠ times the resultant of the cubic and its second derivative, and Δ 0 {\displaystyle \Delta _{0}} is ⁠−1/12a⁠ times the resultant of the first and second derivatives of the cubic polynomial.) Then let C = Δ 1 ± Δ 1 2 − 4 Δ 0 3 2 3 , {\displaystyle C={\sqrt[{3}]{\frac {\Delta _{1}\pm {\sqrt {\Delta _{1}^{2}-4\Delta _{0}^{3}}}}{2}}},} where the symbols {\displaystyle {\sqrt {{~}^{~}}}} and 3 {\displaystyle {\sqrt[{3}]{{~}^{~}}}} are interpreted as any square root and any cube root, respectively (every nonzero complex number has two square roots and three cube roots). The sign "±" before the square root is either "+" or "–"; the choice is almost arbitrary, and changing it amounts to choosing a different square root. However, if a choice yields C = 0 (this occurs if Δ 0 = 0 {\displaystyle \Delta _{0}=0} ), then the other sign must be selected instead. If both choices yield C = 0, that is, if Δ 0 = Δ 1 = 0 , {\displaystyle \Delta _{0}=\Delta _{1}=0,} a fraction ⁠0/0⁠ occurs in the following formulas; this fraction must be interpreted as equal to zero (see the end of this section). With these conventions, one of the roots is x = − 1 3 a ( b + C + Δ 0 C ) .
{\displaystyle x=-{\frac {1}{3a}}\left(b+C+{\frac {\Delta _{0}}{C}}\right).} The other two roots can be obtained by changing the choice of the cube root in the definition of C, or, equivalently, by multiplying C by a primitive cube root of unity, that is ⁠–1 ± √–3/2⁠. In other words, the three roots are x k = − 1 3 a ( b + ξ k C + Δ 0 ξ k C ) , k ∈ { 0 , 1 , 2 } , {\displaystyle x_{k}=-{\frac {1}{3a}}\left(b+\xi ^{k}C+{\frac {\Delta _{0}}{\xi ^{k}C}}\right),\qquad k\in \{0,1,2\}{\text{,}}} where ξ = ⁠–1 + √–3/2⁠. As for the special case of a depressed cubic, this formula applies but is useless when the roots can be expressed without cube roots. In particular, if Δ 0 = Δ 1 = 0 , {\displaystyle \Delta _{0}=\Delta _{1}=0,} the formula gives that the three roots equal − b 3 a , {\displaystyle {\frac {-b}{3a}},} which means that the cubic polynomial can be factored as a ( x + b 3 a ) 3 . {\displaystyle \textstyle a(x+{\frac {b}{3a}})^{3}.} A straightforward computation allows verifying that the existence of this factorization is equivalent to Δ 0 = Δ 1 = 0. {\displaystyle \Delta _{0}=\Delta _{1}=0.}
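The formula above transcribes almost directly into complex arithmetic. The following Python sketch is ours (cubic_roots is a hypothetical name, not a library function); it uses the principal complex cube root for C, switches the sign of the square root when the first choice gives C = 0, and treats the triple-root case Δ0 = Δ1 = 0 separately.

import cmath

def cubic_roots(a, b, c, d):
    D0 = b*b - 3*a*c
    D1 = 2*b**3 - 9*a*b*c + 27*a*a*d
    s = cmath.sqrt(D1*D1 - 4*D0**3)
    C = ((D1 + s) / 2) ** (1/3)        # principal cube root; any choice works
    if C == 0:
        C = ((D1 - s) / 2) ** (1/3)    # other sign, needed when D0 = 0
    if C == 0:                         # D0 = D1 = 0: triple root
        return [-b / (3*a)] * 3
    xi = (-1 + cmath.sqrt(-3)) / 2     # primitive cube root of unity
    return [-(b + xi**k * C + D0 / (xi**k * C)) / (3*a) for k in range(3)]

# 2x^3 + 3x^2 - 11x - 6 has roots -3, -1/2, 2:
roots = sorted(cubic_roots(2, 3, -11, -6), key=lambda z: z.real)
for r, expected in zip(roots, [-3, -0.5, 2]):
    assert abs(r - expected) < 1e-6

Changing the cube root used for C merely permutes the three returned roots, in line with the remark above.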
== Trigonometric and hyperbolic solutions == === Trigonometric solution for three real roots === When a cubic equation with real coefficients has three real roots, the formulas expressing these roots in terms of radicals involve complex numbers. Galois theory allows proving that when the three roots are real, and none is rational (casus irreducibilis), one cannot express the roots in terms of real radicals. Nevertheless, purely real expressions of the solutions may be obtained using trigonometric functions, specifically in terms of cosines and arccosines. More precisely, the roots of the depressed cubic t 3 + p t + q = 0 {\displaystyle t^{3}+pt+q=0} are t k = 2 − p 3 cos ⁡ [ 1 3 arccos ⁡ ( 3 q 2 p − 3 p ) − 2 π k 3 ] for k = 0 , 1 , 2. {\displaystyle t_{k}=2\,{\sqrt {-{\frac {p}{3}}}}\,\cos \left[\,{\frac {1}{3}}\arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\,\right)-{\frac {2\pi k}{3}}\,\right]\qquad {\text{for }}k=0,1,2.} This formula is due to François Viète. It is purely real when the equation has three real roots (that is 4 p 3 + 27 q 2 < 0 {\displaystyle 4p^{3}+27q^{2}<0} ). Otherwise, it is still correct but involves complex cosines and arccosines when there is only one real root, and it is nonsensical (division by zero) when p = 0. This formula can be straightforwardly transformed into a formula for the roots of a general cubic equation, using the back-substitution described in § Depressed cubic. The formula can be proved as follows: Starting from the equation t³ + pt + q = 0, let us set t = u cos θ. The idea is to choose u to make the equation coincide with the identity 4 cos 3 ⁡ θ − 3 cos ⁡ θ − cos ⁡ ( 3 θ ) = 0. {\displaystyle 4\cos ^{3}\theta -3\cos \theta -\cos(3\theta )=0.} For this, choose u = 2 − p 3 , {\displaystyle u=2\,{\sqrt {-{\frac {p}{3}}}}\,,} and divide the equation by u 3 4 . {\displaystyle {\frac {u^{3}}{4}}.} This gives 4 cos 3 ⁡ θ − 3 cos ⁡ θ − 3 q 2 p − 3 p = 0. {\displaystyle 4\cos ^{3}\theta -3\cos \theta -{\frac {3q}{2p}}\,{\sqrt {\frac {-3}{p}}}=0.} Combining with the above identity, one gets cos ⁡ ( 3 θ ) = 3 q 2 p − 3 p , {\displaystyle \cos(3\theta )={\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\,,} and the roots are thus t k = 2 − p 3 cos ⁡ [ 1 3 arccos ⁡ ( 3 q 2 p − 3 p ) − 2 π k 3 ] for k = 0 , 1 , 2. {\displaystyle t_{k}=2\,{\sqrt {-{\frac {p}{3}}}}\,\cos \left[{\frac {1}{3}}\arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\right)-{\frac {2\pi k}{3}}\right]\qquad {\text{for }}k=0,1,2.} === Hyperbolic solution for one real root === When there is only one real root (and p ≠ 0), this root can be similarly represented using hyperbolic functions, as t 0 = − 2 | q | q − p 3 cosh ⁡ [ 1 3 arcosh ⁡ ( − 3 | q | 2 p − 3 p ) ] if 4 p 3 + 27 q 2 > 0 and p < 0 , t 0 = − 2 p 3 sinh ⁡ [ 1 3 arsinh ⁡ ( 3 q 2 p 3 p ) ] if p > 0. {\displaystyle {\begin{aligned}t_{0}&=-2{\frac {|q|}{q}}{\sqrt {-{\frac {p}{3}}}}\cosh \left[{\frac {1}{3}}\operatorname {arcosh} \left({\frac {-3|q|}{2p}}{\sqrt {\frac {-3}{p}}}\right)\right]\qquad {\text{if }}~4p^{3}+27q^{2}>0~{\text{ and }}~p<0,\\t_{0}&=-2{\sqrt {\frac {p}{3}}}\sinh \left[{\frac {1}{3}}\operatorname {arsinh} \left({\frac {3q}{2p}}{\sqrt {\frac {3}{p}}}\right)\right]\qquad {\text{if }}~p>0.\end{aligned}}} If p ≠ 0 and the inequalities on the right are not satisfied (the case of three real roots), the formulas remain valid but involve complex quantities. When p = ±3, the above values of t0 are sometimes called the Chebyshev cube root. More precisely, the values involving cosines and hyperbolic cosines define, when p = −3, the same analytic function denoted C1/3(q), which is the proper Chebyshev cube root. The value involving hyperbolic sines is similarly denoted S1/3(q), when p = 3.
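Viète's trigonometric formula is straightforward to evaluate numerically. A minimal sketch (ours; depressed_roots_trig is a hypothetical name), valid only in the three-real-root case 4p³ + 27q² < 0:

import math

def depressed_roots_trig(p, q):
    assert 4*p**3 + 27*q*q < 0           # three distinct real roots
    m = 2 * math.sqrt(-p / 3)
    theta = math.acos(3*q / (2*p) * math.sqrt(-3 / p)) / 3
    return [m * math.cos(theta - 2*math.pi*k / 3) for k in range(3)]

# t^3 - 7t + 6 = (t - 1)(t - 2)(t + 3):
roots = sorted(depressed_roots_trig(-7.0, 6.0))
for r, expected in zip(roots, [-3, 1, 2]):
    assert abs(r - expected) < 1e-9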
== Geometric solutions == === Omar Khayyám's solution === For solving the cubic equation x³ + m²x = n where n > 0, Omar Khayyám constructed the parabola y = x²/m, the circle that has as a diameter the line segment [0, n/m²] on the positive x-axis, and a vertical line through the point where the circle and the parabola intersect above the x-axis. The solution is given by the length of the horizontal line segment from the origin to the intersection of the vertical line and the x-axis. A simple modern proof is as follows. Multiplying the equation by x/m² and regrouping the terms gives x 4 m 2 = x ( n m 2 − x ) . {\displaystyle {\frac {x^{4}}{m^{2}}}=x\left({\frac {n}{m^{2}}}-x\right).} The left-hand side is the value of y² on the parabola. The equation of the circle being y² + x(x − ⁠n/m²⁠) = 0, the right-hand side is the value of y² on the circle. === Solution with angle trisector === A cubic equation with real coefficients can be solved geometrically using compass, straightedge, and an angle trisector if and only if it has three real roots. A cubic equation can be solved by compass-and-straightedge construction (without trisector) if and only if it has a rational root. This implies that the old problems of angle trisection and doubling the cube, set by ancient Greek mathematicians, cannot be solved by compass-and-straightedge construction. == Geometric interpretation of the roots == === Three real roots === Viète's trigonometric expression of the roots in the three-real-roots case lends itself to a geometric interpretation in terms of a circle. When the cubic is written in depressed form, t³ + pt + q = 0, as shown above, the solution can be expressed as t k = 2 − p 3 cos ⁡ ( 1 3 arccos ⁡ ( 3 q 2 p − 3 p ) − k 2 π 3 ) for k = 0 , 1 , 2 . {\displaystyle t_{k}=2{\sqrt {-{\frac {p}{3}}}}\cos \left({\frac {1}{3}}\arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\right)-k{\frac {2\pi }{3}}\right)\quad {\text{for}}\quad k=0,1,2\,.} Here arccos ⁡ ( 3 q 2 p − 3 p ) {\displaystyle \arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\right)} is an angle in the unit circle; taking ⁠1/3⁠ of that angle corresponds to taking a cube root of a complex number; adding −k⁠2π/3⁠ for k = 1, 2 finds the other cube roots; and multiplying the cosines of these resulting angles by 2 − p 3 {\displaystyle 2{\sqrt {-{\frac {p}{3}}}}} corrects for scale. For the non-depressed case, the depressed case as indicated previously is obtained by defining t such that x = t − ⁠b/3a⁠ so t = x + ⁠b/3a⁠. Graphically this corresponds to simply shifting the graph horizontally when changing between the variables t and x, without changing the angle relationships. This shift moves the point of inflection and the centre of the circle onto the y-axis. Consequently, the roots of the equation in t sum to zero. === One real root === ==== In the Cartesian plane ==== When the graph of a cubic function is plotted in the Cartesian plane, if there is only one real root, it is the abscissa (x-coordinate) of the x-intercept R of the curve. Further, if the complex conjugate roots are written as g ± hi, then the real part g is the abscissa of the tangency point H of the tangent line to the cubic that passes through the x-intercept R. The imaginary parts ±h are the square roots of the tangent of the angle between this tangent line and the horizontal axis. ==== In the complex plane ==== With one real and two complex roots, the three roots can be represented as points in the complex plane, as can the two roots of the cubic's derivative. There is an interesting geometrical relationship among all these roots. The points in the complex plane representing the three roots serve as the vertices of an isosceles triangle. (The triangle is isosceles because one root is on the horizontal (real) axis and the other two roots, being complex conjugates, appear symmetrically above and below the real axis.) Marden's theorem says that the points representing the roots of the derivative of the cubic are the foci of the Steiner inellipse of the triangle, the unique ellipse that is tangent to the triangle at the midpoints of its sides. If the angle at the vertex on the real axis is less than ⁠π/3⁠ then the major axis of the ellipse lies on the real axis, as do its foci and hence the roots of the derivative. If that angle is greater than ⁠π/3⁠, the major axis is vertical and its foci, the roots of the derivative, are complex conjugates. And if that angle is ⁠π/3⁠, the triangle is equilateral, the Steiner inellipse is simply the triangle's incircle, its foci coincide with each other at the incenter, which lies on the real axis, and hence the derivative has duplicate real roots. == Galois group == Given a cubic irreducible polynomial over a field K of characteristic different from 2 and 3, the Galois group over K is the group of the field automorphisms of the smallest extension of K containing all the roots (the splitting field) that fix K. As these automorphisms must permute the roots of the polynomials, this group is either the group S3 of all six permutations of the three roots, or the group A3 of the three circular permutations.
The discriminant Δ of the cubic is the square of Δ = a 2 ( r 1 − r 2 ) ( r 1 − r 3 ) ( r 2 − r 3 ) , {\displaystyle {\sqrt {\Delta }}=a^{2}(r_{1}-r_{2})(r_{1}-r_{3})(r_{2}-r_{3}),} where a is the leading coefficient of the cubic, and r1, r2 and r3 are the three roots of the cubic. As Δ {\displaystyle {\sqrt {\Delta }}} changes sign if two roots are exchanged, Δ {\displaystyle {\sqrt {\Delta }}} is fixed by the Galois group only if the Galois group is A3. In other words, the Galois group is A3 if and only if the discriminant is the square of an element of K. As most integers are not squares, when working over the field Q of the rational numbers, the Galois group of most irreducible cubic polynomials is the group S3 with six elements. An example of a Galois group A3 with three elements is given by p(x) = x³ − 3x − 1, whose discriminant is 81 = 9². == Derivation of the roots == This section regroups several methods for deriving Cardano's formula. === Cardano's method === This method is due to Scipione del Ferro and Tartaglia, but is named after Gerolamo Cardano who first published it in his book Ars Magna (1545). This method applies to a depressed cubic t³ + pt + q = 0. The idea is to introduce two variables u and v {\displaystyle v} such that u + v = t {\displaystyle u+v=t} and to substitute this in the depressed cubic, giving u 3 + v 3 + ( 3 u v + p ) ( u + v ) + q = 0. {\displaystyle u^{3}+v^{3}+(3uv+p)(u+v)+q=0.} At this point Cardano imposed the condition 3 u v + p = 0. {\displaystyle 3uv+p=0.} This removes the third term in the previous equality, leading to the system of equations u 3 + v 3 = − q u v = − p 3 . {\displaystyle {\begin{aligned}u^{3}+v^{3}&=-q\\uv&=-{\frac {p}{3}}.\end{aligned}}} Knowing the sum and the product of u³ and v 3 , {\displaystyle v^{3},} one deduces that they are the two solutions of the quadratic equation 0 = ( x − u 3 ) ( x − v 3 ) = x 2 − ( u 3 + v 3 ) x + u 3 v 3 = x 2 − ( u 3 + v 3 ) x + ( u v ) 3 {\displaystyle {\begin{aligned}0&=(x-u^{3})(x-v^{3})\\&=x^{2}-(u^{3}+v^{3})x+u^{3}v^{3}\\&=x^{2}-(u^{3}+v^{3})x+(uv)^{3}\end{aligned}}} so x 2 + q x − p 3 27 = 0. {\displaystyle x^{2}+qx-{\frac {p^{3}}{27}}=0.} The discriminant of this equation is Δ = q 2 + 4 p 3 27 {\displaystyle \Delta =q^{2}+{\frac {4p^{3}}{27}}} , and assuming it is positive, real solutions to this equation are (after folding the division by 4 into the square root): − q 2 ± q 2 4 + p 3 27 . {\displaystyle -{\frac {q}{2}}\pm {\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}.} So (without loss of generality in choosing u or v {\displaystyle v} ): u = − q 2 + q 2 4 + p 3 27 3 . {\displaystyle u={\sqrt[{3}]{-{\frac {q}{2}}+{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}}}.} v = − q 2 − q 2 4 + p 3 27 3 . {\displaystyle v={\sqrt[{3}]{-{\frac {q}{2}}-{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}}}.} As u + v = t , {\displaystyle u+v=t,} the sum of the cube roots of these solutions is a root of the equation. That is t = − q 2 + q 2 4 + p 3 27 3 + − q 2 − q 2 4 + p 3 27 3 {\displaystyle t={\sqrt[{3}]{-{q \over 2}+{\sqrt {{q^{2} \over 4}+{p^{3} \over 27}}}}}+{\sqrt[{3}]{-{q \over 2}-{\sqrt {{q^{2} \over 4}+{p^{3} \over 27}}}}}} is a root of the equation; this is Cardano's formula. This works well when 4 p 3 + 27 q 2 > 0 , {\displaystyle 4p^{3}+27q^{2}>0,} but, if 4 p 3 + 27 q 2 < 0 , {\displaystyle 4p^{3}+27q^{2}<0,} the square root appearing in the formula is not real.
As a complex number has three cube roots, using Cardano's formula without care would provide nine roots, while a cubic equation cannot have more than three roots. This was first clarified by Rafael Bombelli in his book L'Algebra (1572). The solution is to use the fact that uv = −p/3, that is, v = −p/(3u). This means that only one cube root needs to be computed, and leads to the second formula given in § Cardano's formula. The other roots of the equation can be obtained by changing the choice of cube root, or, equivalently, by multiplying the cube root by each of the two primitive cube roots of unity, which are {\displaystyle {\frac {-1\pm {\sqrt {-3}}}{2}}.}

=== Vieta's substitution ===

Vieta's substitution is a method introduced by François Viète (Vieta is his Latin name) in a text published posthumously in 1615, which provides directly the second formula of § Cardano's method, and avoids the problem of computing two different cube roots. Starting from the depressed cubic t³ + pt + q = 0, Vieta's substitution is t = w − p/(3w). This substitution transforms the depressed cubic into {\displaystyle w^{3}+q-{\frac {p^{3}}{27w^{3}}}=0.} Multiplying by w³, one gets a quadratic equation in w³: {\displaystyle (w^{3})^{2}+q(w^{3})-{\frac {p^{3}}{27}}=0.} Let {\displaystyle W=-{\frac {q}{2}}\pm {\sqrt {{\frac {p^{3}}{27}}+{\frac {q^{2}}{4}}}}} be any nonzero root of this quadratic equation. If w1, w2 and w3 are the three cube roots of W, then the roots of the original depressed cubic are w1 − p/(3w1), w2 − p/(3w2), and w3 − p/(3w3). The other root of the quadratic equation is {\displaystyle \textstyle -{\frac {p^{3}}{27W}}.} This implies that changing the sign of the square root exchanges wi and −p/(3wi) for i = 1, 2, 3, and therefore does not change the roots. This method only fails when both roots of the quadratic equation are zero, that is, when p = q = 0, in which case the only root of the depressed cubic is 0.

=== Lagrange's method ===

In his paper Réflexions sur la résolution algébrique des équations ("Thoughts on the algebraic solving of equations"), Joseph Louis Lagrange introduced a new method to solve equations of low degree in a uniform way, with the hope that he could generalize it for higher degrees. This method works well for cubic and quartic equations, but Lagrange did not succeed in applying it to a quintic equation, because it requires solving a resolvent polynomial of degree at least six. Apart from the fact that nobody had previously succeeded, this was the first indication of the non-existence of an algebraic formula for degrees 5 and higher; this was later proved by the Abel–Ruffini theorem. Nevertheless, modern methods for solving solvable quintic equations are mainly based on Lagrange's method. In the case of cubic equations, Lagrange's method gives the same solution as Cardano's. Lagrange's method can be applied directly to the general cubic equation ax³ + bx² + cx + d = 0, but the computation is simpler with the depressed cubic equation, t³ + pt + q = 0. Lagrange's main idea was to work with the discrete Fourier transform of the roots instead of with the roots themselves.
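As a brief aside before developing that idea, the one-cube-root recipe just described (equivalently, Vieta's substitution) is easy to realize with complex arithmetic; a hedged sketch, with a function name of our choosing:

```python
import cmath

def depressed_cubic_roots(p, q):
    """All three roots of t^3 + p t + q = 0 from a single cube root.

    Take one cube root u of -q/2 + sqrt(q^2/4 + p^3/27), set v = -p/(3u),
    and rotate u by the primitive cube roots of unity.
    Assumes (p, q) != (0, 0).
    """
    w = cmath.sqrt(q * q / 4 + p ** 3 / 27)
    u = (-q / 2 + w) ** (1 / 3)
    if u == 0:                                # pick the other sign if needed
        u = (-q / 2 - w) ** (1 / 3)
    zeta = cmath.exp(2j * cmath.pi / 3)       # primitive cube root of unity
    return [u * zeta ** k - p / (3 * u * zeta ** k) for k in range(3)]

print(depressed_cubic_roots(-7, 6))   # 2, -3, 1 up to tiny imaginary parts
```

With this aside in place, Lagrange's Fourier-transform idea can now be made precise.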
More precisely, let ξ be a primitive third root of unity, that is a number such that ξ3 = 1 and ξ2 + ξ + 1 = 0 (when working in the space of complex numbers, one has ξ = − 1 ± i 3 2 = e 2 i π / 3 , {\displaystyle \textstyle \xi ={\frac {-1\pm i{\sqrt {3}}}{2}}=e^{2i\pi /3},} but this complex interpretation is not used here). Denoting x0, x1 and x2 the three roots of the cubic equation to be solved, let s 0 = x 0 + x 1 + x 2 , s 1 = x 0 + ξ x 1 + ξ 2 x 2 , s 2 = x 0 + ξ 2 x 1 + ξ x 2 , {\displaystyle {\begin{aligned}s_{0}&=x_{0}+x_{1}+x_{2},\\s_{1}&=x_{0}+\xi x_{1}+\xi ^{2}x_{2},\\s_{2}&=x_{0}+\xi ^{2}x_{1}+\xi x_{2},\end{aligned}}} be the discrete Fourier transform of the roots. If s0, s1 and s2 are known, the roots may be recovered from them with the inverse Fourier transform consisting of inverting this linear transformation; that is, x 0 = 1 3 ( s 0 + s 1 + s 2 ) , x 1 = 1 3 ( s 0 + ξ 2 s 1 + ξ s 2 ) , x 2 = 1 3 ( s 0 + ξ s 1 + ξ 2 s 2 ) . {\displaystyle {\begin{aligned}x_{0}&={\tfrac {1}{3}}(s_{0}+s_{1}+s_{2}),\\x_{1}&={\tfrac {1}{3}}(s_{0}+\xi ^{2}s_{1}+\xi s_{2}),\\x_{2}&={\tfrac {1}{3}}(s_{0}+\xi s_{1}+\xi ^{2}s_{2}).\end{aligned}}} By Vieta's formulas, s0 is known to be zero in the case of a depressed cubic, and −⁠b/a⁠ for the general cubic. So, only s1 and s2 need to be computed. They are not symmetric functions of the roots (exchanging x1 and x2 exchanges also s1 and s2), but some simple symmetric functions of s1 and s2 are also symmetric in the roots of the cubic equation to be solved. Thus these symmetric functions can be expressed in terms of the (known) coefficients of the original cubic, and this allows eventually expressing the si as roots of a polynomial with known coefficients. This works well for every degree, but, in degrees higher than four, the resulting polynomial that has the si as roots has a degree higher than that of the initial polynomial, and is therefore unhelpful for solving. This is the reason for which Lagrange's method fails in degrees five and higher. In the case of a cubic equation, P = s 1 s 2 , {\displaystyle P=s_{1}s_{2},} and S = s 1 3 + s 2 3 {\displaystyle S=s_{1}^{3}+s_{2}^{3}} are such symmetric polynomials (see below). It follows that s 1 3 {\displaystyle s_{1}^{3}} and s 2 3 {\displaystyle s_{2}^{3}} are the two roots of the quadratic equation z 2 − S z + P 3 = 0. {\displaystyle z^{2}-Sz+P^{3}=0.} Thus the resolution of the equation may be finished exactly as with Cardano's method, with s 1 {\displaystyle s_{1}} and s 2 {\displaystyle s_{2}} in place of u and v . {\displaystyle v.} In the case of the depressed cubic, one has x 0 = 1 3 ( s 1 + s 2 ) {\displaystyle x_{0}={\tfrac {1}{3}}(s_{1}+s_{2})} and s 1 s 2 = − 3 p , {\displaystyle s_{1}s_{2}=-3p,} while in Cardano's method we have set x 0 = u + v {\displaystyle x_{0}=u+v} and u v = − 1 3 p . {\displaystyle uv=-{\tfrac {1}{3}}p.} Thus, up to the exchange of u and v , {\displaystyle v,} we have s 1 = 3 u {\displaystyle s_{1}=3u} and s 2 = 3 v . {\displaystyle s_{2}=3v.} In other words, in this case, Cardano's method and Lagrange's method compute exactly the same things, up to a factor of three in the auxiliary variables, the main difference being that Lagrange's method explains why these auxiliary variables appear in the problem. 
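The whole procedure fits in a short numerical sketch (Python; the function name is ours, and the values S = −27q and P = −3p for the depressed cubic are taken from the computation in the next subsection):

```python
import cmath

def lagrange_depressed(p, q):
    """Roots of t^3 + p t + q = 0 via Lagrange's resolvents (sketch).

    For a depressed cubic s0 = 0, and (per the next subsection)
    S = s1^3 + s2^3 = -27 q and P = s1 s2 = -3 p, so s1^3 and s2^3
    are the roots of z^2 - S z + P^3 = 0.
    """
    S, P = -27 * q, -3 * p
    root = cmath.sqrt(S * S - 4 * P ** 3)
    s1 = ((S + root) / 2) ** (1 / 3)                 # one cube root of s1^3
    s2 = P / s1 if s1 != 0 else ((S - root) / 2) ** (1 / 3)
    xi = cmath.exp(2j * cmath.pi / 3)
    # invert the discrete Fourier transform (s0, s1, s2) -> (x0, x1, x2)
    return [(s1 * xi ** (2 * k) + s2 * xi ** k) / 3 for k in range(3)]

print(lagrange_depressed(-7, 6))   # same roots as Cardano: 2, 1, -3
```

Enforcing s2 = P/s1 rather than taking an independent cube root mirrors Cardano's uv = −p/3 constraint and avoids the nine-root ambiguity noted earlier.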
==== Computation of S and P ====

A straightforward computation using the relations ξ³ = 1 and ξ² + ξ + 1 = 0 gives {\displaystyle {\begin{aligned}P&=s_{1}s_{2}=x_{0}^{2}+x_{1}^{2}+x_{2}^{2}-(x_{0}x_{1}+x_{1}x_{2}+x_{2}x_{0}),\\S&=s_{1}^{3}+s_{2}^{3}=2(x_{0}^{3}+x_{1}^{3}+x_{2}^{3})-3(x_{0}^{2}x_{1}+x_{1}^{2}x_{2}+x_{2}^{2}x_{0}+x_{0}x_{1}^{2}+x_{1}x_{2}^{2}+x_{2}x_{0}^{2})+12x_{0}x_{1}x_{2}.\end{aligned}}} This shows that P and S are symmetric functions of the roots. Using Newton's identities, it is straightforward to express them in terms of the elementary symmetric functions of the roots, giving {\displaystyle {\begin{aligned}P&=e_{1}^{2}-3e_{2},\\S&=2e_{1}^{3}-9e_{1}e_{2}+27e_{3},\end{aligned}}} with e1 = 0, e2 = p and e3 = −q in the case of a depressed cubic, and e1 = −b/a, e2 = c/a and e3 = −d/a in the general case.

== Applications ==

Cubic equations arise in various other contexts.

=== In mathematics ===

Angle trisection and doubling the cube are two ancient problems of geometry that have been proved not to be solvable by straightedge and compass construction, because they are equivalent to solving a cubic equation. Marden's theorem states that the foci of the Steiner inellipse of any triangle can be found by using the cubic function whose roots are the coordinates in the complex plane of the triangle's three vertices. The roots of the first derivative of this cubic are the complex coordinates of those foci. The area of a regular heptagon can be expressed in terms of the roots of a cubic. Further, the ratios of the long diagonal to the side, the side to the short diagonal, and the negative of the short diagonal to the long diagonal all satisfy a particular cubic equation. In addition, the ratio of the inradius to the circumradius of a heptagonal triangle is one of the solutions of a cubic equation. The values of trigonometric functions of angles related to 2π/7 satisfy cubic equations. Given the cosine (or another trigonometric function) of an arbitrary angle, the cosine of one-third of that angle is one of the roots of a cubic. The solution of the general quartic equation relies on the solution of its resolvent cubic. The eigenvalues of a 3×3 matrix are the roots of a cubic polynomial, namely the characteristic polynomial of the matrix. The characteristic equation of a third-order constant-coefficient or Cauchy–Euler (equidimensional variable-coefficient) linear differential equation or difference equation is a cubic equation. The intersection points of a cubic Bézier curve and a straight line can be computed by solving the cubic equation that represents the Bézier curve. Critical points of a quartic function are found by solving a cubic equation (the derivative set equal to zero). Inflection points of a quintic function are the solutions of a cubic equation (the second derivative set equal to zero).

=== In other sciences ===

In analytical chemistry, the Charlot equation, which can be used to find the pH of buffer solutions, can be solved using a cubic equation. In thermodynamics, equations of state (which relate pressure, volume, and temperature of a substance), such as the Van der Waals equation of state, are cubic in the volume (a numerical sketch follows below). Kinematic equations involving linear rates of acceleration are cubic.
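For instance, once pressure and temperature are fixed, the Van der Waals equation (P + a/V²)(V − b) = RT expands to the cubic PV³ − (Pb + RT)V² + aV − ab = 0 in the molar volume V. A minimal numpy sketch; the constants are rough textbook values for nitrogen and are assumptions here, not data to rely on:

```python
import numpy as np

R = 8.314                  # J/(mol K)
a, b = 0.137, 3.87e-5      # rough Van der Waals constants for N2 (SI units)
P, T = 1.0e5, 300.0        # 1 bar, 300 K

# P V^3 - (P b + R T) V^2 + a V - a b = 0
V = np.roots([P, -(P * b + R * T), a, -a * b])
print(V[np.abs(V.imag) < 1e-10].real)   # ~0.0249 m^3/mol, near the ideal RT/P
```

Above the critical temperature there is a single real (physical) root; below it, the three real roots correspond to the liquid, unstable, and vapor branches.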
The speed of seismic Rayleigh waves is a solution of the Rayleigh wave cubic equation. The steady-state speed of a vehicle moving on a slope with air friction at a given input power is found by solving a depressed cubic equation. Kepler's third law of planetary motion is cubic in the semi-major axis.

== See also ==

Quartic equation
Quintic equation
Tschirnhaus transformation
Principal equation form

== Notes ==

== References ==

Guilbeau, Lucye (1930), "The History of the Solution of the Cubic Equation", Mathematics News Letter, 5 (4): 8–12, doi:10.2307/3027812, JSTOR 3027812

== Further reading ==

Anglin, W. S.; Lambek, Joachim (1995), "Mathematics in the Renaissance", The Heritage of Thales, Springer, pp. 125–131, ISBN 978-0-387-94544-6, Ch. 24.
Dence, T. (November 1997), "Cubics, chaos and Newton's method", Mathematical Gazette, 81 (492), Mathematical Association: 403–408, doi:10.2307/3619617, ISSN 0025-5572, JSTOR 3619617, S2CID 125196796
Dunnett, R. (November 1994), "Newton–Raphson and the cubic", Mathematical Gazette, 78 (483), Mathematical Association: 347–348, doi:10.2307/3620218, ISSN 0025-5572, JSTOR 3620218, S2CID 125643035
Jacobson, Nathan (2009), Basic algebra, vol. 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1
Mitchell, D. W. (November 2007), "Solving cubics by solving triangles", Mathematical Gazette, 91, Mathematical Association: 514–516, doi:10.1017/S0025557200182178, ISSN 0025-5572, S2CID 124710259
Mitchell, D. W. (November 2009), "Powers of φ as roots of cubics", Mathematical Gazette, 93, Mathematical Association, doi:10.1017/S0025557200185237, ISSN 0025-5572, S2CID 126286653
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 5.6 Quadratic and Cubic Equations", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
Rechtschaffen, Edgar (July 2008), "Real roots of cubics: Explicit formula for quasi-solutions", Mathematical Gazette, 92, Mathematical Association: 268–276, doi:10.1017/S0025557200183147, ISSN 0025-5572, S2CID 125870578
Zucker, I. J. (July 2008), "The cubic equation – a new look at the irreducible case", Mathematical Gazette, 92, Mathematical Association: 264–268, doi:10.1017/S0025557200183135, ISSN 0025-5572, S2CID 125986006

== External links ==

"Cardano formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
History of quadratic, cubic and quartic equations on MacTutor archive.
500 years of NOT teaching THE CUBIC FORMULA. What is it they think you can't handle? – YouTube video by Mathologer about the history of cubic equations and Cardano's solution, as well as Ferrari's solution to quartic equations
Wikipedia/Cubic_equation
In mathematics, a basic algebraic operation is any one of the common operations of elementary algebra, which include addition, subtraction, multiplication, division, raising to a whole number power, and taking roots (fractional power). These operations may be performed on numbers, in which case they are often called arithmetic operations. They may also be performed, in a similar way, on variables, algebraic expressions, and more generally, on elements of algebraic structures, such as groups and fields. An algebraic operation may also be defined more generally as a function from a Cartesian power of a given set to the same set. The term algebraic operation may also be used for operations that may be defined by compounding basic algebraic operations, such as the dot product. In calculus and mathematical analysis, algebraic operation is also used for the operations that may be defined by purely algebraic methods. For example, exponentiation with an integer or rational exponent is an algebraic operation, but not the general exponentiation with a real or complex exponent. Also, the derivative is an operation on numerical functions and algebraic expressions that is not algebraic. == Notation == Multiplication symbols are usually omitted, and implied, when there is no operator between two variables or terms, or when a coefficient is used. For example, 3 × x2 is written as 3x2, and 2 × x × y is written as 2xy. Sometimes, multiplication symbols are replaced with either a dot or center-dot, so that x × y is written as either x . y or x · y. Plain text, programming languages, and calculators also use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written as 3 * x. Rather than using the ambiguous division sign (÷), division is usually represented with a vinculum, a horizontal line, as in ⁠3/x + 1⁠. In plain text and programming languages, a slash (also called a solidus) is used, e.g. 3 / (x + 1). Exponents are usually formatted using superscripts, as in x2. In plain text, the TeX mark-up language, and some programming languages such as MATLAB and Julia, the caret symbol, ^, represents exponents, so x2 is written as x ^ 2. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x2 is written as x ** 2. The plus–minus sign, ±, is used as a shorthand notation for two expressions written as one, representing one expression with a plus sign, the other with a minus sign. For example, y = x ± 1 represents the two equations y = x + 1 and y = x − 1. Sometimes, it is used for denoting a positive-or-negative term such as ±x. == Arithmetic vs algebraic operations == Algebraic operations work in the same way as arithmetic operations, as can be seen in the table below. Note: the use of the letters a {\displaystyle a} and b {\displaystyle b} is arbitrary, and the examples would have been equally valid if x {\displaystyle x} and y {\displaystyle y} were used. == Properties of arithmetic and algebraic operations == == See also == Algebraic expression Algebraic function Elementary algebra Factoring a quadratic expression Order of operations == Notes == == References ==
Wikipedia/Algebraic_operation
In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of the ring homomorphism of an element of K). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication. A commutative algebra is an associative algebra for which the multiplication is commutative, or, equivalently, an associative algebra that is also a commutative ring. In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital. Every ring is an associative algebra over its center and over the integers. == Definition == Let R be a commutative ring (so R could be a field). An associative R-algebra A (or more simply, an R-algebra A) is a ring A that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies r ⋅ ( x y ) = ( r ⋅ x ) y = x ( r ⋅ y ) {\displaystyle r\cdot (xy)=(r\cdot x)y=x(r\cdot y)} for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.) Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is (r, x) ↦ f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by r ↦ r ⋅ 1A. (See also § From ring homomorphisms below). Every ring is an associative Z-algebra, where Z denotes the ring of the integers. A commutative algebra is an associative algebra that is also a commutative ring. === As a monoid object in the category of modules === The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules. Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map) corresponds to a unique R-linear map m : A ⊗ R A → A {\displaystyle m:A\otimes _{R}A\to A} . The associativity then refers to the identity: m ∘ ( id ⊗ m ) = m ∘ ( m ⊗ id ) . 
{\displaystyle m\circ ({\operatorname {id} }\otimes m)=m\circ (m\otimes \operatorname {id} ).} === From ring homomorphisms === An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism η : R → A whose image lies in the center of A, we can make A an R-algebra by defining r ⋅ x = η ( r ) x {\displaystyle r\cdot x=\eta (r)x} for all r ∈ R and x ∈ A. If A is an R-algebra, taking x = 1, the same formula in turn defines a ring homomorphism η : R → A whose image lies in the center. If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism η : R → A. The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms R → A for a fixed R, i.e., commutative R-algebras, and whose morphisms are ring homomorphisms A → A′ that are under R; i.e., R → A → A′ is R → A′ (i.e., the coslice category of the category of commutative rings under R.) The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R. How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: Generic matrix ring. == Algebra homomorphisms == A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, φ : A1 → A2 is an associative algebra homomorphism if φ ( r ⋅ x ) = r ⋅ φ ( x ) φ ( x + y ) = φ ( x ) + φ ( y ) φ ( x y ) = φ ( x ) φ ( y ) φ ( 1 ) = 1 {\displaystyle {\begin{aligned}\varphi (r\cdot x)&=r\cdot \varphi (x)\\\varphi (x+y)&=\varphi (x)+\varphi (y)\\\varphi (xy)&=\varphi (x)\varphi (y)\\\varphi (1)&=1\end{aligned}}} The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg. The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings. == Examples == The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics. === Algebra === Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent. Any ring of characteristic n is a (Z/nZ)-algebra in the same way. Given an R-module M, the endomorphism ring of M, denoted EndR(M) is an R-algebra by defining (r·φ)(x) = r·φ(x). Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module. In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K. The complex numbers form a 2-dimensional commutative algebra over the real numbers. The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions). 
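As a quick numerical illustration of the compatibility axiom r·(xy) = (r·x)y = x(r·y) in the matrix-ring example above, a minimal sketch (numpy assumed available; the sample matrices are random, so this is a spot check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2.5                          # a scalar from the base ring R
x, y = rng.random((2, 2)), rng.random((2, 2))

lhs = r * (x @ y)                # r . (x y)
assert np.allclose(lhs, (r * x) @ y)   # (r . x) y
assert np.allclose(lhs, x @ (r * y))   # x (r . y)
print("axiom holds on this sample")
```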
Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}. The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E. The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure). Given a module M over a commutative ring R, the direct sum of modules R ⊕ M has a structure of an R-algebra by thinking M consists of infinitesimal elements; i.e., the multiplication is given as (a + x)(b + y) = ab + ay + bx. The notion is sometimes called the algebra of dual numbers. A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field. === Representation theory === The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra. If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups. If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A. A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph. === Analysis === Given any Banach space X, the continuous linear operators A : X → X form an associative algebra (using composition of operators as multiplication); this is a Banach algebra. Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise. The set of semimartingales defined on the filtered probability space (Ω, F, (Ft)t≥0, P) forms a ring under stochastic integration. The Weyl algebra An Azumaya algebra === Geometry and combinatorics === The Clifford algebras, which are useful in geometry and physics. Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics. The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra. A differential graded algebra is an associative algebra together with a grading and a differential. For example, the de Rham algebra Ω ( M ) = ⨁ p = 0 n Ω p ( M ) {\textstyle \Omega (M)=\bigoplus _{p=0}^{n}\Omega ^{p}(M)} , where Ω p ( M ) {\textstyle \Omega ^{p}(M)} consists of differential p-forms on a manifold M, is a differential graded algebra. === Mathematical physics === A Poisson algebra is a commutative associative algebra over a field together with a structure of a Lie algebra so that the Lie bracket {,} satisfies the Leibniz rule; i.e., {fg, h} = f{g, h} + g{f, h}. Given a Poisson algebra a {\displaystyle {\mathfrak {a}}} , consider the vector space a [ [ u ] ] {\displaystyle {\mathfrak {a}}[\![u]\!]} of formal power series over a {\displaystyle {\mathfrak {a}}} . 
If a [ [ u ] ] {\displaystyle {\mathfrak {a}}[\![u]\!]} has a structure of an associative algebra with multiplication ∗ {\displaystyle *} such that, for f , g ∈ a {\displaystyle f,g\in {\mathfrak {a}}} , f ∗ g = f g − 1 2 { f , g } u + ⋯ , {\displaystyle f*g=fg-{\frac {1}{2}}\{f,g\}u+\cdots ,} then a [ [ u ] ] {\displaystyle {\mathfrak {a}}[\![u]\!]} is called a deformation quantization of a {\displaystyle {\mathfrak {a}}} . A quantized enveloping algebra. The dual of such an algebra turns out to be an associative algebra (see § Dual of an associative algebra) and is, philosophically speaking, the (quantized) coordinate ring of a quantum group. Gerstenhaber algebra == Constructions == Subalgebras A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A. Quotient algebras Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since r · x = (r1A)x. This gives the quotient ring A / I the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra. Direct products The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication. Free products One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras. Tensor products The tensor product of two R-algebras is also an R-algebra in a natural way. See tensor product of algebras for more details. Given a commutative ring R and any ring A the tensor product R ⊗Z A can be given the structure of an R-algebra by defining r · (s ⊗ a) = (rs ⊗ a). The functor which sends A to R ⊗Z A is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings. Free algebra A free algebra is an algebra generated by symbols. If one imposes commutativity; i.e., take the quotient by commutators, then one gets a polynomial algebra. == Dual of an associative algebra == Let A be an associative algebra over a commutative ring R. Since A is in particular a module, we can take the dual module A* of A. A priori, the dual A* need not have a structure of an associative algebra. However, A may come with an extra structure (namely, that of a Hopf algebra) so that the dual is also an associative algebra. For example, take A to be the ring of continuous functions on a compact group G. Then, not only A is an associative algebra, but it also comes with the co-multiplication Δ(f)(g, h) = f(gh) and co-unit ε(f) = f(1). The "co-" refers to the fact that they satisfy the dual of the usual multiplication and unit in the algebra axiom. Hence, the dual A* is an associative algebra. The co-multiplication and co-unit are also important in order to form a tensor product of representations of associative algebras (see § Representations below). == Enveloping algebra == Given an associative algebra A over a commutative ring R, the enveloping algebra Ae of A is the algebra A ⊗R Aop or Aop ⊗R A, depending on authors. Note that a bimodule over A is exactly a left module over Ae. == Separable algebra == Let A be an algebra over a commutative ring R. Then the algebra A is a right module over Ae := Aop ⊗R A with the action x ⋅ (a ⊗ b) = axb. 
Then, by definition, A is said to be separable if the multiplication map A ⊗R A → A : x ⊗ y ↦ xy splits as an Ae-linear map, where A ⊗ A is an Ae-module by (x ⊗ y) ⋅ (a ⊗ b) = ax ⊗ yb. Equivalently, A is separable if it is a projective module over Ae; thus, the Ae-projective dimension of A, sometimes called the bidimension of A, measures the failure of separability.

== Finite-dimensional algebra ==

Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring.

=== Commutative case ===

As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field, and thus the following are equivalent:

A is separable.
{\displaystyle A\otimes {\overline {k}}} is reduced, where {\displaystyle {\overline {k}}} is some algebraic closure of k.
{\displaystyle A\otimes {\overline {k}}={\overline {k}}^{n}} for some n.
{\displaystyle \dim _{k}A} is the number of k-algebra homomorphisms {\displaystyle A\to {\overline {k}}}.

Let {\displaystyle \Gamma =\operatorname {Gal} (k_{s}/k)=\varprojlim \operatorname {Gal} (k'/k)}, the profinite group of finite Galois extensions of k. Then {\displaystyle A\mapsto X_{A}=\{k{\text{-algebra homomorphisms }}A\to k_{s}\}} is an anti-equivalence of the category of finite-dimensional separable k-algebras to the category of finite sets with continuous Γ-actions.

=== Noncommutative case ===

Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., A = Mn(D). More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), a fact known as the Artin–Wedderburn theorem. The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals). The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of A/I as a module over the enveloping algebra (A/I)e is at most one, then the natural surjection p : A → A/I splits; i.e., A contains a subalgebra B such that p|B : B → A/I is an isomorphism. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras.

== Lattices and orders ==

Let R be a Noetherian integral domain with field of fractions K (for example, they can be Z, Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, L ⊗R K = V. Let AK be a finite-dimensional K-algebra. An order in AK is an R-subalgebra that is a lattice. In general, there are a lot fewer orders than lattices; e.g., (1/2)Z is a lattice in Q but not an order (since it is not an algebra). A maximal order is an order that is maximal among all the orders.
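Looking back at the examples, the algebra of dual numbers R ⊕ M (with M = R·ε and ε² = 0, multiplication (a + x)(b + y) = ab + ay + bx) is small enough to implement directly; a minimal Python sketch with a class name of our choosing:

```python
class Dual:
    """The algebra of dual numbers a + b*eps over the reals, eps^2 = 0."""
    def __init__(self, a, eps=0.0):
        self.a, self.eps = a, eps
    def __add__(self, other):
        return Dual(self.a + other.a, self.eps + other.eps)
    def __mul__(self, other):
        # (a + x)(b + y) = ab + ay + bx; the x*y term vanishes since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.eps + self.eps * other.a)
    def __repr__(self):
        return f"{self.a} + {self.eps}eps"

print(Dual(0, 1) * Dual(0, 1))    # 0 + 0eps: eps squares to zero
print(Dual(1, 1) * Dual(2, 3))    # 2 + 5eps
```

The ε-part behaves like a first-order infinitesimal, which is why this algebra also underlies forward-mode automatic differentiation.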
== Related concepts == === Coalgebras === An associative algebra over K is given by a K-vector space A endowed with a bilinear map A × A → A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K → A identifying the scalar multiples of the multiplicative identity. If the bilinear map A × A → A is reinterpreted as a linear map (i.e., morphism in the category of K-vector spaces) A ⊗ A → A (by the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form A ⊗ A → A and one of the form K → A) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra. There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above. == Representations == A representation of an algebra A is an algebra homomorphism ρ : A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V). If A and B are two algebras, and ρ : A → End(V) and τ : B → End(W) are two representations, then there is a (canonical) representation A ⊗ B → End(V ⊗ W) of the tensor product algebra A ⊗ B on the vector space V ⊗ W. However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below. === Motivation for a Hopf algebra === Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that ρ ( x ) ( v ⊗ w ) = ( σ ( x ) ( v ) ) ⊗ ( τ ( x ) ( w ) ) . {\displaystyle \rho (x)(v\otimes w)=(\sigma (x)(v))\otimes (\tau (x)(w)).} However, such a map would not be linear, since one would have ρ ( k x ) = σ ( k x ) ⊗ τ ( k x ) = k σ ( x ) ⊗ k τ ( x ) = k 2 ( σ ( x ) ⊗ τ ( x ) ) = k 2 ρ ( x ) {\displaystyle \rho (kx)=\sigma (kx)\otimes \tau (kx)=k\sigma (x)\otimes k\tau (x)=k^{2}(\sigma (x)\otimes \tau (x))=k^{2}\rho (x)} for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ : A → A ⊗ A, and defining the tensor product representation as ρ = ( σ ⊗ τ ) ∘ Δ . {\displaystyle \rho =(\sigma \otimes \tau )\circ \Delta .} Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. 
A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups). === Motivation for a Lie algebra === One can try to be more clever in defining a tensor product. Consider, for example, x ↦ ρ ( x ) = σ ( x ) ⊗ Id W + Id V ⊗ τ ( x ) {\displaystyle x\mapsto \rho (x)=\sigma (x)\otimes {\mbox{Id}}_{W}+{\mbox{Id}}_{V}\otimes \tau (x)} so that the action on the tensor product space is given by ρ ( x ) ( v ⊗ w ) = ( σ ( x ) v ) ⊗ w + v ⊗ ( τ ( x ) w ) {\displaystyle \rho (x)(v\otimes w)=(\sigma (x)v)\otimes w+v\otimes (\tau (x)w)} . This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication: ρ ( x y ) = σ ( x ) σ ( y ) ⊗ Id W + Id V ⊗ τ ( x ) τ ( y ) {\displaystyle \rho (xy)=\sigma (x)\sigma (y)\otimes {\mbox{Id}}_{W}+{\mbox{Id}}_{V}\otimes \tau (x)\tau (y)} . But, in general, this does not equal ρ ( x ) ρ ( y ) = σ ( x ) σ ( y ) ⊗ Id W + σ ( x ) ⊗ τ ( y ) + σ ( y ) ⊗ τ ( x ) + Id V ⊗ τ ( x ) τ ( y ) {\displaystyle \rho (x)\rho (y)=\sigma (x)\sigma (y)\otimes {\mbox{Id}}_{W}+\sigma (x)\otimes \tau (y)+\sigma (y)\otimes \tau (x)+{\mbox{Id}}_{V}\otimes \tau (x)\tau (y)} . This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra. == Non-unital algebras == Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital. One example of a non-unital associative algebra is given by the set of all functions f : R → R whose limit as x nears infinity is zero. Another example is the vector space of continuous periodic functions, together with the convolution product. == See also == Abstract algebra Algebraic structure Algebra over a field Sheaf of algebras, a sort of an algebra over a ringed space Deligne's conjecture on Hochschild cohomology == Notes == == Citations == == References ==
Wikipedia/Associative_algebra
In mathematics, analytic number theory is a branch of number theory that uses methods from mathematical analysis to solve problems about the integers. It is often said to have begun with Peter Gustav Lejeune Dirichlet's 1837 introduction of Dirichlet L-functions to give the first proof of Dirichlet's theorem on arithmetic progressions. It is well known for its results on prime numbers (involving the Prime Number Theorem and Riemann zeta function) and additive number theory (such as the Goldbach conjecture and Waring's problem). == Branches of analytic number theory == Analytic number theory can be split up into two major parts, divided more by the type of problems they attempt to solve than fundamental differences in technique. Multiplicative number theory deals with the distribution of the prime numbers, such as estimating the number of primes in an interval, and includes the prime number theorem and Dirichlet's theorem on primes in arithmetic progressions. Additive number theory is concerned with the additive structure of the integers, such as Goldbach's conjecture that every even number greater than 2 is the sum of two primes. One of the main results in additive number theory is the solution to Waring's problem. == History == === Precursors === Much of analytic number theory was inspired by the prime number theorem. Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / ln(x) is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / ln(x) as x approaches infinity is 1: lim x → ∞ π ( x ) x / ln ⁡ ( x ) = 1 , {\displaystyle \lim _{x\to \infty }{\frac {\pi (x)}{x/\ln(x)}}=1,} known as the asymptotic law of distribution of prime numbers. Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A ln(a) + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B ≈ −1.08366. Carl Friedrich Gauss considered the same question: "Im Jahr 1792 oder 1793" ('in the year 1792 or 1793'), according to his own recollection nearly sixty years later in a letter to Encke (1849), he wrote in his logarithm table (he was then 15 or 16) the short note "Primzahlen unter a ( = ∞ ) a ln ⁡ a {\displaystyle a(=\infty ){\frac {a}{\ln a}}} " ('prime numbers under a ( = ∞ ) a ln ⁡ a {\displaystyle a(=\infty ){\frac {a}{\ln a}}} '). But Gauss never published this conjecture. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / ln(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients. === Dirichlet === Johann Peter Gustav Lejeune Dirichlet is credited with the creation of analytic number theory, a field in which he found several deep results and in proving them introduced some fundamental tools, many of which were later named after him. 
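As a brief empirical aside before turning to Dirichlet's own contributions, the approximations discussed above are easy to compare; a short sketch assuming sympy's primepi and mpmath's li are available:

```python
import math
from sympy import primepi
from mpmath import li

# prime-counting function vs. the Legendre/Gauss and Dirichlet approximations
for x in (10**3, 10**4, 10**5, 10**6):
    print(x, int(primepi(x)), round(x / math.log(x)), round(float(li(x))))
```

For x = 10^6 this prints 78498 primes against roughly 72382 for x/ln x and about 78628 for li(x), illustrating why Dirichlet's approximation is considered better when one looks at differences rather than quotients.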
In 1837 Dirichlet published his theorem on arithmetic progressions, using mathematical analysis concepts to tackle an algebraic problem and thus creating the branch of analytic number theory. In proving the theorem, he introduced the Dirichlet characters and L-functions. In 1841 he generalized his arithmetic progressions theorem from the integers to the ring of Gaussian integers {\displaystyle \mathbb {Z} [i]}.

=== Chebyshev ===

In two papers from 1848 and 1850, the Russian mathematician Pafnuty L'vovich Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for the use of the zeta function ζ(s) for real values of the argument "s" (as in works of Leonhard Euler as early as 1737), predating Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit of π(x)/(x/ln(x)) as x goes to infinity exists at all, then it is necessarily equal to one. He was able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants near 1 for all x. Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.

=== Riemann ===

Bernhard Riemann made some famous contributions to modern analytic number theory. In a single short paper (the only one he published on the subject of number theory), he investigated the Riemann zeta function and established its importance for understanding the distribution of prime numbers. He made a series of conjectures about properties of the zeta function, one of which is the well-known Riemann hypothesis.

=== Hadamard and de la Vallée-Poussin ===

Extending the ideas of Riemann, two proofs of the prime number theorem were obtained independently by Jacques Hadamard and Charles Jean de la Vallée-Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is non-zero for all complex values of the variable s that have the form s = 1 + it with t > 0.

=== Modern times ===

The biggest technical change after 1950 has been the development of sieve methods, particularly in multiplicative problems. These are combinatorial in nature, and quite varied. The extremal branch of combinatorial theory has in return been greatly influenced by the value placed in analytic number theory on quantitative upper and lower bounds. Another recent development is probabilistic number theory, which uses methods from probability theory to estimate the distribution of number theoretic functions, such as how many prime divisors a number has. Specifically, the breakthroughs by Yitang Zhang, James Maynard, Terence Tao and Ben Green have all built on the Goldston–Pintz–Yıldırım method, which was originally used to prove that {\displaystyle \liminf _{n\to \infty }{\frac {p_{n+1}-p_{n}}{\log p_{n}}}=0,} that is, that gaps between consecutive primes are infinitely often much smaller than the average gap. Developments within analytic number theory are often refinements of earlier techniques, which reduce the error terms and widen their applicability. For example, the circle method of Hardy and Littlewood was conceived as applying to power series near the unit circle in the complex plane; it is now thought of in terms of finite exponential sums (that is, on the unit circle, but with the power series truncated).
The needs of Diophantine approximation are for auxiliary functions that are not generating functions—their coefficients are constructed by use of a pigeonhole principle—and involve several complex variables. The fields of Diophantine approximation and transcendence theory have expanded, to the point that the techniques have been applied to the Mordell conjecture. == Problems and results == Theorems and results within analytic number theory tend not to be exact structural results about the integers, for which algebraic and geometrical tools are more appropriate. Instead, they give approximate bounds and estimates for various number theoretical functions, as the following examples illustrate. === Multiplicative number theory === Euclid showed that there are infinitely many prime numbers. An important question is to determine the asymptotic distribution of the prime numbers; that is, a rough description of how many primes are smaller than a given number. Gauss, amongst others, after computing a large list of primes, conjectured that the number of primes less than or equal to a large number N is close to the value of the integral ∫ 2 N 1 log ⁡ t d t . {\displaystyle \int _{2}^{N}{\frac {1}{\log t}}\,dt.} In 1859 Bernhard Riemann used complex analysis and a special meromorphic function now known as the Riemann zeta function to derive an analytic expression for the number of primes less than or equal to a real number x. Remarkably, the main term in Riemann's formula was exactly the above integral, lending substantial weight to Gauss's conjecture. Riemann found that the error terms in this expression, and hence the manner in which the primes are distributed, are closely related to the complex zeros of the zeta function. Using Riemann's ideas and by getting more information on the zeros of the zeta function, Jacques Hadamard and Charles Jean de la Vallée-Poussin managed to complete the proof of Gauss's conjecture. In particular, they proved that if π ( x ) = ( number of primes ≤ x ) , {\displaystyle \pi (x)=({\text{number of primes }}\leq x),} then lim x → ∞ π ( x ) x / log ⁡ x = 1. {\displaystyle \lim _{x\to \infty }{\frac {\pi (x)}{x/\log x}}=1.} This remarkable result is what is now known as the prime number theorem. It is a central result in analytic number theory. Loosely speaking, it states that given a large number N, the number of primes less than or equal to N is about N/log(N). More generally, the same question can be asked about the number of primes in any arithmetic progression a + nq for any integer n. In one of the first applications of analytic techniques to number theory, Dirichlet proved that any arithmetic progression with a and q coprime contains infinitely many primes. The prime number theorem can be generalised to this problem; letting π ( x , a , q ) = ( number of primes ≤ x in the arithmetic progression a + n q , n ∈ Z ) , {\displaystyle \pi (x,a,q)=({\text{number of primes }}\leq x{\text{ in the arithmetic progression }}a+nq,\ n\in \mathbf {Z} ),} then if a and q are coprime, lim x → ∞ π ( x , a , q ) ϕ ( q ) x / log ⁡ x = 1 , {\displaystyle \lim _{x\to \infty }{\frac {\pi (x,a,q)\phi (q)}{x/\log x}}=1,} where ϕ {\displaystyle \phi } is the totient function. There are also many deep and wide-ranging conjectures in number theory whose proofs seem too difficult for current techniques, such as the twin prime conjecture which asks whether there are infinitely many primes p such that p + 2 is prime. 
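The statement about primes in arithmetic progressions can be probed empirically; a short sketch assuming sympy is available (the modulus q = 10 and bound x = 10^6 are arbitrary choices):

```python
from math import gcd, log
from sympy import primerange, totient

x, q = 10**6, 10
counts = {a: 0 for a in range(1, q) if gcd(a, q) == 1}   # residues coprime to q
for p in primerange(2, x + 1):
    if p % q in counts:
        counts[p % q] += 1

print(counts)                          # ~19600 primes in each of the 4 classes
print(x / (totient(q) * log(x)))       # shared leading-order prediction, ~18095
```

The four residue classes 1, 3, 7, 9 mod 10 each capture very nearly a quarter of the primes, as the theorem predicts; the quotient against x/(φ(q) log x) tends to 1 only slowly.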
Concerning such prime gaps, on the assumption of the Elliott–Halberstam conjecture it has recently been proven that there are infinitely many primes p such that p + k is prime for some positive even k at most 12. Also, it has been proven unconditionally (i.e. not depending on unproven conjectures) that there are infinitely many primes p such that p + k is prime for some positive even k at most 246.

=== Additive number theory ===

One of the most important problems in additive number theory is Waring's problem, which asks whether it is possible, for any k ≥ 2, to write any positive integer as the sum of a bounded number of kth powers, {\displaystyle n=x_{1}^{k}+\cdots +x_{\ell }^{k}.} The case for squares, k = 2, was answered by Lagrange in 1770, who proved that every positive integer is the sum of at most four squares. The general case was proved by Hilbert in 1909, using algebraic techniques which gave no explicit bounds. An important breakthrough was the application of analytic tools to the problem by Hardy and Littlewood. These techniques are known as the circle method, and give explicit upper bounds for the function G(k), the smallest number of kth powers needed, such as Vinogradov's bound {\displaystyle G(k)\leq k(3\log k+11).}

=== Diophantine problems ===

Diophantine problems are concerned with integer solutions to polynomial equations: one may study the distribution of solutions, that is, counting solutions according to some measure of "size" or height. An important example is the Gauss circle problem, which asks for the integer points (x, y) which satisfy {\displaystyle x^{2}+y^{2}\leq r^{2}.} In geometrical terms, given a circle centered about the origin in the plane with radius r, the problem asks how many integer lattice points lie on or inside the circle. It is not hard to prove that the answer is {\displaystyle \pi r^{2}+E(r)}, where {\displaystyle E(r)/r^{2}\to 0} as {\displaystyle r\to \infty }. Again, the difficult part and a great achievement of analytic number theory is obtaining specific upper bounds on the error term E(r). It was shown by Gauss that {\displaystyle E(r)=O(r)}. In general, an O(r) error term would be possible with the unit circle (or, more properly, the closed unit disk) replaced by the dilates of any bounded planar region with piecewise smooth boundary. Furthermore, replacing the unit circle by the unit square, the error term for the general problem can be as large as a linear function of r. Therefore, getting an error bound of the form {\displaystyle O(r^{\delta })} for some {\displaystyle \delta <1} in the case of the circle is a significant improvement. The first to attain this was Sierpiński in 1906, who showed {\displaystyle E(r)=O(r^{2/3})}. In 1915, Hardy and Landau each showed that one does not have {\displaystyle E(r)=O(r^{1/2})}. Since then the goal has been to show that for each fixed {\displaystyle \epsilon >0} there exists a real number {\displaystyle C(\epsilon )} such that {\displaystyle E(r)\leq C(\epsilon )r^{1/2+\epsilon }}. In 2000 Huxley showed that {\displaystyle E(r)=O(r^{131/208})}, which is the best published result.
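The lattice-point count in the Gauss circle problem can be computed exactly for small radii, which makes the size of E(r) visible; a minimal sketch (function name ours, O(r) time for integer r):

```python
import math

def lattice_points(r):
    """Count integer points (x, y) with x^2 + y^2 <= r^2, for integer r > 0."""
    return sum(2 * math.isqrt(r * r - x * x) + 1 for x in range(-r, r + 1))

for r in (10, 100, 1000):
    n = lattice_points(r)
    print(r, n, n - math.pi * r * r)   # the error term E(r), conjecturally O(r^(1/2+eps))
```

For r = 10 this gives 317 lattice points against an area of about 314.16, so E(10) is under 3; the observed errors grow far more slowly than r, consistent with the bounds above.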
== Methods of analytic number theory == === Dirichlet series === One of the most useful tools in multiplicative number theory are Dirichlet series, which are functions of a complex variable defined by an infinite series of the form f ( s ) = ∑ n = 1 ∞ a n n − s . {\displaystyle f(s)=\sum _{n=1}^{\infty }a_{n}n^{-s}.} Depending on the choice of coefficients a n {\displaystyle a_{n}} , this series may converge everywhere, nowhere, or on some half plane. In many cases, even where the series does not converge everywhere, the holomorphic function it defines may be analytically continued to a meromorphic function on the entire complex plane. The utility of functions like this in multiplicative problems can be seen in the formal identity ( ∑ n = 1 ∞ a n n − s ) ( ∑ n = 1 ∞ b n n − s ) = ∑ n = 1 ∞ ( ∑ k ℓ = n a k b ℓ ) n − s ; {\displaystyle \left(\sum _{n=1}^{\infty }a_{n}n^{-s}\right)\left(\sum _{n=1}^{\infty }b_{n}n^{-s}\right)=\sum _{n=1}^{\infty }\left(\sum _{k\ell =n}a_{k}b_{\ell }\right)n^{-s};} hence the coefficients of the product of two Dirichlet series are the multiplicative convolutions of the original coefficients. Furthermore, techniques such as partial summation and Tauberian theorems can be used to get information about the coefficients from analytic information about the Dirichlet series. Thus a common method for estimating a multiplicative function is to express it as a Dirichlet series (or a product of simpler Dirichlet series using convolution identities), examine this series as a complex function and then convert this analytic information back into information about the original function. === Riemann zeta function === Euler showed that the fundamental theorem of arithmetic implies (at least formally) the Euler product ∑ n = 1 ∞ 1 n s = ∏ p ∞ 1 1 − p − s for s > 1 {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\prod _{p}^{\infty }{\frac {1}{1-p^{-s}}}{\text{ for }}s>1} where the product is taken over all prime numbers p. Euler's proof of the infinity of prime numbers makes use of the divergence of the term at the left hand side for s = 1 (the so-called harmonic series), a purely analytic result. Euler was also the first to use analytical arguments for the purpose of studying properties of integers, specifically by constructing generating power series. This was the beginning of analytic number theory. Later, Riemann considered this function for complex values of s and showed that this function can be extended to a meromorphic function on the entire plane with a simple pole at s = 1. This function is now known as the Riemann Zeta function and is denoted by ζ(s). There is a plethora of literature on this function and the function is a special case of the more general Dirichlet L-functions. Analytic number theorists are often interested in the error of approximations such as the prime number theorem. In this case, the error is smaller than x/log x. Riemann's formula for π(x) shows that the error term in this approximation can be expressed in terms of the zeros of the zeta function. In his 1859 paper, Riemann conjectured that all the "non-trivial" zeros of ζ lie on the line ℜ ( s ) = 1 / 2 {\displaystyle \Re (s)=1/2} but never provided a proof of this statement. This famous and long-standing conjecture is known as the Riemann Hypothesis and has many deep implications in number theory; in fact, many important theorems have been proved under the assumption that the hypothesis is true. 
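The Euler product can be checked numerically for s in the region of convergence; a small sketch assuming mpmath and sympy are available (the truncation point 10^5 is arbitrary):

```python
from mpmath import mp, mpf, zeta
from sympy import primerange

mp.dps = 15
s = 2
partial = mpf(1)
for p in primerange(2, 10**5):        # truncated Euler product over p < 10^5
    partial /= 1 - mpf(p) ** (-s)

print(partial, zeta(s))               # both ~ pi^2/6 = 1.6449340668...
```

Such checks only illustrate the formal identity; the substance of the theory comes from the analytic continuation and from what the location of the zeros of ζ implies about error terms.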
For example, under the assumption of the Riemann Hypothesis, the error term in the prime number theorem is O ( x 1 / 2 + ε ) {\displaystyle O(x^{1/2+\varepsilon })} . In the early 20th century G. H. Hardy and Littlewood proved many results about the zeta function in an attempt to prove the Riemann Hypothesis. In fact, in 1914, Hardy proved that there were infinitely many zeros of the zeta function on the critical line ℜ ( z ) = 1 / 2. {\displaystyle \Re (z)=1/2.} This led to several theorems describing the density of the zeros on the critical line. == See also == Automorphic L-function Automorphic form Langlands program Maier's matrix method == Notes == == References == Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001 Borwein, Peter; Choi, Stephen; Rooney, Brendan; Weirathmueller, Andrea, eds. (2008), The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, CMS Books in Mathematics, New York: Springer, doi:10.1007/978-0-387-72126-2, ISBN 978-0-387-72125-5 Davenport, Harold (2000), Multiplicative number theory, Graduate Texts in Mathematics, vol. 74 (3rd revised ed.), New York: Springer-Verlag, ISBN 978-0-387-95097-6, MR 1790423 Edwards, H. M. (1974), Riemann's Zeta Function, New York: Dover Publications, ISBN 978-0-486-41740-0, MR 0466039 Tenenbaum, Gérald (1995), Introduction to Analytic and Probabilistic Number Theory, Cambridge studies in advanced mathematics, vol. 46, Cambridge University Press, ISBN 0-521-41261-7 == Further reading == Ayoub, Introduction to the Analytic Theory of Numbers H. L. Montgomery and R. C. Vaughan, Multiplicative Number Theory I : Classical Theory H. Iwaniec and E. Kowalski, Analytic Number Theory. D. J. Newman, Analytic number theory, Springer, 1998 On specialized aspects the following books have become especially well-known: Titchmarsh, Edward Charles (1986), The Theory of the Riemann Zeta Function (2nd ed.), Oxford University Press H. Halberstam and H. E. Richert, Sieve Methods R. C. Vaughan, The Hardy–Littlewood method, 2nd. edn. Certain topics have not yet reached book form in any depth. Some examples are (i) Montgomery's pair correlation conjecture and the work that initiated from it, (ii) the new results of Goldston, Pintz and Yilidrim on small gaps between primes, and (iii) the Green–Tao theorem showing that arbitrarily long arithmetic progressions of primes exist.
Wikipedia/Analytic_number_theory
In mathematics and particularly in algebra, a system of equations (either linear or nonlinear) is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system—that is, when substituted into each of the equations, they make each equation hold true as an identity. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations. If a system of equations is inconsistent, then the equations cannot be true together, as they lead to contradictory information, such as the false statements 2 = 1, or x 3 + y 3 = 5 {\displaystyle x^{3}+y^{3}=5} and x 3 + y 3 = 6 {\displaystyle x^{3}+y^{3}=6} (which together imply 5 = 6). Both types of equation system, inconsistent and consistent, can be any of overdetermined (having more equations than unknowns), underdetermined (having fewer equations than unknowns), or exactly determined. == Simple examples == === Underdetermined and consistent === The system x + y + z = 3 , x + y + 2 z = 4 {\displaystyle {\begin{aligned}x+y+z&=3,\\x+y+2z&=4\end{aligned}}} has an infinite number of solutions, all of them having z = 1 (as can be seen by subtracting the first equation from the second), and all of them therefore having x + y = 2, with x and y otherwise arbitrary. The nonlinear system x 2 + y 2 + z 2 = 10 , x 2 + y 2 = 5 {\displaystyle {\begin{aligned}x^{2}+y^{2}+z^{2}&=10,\\x^{2}+y^{2}&=5\end{aligned}}} has an infinitude of solutions, all involving z = ± 5 . {\displaystyle z=\pm {\sqrt {5}}.} Since each of these systems has more than one solution, it is an indeterminate system. === Underdetermined and inconsistent === The system x + y + z = 3 , x + y + z = 4 {\displaystyle {\begin{aligned}x+y+z&=3,\\x+y+z&=4\end{aligned}}} has no solutions, as can be seen by subtracting the first equation from the second to obtain the impossible 0 = 1. The nonlinear system x 2 + y 2 + z 2 = 17 , x 2 + y 2 + z 2 = 14 {\displaystyle {\begin{aligned}x^{2}+y^{2}+z^{2}&=17,\\x^{2}+y^{2}+z^{2}&=14\end{aligned}}} has no solutions, because if one equation is subtracted from the other we obtain the impossible 0 = 3. === Exactly determined and consistent === The system x + y = 3 , x + 2 y = 5 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=5\end{aligned}}} has exactly one solution: x = 1, y = 2. The nonlinear system x + y = 1 , x 2 + y 2 = 1 {\displaystyle {\begin{aligned}x+y&=1,\\x^{2}+y^{2}&=1\end{aligned}}} has the two solutions (x, y) = (1, 0) and (x, y) = (0, 1), while x 3 + y 3 + z 3 = 10 , x 3 + 2 y 3 + z 3 = 12 , 3 x 3 + 5 y 3 + 3 z 3 = 34 {\displaystyle {\begin{aligned}x^{3}+y^{3}+z^{3}&=10,\\x^{3}+2y^{3}+z^{3}&=12,\\3x^{3}+5y^{3}+3z^{3}&=34\end{aligned}}} has an infinite number of solutions because the third equation is the first equation plus twice the second one and hence contains no independent information; thus any value of z can be chosen and values of x and y can be found to satisfy the first two (and hence the third) equations. === Exactly determined and inconsistent === The system x + y = 3 , 4 x + 4 y = 10 {\displaystyle {\begin{aligned}x+y&=3,\\4x+4y&=10\end{aligned}}} has no solutions; the inconsistency can be seen by multiplying the first equation by 4 and subtracting the second equation to obtain the impossible 0 = 2.
Likewise, x 3 + y 3 + z 3 = 10 , x 3 + 2 y 3 + z 3 = 12 , 3 x 3 + 5 y 3 + 3 z 3 = 32 {\displaystyle {\begin{aligned}x^{3}+y^{3}+z^{3}&=10,\\x^{3}+2y^{3}+z^{3}&=12,\\3x^{3}+5y^{3}+3z^{3}&=32\end{aligned}}} is an inconsistent system because the first equation plus twice the second minus the third yields the contradiction 0 = 2. === Overdetermined and consistent === The system x + y = 3 , x + 2 y = 7 , 4 x + 6 y = 20 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=7,\\4x+6y&=20\end{aligned}}} has a solution, x = –1, y = 4, because the first two equations do not contradict each other and the third equation is redundant (since it contains the same information as can be obtained from the first two equations by multiplying each through by 2 and summing them). The system x + 2 y = 7 , 3 x + 6 y = 21 , 7 x + 14 y = 49 {\displaystyle {\begin{aligned}x+2y&=7,\\3x+6y&=21,\\7x+14y&=49\end{aligned}}} has an infinitude of solutions since all three equations give the same information as each other (as can be seen by multiplying through the first equation by either 3 or 7). Any value of y is part of a solution, with the corresponding value of x being 7 – 2y. The nonlinear system x 2 − 1 = 0 , y 2 − 1 = 0 , ( x − 1 ) ( y − 1 ) = 0 {\displaystyle {\begin{aligned}x^{2}-1&=0,\\y^{2}-1&=0,\\(x-1)(y-1)&=0\end{aligned}}} has the three solutions (x, y) = (1, –1), (–1, 1), (1, 1). === Overdetermined and inconsistent === The system x + y = 3 , x + 2 y = 7 , 4 x + 6 y = 21 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=7,\\4x+6y&=21\end{aligned}}} is inconsistent because the last equation contradicts the information embedded in the first two, as seen by multiplying each of the first two through by 2 and summing them. The system x 2 + y 2 = 1 , x 2 + 2 y 2 = 2 , 2 x 2 + 3 y 2 = 4 {\displaystyle {\begin{aligned}x^{2}+y^{2}&=1,\\x^{2}+2y^{2}&=2,\\2x^{2}+3y^{2}&=4\end{aligned}}} is inconsistent because the sum of the first two equations contradicts the third one. == Criteria for consistency == As can be seen from the above examples, consistency versus inconsistency is a different issue from comparing the numbers of equations and unknowns. === Linear systems === A linear system is consistent if and only if its coefficient matrix has the same rank as does its augmented matrix (the coefficient matrix with an extra column added, that column being the column vector of constants).
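As a concrete illustration of this rank criterion (the Rouché–Capelli theorem), here is a minimal sketch in Python with NumPy (an assumed environment, not part of the article) that tests two of the linear systems above by comparing the rank of the coefficient matrix with the rank of the augmented matrix.

import numpy as np

def is_consistent(A, b):
    """Rank test: Ax = b is consistent iff rank(A) == rank([A | b])."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack((A, b)))

# Exactly determined and consistent: x + y = 3, x + 2y = 5.
print(is_consistent([[1, 1], [1, 2]], [3, 5]))   # True (x = 1, y = 2)

# Exactly determined and inconsistent: x + y = 3, 4x + 4y = 10.
print(is_consistent([[1, 1], [4, 4]], [3, 10]))  # False (elimination gives 0 = 2)

Note that numerical rank computation depends on a floating-point tolerance, so for nearly rank-deficient systems the exact symbolic criterion and this numerical test can disagree.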
Wikipedia/Consistent_and_inconsistent_equations
In mathematics, a geometric algebra (also known as a Clifford algebra) is an algebra that can represent and manipulate geometrical objects such as vectors. Geometric algebra is built out of two fundamental operations, addition and the geometric product. Multiplication of vectors results in higher-dimensional objects called multivectors. Compared to other formalisms for manipulating geometric objects, geometric algebra is noteworthy for supporting vector division (though generally not by all elements) and addition of objects of different dimensions. The geometric product was first briefly mentioned by Hermann Grassmann, who was chiefly interested in developing the closely related exterior algebra. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). Clifford defined the Clifford algebra and its product as a unification of the Grassmann algebra and Hamilton's quaternion algebra. Adding the dual of the Grassmann exterior product allows the use of the Grassmann–Cayley algebra. In the late 1990s, plane-based geometric algebra and conformal geometric algebra (CGA) provided frameworks for Euclidean geometry and classical geometries, respectively. In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations. For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term "geometric algebra" was repopularized in the 1960s by David Hestenes, who advocated its importance to relativistic physics. The scalars and vectors have their usual interpretation and make up distinct subspaces of a geometric algebra. Bivectors provide a more natural representation of the pseudovector quantities of 3D vector calculus that are derived as a cross product, such as oriented area, oriented angle of rotation, torque, angular momentum and the magnetic field. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace and orthogonal projections onto that subspace. Rotations and reflections are represented as elements. Unlike a vector algebra, a geometric algebra naturally accommodates any number of dimensions and any quadratic form such as in relativity. Examples of geometric algebras applied in physics include the spacetime algebra (and the less common algebra of physical space). Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis and differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes and Chris Doran, as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory, and relativity. GA has also found use as a computational tool in computer graphics and robotics. == Definition and notation == There are a number of different ways to define a geometric algebra. Hestenes's original approach was axiomatic, "full of geometric significance" and equivalent to the universal Clifford algebra.
Given a finite-dimensional vector space V {\displaystyle V} over a field F {\displaystyle F} with a symmetric bilinear form (the inner product, e.g., the Euclidean or Lorentzian metric) g : V × V → F {\displaystyle g:V\times V\to F} , the geometric algebra of the quadratic space ( V , g ) {\displaystyle (V,g)} is the Clifford algebra Cl ⁡ ( V , g ) {\displaystyle \operatorname {Cl} (V,g)} , an element of which is called a multivector. The Clifford algebra is commonly defined as a quotient algebra of the tensor algebra, though this definition is abstract, so the following definition is presented without requiring abstract algebra. Definition. A unital associative algebra Cl ⁡ ( V , g ) {\displaystyle \operatorname {Cl} (V,g)} with a nondegenerate symmetric bilinear form g : V × V → F {\displaystyle g:V\times V\to F} is the Clifford algebra of the quadratic space ( V , g ) {\displaystyle (V,g)} if: (1) it contains F {\displaystyle F} and V {\displaystyle V} as distinct subspaces; (2) a 2 = g ( a , a ) 1 {\displaystyle a^{2}=g(a,a)1} for a ∈ V {\displaystyle a\in V} ; (3) V {\displaystyle V} generates Cl ⁡ ( V , g ) {\displaystyle \operatorname {Cl} (V,g)} as an algebra; and (4) Cl ⁡ ( V , g ) {\displaystyle \operatorname {Cl} (V,g)} is not generated by any proper subspace of V {\displaystyle V} . To cover degenerate symmetric bilinear forms, the last condition must be modified. It can be shown that these conditions uniquely characterize the geometric product. For the remainder of this article, only the real case, F = R {\displaystyle F=\mathbb {R} } , will be considered. The notation G ( p , q ) {\displaystyle {\mathcal {G}}(p,q)} (respectively G ( p , q , r ) {\displaystyle {\mathcal {G}}(p,q,r)} ) will be used to denote a geometric algebra for which the bilinear form g {\displaystyle g} has the signature ( p , q ) {\displaystyle (p,q)} (respectively ( p , q , r ) {\displaystyle (p,q,r)} ). The product in the algebra is called the geometric product, and the product in the contained exterior algebra is called the exterior product (frequently called the wedge product or the outer product). It is standard to denote these respectively by juxtaposition (i.e., suppressing any explicit multiplication symbol) and the symbol ∧ {\displaystyle \wedge } . The above definition of the geometric algebra is still somewhat abstract, so we summarize the properties of the geometric product here. For multivectors A , B , C ∈ G ( p , q ) {\displaystyle A,B,C\in {\mathcal {G}}(p,q)} : A B ∈ G ( p , q ) {\displaystyle AB\in {\mathcal {G}}(p,q)} (closure); 1 A = A 1 = A {\displaystyle 1A=A1=A} , where 1 {\displaystyle 1} is the identity element (existence of an identity element); A ( B C ) = ( A B ) C {\displaystyle A(BC)=(AB)C} (associativity); A ( B + C ) = A B + A C {\displaystyle A(B+C)=AB+AC} and ( B + C ) A = B A + C A {\displaystyle (B+C)A=BA+CA} (distributivity); and a 2 = g ( a , a ) 1 {\displaystyle a^{2}=g(a,a)1} for a ∈ V {\displaystyle a\in V} . The exterior product has the same properties, except that the last property above is replaced by a ∧ a = 0 {\displaystyle a\wedge a=0} for a ∈ V {\displaystyle a\in V} . Note that in the last property above, the real number g ( a , a ) {\displaystyle g(a,a)} need not be nonnegative if g {\displaystyle g} is not positive-definite. An important property of the geometric product is the existence of elements that have a multiplicative inverse.
For a vector ⁠ a {\displaystyle a} ⁠, if a 2 ≠ 0 {\displaystyle a^{2}\neq 0} then a − 1 {\displaystyle a^{-1}} exists and is equal to ⁠ g ( a , a ) − 1 a {\displaystyle g(a,a)^{-1}a} ⁠. A nonzero element of the algebra does not necessarily have a multiplicative inverse. For example, if u {\displaystyle u} is a vector in V {\displaystyle V} such that ⁠ u 2 = 1 {\displaystyle u^{2}=1} ⁠, the element 1 2 ( 1 + u ) {\displaystyle \textstyle {\frac {1}{2}}(1+u)} is both a nontrivial idempotent element and a nonzero zero divisor, and thus has no inverse. It is usual to identify R {\displaystyle \mathbb {R} } and V {\displaystyle V} with their images under the natural embeddings R → G ( p , q ) {\displaystyle \mathbb {R} \to {\mathcal {G}}(p,q)} and ⁠ V → G ( p , q ) {\displaystyle V\to {\mathcal {G}}(p,q)} ⁠. In this article, this identification is assumed. Throughout, the terms scalar and vector refer to elements of R {\displaystyle \mathbb {R} } and V {\displaystyle V} respectively (and of their images under this embedding). === Geometric product === For vectors ⁠ a {\displaystyle a} ⁠ and ⁠ b {\displaystyle b} ⁠, we may write the geometric product of any two vectors ⁠ a {\displaystyle a} ⁠ and ⁠ b {\displaystyle b} ⁠ as the sum of a symmetric product and an antisymmetric product: a b = 1 2 ( a b + b a ) + 1 2 ( a b − b a ) . {\displaystyle ab={\frac {1}{2}}(ab+ba)+{\frac {1}{2}}(ab-ba).} Thus we can define the inner product of vectors as a ⋅ b := g ( a , b ) , {\displaystyle a\cdot b:=g(a,b),} so that the symmetric product can be written as 1 2 ( a b + b a ) = 1 2 ( ( a + b ) 2 − a 2 − b 2 ) = a ⋅ b . {\displaystyle {\frac {1}{2}}(ab+ba)={\frac {1}{2}}\left((a+b)^{2}-a^{2}-b^{2}\right)=a\cdot b.} Conversely, ⁠ g {\displaystyle g} ⁠ is completely determined by the algebra. The antisymmetric part is the exterior product of the two vectors, the product of the contained exterior algebra: a ∧ b := 1 2 ( a b − b a ) = − ( b ∧ a ) . {\displaystyle a\wedge b:={\frac {1}{2}}(ab-ba)=-(b\wedge a).} Then by simple addition: a b = a ⋅ b + a ∧ b {\displaystyle ab=a\cdot b+a\wedge b} the ungeneralized or vector form of the geometric product. The inner and exterior products are associated with familiar concepts from standard vector algebra. Geometrically, a {\displaystyle a} and b {\displaystyle b} are parallel if their geometric product is equal to their inner product, whereas a {\displaystyle a} and b {\displaystyle b} are perpendicular if their geometric product is equal to their exterior product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the dot product of standard vector algebra. The exterior product of two vectors can be identified with the signed area enclosed by a parallelogram the sides of which are the vectors. The cross product of two vectors in 3 {\displaystyle 3} dimensions with positive-definite quadratic form is closely related to their exterior product. Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras. The exterior product is naturally extended as an associative bilinear binary operator between any two elements of the algebra, satisfying the identities 1 ∧ a i = a i ∧ 1 = a i a 1 ∧ a 2 ∧ ⋯ ∧ a r = 1 r ! 
∑ σ ∈ S r sgn ⁡ ( σ ) a σ ( 1 ) a σ ( 2 ) ⋯ a σ ( r ) , {\displaystyle {\begin{aligned}1\wedge a_{i}&=a_{i}\wedge 1=a_{i}\\a_{1}\wedge a_{2}\wedge \cdots \wedge a_{r}&={\frac {1}{r!}}\sum _{\sigma \in {\mathfrak {S}}_{r}}\operatorname {sgn} (\sigma )a_{\sigma (1)}a_{\sigma (2)}\cdots a_{\sigma (r)},\end{aligned}}} where the sum is over all permutations of the indices, with sgn ⁡ ( σ ) {\displaystyle \operatorname {sgn} (\sigma )} the sign of the permutation, and a i {\displaystyle a_{i}} are vectors (not general elements of the algebra). Since every element of the algebra can be expressed as the sum of products of this form, this defines the exterior product for every pair of elements of the algebra. It follows from the definition that the exterior product forms an alternating algebra. The equivalent structure equation for Clifford algebra is a 1 a 2 a 3 … a n = ∑ i = 0 [ n 2 ] ∑ μ ∈ C ( − 1 ) k Pf ⁡ ( a μ 1 ⋅ a μ 2 , … , a μ 2 i − 1 ⋅ a μ 2 i ) a μ 2 i + 1 ∧ ⋯ ∧ a μ n {\displaystyle a_{1}a_{2}a_{3}\dots a_{n}=\sum _{i=0}^{[{\frac {n}{2}}]}\sum _{\mu \in {}{\mathcal {C}}}(-1)^{k}\operatorname {Pf} (a_{\mu _{1}}\cdot a_{\mu _{2}},\dots ,a_{\mu _{2i-1}}\cdot a_{\mu _{2i}})a_{\mu _{2i+1}}\land \dots \land a_{\mu _{n}}} where Pf ⁡ ( A ) {\displaystyle \operatorname {Pf} (A)} is the Pfaffian of ⁠ A {\displaystyle A} ⁠ and C = ( n 2 i ) {\textstyle {\mathcal {C}}={\binom {n}{2i}}} provides combinations, ⁠ μ {\displaystyle \mu } ⁠, of ⁠ n {\displaystyle n} ⁠ indices divided into ⁠ 2 i {\displaystyle 2i} ⁠ and ⁠ n − 2 i {\displaystyle n-2i} ⁠ parts and ⁠ k {\displaystyle k} ⁠ is the parity of the combination. The Pfaffian provides a metric for the exterior algebra and, as pointed out by Claude Chevalley, Clifford algebra reduces to the exterior algebra with a zero quadratic form. The role the Pfaffian plays can be understood from a geometric viewpoint by developing Clifford algebra from simplices. This derivation provides a better connection between Pascal's triangle and simplices because it provides an interpretation of the first column of ones. === Blades, grades, and basis === A multivector that is the exterior product of r {\displaystyle r} linearly independent vectors is called a blade, and is said to be of grade ⁠ r {\displaystyle r} ⁠. A multivector that is the sum of blades of grade r {\displaystyle r} is called a (homogeneous) multivector of grade ⁠ r {\displaystyle r} ⁠. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades. Consider a set of r {\displaystyle r} linearly independent vectors { a 1 , … , a r } {\displaystyle \{a_{1},\ldots ,a_{r}\}} spanning an ⁠ r {\displaystyle r} ⁠-dimensional subspace of the vector space. 
With these, we can define a real symmetric matrix (in the same way as a Gramian matrix) [ A ] i j = a i ⋅ a j {\displaystyle [\mathbf {A} ]_{ij}=a_{i}\cdot a_{j}} By the spectral theorem, A {\displaystyle \mathbf {A} } can be diagonalized to a diagonal matrix D {\displaystyle \mathbf {D} } by an orthogonal matrix O {\displaystyle \mathbf {O} } via ∑ k , l [ O ] i k [ A ] k l [ O T ] l j = ∑ k , l [ O ] i k [ O ] j l [ A ] k l = [ D ] i j {\displaystyle \sum _{k,l}[\mathbf {O} ]_{ik}[\mathbf {A} ]_{kl}[\mathbf {O} ^{\mathrm {T} }]_{lj}=\sum _{k,l}[\mathbf {O} ]_{ik}[\mathbf {O} ]_{jl}[\mathbf {A} ]_{kl}=[\mathbf {D} ]_{ij}} Define a new set of vectors { e 1 , … , e r } {\displaystyle \{e_{1},\ldots ,e_{r}\}} , known as orthogonal basis vectors, to be those transformed by the orthogonal matrix: e i = ∑ j [ O ] i j a j {\displaystyle e_{i}=\sum _{j}[\mathbf {O} ]_{ij}a_{j}} Since orthogonal transformations preserve inner products, it follows that e i ⋅ e j = [ D ] i j {\displaystyle e_{i}\cdot e_{j}=[\mathbf {D} ]_{ij}} and thus the { e 1 , … , e r } {\displaystyle \{e_{1},\ldots ,e_{r}\}} are mutually orthogonal. In other words, the geometric product of two distinct vectors e i ≠ e j {\displaystyle e_{i}\neq e_{j}} is completely specified by their exterior product, or more generally e 1 e 2 ⋯ e r = e 1 ∧ e 2 ∧ ⋯ ∧ e r = ( ∑ j [ O ] 1 j a j ) ∧ ( ∑ j [ O ] 2 j a j ) ∧ ⋯ ∧ ( ∑ j [ O ] r j a j ) = ( det O ) a 1 ∧ a 2 ∧ ⋯ ∧ a r {\displaystyle {\begin{array}{rl}e_{1}e_{2}\cdots e_{r}&=e_{1}\wedge e_{2}\wedge \cdots \wedge e_{r}\\&=\left(\sum _{j}[\mathbf {O} ]_{1j}a_{j}\right)\wedge \left(\sum _{j}[\mathbf {O} ]_{2j}a_{j}\right)\wedge \cdots \wedge \left(\sum _{j}[\mathbf {O} ]_{rj}a_{j}\right)\\&=(\det \mathbf {O} )a_{1}\wedge a_{2}\wedge \cdots \wedge a_{r}\end{array}}} Therefore, every blade of grade r {\displaystyle r} can be written as the exterior product of r {\displaystyle r} vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are normalized according to e i ^ = 1 | e i ⋅ e i | e i , {\displaystyle {\widehat {e_{i}}}={\frac {1}{\sqrt {|e_{i}\cdot e_{i}|}}}e_{i},} then these normalized vectors must square to + 1 {\displaystyle +1} or − 1 {\displaystyle -1} . By Sylvester's law of inertia, the total number of + 1 {\displaystyle +1} s and the total number of − 1 {\displaystyle -1} s along the diagonal matrix is invariant. By extension, the total number p {\displaystyle p} of these vectors that square to + 1 {\displaystyle +1} and the total number q {\displaystyle q} that square to − 1 {\displaystyle -1} are both invariant. (The total number of basis vectors that square to zero is also invariant, and may be nonzero if the degenerate case is allowed.) We denote this algebra G ( p , q ) {\displaystyle {\mathcal {G}}(p,q)} . For example, G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} models three-dimensional Euclidean space, G ( 1 , 3 ) {\displaystyle {\mathcal {G}}(1,3)} relativistic spacetime and G ( 4 , 1 ) {\displaystyle {\mathcal {G}}(4,1)} a conformal geometric algebra of a three-dimensional space.
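The diagonalization argument above is straightforward to reproduce numerically. The following sketch (Python with NumPy, an assumed environment; it is an illustration, not part of the article) reads the signature (p, q) off the signs of the eigenvalues of a Gram matrix; by Sylvester's law of inertia the answer is unchanged under any invertible change of basis.

import numpy as np

def signature(gram, tol=1e-12):
    """Return (p, q): counts of positive and negative eigenvalues
    of a symmetric Gram matrix."""
    eigvals = np.linalg.eigvalsh(np.asarray(gram, dtype=float))
    return int(np.sum(eigvals > tol)), int(np.sum(eigvals < -tol))

print(signature(np.eye(3)))                     # (3, 0): Euclidean 3-space, G(3, 0)
print(signature(np.diag([1., -1., -1., -1.])))  # (1, 3): spacetime algebra, G(1, 3)

# Congruence by an invertible matrix M (a change of basis) preserves the signature.
M = np.array([[2., 1., 0., 0.],
              [0., 1., 3., 0.],
              [0., 0., 1., 1.],
              [1., 0., 0., 1.]])
print(signature(M @ np.diag([1., -1., -1., -1.]) @ M.T))  # (1, 3) again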
The set of all possible products of n {\displaystyle n} orthogonal basis vectors with indices in increasing order, including 1 {\displaystyle 1} as the empty product, forms a basis for the entire geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra ⁠ G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} ⁠: { 1 , e 1 , e 2 , e 3 , e 1 e 2 , e 2 e 3 , e 3 e 1 , e 1 e 2 e 3 } {\displaystyle \{1,e_{1},e_{2},e_{3},e_{1}e_{2},e_{2}e_{3},e_{3}e_{1},e_{1}e_{2}e_{3}\}} A basis formed this way is called a standard basis for the geometric algebra, and any other orthogonal basis for V {\displaystyle V} will produce another standard basis. Each standard basis consists of 2 n {\displaystyle 2^{n}} elements. Every multivector of the geometric algebra can be expressed as a linear combination of the standard basis elements. If the standard basis elements are { B i ∣ i ∈ S } {\displaystyle \{B_{i}\mid i\in S\}} with S {\displaystyle S} being an index set, then the geometric product of any two multivectors is ( ∑ i α i B i ) ( ∑ j β j B j ) = ∑ i , j α i β j B i B j . {\displaystyle \left(\sum _{i}\alpha _{i}B_{i}\right)\left(\sum _{j}\beta _{j}B_{j}\right)=\sum _{i,j}\alpha _{i}\beta _{j}B_{i}B_{j}.} The terminology " k {\displaystyle k} -vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of k {\displaystyle k} vectors). By way of example, e 1 ∧ e 2 + e 3 ∧ e 4 {\displaystyle e_{1}\wedge e_{2}+e_{3}\wedge e_{4}} in G ( 4 , 0 ) {\displaystyle {\mathcal {G}}(4,0)} cannot be factored; typically, however, such elements of the algebra do not yield to geometric interpretation as objects, although they may represent geometric quantities such as rotations. Only ⁠ 0 {\displaystyle 0} ⁠-, ⁠ 1 {\displaystyle 1} ⁠-, ⁠ ( n − 1 ) {\displaystyle (n-1)} ⁠- and ⁠ n {\displaystyle n} ⁠-vectors are always blades in ⁠ n {\displaystyle n} ⁠-space. === Versor === A ⁠ k {\displaystyle k} ⁠-versor is a multivector that can be expressed as the geometric product of k {\displaystyle k} invertible vectors. Unit quaternions (originally called versors by Hamilton) may be identified with rotors in 3D space in much the same way as real 2D rotors subsume complex numbers; for the details refer to Dorst. Some authors use the term "versor product" to refer to the frequently occurring case where an operand is "sandwiched" between operators. The descriptions for rotations and reflections, including their outermorphisms, are examples of such sandwiching. These outermorphisms have a particularly simple algebraic form. Specifically, a mapping of vectors of the form V → V : a ↦ R a R − 1 {\displaystyle V\to V:a\mapsto RaR^{-1}} extends to the outermorphism G ( V ) → G ( V ) : A ↦ R A R − 1 . {\displaystyle {\mathcal {G}}(V)\to {\mathcal {G}}(V):A\mapsto RAR^{-1}.} Since both operators and operand are versors there is potential for alternative examples such as rotating a rotor or reflecting a spinor always provided that some geometrical or physical significance can be attached to such operations. By the Cartan–Dieudonné theorem we have that every isometry can be given as reflections in hyperplanes and since composed reflections provide rotations then we have that orthogonal transformations are versors. 
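Returning to the standard basis just described: because a standard basis blade is a product of distinct orthogonal basis vectors with ascending indices, it can be encoded as a bitmask, and the geometric product of two such blades reduces to a reordering sign plus the metric signs of the shared vectors. The following sketch (Python; the helper name blade_gp is hypothetical, and the whole example is an illustration rather than anything from the article) implements this for a nondegenerate G(p, q); by the bilinear expansion displayed above, extending it linearly to weighted sums of blades yields the full geometric product.

def blade_gp(a, b, p, q):
    """Geometric product of standard basis blades of G(p, q).
    Blades are bitmasks: bit i set means the vector e_{i+1} is a factor.
    Returns (sign, bitmask); e_i squares to +1 for the first p indices
    and to -1 for the remaining q."""
    sign, t = 1, a >> 1
    while t:  # parity of the transpositions needed to interleave the factor lists
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    common = a & b  # vectors occurring in both blades contract via the metric
    for i in range(p + q):
        if common & (1 << i):
            sign *= 1 if i < p else -1
    return sign, a ^ b  # the surviving factors

# In G(3, 0): (e1 e2)(e2 e3) = e1 e3, and (e1 e2)^2 = -1.
print(blade_gp(0b011, 0b110, 3, 0))  # (1, 5): bitmask 0b101, i.e. +e1 e3
print(blade_gp(0b011, 0b011, 3, 0))  # (-1, 0): the scalar -1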
In group terms, for a real, non-degenerate ⁠ G ( p , q ) {\displaystyle {\mathcal {G}}(p,q)} ⁠, having identified the group G × {\displaystyle {\mathcal {G}}^{\times }} as the group of all invertible elements of ⁠ G {\displaystyle {\mathcal {G}}} ⁠, Lundholm gives a proof that the "versor group" { v 1 v 2 ⋯ v k ∈ G ∣ v i ∈ V × } {\displaystyle \{v_{1}v_{2}\cdots v_{k}\in {\mathcal {G}}\mid v_{i}\in V^{\times }\}} (the set of invertible versors) is equal to the Lipschitz group Γ {\displaystyle \Gamma } (a.k.a. Clifford group, although Lundholm deprecates this usage). === Subgroups of the Lipschitz group === We denote the grade involution as ⁠ S ^ {\displaystyle {\widehat {S}}} ⁠ and reversion as ⁠ S ~ {\displaystyle {\widetilde {S}}} ⁠. Although the Lipschitz group (defined as ⁠ { S ∈ G × ∣ S ^ V S − 1 ⊆ V } {\displaystyle \{S\in {\mathcal {G}}^{\times }\mid {\widehat {S}}VS^{-1}\subseteq V\}} ⁠) and the versor group (defined as ⁠ { ∏ i = 0 k v i ∣ v i ∈ V × , k ∈ N } {\displaystyle \textstyle \{\prod _{i=0}^{k}v_{i}\mid v_{i}\in V^{\times },k\in \mathbb {N} \}} ⁠) have divergent definitions, they are the same group. Lundholm defines the ⁠ Pin {\displaystyle \operatorname {Pin} } ⁠, ⁠ Spin {\displaystyle \operatorname {Spin} } ⁠, and ⁠ Spin + {\displaystyle \operatorname {Spin} ^{+}} ⁠ subgroups of the Lipschitz group. Multiple analyses of spinors use GA as a representation. === Grade projection === A ⁠ Z {\displaystyle \mathbb {Z} } ⁠-graded vector space structure can be established on a geometric algebra by use of the exterior product that is naturally induced by the geometric product. Since the geometric product and the exterior product are equal on orthogonal vectors, this grading can be conveniently constructed by using an orthogonal basis ⁠ { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} ⁠. Elements of the geometric algebra that are scalar multiples of 1 {\displaystyle 1} are of grade 0 {\displaystyle 0} and are called scalars. Elements that are in the span of { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} are of grade ⁠ 1 {\displaystyle 1} ⁠ and are the ordinary vectors. Elements in the span of { e i e j ∣ 1 ≤ i < j ≤ n } {\displaystyle \{e_{i}e_{j}\mid 1\leq i<j\leq n\}} are of grade 2 {\displaystyle 2} and are the bivectors. This terminology continues through to the last grade of ⁠ n {\displaystyle n} ⁠-vectors. Alternatively, ⁠ n {\displaystyle n} ⁠-vectors are called pseudoscalars, ⁠ ( n − 1 ) {\displaystyle (n-1)} ⁠-vectors are called pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of mixed grade. The grading of multivectors is independent of the basis chosen originally. This is a grading as a vector space, but not as an algebra. Because the product of an ⁠ r {\displaystyle r} ⁠-blade and an ⁠ s {\displaystyle s} ⁠-blade is contained in the span of 0 {\displaystyle 0} through ⁠ r + s {\displaystyle r+s} ⁠-blades, the geometric algebra is a filtered algebra. A multivector A {\displaystyle A} may be decomposed with the grade-projection operator ⁠ ⟨ A ⟩ r {\displaystyle \langle A\rangle _{r}} ⁠, which outputs the grade-⁠ r {\displaystyle r} ⁠ portion of ⁠ A {\displaystyle A} ⁠. 
As a result: A = ∑ r = 0 n ⟨ A ⟩ r {\displaystyle A=\sum _{r=0}^{n}\langle A\rangle _{r}} As an example, the geometric product of two vectors a b = a ⋅ b + a ∧ b = ⟨ a b ⟩ 0 + ⟨ a b ⟩ 2 {\displaystyle ab=a\cdot b+a\wedge b=\langle ab\rangle _{0}+\langle ab\rangle _{2}} since ⟨ a b ⟩ 0 = a ⋅ b {\displaystyle \langle ab\rangle _{0}=a\cdot b} and ⟨ a b ⟩ 2 = a ∧ b {\displaystyle \langle ab\rangle _{2}=a\wedge b} and ⟨ a b ⟩ i = 0 {\displaystyle \langle ab\rangle _{i}=0} , for i {\displaystyle i} other than 0 {\displaystyle 0} and 2 {\displaystyle 2} . A multivector A {\displaystyle A} may also be decomposed into even and odd components, which may respectively be expressed as the sum of the even and the sum of the odd grade components above: A [ 0 ] = ⟨ A ⟩ 0 + ⟨ A ⟩ 2 + ⟨ A ⟩ 4 + ⋯ {\displaystyle A^{[0]}=\langle A\rangle _{0}+\langle A\rangle _{2}+\langle A\rangle _{4}+\cdots } A [ 1 ] = ⟨ A ⟩ 1 + ⟨ A ⟩ 3 + ⟨ A ⟩ 5 + ⋯ {\displaystyle A^{[1]}=\langle A\rangle _{1}+\langle A\rangle _{3}+\langle A\rangle _{5}+\cdots } This is the result of forgetting structure from a Z {\displaystyle \mathbb {Z} } -graded vector space to a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded vector space. The geometric product respects this coarser grading. Thus in addition to being a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded vector space, the geometric algebra is a Z 2 {\displaystyle \mathbb {Z} _{2}} -graded algebra, a.k.a. a superalgebra. Restricting to the even part, the product of two even elements is also even. This means that the even multivectors define an even subalgebra. The even subalgebra of an n {\displaystyle n} -dimensional geometric algebra is algebra-isomorphic (without preserving either filtration or grading) to a full geometric algebra of ( n − 1 ) {\displaystyle (n-1)} dimensions. Examples include G [ 0 ] ( 2 , 0 ) ≅ G ( 0 , 1 ) {\displaystyle {\mathcal {G}}^{[0]}(2,0)\cong {\mathcal {G}}(0,1)} and G [ 0 ] ( 1 , 3 ) ≅ G ( 3 , 0 ) {\displaystyle {\mathcal {G}}^{[0]}(1,3)\cong {\mathcal {G}}(3,0)} . === Representation of subspaces === Geometric algebra represents subspaces of V {\displaystyle V} as blades, and so they coexist in the same algebra with vectors from V {\displaystyle V} . A k {\displaystyle k} -dimensional subspace W {\displaystyle W} of V {\displaystyle V} is represented by taking an orthogonal basis { b 1 , b 2 , … , b k } {\displaystyle \{b_{1},b_{2},\ldots ,b_{k}\}} and using the geometric product to form the blade D = b 1 b 2 ⋯ b k {\displaystyle D=b_{1}b_{2}\cdots b_{k}} . There are multiple blades representing W {\displaystyle W} ; all those representing W {\displaystyle W} are scalar multiples of D {\displaystyle D} . These blades can be separated into two sets: positive multiples of D {\displaystyle D} and negative multiples of D {\displaystyle D} . The positive multiples of D {\displaystyle D} are said to have the same orientation as D {\displaystyle D} , and the negative multiples the opposite orientation. Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the exterior product that (the restricted class of) n {\displaystyle n} -blades provide but that (the generalized class of) grade- n {\displaystyle n} multivectors do not when n ≥ 4 {\displaystyle n\geq 4} . === Unit pseudoscalars === Unit pseudoscalars are blades that play important roles in GA.
A unit pseudoscalar for a non-degenerate subspace W {\displaystyle W} of V {\displaystyle V} is a blade that is the product of the members of an orthonormal basis for ⁠ W {\displaystyle W} ⁠. It can be shown that if I {\displaystyle I} and I ′ {\displaystyle I'} are both unit pseudoscalars for ⁠ W {\displaystyle W} ⁠, then I = ± I ′ {\displaystyle I=\pm I'} and ⁠ I 2 = ± 1 {\displaystyle I^{2}=\pm 1} ⁠. If one doesn't choose an orthonormal basis for ⁠ W {\displaystyle W} ⁠, then the Plücker embedding gives a vector in the exterior algebra but only up to scaling. Using the vector space isomorphism between the geometric algebra and exterior algebra, this gives the equivalence class of α I {\displaystyle \alpha I} for all ⁠ α ≠ 0 {\displaystyle \alpha \neq 0} ⁠. Orthonormality gets rid of this ambiguity except for the signs above. Suppose the geometric algebra G ( n , 0 ) {\displaystyle {\mathcal {G}}(n,0)} with the familiar positive definite inner product on R n {\displaystyle \mathbb {R} ^{n}} is formed. Given a plane (two-dimensional subspace) of ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠, one can find an orthonormal basis { b 1 , b 2 } {\displaystyle \{b_{1},b_{2}\}} spanning the plane, and thus find a unit pseudoscalar I = b 1 b 2 {\displaystyle I=b_{1}b_{2}} representing this plane. The geometric product of any two vectors in the span of b 1 {\displaystyle b_{1}} and b 2 {\displaystyle b_{2}} lies in ⁠ { α 0 + α 1 I ∣ α i ∈ R } {\displaystyle \{\alpha _{0}+\alpha _{1}I\mid \alpha _{i}\in \mathbb {R} \}} ⁠, that is, it is the sum of a ⁠ 0 {\displaystyle 0} ⁠-vector and a ⁠ 2 {\displaystyle 2} ⁠-vector. By the properties of the geometric product, ⁠ I 2 = b 1 b 2 b 1 b 2 = − b 1 b 2 b 2 b 1 = − 1 {\displaystyle I^{2}=b_{1}b_{2}b_{1}b_{2}=-b_{1}b_{2}b_{2}b_{1}=-1} ⁠. The resemblance to the imaginary unit is not incidental: the subspace { α 0 + α 1 I ∣ α i ∈ R } {\displaystyle \{\alpha _{0}+\alpha _{1}I\mid \alpha _{i}\in \mathbb {R} \}} is ⁠ R {\displaystyle \mathbb {R} } ⁠-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each two-dimensional subspace of V {\displaystyle V} on which the quadratic form is definite. It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to ⁠ − 1 {\displaystyle -1} ⁠, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces. In ⁠ G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} ⁠, a further familiar case occurs. Given a standard basis consisting of orthonormal vectors e i {\displaystyle e_{i}} of ⁠ V {\displaystyle V} ⁠, the set of all ⁠ 2 {\displaystyle 2} ⁠-vectors is spanned by { e 3 e 2 , e 1 e 3 , e 2 e 1 } . {\displaystyle \{e_{3}e_{2},e_{1}e_{3},e_{2}e_{1}\}.} Labelling these ⁠ i {\displaystyle i} ⁠, j {\displaystyle j} and k {\displaystyle k} (momentarily deviating from our uppercase convention), the subspace generated by ⁠ 0 {\displaystyle 0} ⁠-vectors and ⁠ 2 {\displaystyle 2} ⁠-vectors is exactly ⁠ { α 0 + i α 1 + j α 2 + k α 3 ∣ α i ∈ R } {\displaystyle \{\alpha _{0}+i\alpha _{1}+j\alpha _{2}+k\alpha _{3}\mid \alpha _{i}\in \mathbb {R} \}} ⁠. This set is seen to be the even subalgebra of ⁠ G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} ⁠, and furthermore is isomorphic as an ⁠ R {\displaystyle \mathbb {R} } ⁠-algebra to the quaternions, another important algebraic system. 
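The quaternion isomorphism just described is easy to check numerically. The sketch below (Python with NumPy, an assumed environment; not part of the article) represents e1, e2, e3 by the Pauli matrices, as in the vector space model treated later, builds the bivectors i = e3e2, j = e1e3, k = e2e1 labelled above, and verifies Hamilton's relations i² = j² = k² = ijk = −1.

import numpy as np

# Pauli matrices standing in for the orthonormal vectors e1, e2, e3 of G(3, 0).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Bivector images of the quaternion units: i = e3 e2, j = e1 e3, k = e2 e1.
i, j, k = s3 @ s2, s1 @ s3, s2 @ s1

print(np.allclose(i @ i, -I2), np.allclose(j @ j, -I2),
      np.allclose(k @ k, -I2))      # True True True
print(np.allclose(i @ j @ k, -I2))  # True: ijk = -1
print(np.allclose(i @ j, k))        # True: ij = k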
=== Extensions of the inner and exterior products === It is common practice to extend the exterior product on vectors to the entire algebra. This may be done through the use of the above-mentioned grade projection operator: C ∧ D := ∑ r , s ⟨ ⟨ C ⟩ r ⟨ D ⟩ s ⟩ r + s {\displaystyle C\wedge D:=\sum _{r,s}\langle \langle C\rangle _{r}\langle D\rangle _{s}\rangle _{r+s}} (the exterior product) This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the exterior product is the commutator product: C × D := 1 2 ( C D − D C ) {\displaystyle C\times D:={\tfrac {1}{2}}(CD-DC)} (the commutator product) The regressive product is the dual of the exterior product (respectively corresponding to the "meet" and "join" in this context). The dual specification of elements permits, for blades C {\displaystyle C} and D {\displaystyle D} , the intersection (or meet), where the duality is to be taken relative to a blade containing both C {\displaystyle C} and D {\displaystyle D} (the smallest such blade being the join): C ∨ D := ( ( C I − 1 ) ∧ ( D I − 1 ) ) I {\displaystyle C\vee D:=((CI^{-1})\wedge (DI^{-1}))I} with I {\displaystyle I} the unit pseudoscalar of the algebra. The regressive product, like the exterior product, is associative. The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper (Dorst 2002) gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged. Among these several different generalizations of the inner product on vectors are: C ⌋ D := ∑ r , s ⟨ ⟨ C ⟩ r ⟨ D ⟩ s ⟩ s − r {\displaystyle C\;\rfloor \;D:=\sum _{r,s}\langle \langle C\rangle _{r}\langle D\rangle _{s}\rangle _{s-r}} (the left contraction) C ⌊ D := ∑ r , s ⟨ ⟨ C ⟩ r ⟨ D ⟩ s ⟩ r − s {\displaystyle C\;\lfloor \;D:=\sum _{r,s}\langle \langle C\rangle _{r}\langle D\rangle _{s}\rangle _{r-s}} (the right contraction) C ∗ D := ∑ r , s ⟨ ⟨ C ⟩ r ⟨ D ⟩ s ⟩ 0 {\displaystyle C*D:=\sum _{r,s}\langle \langle C\rangle _{r}\langle D\rangle _{s}\rangle _{0}} (the scalar product) C ∙ D := ∑ r , s ⟨ ⟨ C ⟩ r ⟨ D ⟩ s ⟩ | s − r | {\displaystyle C\bullet D:=\sum _{r,s}\langle \langle C\rangle _{r}\langle D\rangle _{s}\rangle _{|s-r|}} (the "(fat) dot" product) Dorst (2002) makes an argument for the use of contractions in preference to Hestenes's inner product; they are algebraically more regular and have cleaner geometric interpretations. A number of identities incorporating the contractions are valid without restriction of their inputs. For example, C ⌋ D = ( C ∧ ( D I − 1 ) ) I {\displaystyle C\;\rfloor \;D=(C\wedge (DI^{-1}))I} C ⌊ D = I ( ( I − 1 C ) ∧ D ) {\displaystyle C\;\lfloor \;D=I((I^{-1}C)\wedge D)} ( A ∧ B ) ∗ C = A ∗ ( B ⌋ C ) {\displaystyle (A\wedge B)*C=A*(B\;\rfloor \;C)} C ∗ ( B ∧ A ) = ( C ⌊ B ) ∗ A {\displaystyle C*(B\wedge A)=(C\;\lfloor \;B)*A} A ⌋ ( B ⌋ C ) = ( A ∧ B ) ⌋ C {\displaystyle A\;\rfloor \;(B\;\rfloor \;C)=(A\wedge B)\;\rfloor \;C} ( A ⌋ B ) ⌊ C = A ⌋ ( B ⌊ C ) .
{\displaystyle (A\;\rfloor \;B)\;\lfloor \;C=A\;\rfloor \;(B\;\lfloor \;C).} Benefits of using the left contraction as an extension of the inner product on vectors include that the identity a b = a ⋅ b + a ∧ b {\displaystyle ab=a\cdot b+a\wedge b} is extended to a B = a ⌋ B + a ∧ B {\displaystyle aB=a\;\rfloor \;B+a\wedge B} for any vector a {\displaystyle a} and multivector ⁠ B {\displaystyle B} ⁠, and that the projection operation P b ( a ) = ( a ⋅ b − 1 ) b {\displaystyle {\mathcal {P}}_{b}(a)=(a\cdot b^{-1})b} is extended to P B ( A ) = ( A ⌋ B − 1 ) ⌋ B {\displaystyle {\mathcal {P}}_{B}(A)=(A\;\rfloor \;B^{-1})\;\rfloor \;B} for any blade B {\displaystyle B} and any multivector A {\displaystyle A} (with a minor modification to accommodate null ⁠ B {\displaystyle B} ⁠, given below). === Dual basis === Let { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} be a basis of ⁠ V {\displaystyle V} ⁠, i.e. a set of n {\displaystyle n} linearly independent vectors that span the ⁠ n {\displaystyle n} ⁠-dimensional vector space ⁠ V {\displaystyle V} ⁠. The basis that is dual to { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} is the set of elements of the dual vector space V ∗ {\displaystyle V^{*}} that forms a biorthogonal system with this basis, thus being the elements denoted { e 1 , … , e n } {\displaystyle \{e^{1},\ldots ,e^{n}\}} satisfying e i ⋅ e j = δ i j , {\displaystyle e^{i}\cdot e_{j}=\delta ^{i}{}_{j},} where δ {\displaystyle \delta } is the Kronecker delta. Given a nondegenerate quadratic form on ⁠ V {\displaystyle V} ⁠, V ∗ {\displaystyle V^{*}} becomes naturally identified with ⁠ V {\displaystyle V} ⁠, and the dual basis may be regarded as elements of ⁠ V {\displaystyle V} ⁠, but are not in general the same set as the original basis. Given further a GA of ⁠ V {\displaystyle V} ⁠, let I = e 1 ∧ ⋯ ∧ e n {\displaystyle I=e_{1}\wedge \cdots \wedge e_{n}} be the pseudoscalar (which does not necessarily square to ⁠ ± 1 {\displaystyle \pm 1} ⁠) formed from the basis ⁠ { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} ⁠. The dual basis vectors may be constructed as e i = ( − 1 ) i − 1 ( e 1 ∧ ⋯ ∧ e ˇ i ∧ ⋯ ∧ e n ) I − 1 , {\displaystyle e^{i}=(-1)^{i-1}(e_{1}\wedge \cdots \wedge {\check {e}}_{i}\wedge \cdots \wedge e_{n})I^{-1},} where the e ˇ i {\displaystyle {\check {e}}_{i}} denotes that the ⁠ i {\displaystyle i} ⁠th basis vector is omitted from the product. A dual basis is also known as a reciprocal basis or reciprocal frame. A major usage of a dual basis is to separate vectors into components. Given a vector ⁠ a {\displaystyle a} ⁠, scalar components a i {\displaystyle a^{i}} can be defined as a i = a ⋅ e i , {\displaystyle a^{i}=a\cdot e^{i}\ ,} in terms of which a {\displaystyle a} can be separated into vector components as a = ∑ i a i e i . {\displaystyle a=\sum _{i}a^{i}e_{i}\ .} We can also define scalar components a i {\displaystyle a_{i}} as a i = a ⋅ e i , {\displaystyle a_{i}=a\cdot e_{i}\ ,} in terms of which a {\displaystyle a} can be separated into vector components in terms of the dual basis as a = ∑ i a i e i . {\displaystyle a=\sum _{i}a_{i}e^{i}\ .} A dual basis as defined above for the vector subspace of a geometric algebra can be extended to cover the entire algebra. For compactness, we'll use a single capital letter to represent an ordered set of vector indices. 
I.e., writing J = ( j 1 , … , j n ) , {\displaystyle J=(j_{1},\dots ,j_{n})\ ,} where ⁠ j 1 < j 2 < ⋯ < j n {\displaystyle j_{1}<j_{2}<\dots <j_{n}} ⁠, we can write a basis blade as e J = e j 1 ∧ e j 2 ∧ ⋯ ∧ e j n . {\displaystyle e_{J}=e_{j_{1}}\wedge e_{j_{2}}\wedge \cdots \wedge e_{j_{n}}\ .} The corresponding reciprocal blade has the indices in opposite order: e J = e j n ∧ ⋯ ∧ e j 2 ∧ e j 1 . {\displaystyle e^{J}=e^{j_{n}}\wedge \cdots \wedge e^{j_{2}}\wedge e^{j_{1}}\ .} Similar to the case above with vectors, it can be shown that e J ∗ e K = δ K J , {\displaystyle e^{J}*e_{K}=\delta _{K}^{J}\ ,} where ∗ {\displaystyle *} is the scalar product. With A {\displaystyle A} a multivector, we can define scalar components as A i j ⋯ k = ( e k ∧ ⋯ ∧ e j ∧ e i ) ∗ A , {\displaystyle A^{ij\cdots k}=(e^{k}\wedge \cdots \wedge e^{j}\wedge e^{i})*A\ ,} in terms of which A {\displaystyle A} can be separated into component blades as A = ∑ i < j < ⋯ < k A i j ⋯ k e i ∧ e j ∧ ⋯ ∧ e k . {\displaystyle A=\sum _{i<j<\cdots <k}A^{ij\cdots k}e_{i}\wedge e_{j}\wedge \cdots \wedge e_{k}\ .} We can alternatively define scalar components A i j ⋯ k = ( e k ∧ ⋯ ∧ e j ∧ e i ) ∗ A , {\displaystyle A_{ij\cdots k}=(e_{k}\wedge \cdots \wedge e_{j}\wedge e_{i})*A\ ,} in terms of which A {\displaystyle A} can be separated into component blades as A = ∑ i < j < ⋯ < k A i j ⋯ k e i ∧ e j ∧ ⋯ ∧ e k . {\displaystyle A=\sum _{i<j<\cdots <k}A_{ij\cdots k}e^{i}\wedge e^{j}\wedge \cdots \wedge e^{k}\ .} === Linear functions === Although a versor is easier to work with because it can be directly represented in the algebra as a multivector, versors are a subgroup of linear functions on multivectors, which can still be used when necessary. The geometric algebra of an ⁠ n {\displaystyle n} ⁠-dimensional vector space is spanned by a basis of 2 n {\displaystyle 2^{n}} elements. If a multivector is represented by a 2 n × 1 {\displaystyle 2^{n}\times 1} real column matrix of coefficients of a basis of the algebra, then all linear transformations of the multivector can be expressed as the matrix multiplication by a 2 n × 2 n {\displaystyle 2^{n}\times 2^{n}} real matrix. However, such a general linear transformation allows arbitrary exchanges among grades, such as a "rotation" of a scalar into a vector, which has no evident geometric interpretation. A general linear transformation from vectors to vectors is of interest. With the natural restriction to preserving the induced exterior algebra, the outermorphism of the linear transformation is the unique extension of the versor. If f {\displaystyle f} is a linear function that maps vectors to vectors, then its outermorphism is the function that obeys the rule f _ ( a 1 ∧ a 2 ∧ ⋯ ∧ a r ) = f ( a 1 ) ∧ f ( a 2 ) ∧ ⋯ ∧ f ( a r ) {\displaystyle {\underline {\mathsf {f}}}(a_{1}\wedge a_{2}\wedge \cdots \wedge a_{r})=f(a_{1})\wedge f(a_{2})\wedge \cdots \wedge f(a_{r})} for a blade, extended to the whole algebra through linearity. == Modeling geometries == Although a lot of attention has been placed on CGA, it is to be noted that GA is not just one algebra, it is one of a family of algebras with the same essential structure. 
=== Vector space model === The even subalgebra of G ( 2 , 0 ) {\displaystyle {\mathcal {G}}(2,0)} is isomorphic to the complex numbers, as may be seen by writing a vector P {\displaystyle P} in terms of its components in an orthonormal basis and left multiplying by the basis vector e 1 {\displaystyle e_{1}} , yielding Z = e 1 P = e 1 ( x e 1 + y e 2 ) = x ( 1 ) + y ( e 1 e 2 ) , {\displaystyle Z=e_{1}P=e_{1}(xe_{1}+ye_{2})=x(1)+y(e_{1}e_{2}),} where we identify i ↦ e 1 e 2 {\displaystyle i\mapsto e_{1}e_{2}} since ( e 1 e 2 ) 2 = e 1 e 2 e 1 e 2 = − e 1 e 1 e 2 e 2 = − 1. {\displaystyle (e_{1}e_{2})^{2}=e_{1}e_{2}e_{1}e_{2}=-e_{1}e_{1}e_{2}e_{2}=-1.} Similarly, the even subalgebra of G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} with basis { 1 , e 2 e 3 , e 3 e 1 , e 1 e 2 } {\displaystyle \{1,e_{2}e_{3},e_{3}e_{1},e_{1}e_{2}\}} is isomorphic to the quaternions as may be seen by identifying i ↦ − e 2 e 3 {\displaystyle i\mapsto -e_{2}e_{3}} , j ↦ − e 3 e 1 {\displaystyle j\mapsto -e_{3}e_{1}} and k ↦ − e 1 e 2 {\displaystyle k\mapsto -e_{1}e_{2}} . Every associative algebra has a matrix representation; replacing the three Cartesian basis vectors by the Pauli matrices gives a representation of G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} : e 1 = σ 1 = σ x = ( 0 1 1 0 ) e 2 = σ 2 = σ y = ( 0 − i i 0 ) e 3 = σ 3 = σ z = ( 1 0 0 − 1 ) . {\displaystyle {\begin{aligned}e_{1}=\sigma _{1}=\sigma _{x}&={\begin{pmatrix}0&1\\1&0\end{pmatrix}}\\e_{2}=\sigma _{2}=\sigma _{y}&={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}\\e_{3}=\sigma _{3}=\sigma _{z}&={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\,.\end{aligned}}} Dotting the "Pauli vector" (a dyad): σ = σ 1 e 1 + σ 2 e 2 + σ 3 e 3 {\displaystyle \sigma =\sigma _{1}e_{1}+\sigma _{2}e_{2}+\sigma _{3}e_{3}} with arbitrary vectors a {\displaystyle a} and b {\displaystyle b} and multiplying through gives: ( σ ⋅ a ) ( σ ⋅ b ) = a ⋅ b + a ∧ b {\displaystyle (\sigma \cdot a)(\sigma \cdot b)=a\cdot b+a\wedge b} (Equivalently, by inspection, a ⋅ b + i σ ⋅ ( a × b ) {\displaystyle a\cdot b+i\sigma \cdot (a\times b)} ) === Spacetime model === In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, G ( 1 , 3 ) {\displaystyle {\mathcal {G}}(1,3)} , called spacetime algebra (STA), or less commonly, G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} , interpreted as the algebra of physical space (APS). While in STA, points of spacetime are represented simply by vectors, in APS, points of ( 3 + 1 ) {\displaystyle (3+1)} -dimensional spacetime are instead represented by paravectors, a three-dimensional vector (space) plus a one-dimensional scalar (time). In spacetime algebra the electromagnetic field tensor has a bivector representation F = ( E + i c B ) γ 0 {\displaystyle {F}=({E}+ic{B})\gamma _{0}} . Here, i = γ 0 γ 1 γ 2 γ 3 {\displaystyle i=\gamma _{0}\gamma _{1}\gamma _{2}\gamma _{3}} is the unit pseudoscalar (or four-dimensional volume element), γ 0 {\displaystyle \gamma _{0}} is the unit vector in time direction, and E {\displaystyle E} and B {\displaystyle B} are the classic electric and magnetic field vectors (with a zero time component). Using the four-current J {\displaystyle {J}} , Maxwell's equations then become D F = J {\displaystyle DF=J} . In geometric calculus, juxtaposition of vectors such as in D F {\displaystyle DF} indicates the geometric product, which can be decomposed into parts as D F = D ⌋ F + D ∧ F {\displaystyle DF=D~\rfloor ~F+D\wedge F} .
Here D {\displaystyle D} is the covector derivative in any spacetime and reduces to ∇ {\displaystyle \nabla } in flat spacetime. The operator ▽ {\displaystyle \bigtriangledown } plays a role in Minkowski 4 {\displaystyle 4} -spacetime synonymous with the role of ∇ {\displaystyle \nabla } in Euclidean 3 {\displaystyle 3} -space, and is related to the d'Alembertian by ◻ = ▽ 2 {\displaystyle \Box =\bigtriangledown ^{2}} . Indeed, given an observer represented by a future pointing timelike vector γ 0 {\displaystyle \gamma _{0}} we have γ 0 ⋅ ▽ = 1 c ∂ ∂ t {\displaystyle \gamma _{0}\cdot \bigtriangledown ={\frac {1}{c}}{\frac {\partial }{\partial t}}} γ 0 ∧ ▽ = ∇ {\displaystyle \gamma _{0}\wedge \bigtriangledown =\nabla } Boosts in this Lorentzian metric space have the same expression e β {\displaystyle e^{\beta }} as rotation in Euclidean space, where β {\displaystyle {\beta }} is the bivector generated by the time and the space directions involved, whereas in the Euclidean case it is the bivector generated by the two space directions, strengthening the analogy to the point of near identity. The Dirac matrices are a representation of G ( 1 , 3 ) {\displaystyle {\mathcal {G}}(1,3)} , showing the equivalence with matrix representations used by physicists. === Homogeneous models === Homogeneous models generally refer to a projective representation in which the elements of the one-dimensional subspaces of a vector space represent points of a geometry. In a geometric algebra of a space of n {\displaystyle n} dimensions, the rotors represent a set of transformations with n ( n − 1 ) / 2 {\displaystyle n(n-1)/2} degrees of freedom, corresponding to rotations – for example, 3 {\displaystyle 3} when n = 3 {\displaystyle n=3} and 6 {\displaystyle 6} when n = 4 {\displaystyle n=4} . Geometric algebra is often used to model a projective space, i.e. as a homogeneous model: a point, line, plane, etc. is represented by an equivalence class of elements of the algebra that differ by an invertible scalar factor. The rotors in a space of dimension n + 1 {\displaystyle n+1} have n ( n − 1 ) / 2 + n {\displaystyle n(n-1)/2+n} degrees of freedom, the same as the number of degrees of freedom in the rotations and translations combined for an n {\displaystyle n} -dimensional space. This is the case in Projective Geometric Algebra (PGA), which is used to represent Euclidean isometries in Euclidean geometry (thereby covering the large majority of engineering applications of geometry). In this model, a degenerate dimension is added to the three Euclidean dimensions to form the algebra G ( 3 , 0 , 1 ) {\displaystyle {\mathcal {G}}(3,0,1)} . With a suitable identification of subspaces to represent points, lines and planes, the versors of this algebra represent all proper Euclidean isometries, which are always screw motions in 3-dimensional space, along with all improper Euclidean isometries, which include reflections, rotoreflections, transflections, and point reflections. PGA allows projection, meet, and angle formulas to be derived from G ( 3 , 0 , 1 ) {\displaystyle {\mathcal {G}}(3,0,1)} ; with a very minor extension to the algebra it is also possible to derive distances and joins. PGA is a widely used system that combines geometric algebra with homogeneous representations in geometry, but there exist several other such systems.
The conformal model discussed below is homogeneous, as is "Conic Geometric Algebra", and see Plane-based geometric algebra for discussion of homogeneous models of elliptic and hyperbolic geometry compared with the Euclidean geometry derived from PGA. === Conformal model === Working within GA, Euclidean space E 3 {\displaystyle \mathbb {E} ^{3}} (along with a conformal point at infinity) is embedded projectively in the CGA G ( 4 , 1 ) {\displaystyle {\mathcal {G}}(4,1)} via the identification of Euclidean points with 1D subspaces in the 4D null cone of the 5D CGA vector subspace. This allows all conformal transformations to be performed as rotations and reflections and is covariant, extending incidence relations of projective geometry to round objects such as circles and spheres. Specifically, we add orthogonal basis vectors e + {\displaystyle e_{+}} and e − {\displaystyle e_{-}} such that e + 2 = + 1 {\displaystyle e_{+}^{2}=+1} and e − 2 = − 1 {\displaystyle e_{-}^{2}=-1} to the basis of the vector space that generates G ( 3 , 0 ) {\displaystyle {\mathcal {G}}(3,0)} and identify null vectors n o = 1 2 ( e − − e + ) {\displaystyle n_{\text{o}}={\tfrac {1}{2}}(e_{-}-e_{+})} as the point at the origin and n ∞ = e − + e + {\displaystyle n_{\infty }=e_{-}+e_{+}} as a conformal point at infinity (see Compactification), giving n ∞ ⋅ n o = − 1. {\displaystyle n_{\infty }\cdot n_{\text{o}}=-1.} (Some authors set e 4 = n o {\displaystyle e_{4}=n_{\text{o}}} and e 5 = n ∞ {\displaystyle e_{5}=n_{\infty }} .) This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry, and in this case allows the modeling of Euclidean transformations of R 3 {\displaystyle \mathbb {R} ^{3}} as orthogonal transformations of a subset of R 4 , 1 {\displaystyle \mathbf {R} ^{4,1}} . A fast-changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics. === Table of models === Note in this list that p {\displaystyle p} and q {\displaystyle q} can be swapped and the same name applies; for example, with relatively little change occurring, see sign convention. For example, G ( 3 , 1 , 0 ) {\displaystyle {\mathcal {G}}(3,1,0)} and G ( 1 , 3 , 0 ) {\displaystyle {\mathcal {G}}(1,3,0)} are both referred to as Spacetime Algebra. == Geometric interpretation in the vector space model == === Projection and rejection === For any vector a {\displaystyle a} and any invertible vector m {\displaystyle m} , a = a m m − 1 = ( a ⋅ m + a ∧ m ) m − 1 = a ‖ m + a ⊥ m , {\displaystyle a=amm^{-1}=(a\cdot m+a\wedge m)m^{-1}=a_{\|m}+a_{\perp m},} where the projection of a {\displaystyle a} onto m {\displaystyle m} (or the parallel part) is a ‖ m = ( a ⋅ m ) m − 1 {\displaystyle a_{\|m}=(a\cdot m)m^{-1}} and the rejection of a {\displaystyle a} from m {\displaystyle m} (or the orthogonal part) is a ⊥ m = a − a ‖ m = ( a ∧ m ) m − 1 . {\displaystyle a_{\perp m}=a-a_{\|m}=(a\wedge m)m^{-1}.} Using the concept of a k {\displaystyle k} -blade B {\displaystyle B} as representing a subspace of V {\displaystyle V} and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible k {\displaystyle k} -blade B {\displaystyle B} as P B ( A ) = ( A ⌋ B ) ⌋ B − 1 , {\displaystyle {\mathcal {P}}_{B}(A)=(A\;\rfloor \;B)\;\rfloor \;B^{-1},} with the rejection being defined as P B ⊥ ( A ) = A − P B ( A ) .
{\displaystyle {\mathcal {P}}_{B}^{\perp }(A)=A-{\mathcal {P}}_{B}(A).} The projection and rejection generalize to null blades B {\displaystyle B} by replacing the inverse B − 1 {\displaystyle B^{-1}} with the pseudoinverse B + {\displaystyle B^{+}} with respect to the contractive product. The outcome of the projection coincides in both cases for non-null blades. For null blades ⁠ B {\displaystyle B} ⁠, the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used, as only then is the result necessarily in the subspace represented by ⁠ B {\displaystyle B} ⁠. The projection generalizes through linearity to general multivectors ⁠ A {\displaystyle A} ⁠. The projection is not linear in ⁠ B {\displaystyle B} ⁠ and does not generalize to objects ⁠ B {\displaystyle B} ⁠ that are not blades. === Reflection === Simple reflections in a hyperplane are readily expressed in the algebra through conjugation with a single vector. These serve to generate the group of general rotoreflections and rotations. The reflection c ′ {\displaystyle c'} of a vector c {\displaystyle c} along a vector ⁠ m {\displaystyle m} ⁠, or equivalently in the hyperplane orthogonal to ⁠ m {\displaystyle m} ⁠, is the same as negating the component of a vector parallel to ⁠ m {\displaystyle m} ⁠. The result of the reflection will be c ′ = − c ‖ m + c ⊥ m = − ( c ⋅ m ) m − 1 + ( c ∧ m ) m − 1 = ( − m ⋅ c − m ∧ c ) m − 1 = − m c m − 1 {\displaystyle c'={-c_{\|m}+c_{\perp m}}={-(c\cdot m)m^{-1}+(c\wedge m)m^{-1}}={(-m\cdot c-m\wedge c)m^{-1}}=-mcm^{-1}} This is not the most general operation that may be regarded as a reflection when the dimension ⁠ n ≥ 4 {\displaystyle n\geq 4} ⁠. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection a ′ {\displaystyle a'} of a vector a {\displaystyle a} may be written a ↦ a ′ = − M a M − 1 , {\displaystyle a\mapsto a'=-MaM^{-1},} where M = p q ⋯ r {\displaystyle M=pq\cdots r} and M − 1 = ( p q ⋯ r ) − 1 = r − 1 ⋯ q − 1 p − 1 . {\displaystyle M^{-1}=(pq\cdots r)^{-1}=r^{-1}\cdots q^{-1}p^{-1}.} If we define the reflection along a non-null vector m {\displaystyle m} of the product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example, ( a b c ) ′ = a ′ b ′ c ′ = ( − m a m − 1 ) ( − m b m − 1 ) ( − m c m − 1 ) = − m a ( m − 1 m ) b ( m − 1 m ) c m − 1 = − m a b c m − 1 {\displaystyle (abc)'=a'b'c'=(-mam^{-1})(-mbm^{-1})(-mcm^{-1})=-ma(m^{-1}m)b(m^{-1}m)cm^{-1}=-mabcm^{-1}\,} and for the product of an even number of vectors that ( a b c d ) ′ = a ′ b ′ c ′ d ′ = ( − m a m − 1 ) ( − m b m − 1 ) ( − m c m − 1 ) ( − m d m − 1 ) = m a b c d m − 1 . {\displaystyle (abcd)'=a'b'c'd'=(-mam^{-1})(-mbm^{-1})(-mcm^{-1})(-mdm^{-1})=mabcdm^{-1}.} Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector A {\displaystyle A} using any reflection versor M {\displaystyle M} may be written A ↦ M α ( A ) M − 1 , {\displaystyle A\mapsto M\alpha (A)M^{-1},} where α {\displaystyle \alpha } is the automorphism of reflection through the origin of the vector space (⁠ v ↦ − v {\displaystyle v\mapsto -v} ⁠) extended through linearity to the whole algebra. === Rotations === If we have a product of vectors R = a 1 a 2 ⋯ a r {\displaystyle R=a_{1}a_{2}\cdots a_{r}} then we denote the reverse as R ~ = a r ⋯ a 2 a 1 . 
{\displaystyle {\widetilde {R}}=a_{r}\cdots a_{2}a_{1}.} As an example, if R = a b {\displaystyle R=ab} , we get R R ~ = a b b a = a b 2 a = a 2 b 2 = b a 2 b = b a a b = R ~ R . {\displaystyle R{\widetilde {R}}=abba=ab^{2}a=a^{2}b^{2}=ba^{2}b=baab={\widetilde {R}}R.} Scaling R {\displaystyle R} so that R R ~ = 1 {\displaystyle R{\widetilde {R}}=1} gives ( R v R ~ ) 2 = R v 2 R ~ = v 2 R R ~ = v 2 {\displaystyle (Rv{\widetilde {R}})^{2}=Rv^{2}{\widetilde {R}}=v^{2}R{\widetilde {R}}=v^{2}} so R v R ~ {\displaystyle Rv{\widetilde {R}}} leaves the length of v {\displaystyle v} unchanged. We can also show that ( R v 1 R ~ ) ⋅ ( R v 2 R ~ ) = v 1 ⋅ v 2 {\displaystyle (Rv_{1}{\widetilde {R}})\cdot (Rv_{2}{\widetilde {R}})=v_{1}\cdot v_{2}} so the transformation R v R ~ {\displaystyle Rv{\widetilde {R}}} preserves both length and angle. It therefore can be identified as a rotation or rotoreflection; R {\displaystyle R} is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a versor. There is a general method for rotating a vector involving the formation of a multivector of the form R = e − B θ / 2 {\displaystyle R=e^{-B\theta /2}} that produces a rotation through the angle θ {\displaystyle \theta } in the plane with the orientation defined by a 2 {\displaystyle 2} -blade B {\displaystyle B} . Rotors are a generalization of quaternions to n {\displaystyle n} -dimensional spaces.
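The rotor formula can be made concrete in a few lines of code. The following is a minimal self-contained sketch in Python (the names gp and reverse and the tuple layout are ad hoc choices for this illustration, not from any established GA library): it implements the geometric product of G(2,0) on the basis {1, e1, e2, e12} directly from the multiplication table, and checks that the sandwich product R v R~ with R = e^(−e12 θ/2) rotates e1 into e2 when θ = 90°.

```python
import math

def gp(A, B):
    """Geometric product in G(2,0). Multivectors are tuples
    (scalar, e1, e2, e12), with e1*e1 = e2*e2 = 1."""
    a0, a1, a2, a3 = A
    b0, b1, b2, b3 = B
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def reverse(A):
    """Reversion: scalar and vector parts unchanged, e12 changes sign."""
    a0, a1, a2, a3 = A
    return (a0, a1, a2, -a3)

theta = math.pi / 2
# R = exp(-e12*theta/2) = cos(theta/2) - sin(theta/2) e12, so R R~ = 1.
R = (math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2))
v = (0.0, 1.0, 0.0, 0.0)                 # the vector e1

v_rot = gp(gp(R, v), reverse(R))         # sandwich product R v R~
print(v_rot)   # ~(0.0, 0.0, 1.0, 0.0): e1 has been rotated into e2
```

The same sandwich pattern carries over unchanged to higher-dimensional algebras; only the multiplication table grows.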
== Examples and applications == === Hypervolume of a parallelotope spanned by vectors === For vectors a {\displaystyle a} and b {\displaystyle b} spanning a parallelogram we have a ∧ b = ( ( a ∧ b ) b − 1 ) b = a ⊥ b b {\displaystyle a\wedge b=((a\wedge b)b^{-1})b=a_{\perp b}b} with the result that a ∧ b {\displaystyle a\wedge b} is linear in the product of the "altitude" and the "base" of the parallelogram, that is, its area. Similar interpretations are true for any number of vectors spanning an n {\displaystyle n} -dimensional parallelotope; the exterior product of vectors a 1 , a 2 , … , a n {\displaystyle a_{1},a_{2},\ldots ,a_{n}} , that is ⋀ i = 1 n a i {\displaystyle \textstyle \bigwedge _{i=1}^{n}a_{i}} , has a magnitude equal to the volume of the n {\displaystyle n} -parallelotope. An n {\displaystyle n} -vector does not necessarily have the shape of a parallelotope – this is a convenient visualization. It could be any shape, although the volume equals that of the parallelotope. === Intersection of a line and a plane === We may define the line parametrically by p = t + α v {\displaystyle p=t+\alpha \ v} , where p {\displaystyle p} and t {\displaystyle t} are position vectors for points P and T and v {\displaystyle v} is the direction vector for the line. Suppose the plane is specified by a point lying in it, with position vector q {\displaystyle q} , and by a bivector B {\displaystyle B} giving its attitude; a point of the line lies in the plane exactly when B ∧ ( p − q ) = 0 {\displaystyle B\wedge (p-q)=0} , that is, B ∧ ( t + α v − q ) = 0 {\displaystyle B\wedge (t+\alpha v-q)=0} so α = B ∧ ( q − t ) B ∧ v {\displaystyle \alpha ={\frac {B\wedge (q-t)}{B\wedge v}}} and p = t + ( B ∧ ( q − t ) B ∧ v ) v . {\displaystyle p=t+\left({\frac {B\wedge (q-t)}{B\wedge v}}\right)v.} === Rotating systems === A rotational quantity such as torque or angular momentum is described in geometric algebra as a bivector. Suppose a circular path in an arbitrary plane containing orthonormal vectors u ^ {\displaystyle {\widehat {u}}} and v ^ {\displaystyle {\widehat {\ \!v}}} is parameterized by angle. r = r ( u ^ cos ⁡ θ + v ^ sin ⁡ θ ) = r u ^ ( cos ⁡ θ + u ^ v ^ sin ⁡ θ ) {\displaystyle \mathbf {r} =r({\widehat {u}}\cos \theta +{\widehat {\ \!v}}\sin \theta )=r{\widehat {u}}(\cos \theta +{\widehat {u}}{\widehat {\ \!v}}\sin \theta )} By designating the unit bivector of this plane as the imaginary number i = u ^ v ^ = u ^ ∧ v ^ {\displaystyle {i}={\widehat {u}}{\widehat {\ \!v}}={\widehat {u}}\wedge {\widehat {\ \!v}}} i 2 = − 1 {\displaystyle i^{2}=-1} , this path vector can be conveniently written in complex exponential form r = r u ^ e i θ {\displaystyle \mathbf {r} =r{\widehat {u}}e^{i\theta }} and the derivative with respect to angle is d r d θ = r u ^ i e i θ = r i . {\displaystyle {\frac {d\mathbf {r} }{d\theta }}=r{\widehat {u}}ie^{i\theta }=\mathbf {r} i.} For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle. Thus the torque, the rate of change of work W {\displaystyle W} with respect to angle, due to a force F {\displaystyle F} , is τ = d W d θ = F ⋅ d r d θ = F ⋅ ( r i ) . {\displaystyle \tau ={\frac {dW}{d\theta }}=F\cdot {\frac {dr}{d\theta }}=F\cdot (\mathbf {r} i).} Rotational quantities are represented in vector calculus in three dimensions using the cross product. Together with a choice of an oriented volume form I {\displaystyle I} , these can be related to the exterior product with its more natural geometric interpretation of such quantities as bivectors by using the dual relationship a × b = − I ( a ∧ b ) . {\displaystyle a\times b=-I(a\wedge b).} Unlike the cross product description of torque, τ = r × F {\displaystyle \tau =\mathbf {r} \times F} , the geometric algebra description does not introduce a vector in the normal direction; a vector that does not exist in two dimensions and that is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors u ^ {\displaystyle {\widehat {u}}} and v ^ {\displaystyle {\widehat {\ \!v}}} . == Geometric calculus == Geometric calculus extends the formalism to include differentiation and integration, including differential geometry and differential forms. Essentially, the vector derivative is defined so that the GA version of Green's theorem is true, ∫ A d A ∇ f = ∮ ∂ A d x f {\displaystyle \int _{A}dA\,\nabla f=\oint _{\partial A}dx\,f} and then one can write ∇ f = ∇ ⋅ f + ∇ ∧ f {\displaystyle \nabla f=\nabla \cdot f+\nabla \wedge f} as a geometric product, effectively generalizing Stokes' theorem (including the differential form version of it). In 1D when A {\displaystyle A} is a curve with endpoints a {\displaystyle a} and b {\displaystyle b} , then ∫ A d A ∇ f = ∮ ∂ A d x f {\displaystyle \int _{A}dA\,\nabla f=\oint _{\partial A}dx\,f} reduces to ∫ a b d x ∇ f = ∫ a b d x ⋅ ∇ f = ∫ a b d f = f ( b ) − f ( a ) {\displaystyle \int _{a}^{b}dx\,\nabla f=\int _{a}^{b}dx\cdot \nabla f=\int _{a}^{b}df=f(b)-f(a)} or the fundamental theorem of integral calculus. Also developed are the concepts of the vector manifold and geometric integration theory (which generalizes differential forms).
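The one-dimensional reduction above is easy to check numerically. A small sketch in plain Python (no GA machinery is needed in 1D, where the vector derivative reduces to the ordinary derivative; the function names are illustrative only) compares a midpoint Riemann sum of f′ over [a, b] with f(b) − f(a) for f(x) = x³:

```python
def f(x):
    return x**3

def fprime(x):
    return 3 * x**2        # in 1D the vector derivative of f is just f'

def integral_of_gradient(a, b, n=100_000):
    """Midpoint-rule approximation of the left-hand side of the
    1D fundamental theorem, the integral of dx . grad(f) over [a, b]."""
    h = (b - a) / n
    return sum(fprime(a + (k + 0.5) * h) for k in range(n)) * h

a, b = -1.0, 2.0
print(integral_of_gradient(a, b))   # ~9.0
print(f(b) - f(a))                  # exactly 9.0
```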
== History == === Before the 20th century === Although the connection of geometry with algebra dates back at least to Euclid's Elements in the third century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a systematic way to describe the geometrical properties and transformations of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space. Grassmann's algebraic system could be applied to a number of different kinds of spaces, the chief among them being Euclidean space, affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view, the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra described certain properties (or Strecken, such as length, area, and volume). His contribution was to define a new product – the geometric product – on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in n {\displaystyle n} dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra. Nevertheless, another revolutionary development of the 19th century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly following the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, following lectures of Gibbs. In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of i {\displaystyle i} , j {\displaystyle j} , k {\displaystyle k} to indicate the basis vectors of R 3 {\displaystyle \mathbf {R} ^{3}} : these are thought of as the purely imaginary quaternions. From the perspective of geometric algebra, the even subalgebra of the Spacetime Algebra is isomorphic to the GA of 3D Euclidean space and quaternions are isomorphic to the even subalgebra of the GA of 3D Euclidean space, which unifies the three approaches. === 20th century and present === Progress on the study of Clifford algebras quietly advanced through the twentieth century, largely through the work of abstract algebraists such as Élie Cartan, Hermann Weyl and Claude Chevalley. The geometrical approach to geometric algebras has seen a number of 20th-century revivals. 
In mathematics, Emil Artin's Geometric Algebra discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory. David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra. In computer graphics and robotics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. For applications of GA in robotics (screw theory, kinematics and dynamics using versors), computer vision, control and neural computing (geometric learning) see Bayro (2010). == See also == Comparison of vector algebra and geometric algebra Clifford algebra Grassmann–Cayley algebra Spacetime algebra Spinor Quaternion Algebra of physical space Universal geometric algebra
Wikipedia/Geometric_algebra
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics or sequential estimation. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. == History of probability == The modern mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century (for example the "problem of points"). Christiaan Huygens published a book on the subject in 1657. In the 19th century, what is considered the classical definition of probability was completed by Pierre Laplace. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, and measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory; but, alternatives exist, such as the adoption of finite rather than countable additivity by Bruno de Finetti. == Treatment == Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more. === Motivation === Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space (or equivalently, the event space) is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called events. 
In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events that contain no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that any of these events occurs is given by the sum of the probabilities of the events. The probability that any one of the events {1,6}, {3}, or {2,4} will occur is 5/6. This is the same as saying that the probability of event {1,2,3,4,6} is 5/6. This event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1,2,3,4,5,6} has a probability of 1, that is, absolute certainty. When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable. A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function. This does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" ( X ( heads ) = 0 {\textstyle X({\text{heads}})=0} ) and to the outcome "tails" the number "1" ( X ( tails ) = 1 {\displaystyle X({\text{tails}})=1} ). === Discrete probability distributions === Discrete probability theory deals with events that occur in countable sample spaces. Examples: Throwing dice, experiments with decks of cards, random walk, and tossing coins. Classical definition: Initially, the probability of an event occurring was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by 3 6 = 1 2 {\displaystyle {\tfrac {3}{6}}={\tfrac {1}{2}}} , since 3 faces out of the 6 have even numbers and each face has the same probability of appearing. Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω {\displaystyle \Omega } . It is then assumed that for each element x ∈ Ω {\displaystyle x\in \Omega \,} , an intrinsic "probability" value f ( x ) {\displaystyle f(x)\,} is attached, which satisfies the following properties: f ( x ) ∈ [ 0 , 1 ] for all x ∈ Ω ; {\displaystyle f(x)\in [0,1]{\mbox{ for all }}x\in \Omega \,;} ∑ x ∈ Ω f ( x ) = 1 . {\displaystyle \sum _{x\in \Omega }f(x)=1\,.} That is, the probability function f(x) lies between zero and one for every value of x in the sample space Ω, and the sum of f(x) over all values x in the sample space Ω is equal to 1. An event is defined as any subset E {\displaystyle E\,} of the sample space Ω {\displaystyle \Omega \,} . 
The probability of the event E {\displaystyle E\,} is defined as P ( E ) = ∑ x ∈ E f ( x ) . {\displaystyle P(E)=\sum _{x\in E}f(x)\,.} So, the probability of the entire sample space is 1, and the probability of the null event is 0. The function f ( x ) {\displaystyle f(x)\,} mapping a point in the sample space to the "probability" value is called a probability mass function abbreviated as pmf. === Continuous probability distributions === Continuous probability theory deals with events that occur in a continuous sample space. Classical definition: The classical definition breaks down when confronted with the continuous case. See Bertrand's paradox. Modern definition: If the sample space of a random variable X is the set of real numbers ( R {\displaystyle \mathbb {R} } ) or a subset thereof, then a function called the cumulative distribution function (CDF) F {\displaystyle F\,} exists, defined by F ( x ) = P ( X ≤ x ) {\displaystyle F(x)=P(X\leq x)\,} . That is, F(x) returns the probability that X will be less than or equal to x. The CDF necessarily satisfies the following properties. F {\displaystyle F\,} is a monotonically non-decreasing, right-continuous function; lim x → − ∞ F ( x ) = 0 ; {\displaystyle \lim _{x\rightarrow -\infty }F(x)=0\,;} lim x → ∞ F ( x ) = 1 . {\displaystyle \lim _{x\rightarrow \infty }F(x)=1\,.} The random variable X {\displaystyle X} is said to have a continuous probability distribution if the corresponding CDF F {\displaystyle F} is continuous. If F {\displaystyle F\,} is absolutely continuous, then its derivative exists almost everywhere and integrating the derivative gives us the CDF back again. In this case, the random variable X is said to have a probability density function (PDF) or simply density f ( x ) = d F ( x ) d x . {\displaystyle f(x)={\frac {dF(x)}{dx}}\,.} For a set E ⊆ R {\displaystyle E\subseteq \mathbb {R} } , the probability of the random variable X being in E {\displaystyle E\,} is P ( X ∈ E ) = ∫ x ∈ E d F ( x ) . {\displaystyle P(X\in E)=\int _{x\in E}dF(x)\,.} In case the PDF exists, this can be written as P ( X ∈ E ) = ∫ x ∈ E f ( x ) d x . {\displaystyle P(X\in E)=\int _{x\in E}f(x)\,dx\,.} Whereas the PDF exists only for continuous random variables, the CDF exists for all random variables (including discrete random variables) that take values in R . {\displaystyle \mathbb {R} \,.} These concepts can be generalized for multidimensional cases on R n {\displaystyle \mathbb {R} ^{n}} and other continuous sample spaces. === Measure-theoretic probability theory === The utility of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two. An example of such distributions could be a mix of discrete and continuous distributions—for example, a random variable that is 0 with probability 1/2, and takes a random value from a normal distribution with probability 1/2. It can still be studied to some extent by considering it to have a PDF of ( δ [ x ] + φ ( x ) ) / 2 {\displaystyle (\delta [x]+\varphi (x))/2} , where δ [ x ] {\displaystyle \delta [x]} is the Dirac delta function. Other distributions may not even be a mix, for example, the Cantor distribution has no positive probability for any single point, neither does it have a density. 
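The mixed example above, a random variable that is 0 with probability 1/2 and standard normal otherwise, is easy to simulate. A sketch assuming NumPy is available (the helper name ecdf is ad hoc) shows that the empirical CDF jumps by about 1/2 at zero, the signature of the discrete atom, while remaining continuous elsewhere:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# With probability 1/2 take the value 0, otherwise draw from N(0, 1).
is_atom = rng.random(n) < 0.5
x = np.where(is_atom, 0.0, rng.standard_normal(n))

def ecdf(t):
    """Empirical CDF: the fraction of samples <= t."""
    return np.mean(x <= t)

print(ecdf(-1e-9))   # ~0.25  (only the continuous part contributes)
print(ecdf(0.0))     # ~0.75  (a jump of ~1/2 caused by the atom at 0)
```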
The modern approach to probability theory solves these problems using measure theory to define the probability space: Given any set Ω {\displaystyle \Omega \,} (also called sample space) and a σ-algebra F {\displaystyle {\mathcal {F}}\,} on it, a measure P {\displaystyle P\,} defined on F {\displaystyle {\mathcal {F}}\,} is called a probability measure if P ( Ω ) = 1. {\displaystyle P(\Omega )=1.\,} If F {\displaystyle {\mathcal {F}}\,} is the Borel σ-algebra on the set of real numbers, then there is a unique probability measure on F {\displaystyle {\mathcal {F}}\,} for any CDF, and vice versa. The measure corresponding to a CDF is said to be induced by the CDF. This measure coincides with the pmf for discrete variables and PDF for continuous variables, making the measure-theoretic approach free of fallacies. The probability of a set E {\displaystyle E\,} in the σ-algebra F {\displaystyle {\mathcal {F}}\,} is defined as P ( E ) = ∫ ω ∈ E μ F ( d ω ) {\displaystyle P(E)=\int _{\omega \in E}\mu _{F}(d\omega )\,} where the integration is with respect to the measure μ F {\displaystyle \mu _{F}\,} induced by F . {\displaystyle F\,.} Along with providing better understanding and unification of discrete and continuous probabilities, measure-theoretic treatment also allows us to work on probabilities outside R n {\displaystyle \mathbb {R} ^{n}} , as in the theory of stochastic processes. For example, to study Brownian motion, probability is defined on a space of functions. When it is convenient to work with a dominating measure, the Radon–Nikodym theorem is used to define a density as the Radon–Nikodym derivative of the probability distribution of interest with respect to this dominating measure. Discrete densities are usually defined as this derivative with respect to a counting measure over the set of all possible outcomes. Densities for absolutely continuous distributions are usually defined as this derivative with respect to the Lebesgue measure. If a theorem can be proved in this general setting, it holds for both discrete and continuous distributions as well as others; separate proofs are not required for discrete and continuous distributions. == Classical probability distributions == Certain random variables occur very often in probability theory because they describe many natural or physical processes well. Their distributions, therefore, have gained special importance in probability theory. Some fundamental discrete distributions are the discrete uniform, Bernoulli, binomial, negative binomial, Poisson and geometric distributions. Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions. == Convergence of random variables == In probability theory, there are several notions of convergence for random variables. They are listed below in the order of strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions. Weak convergence A sequence of random variables X 1 , X 2 , … , {\displaystyle X_{1},X_{2},\dots ,\,} converges weakly to the random variable X {\displaystyle X\,} if their respective CDFs F 1 , F 2 , … {\displaystyle F_{1},F_{2},\dots \,} converge to the CDF F {\displaystyle F\,} of X {\displaystyle X\,} , wherever F {\displaystyle F\,} is continuous. Weak convergence is also called convergence in distribution. 
Most common shorthand notation: X n → D X {\displaystyle \displaystyle X_{n}\,{\xrightarrow {\mathcal {D}}}\,X} Convergence in probability The sequence of random variables X 1 , X 2 , … {\displaystyle X_{1},X_{2},\dots \,} is said to converge towards the random variable X {\displaystyle X\,} in probability if lim n → ∞ P ( | X n − X | ≥ ε ) = 0 {\displaystyle \lim _{n\rightarrow \infty }P\left(\left|X_{n}-X\right|\geq \varepsilon \right)=0} for every ε > 0. Most common shorthand notation: X n → P X {\displaystyle \displaystyle X_{n}\,{\xrightarrow {P}}\,X} Strong convergence The sequence of random variables X 1 , X 2 , … {\displaystyle X_{1},X_{2},\dots \,} is said to converge towards the random variable X {\displaystyle X\,} strongly if P ( lim n → ∞ X n = X ) = 1 {\displaystyle P(\lim _{n\rightarrow \infty }X_{n}=X)=1} . Strong convergence is also known as almost sure convergence. Most common shorthand notation: X n → a . s . X {\displaystyle \displaystyle X_{n}\,{\xrightarrow {\mathrm {a.s.} }}\,X} As the names indicate, weak convergence is weaker than strong convergence. In fact, strong convergence implies convergence in probability, and convergence in probability implies weak convergence. The reverse statements are not always true. === Law of large numbers === Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered a pillar in the history of statistical theory and has had widespread influence. The law of large numbers (LLN) states that the sample average X ¯ n = 1 n ∑ k = 1 n X k {\displaystyle {\overline {X}}_{n}={\frac {1}{n}}{\sum _{k=1}^{n}X_{k}}} of a sequence of independent and identically distributed random variables X k {\displaystyle X_{k}} converges towards their common expectation (expected value) μ {\displaystyle \mu } , provided that the expectation of | X k | {\displaystyle |X_{k}|} is finite. It is the form of convergence of the random variables that separates the weak and the strong laws of large numbers: Weak law: X ¯ n → P μ {\displaystyle \displaystyle {\overline {X}}_{n}\,{\xrightarrow {P}}\,\mu } for n → ∞ {\displaystyle n\to \infty } Strong law: X ¯ n → a . s . μ {\displaystyle \displaystyle {\overline {X}}_{n}\,{\xrightarrow {\mathrm {a.\,s.} }}\,\mu } for n → ∞ . {\displaystyle n\to \infty .} It follows from the LLN that if an event of probability p is observed repeatedly during independent experiments, the ratio of the observed frequency of that event to the total number of repetitions converges towards p. For example, if Y 1 , Y 2 , . . . {\displaystyle Y_{1},Y_{2},...\,} are independent Bernoulli random variables taking values 1 with probability p and 0 with probability 1-p, then E ( Y i ) = p {\displaystyle {\textrm {E}}(Y_{i})=p} for all i, so that Y ¯ n {\displaystyle {\bar {Y}}_{n}} converges to p almost surely. 
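A quick simulation makes the law concrete. The sketch below, assuming NumPy (variable names are illustrative), tracks the running fraction of sixes in a long sequence of fair die rolls; by the strong law it converges almost surely to 1/6 ≈ 0.1667:

```python
import numpy as np

rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=1_000_000)    # fair die: values 1..6
hits = (rolls == 6).astype(float)             # Bernoulli(1/6) indicators

# Running sample mean after each roll.
running_mean = np.cumsum(hits) / np.arange(1, hits.size + 1)
for n in (10, 1_000, 100_000, 1_000_000):
    print(n, running_mean[n - 1])             # drifts toward 1/6
```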
=== Central limit theorem === The central limit theorem (CLT) explains the ubiquitous occurrence of the normal distribution in nature, and this theorem, according to David Williams, "is one of the great results of mathematics." The theorem states that the average of many independent and identically distributed random variables with finite variance tends towards a normal distribution irrespective of the distribution followed by the original random variables. Formally, let X 1 , X 2 , … {\displaystyle X_{1},X_{2},\dots \,} be independent random variables with mean μ {\displaystyle \mu } and variance σ 2 > 0. {\displaystyle \sigma ^{2}>0.\,} Then the sequence of random variables Z n = ∑ i = 1 n ( X i − μ ) σ n {\displaystyle Z_{n}={\frac {\sum _{i=1}^{n}(X_{i}-\mu )}{\sigma {\sqrt {n}}}}\,} converges in distribution to a standard normal random variable. For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the Berry–Esseen theorem. This is the case, for example, for distributions with finite first, second, and third moments from the exponential family; on the other hand, for some random variables of the heavy-tail and fat-tail variety, it works very slowly or may not work at all: in such cases one may use the Generalized Central Limit Theorem (GCLT).
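The theorem can be watched at work by standardizing sums of random variables that are themselves far from normal. A sketch assuming NumPy, using uniform variables on (0, 1), which have mean 1/2 and variance 1/12:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 1_000, 200_000
mu, sigma = 0.5, (1 / 12) ** 0.5      # mean and std of Uniform(0, 1)

samples = rng.random((trials, n))
z = (samples.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))

print(z.mean(), z.std())     # ~0 and ~1, as for a standard normal
print(np.mean(z <= 1.96))    # ~0.975, matching the normal CDF at 1.96
```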
== See also == Mathematical Statistics – Branch of statistics Expected value – Average value of a random variable Variance – Statistical measure of how far values spread from their average Fuzzy logic – System for reasoning about vagueness Fuzzy measure theory – theory of generalized measures in which the additive property is replaced by the weaker property of monotonicity Glossary of probability and statistics Likelihood function – Function related to statistics and probability theory Notation in probability Predictive modelling – Form of modelling that uses statistics to predict outcomes Probabilistic logic – Applications of logic under uncertainty Probabilistic proofs of non-probabilistic theorems Probability distribution – Mathematical function for the probability a given outcome occurs in an experiment Probability axioms – Foundations of probability theory Probability interpretations – Philosophical interpretation of the axioms of probability Probability space – Mathematical concept Statistical independence – When the occurrence of one event does not affect the likelihood of another Statistical physics – Physics of many interacting particles Subjective logic – Type of probabilistic logic Pairwise independence§Probability of the union of pairwise independent events – Set of random variables of which any two are independent === Lists === Catalog of articles in probability theory List of probability topics List of publications in statistics List of statistical topics
Wikipedia/Probability_theory
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation. Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables. The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length. == Description == An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides. The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms. For example, the equation A x 2 + B x + C − y = 0 {\displaystyle Ax^{2}+Bx+C-y=0} has left-hand side A x 2 + B x + C − y {\displaystyle Ax^{2}+Bx+C-y} , which has four terms, and right-hand side 0 {\displaystyle 0} , consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables). An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount must be removed from the other pan to keep the scale in balance. More generally, an equation remains balanced if the same operation is performed on each side. == Properties == Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to: Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero. Multiplying or dividing both sides of an equation by a non-zero quantity. Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum. For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity. 
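These equivalence-preserving operations can be checked mechanically with a computer algebra system. A minimal sketch using SymPy (assuming it is installed) verifies that adding the same quantity to both sides and multiplying both sides by a nonzero constant leave the solution set of 2x − 6 = 0 unchanged:

```python
from sympy import Eq, S, solveset, symbols

x = symbols('x')
original = Eq(2*x - 6, 0)
shifted = Eq(2*x - 6 + 5, 5)       # add 5 to both sides
scaled = Eq(3 * (2*x - 6), 0)      # multiply both sides by nonzero 3

for eq in (original, shifted, scaled):
    print(solveset(eq, x, domain=S.Reals))   # {3} in every case
```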
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation x = 1 {\displaystyle x=1} has the solution x = 1. {\displaystyle x=1.} Raising both sides to the exponent of 2 (which means applying the function f ( s ) = s 2 {\displaystyle f(s)=s^{2}} to both sides of the equation) changes the equation to x 2 = 1 {\displaystyle x^{2}=1} , which not only has the previous solution but also introduces the extraneous solution, x = − 1. {\displaystyle x=-1.} Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation. The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination. == Examples == === Analogous illustration === An equation is analogous to a weighing scale, balance, or seesaw. Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation). In this picture, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same. === Parameters and unknowns === Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters. An example of an equation involving x and y as unknowns and the parameter R is x 2 + y 2 = R 2 . {\displaystyle x^{2}+y^{2}=R^{2}.} When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle. Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax² + bx + c = 0. The process of finding the solutions, or, in the case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions. A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system 3 x + 5 y = 2 5 x + 8 y = 3 {\displaystyle {\begin{aligned}3x+5y&=2\\5x+8y&=3\end{aligned}}} has the unique solution x = −1, y = 1.
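The small system above is also a convenient test case for numerical solvers. A sketch assuming NumPy, confirming the unique solution x = −1, y = 1:

```python
import numpy as np

# 3x + 5y = 2
# 5x + 8y = 3
A = np.array([[3.0, 5.0],
              [5.0, 8.0]])
b = np.array([2.0, 3.0])

solution = np.linalg.solve(A, b)
print(solution)                        # [-1.  1.]
print(np.allclose(A @ solution, b))    # True: both equations hold
```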
=== Identities === An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable. In algebra, an example of an identity is the difference of two squares: x 2 − y 2 = ( x + y ) ( x − y ) {\displaystyle x^{2}-y^{2}=(x+y)(x-y)} which is true for all x and y. Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are: sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1 {\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1} and sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) {\displaystyle \sin(2\theta )=2\sin(\theta )\cos(\theta )} which are both true for all values of θ. For example, to solve for the value of θ that satisfies the equation: 3 sin ⁡ ( θ ) cos ⁡ ( θ ) = 1 , {\displaystyle 3\sin(\theta )\cos(\theta )=1\,,} where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give: 3 2 sin ⁡ ( 2 θ ) = 1 , {\displaystyle {\frac {3}{2}}\sin(2\theta )=1\,,} yielding the following solution for θ: θ = 1 2 arcsin ⁡ ( 2 3 ) ≈ 20.9 ∘ . {\displaystyle \theta ={\frac {1}{2}}\arcsin \left({\frac {2}{3}}\right)\approx 20.9^{\circ }.} Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number. == Algebra == Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if solutions exist, to count their number. === Polynomial equations === In general, an algebraic equation or polynomial equation is an equation of the form P = 0 {\displaystyle P=0} , or P = Q {\displaystyle P=Q} where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.). For example, x 5 − 3 x + 1 = 0 {\displaystyle x^{5}-3x+1=0} is a univariate algebraic (polynomial) equation with integer coefficients and y 4 + x y 2 = x 3 3 − x y 2 + y 2 − 1 7 {\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}} is a multivariate polynomial equation over the rational numbers. Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates. 
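Whether or not a particular quintic happens to be solvable in radicals, its roots can always be approximated numerically. A sketch assuming NumPy, applied to the univariate example x⁵ − 3x + 1 = 0 above:

```python
import numpy as np

# Coefficients of x^5 - 3x + 1, from the x^5 term down to the constant.
coeffs = [1, 0, 0, 0, -3, 1]
roots = np.roots(coeffs)

for r in roots:
    print(r, np.polyval(coeffs, r))   # each residual is ~0
```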
A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). === Systems of linear equations === A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example, 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{alignedat}{7}3x&&\;+\;&&2y&&\;-\;&&z&&\;=\;&&1&\\2x&&\;-\;&&2y&&\;+\;&&4z&&\;=\;&&-2&\\-x&&\;+\;&&{\tfrac {1}{2}}y&&\;-\;&&z&&\;=\;&&0&\end{alignedat}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by x = 1 y = − 2 z = − 2 {\displaystyle {\begin{alignedat}{2}x&\,=\,&1\\y&\,=\,&-2\\z&\,=\,&-2\end{alignedat}}} since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually. In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. == Geometry == === Analytic geometry === In Euclidean geometry, it is possible to associate a set of coordinates with each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form a x + b y + c z + d = 0 {\displaystyle ax+by+cz+d=0} , where a , b , c {\displaystyle a,b,c} and d {\displaystyle d} are real numbers and x , y , z {\displaystyle x,y,z} are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values a , b , c {\displaystyle a,b,c} are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in R 2 {\displaystyle \mathbb {R} ^{2}} or as the solution set of two linear equations with values in R 3 . {\displaystyle \mathbb {R} ^{3}.} A conic section is the intersection of a cone with equation x 2 + y 2 = z 2 {\displaystyle x^{2}+y^{2}=z^{2}} and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the foci of a conic. The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians. 
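The statement above that (a, b, c) is a vector perpendicular to the plane ax + by + cz + d = 0 is easy to verify numerically. A sketch assuming NumPy, for the arbitrarily chosen sample plane x + 2y + 3z − 6 = 0: differences of points lying in the plane are orthogonal to the normal vector.

```python
import numpy as np

normal = np.array([1.0, 2.0, 3.0])    # (a, b, c) for x + 2y + 3z - 6 = 0

# Three points that satisfy the plane equation.
p1 = np.array([1.0, 1.0, 1.0])
p2 = np.array([6.0, 0.0, 0.0])
p3 = np.array([0.0, 3.0, 0.0])

# Any direction lying in the plane is orthogonal to the normal.
print(np.dot(normal, p2 - p1))   # 0.0
print(np.dot(normal, p3 - p1))   # 0.0
```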
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra. === Cartesian equations === In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics. One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines). The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. === Parametric equations === A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example, x = cos ⁡ t y = sin ⁡ t {\displaystyle {\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}} are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve. The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.). == Number theory == === Diophantine equations === A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is ax + by = c where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns. Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. 
The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. === Algebraic and transcendental numbers === An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental. === Algebraic geometry === Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations. == Differential equations == A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics. In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. === Ordinary differential equations === An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. 
By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions. === Partial differential equations === A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations. == Types of equations == Equations can be classified according to the types of operations and quantities involved. Important types include: An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree: linear equation for degree one quadratic equation for degree two cubic equation for degree three quartic equation for degree four quintic equation for degree five sextic equation for degree six septic equation for degree seven octic equation for degree eight A Diophantine equation is an equation where the unknowns are required to be integers A transcendental equation is an equation involving a transcendental function of its unknowns A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters appearing in the equations A functional equation is an equation in which the unknowns are functions rather than simple quantities Equations involving derivatives, integrals and finite differences: A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as f ′ ( x ) = x 2 {\displaystyle f'(x)=x^{2}} . Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative, however this is not the case when the integral is taken over an open surface An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable. 
A functional differential equation, or delay differential equation, is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as f ′ ( x ) = f ( x − 2 ) {\displaystyle f'(x)=f(x-2)} A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process == External links == Winplot: General-purpose plotter that can draw and animate 2D and 3D mathematical equations. Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
Wikipedia/Equation
In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality. A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set. An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions. For example, the equation x + y = 2x – 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) – 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x – 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0) (that is, x = 1, y = 0), and a = 1 gives (x, y) = (2, 1). The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y. However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quadratic equations. However, for some problems, all variables may assume either role. Depending on the context, solving an equation may consist of finding any solution (a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is the best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as, generally, solving methods start from a particular solution and search for a better one, repeating the process until eventually finding the best solution. == Overview == One general form of an equation is f ( x 1 , … , x n ) = c , {\displaystyle f\left(x_{1},\dots ,x_{n}\right)=c,} where f is a function, x1, ..., xn are the unknowns, and c is a constant. Its solutions are the elements of the inverse image (fiber) f − 1 ( c ) = { ( a 1 , … , a n ) ∈ D ∣ f ( a 1 , … , a n ) = c } , {\displaystyle f^{-1}(c)={\bigl \{}(a_{1},\dots ,a_{n})\in D\mid f\left(a_{1},\dots ,a_{n}\right)=c{\bigr \}},} where D is the domain of the function f. The set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite (there are infinitely many solutions).
For example, an equation such as 3 x + 2 y = 21 z , {\displaystyle 3x+2y=21z,} with unknowns x, y and z, can be put in the above form by subtracting 21z from both sides of the equation, to obtain 3 x + 2 y − 21 z = 0 {\displaystyle 3x+2y-21z=0} In this particular case there is not just one solution, but an infinite set of solutions, which can be written using set builder notation as { ( x , y , z ) ∣ 3 x + 2 y − 21 z = 0 } . {\displaystyle {\bigl \{}(x,y,z)\mid 3x+2y-21z=0{\bigr \}}.} One particular solution is x = 0, y = 0, z = 0. Two other solutions are x = 3, y = 6, z = 1, and x = 8, y = 9, z = 2. There is a unique plane in three-dimensional space which passes through the three points with these coordinates, and this plane is the set of all points whose coordinates are solutions of the equation. == Solution sets == The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all the equations or inequalities. If the solution set is empty, then there are no values of the unknowns that satisfy simultaneously all equations and inequalities. For a simple example, consider the equation x 2 = 2. {\displaystyle x^{2}=2.} This equation can be viewed as a Diophantine equation, that is, an equation for which only integer solutions are sought. In this case, the solution set is the empty set, since 2 is not the square of an integer. However, if one searches for real solutions, there are two solutions, √2 and –√2; in other words, the solution set is {√2, −√2}. When an equation contains several unknowns, and when one has several equations with more unknowns than equations, the solution set is often infinite. In this case, the solutions cannot be listed. For representing them, a parametrization is often useful, which consists of expressing the solutions in terms of some of the unknowns or auxiliary variables. This is always possible when all the equations are linear. Such infinite solution sets can naturally be interpreted as geometric shapes such as lines, curves (see picture), planes, and more generally algebraic varieties or manifolds. In particular, algebraic geometry may be viewed as the study of solution sets of algebraic equations. == Methods of solution == The methods for solving equations generally depend on the type of equation, both the kind of expressions in the equation and the kind of values that may be assumed by the unknowns. The variety in types of equations is large, and so are the corresponding methods. Only a few specific types are mentioned below. In general, given a class of equations, there may be no known systematic method (algorithm) that is guaranteed to work. This may be due to a lack of mathematical knowledge; some problems were only solved after centuries of effort. But this also reflects that, in general, no such method can exist: some problems are known to be unsolvable by an algorithm, such as Hilbert's tenth problem, which was proved unsolvable in 1970. For several classes of equations, algorithms have been found for solving them, some of which have been implemented and incorporated in computer algebra systems, but often require no more sophisticated technology than pencil and paper. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to lead to success. 
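The distinction above between symbolic and numerical solving can be illustrated with a computer algebra system. The following is a minimal sketch, assuming the sympy library is available, applied to the earlier example x + y = 2x – 1:

```python
# Sketch of symbolic equation solving with the sympy library (assumed
# installed), using the article's example x + y = 2x - 1.
from sympy import symbols, Eq, solve

x, y = symbols('x y')
equation = Eq(x + y, 2*x - 1)

print(solve(equation, x))  # [y + 1] : x = y + 1, the symbolic solution
print(solve(equation, y))  # [x - 1] : y = x - 1, taking y as the unknown
```

Choosing a different unknown changes the answer but not the equation, matching the discussion of designated unknowns above.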
=== Brute force, trial and error, inspired guess === If the solution set of an equation is restricted to a finite set (as is the case for equations in modular arithmetic, for example), or can be limited to a finite number of possibilities (as is the case with some Diophantine equations), the solution set can be found by brute force, that is, by testing each of the possible values (candidate solutions). It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods. As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess. === Elementary algebra === Equations involving linear or simple rational functions of a single real-valued unknown, say x, such as 8 x + 7 = 4 x + 35 or 4 x + 9 3 x + 4 = 2 , {\displaystyle 8x+7=4x+35\quad {\text{or}}\quad {\frac {4x+9}{3x+4}}=2\,,} can be solved using the methods of elementary algebra. === Systems of linear equations === Smaller systems of linear equations can be solved likewise by methods of elementary algebra. For solving larger systems, algorithms are used that are based on linear algebra. See Gaussian elimination and numerical solution of linear systems. === Polynomial equations === Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example 4 x 5 − x 3 − 3 = 0 {\displaystyle 4x^{5}-x^{3}-3=0} (by using the rational root theorem), and x 6 − 5 x 3 + 6 = 0 , {\displaystyle x^{6}-5x^{3}+6=0\,,} (by using the substitution x = z^{1/3}, which simplifies this to a quadratic equation in z). === Diophantine equations === In Diophantine equations the solutions are required to be integers. In some cases a brute force approach can be used, as mentioned above. In some other cases, in particular if the equation is in one unknown, it is possible to solve the equation for rational-valued unknowns (see Rational root theorem), and then find solutions to the Diophantine equation by restricting the solution set to integer-valued solutions. For example, the polynomial equation 2 x 5 − 5 x 4 − x 3 − 7 x 2 + 2 x + 3 = 0 {\displaystyle 2x^{5}-5x^{4}-x^{3}-7x^{2}+2x+3=0\,} has as rational solutions x = −1/2 and x = 3, and so, viewed as a Diophantine equation, it has the unique solution x = 3. In general, however, Diophantine equations are among the most difficult equations to solve. === Inverse functions === In the simple case of a function of one variable, say, h(x), we can solve an equation of the form h(x) = c for some constant c by considering what is known as the inverse function of h. Given a function h : A → B, the inverse function, denoted h−1 and defined as h−1 : B → A, is a function such that h − 1 ( h ( x ) ) = h ( h − 1 ( x ) ) = x .
{\displaystyle h^{-1}{\bigl (}h(x){\bigr )}=h{\bigl (}h^{-1}(x){\bigr )}=x\,.} Now, if we apply the inverse function to both sides of h(x) = c, where c is a constant value in B, we obtain h − 1 ( h ( x ) ) = h − 1 ( c ) x = h − 1 ( c ) {\displaystyle {\begin{aligned}h^{-1}{\bigl (}h(x){\bigr )}&=h^{-1}(c)\\x&=h^{-1}(c)\\\end{aligned}}} and we have found the solution to the equation. However, depending on the function, the inverse may be difficult to define, may not be a function on all of the set B (only on some subset), or may take many values at some point. If just one solution will do, instead of the full solution set, it is actually sufficient if only the functional identity h ( h − 1 ( x ) ) = x {\displaystyle h\left(h^{-1}(x)\right)=x} holds. For example, the projection π1 : R^2 → R defined by π1(x, y) = x has no post-inverse, but it has a pre-inverse π1^{−1} defined by π1^{−1}(x) = (x, 0). Indeed, the equation π1(x, y) = c is solved by ( x , y ) = π 1 − 1 ( c ) = ( c , 0 ) . {\displaystyle (x,y)=\pi _{1}^{-1}(c)=(c,0).} Examples of inverse functions include the nth root (inverse of x^n); the logarithm (inverse of a^x); the inverse trigonometric functions; and Lambert's W function (inverse of xe^x). === Factorization === If the left-hand side expression of an equation P = 0 can be factorized as P = QR, the solution set of the original equation consists of the union of the solution sets of the two equations Q = 0 and R = 0. For example, the equation tan x + cot x = 2 {\displaystyle \tan x+\cot x=2} can be rewritten, using the identity tan x cot x = 1, as tan 2 x − 2 tan x + 1 tan x = 0 , {\displaystyle {\frac {\tan ^{2}x-2\tan x+1}{\tan x}}=0,} which can be factorized into ( tan x − 1 ) 2 tan x = 0. {\displaystyle {\frac {\left(\tan x-1\right)^{2}}{\tan x}}=0.} The solutions are thus the solutions of the equation tan x = 1, namely the set x = π 4 + k π , k = 0 , ± 1 , ± 2 , … . {\displaystyle x={\tfrac {\pi }{4}}+k\pi ,\quad k=0,\pm 1,\pm 2,\ldots .} === Numerical methods === With more complicated equations in real or complex numbers, simple methods to solve equations can fail. Often, root-finding algorithms like the Newton–Raphson method can be used to find a numerical solution to an equation, which, for some applications, can be entirely sufficient to solve the problem. There are also numerical methods for systems of linear equations. === Matrix equations === Equations involving matrices and vectors of real numbers can often be solved by using methods from linear algebra. === Differential equations === There is a vast body of methods for solving various kinds of differential equations, both numerically and analytically. A particular class of problem that can be considered to belong here is integration, and the analytic methods for solving this kind of problem are now called symbolic integration. Solutions of differential equations can be implicit or explicit. == See also == Extraneous and missing solutions Simultaneous equations Equating coefficients Solving the geodesic equations Unification (computer science) — solving equations involving symbolic expressions == References ==
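The Newton–Raphson iteration mentioned under numerical methods above can be sketched in a few lines. This is illustrative only: the test equation x^2 − 2 = 0 and the starting point are arbitrary choices, and production code would also guard against a vanishing derivative.

```python
# Sketch of the Newton-Raphson root-finding iteration mentioned above.
# The function, its derivative, and the starting point are illustrative.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 numerically: the positive root is sqrt(2).
root = newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)
print(root)  # 1.4142135623730951
```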
Wikipedia/Equation_solving
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories. A covariety is the class of all coalgebraic structures of a given signature. == Terminology == A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common. The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication. == Definition == A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as v = w. A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function oA : An → A such that for each axiom v = w and each assignment of elements of A to the variables in that axiom, the equation holds that is given by applying the operations to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras. Given two algebras of a theory T, say A and B, a homomorphism is a function f : A → B such that f ( o A ( a 1 , … , a n ) ) = o B ( f ( a 1 ) , … , f ( a n ) ) {\displaystyle f(o_{A}(a_{1},\dots ,a_{n}))=o_{B}(f(a_{1}),\dots ,f(a_{n}))} for every operation o of arity n. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms. == Examples == The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law: x ( y z ) = ( x y ) z . {\displaystyle x(yz)=(xy)z.} The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities: x ( y z ) = ( x y ) z {\displaystyle x(yz)=(xy)z} 1 x = x 1 = x {\displaystyle 1x=x1=x} x x − 1 = x − 1 x = 1. {\displaystyle xx^{-1}=x^{-1}x=1.} The class of rings also forms a variety of algebras. 
The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation). If we fix a specific ring R, we can consider the class of left R-modules. To express the scalar multiplication with elements from R, we need one unary operation for each element of R. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left R-modules do form a variety of algebras. The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below). The cancellative semigroups also do not form a variety of algebras, since the cancellation property is not an equation; it is an implication that is not equivalent to any set of equations. However, they do form a quasivariety, as the implication defining the cancellation property is an example of a quasi-identity. == Birkhoff's variety theorem == Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and is known as Birkhoff's variety theorem or the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product. One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse—classes of algebras closed under the HSP operations must be equational—is more difficult. Using the easy direction of Birkhoff's theorem, we can, for example, verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety. == Subvarieties == A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities. Notice that although every group becomes a semigroup when the identity (as a constant) and/or the inverse operation are omitted, the class of groups does not form a subvariety of the variety of semigroups because the signatures are different. Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains ⟨ Z , + ⟩ {\displaystyle \langle \mathbb {Z} ,+\rangle } and does not contain its subalgebra (more precisely, submonoid) ⟨ N , + ⟩ {\displaystyle \langle \mathbb {N} ,+\rangle } . However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying xy = yx, with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they don't form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated.
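Because subvarieties are cut out by identities, membership of a finite algebra in a subvariety can be tested by brute force over all assignments of elements to variables. The sketch below, with illustrative example operations, checks the commutativity identity xy = yx that defines the subvariety of abelian groups:

```python
# Sketch: brute-force test of whether a finite algebra satisfies an
# identity, here the commutativity law xy = yx defining the subvariety
# of abelian groups. The two example operations are illustrative.
from itertools import product, permutations

def satisfies_commutativity(elements, op):
    """Check the identity op(x, y) == op(y, x) over all pairs."""
    return all(op(x, y) == op(y, x) for x, y in product(elements, repeat=2))

# Z/4 under addition modulo 4: abelian, so the identity holds.
print(satisfies_commutativity(range(4), lambda x, y: (x + y) % 4))  # True

# S3, the permutations of {0, 1, 2} under composition: not abelian.
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
print(satisfies_commutativity(s3, compose))  # False
```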
Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V. == Free objects == Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra FS on S. This means that there is an injective set map i : S → FS that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : FS → A such that f ∘ i = k. This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra. == Category theory == Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor G : V → Set has a left adjoint F : Set → V, namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category SetT for the monad T = GF. Moreover the monad T is finitary, meaning it commutes with filtered colimits. The monad T : Set → Set is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories. Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set). Every finitary algebraic category is a locally presentable category. == Pseudovariety of finite algebras == Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities. A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived. 
Namely, a class of finite monoids is a variety of finite monoids if and only if it can be defined by a set of profinite identities. Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups. == See also == Quasivariety == Notes == == External links ==
Wikipedia/Variety_(universal_algebra)
In mathematics, a Lie algebra (pronounced LEE) is a vector space g {\displaystyle {\mathfrak {g}}} together with an operation called the Lie bracket, an alternating bilinear map g × g → g {\displaystyle {\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}} , that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors x {\displaystyle x} and y {\displaystyle y} is denoted [ x , y ] {\displaystyle [x,y]} . A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} . Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra. In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space g {\displaystyle {\mathfrak {g}}} to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give g {\displaystyle {\mathfrak {g}}} the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces. In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics. An elementary example (not directly coming from an associative algebra) is the 3-dimensional space g = R 3 {\displaystyle {\mathfrak {g}}=\mathbb {R} ^{3}} with Lie bracket defined by the cross product [ x , y ] = x × y . {\displaystyle [x,y]=x\times y.} This is skew-symmetric since x × y = − y × x {\displaystyle x\times y=-y\times x} , and instead of associativity it satisfies the Jacobi identity: x × ( y × z ) + y × ( z × x ) + z × ( x × y ) = 0. {\displaystyle x\times (y\times z)+\ y\times (z\times x)+\ z\times (x\times y)\ =\ 0.} This is the Lie algebra of the Lie group of rotations of space, and each vector v ∈ R 3 {\displaystyle v\in \mathbb {R} ^{3}} may be pictured as an infinitesimal rotation around the axis v {\displaystyle v} , with angular speed equal to the magnitude of v {\displaystyle v} . The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property [ x , x ] = x × x = 0 {\displaystyle [x,x]=x\times x=0} . 
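These properties can be spot-checked numerically. The following minimal sketch, assuming the NumPy library, verifies the alternating property and the Jacobi identity for the cross-product bracket at randomly chosen vectors (the seed is an arbitrary choice):

```python
# Numerical spot-check of the bracket [x, y] = x cross y on R^3:
# the alternating property and the Jacobi identity, at random vectors.
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))   # three random vectors in R^3

bracket = np.cross

print(np.allclose(bracket(x, x), 0))    # alternating: [x, x] = 0
jacobi = (bracket(x, bracket(y, z))
          + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
print(np.allclose(jacobi, 0))           # Jacobi identity holds
```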
== History == Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used. == Definition of a Lie algebra == A Lie algebra is a vector space g {\displaystyle \,{\mathfrak {g}}} over a field F {\displaystyle F} together with a binary operation [ ⋅ , ⋅ ] : g × g → g {\displaystyle [\,\cdot \,,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} called the Lie bracket, satisfying the following axioms: Bilinearity, [ a x + b y , z ] = a [ x , z ] + b [ y , z ] , {\displaystyle [ax+by,z]=a[x,z]+b[y,z],} [ z , a x + b y ] = a [ z , x ] + b [ z , y ] {\displaystyle [z,ax+by]=a[z,x]+b[z,y]} for all scalars a , b {\displaystyle a,b} in F {\displaystyle F} and all elements x , y , z {\displaystyle x,y,z} in g {\displaystyle {\mathfrak {g}}} . The Alternating property, [ x , x ] = 0 {\displaystyle [x,x]=0\ } for all x {\displaystyle x} in g {\displaystyle {\mathfrak {g}}} . The Jacobi identity, [ x , [ y , z ] ] + [ z , [ x , y ] ] + [ y , [ z , x ] ] = 0 {\displaystyle [x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0\ } for all x , y , z {\displaystyle x,y,z} in g {\displaystyle {\mathfrak {g}}} . Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation. Using bilinearity to expand the Lie bracket [ x + y , x + y ] {\displaystyle [x+y,x+y]} and using the alternating property shows that [ x , y ] + [ y , x ] = 0 {\displaystyle [x,y]+[y,x]=0} for all x , y {\displaystyle x,y} in g {\displaystyle {\mathfrak {g}}} . Thus bilinearity and the alternating property together imply Anticommutativity, [ x , y ] = − [ y , x ] , {\displaystyle [x,y]=-[y,x],\ } for all x , y {\displaystyle x,y} in g {\displaystyle {\mathfrak {g}}} . If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies [ x , x ] = − [ x , x ] . {\displaystyle [x,x]=-[x,x].} It is customary to denote a Lie algebra by a lower-case fraktur letter such as g , h , b , n {\displaystyle {\mathfrak {g,h,b,n}}} . If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is s u ( n ) {\displaystyle {\mathfrak {su}}(n)} . === Generators and dimension === The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra g {\displaystyle {\mathfrak {g}}} means a subset of g {\displaystyle {\mathfrak {g}}} such that any Lie subalgebra (as defined below) that contains S must be all of g {\displaystyle {\mathfrak {g}}} . Equivalently, g {\displaystyle {\mathfrak {g}}} is spanned (as a vector space) by all iterated brackets of elements of S. == Basic examples == === Abelian Lie algebras === A Lie algebra is called abelian if its Lie bracket is identically zero. Any vector space V {\displaystyle V} endowed with the identically zero Lie bracket becomes a Lie algebra. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket. 
=== The Lie algebra of matrices === On an associative algebra A {\displaystyle A} over a field F {\displaystyle F} with multiplication written as x y {\displaystyle xy} , a Lie bracket may be defined by the commutator [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} . With this bracket, A {\displaystyle A} is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on A {\displaystyle A} .) The endomorphism ring of an F {\displaystyle F} -vector space V {\displaystyle V} with the above Lie bracket is denoted g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} . For a field F and a positive integer n, the space of n × n matrices over F, denoted g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} or g l n ( F ) {\displaystyle {\mathfrak {gl}}_{n}(F)} , is a Lie algebra with bracket given by the commutator of matrices: [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra. When F is the real numbers, g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} is the Lie algebra of the general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} , the group of invertible n x n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, g l ( n , C ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {C} )} is the Lie algebra of the complex Lie group G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} . The Lie bracket on g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} can be viewed as the Lie algebra of the algebraic group G L ( n ) {\displaystyle \mathrm {GL} (n)} over F. == Definitions == === Subalgebras, ideals and homomorphisms === The Lie bracket is not required to be associative, meaning that [ [ x , y ] , z ] {\displaystyle [[x,y],z]} need not be equal to [ x , [ y , z ] ] {\displaystyle [x,[y,z]]} . Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace h ⊆ g {\displaystyle {\mathfrak {h}}\subseteq {\mathfrak {g}}} which is closed under the Lie bracket. An ideal i ⊆ g {\displaystyle {\mathfrak {i}}\subseteq {\mathfrak {g}}} is a linear subspace that satisfies the stronger condition: [ g , i ] ⊆ i . {\displaystyle [{\mathfrak {g}},{\mathfrak {i}}]\subseteq {\mathfrak {i}}.} In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals. A Lie algebra homomorphism is a linear map compatible with the respective Lie brackets: ϕ : g → h , ϕ ( [ x , y ] ) = [ ϕ ( x ) , ϕ ( y ) ] for all x , y ∈ g . {\displaystyle \phi \colon {\mathfrak {g}}\to {\mathfrak {h}},\quad \phi ([x,y])=[\phi (x),\phi (y)]\ {\text{for all}}\ x,y\in {\mathfrak {g}}.} An isomorphism of Lie algebras is a bijective homomorphism. As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms. 
Given a Lie algebra g {\displaystyle {\mathfrak {g}}} and an ideal i {\displaystyle {\mathfrak {i}}} in it, the quotient Lie algebra g / i {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}} is defined, with a surjective homomorphism g → g / i {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}} of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism ϕ : g → h {\displaystyle \phi \colon {\mathfrak {g}}\to {\mathfrak {h}}} of Lie algebras, the image of ϕ {\displaystyle \phi } is a Lie subalgebra of h {\displaystyle {\mathfrak {h}}} that is isomorphic to g / ker ( ϕ ) {\displaystyle {\mathfrak {g}}/{\text{ker}}(\phi )} . For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} are said to commute if their bracket vanishes: [ x , y ] = 0 {\displaystyle [x,y]=0} . The centralizer subalgebra of a subset S ⊂ g {\displaystyle S\subset {\mathfrak {g}}} is the set of elements commuting with S {\displaystyle S} : that is, z g ( S ) = { x ∈ g : [ x , s ] = 0 for all s ∈ S } {\displaystyle {\mathfrak {z}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]=0\ {\text{ for all }}s\in S\}} . The centralizer of g {\displaystyle {\mathfrak {g}}} itself is the center z ( g ) {\displaystyle {\mathfrak {z}}({\mathfrak {g}})} . Similarly, for a subspace S, the normalizer subalgebra of S {\displaystyle S} is n g ( S ) = { x ∈ g : [ x , s ] ∈ S for all s ∈ S } {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]\in S\ {\text{ for all}}\ s\in S\}} . If S {\displaystyle S} is a Lie subalgebra, n g ( S ) {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)} is the largest subalgebra such that S {\displaystyle S} is an ideal of n g ( S ) {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)} . ==== Example ==== The subspace t n {\displaystyle {\mathfrak {t}}_{n}} of diagonal matrices in g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is an abelian Lie subalgebra. (It is a Cartan subalgebra of g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} , analogous to a maximal torus in the theory of compact Lie groups.) Here t n {\displaystyle {\mathfrak {t}}_{n}} is not an ideal in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} for n ≥ 2 {\displaystyle n\geq 2} . For example, when n = 2 {\displaystyle n=2} , this follows from the calculation: [ [ a b c d ] , [ x 0 0 y ] ] = [ a x b y c x d y ] − [ a x b x c y d y ] = [ 0 b ( y − x ) c ( x − y ) 0 ] {\displaystyle {\begin{aligned}\left[{\begin{bmatrix}a&b\\c&d\end{bmatrix}},{\begin{bmatrix}x&0\\0&y\end{bmatrix}}\right]&={\begin{bmatrix}ax&by\\cx&dy\\\end{bmatrix}}-{\begin{bmatrix}ax&bx\\cy&dy\\\end{bmatrix}}\\&={\begin{bmatrix}0&b(y-x)\\c(x-y)&0\end{bmatrix}}\end{aligned}}} (which is not always in t 2 {\displaystyle {\mathfrak {t}}_{2}} ). Every one-dimensional linear subspace of a Lie algebra g {\displaystyle {\mathfrak {g}}} is an abelian Lie subalgebra, but it need not be an ideal. === Product and semidirect product === For two Lie algebras g {\displaystyle {\mathfrak {g}}} and g ′ {\displaystyle {\mathfrak {g'}}} , the product Lie algebra is the vector space g × g ′ {\displaystyle {\mathfrak {g}}\times {\mathfrak {g'}}} consisting of all ordered pairs ( x , x ′ ) , x ∈ g , x ′ ∈ g ′ {\displaystyle (x,x'),\,x\in {\mathfrak {g}},\ x'\in {\mathfrak {g'}}} , with Lie bracket [ ( x , x ′ ) , ( y , y ′ ) ] = ( [ x , y ] , [ x ′ , y ′ ] ) . 
{\displaystyle [(x,x'),(y,y')]=([x,y],[x',y']).} This is the product in the category of Lie algebras. Note that the copies of g {\displaystyle {\mathfrak {g}}} and g ′ {\displaystyle {\mathfrak {g}}'} in g × g ′ {\displaystyle {\mathfrak {g}}\times {\mathfrak {g'}}} commute with each other: [ ( x , 0 ) , ( 0 , x ′ ) ] = 0. {\displaystyle [(x,0),(0,x')]=0.} Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra and i {\displaystyle {\mathfrak {i}}} an ideal of g {\displaystyle {\mathfrak {g}}} . If the canonical map g → g / i {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}} splits (i.e., admits a section g / i → g {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}\to {\mathfrak {g}}} , as a homomorphism of Lie algebras), then g {\displaystyle {\mathfrak {g}}} is said to be a semidirect product of i {\displaystyle {\mathfrak {i}}} and g / i {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}} , g = g / i ⋉ i {\displaystyle {\mathfrak {g}}={\mathfrak {g}}/{\mathfrak {i}}\ltimes {\mathfrak {i}}} . See also semidirect sum of Lie algebras. === Derivations === For an algebra A over a field F, a derivation of A over F is a linear map D : A → A {\displaystyle D\colon A\to A} that satisfies the Leibniz rule D ( x y ) = D ( x ) y + x D ( y ) {\displaystyle D(xy)=D(x)y+xD(y)} for all x , y ∈ A {\displaystyle x,y\in A} . (The definition makes sense for a possibly non-associative algebra.) Given two derivations D 1 {\displaystyle D_{1}} and D 2 {\displaystyle D_{2}} , their commutator [ D 1 , D 2 ] := D 1 D 2 − D 2 D 1 {\displaystyle [D_{1},D_{2}]:=D_{1}D_{2}-D_{2}D_{1}} is again a derivation. This operation makes the space Der F ( A ) {\displaystyle {\text{Der}}_{F}(A)} of all derivations of A over F into a Lie algebra. Informally speaking, the space of derivations of A is the Lie algebra of the automorphism group of A. (This is literally true when the automorphism group is a Lie group, for example when F is the real numbers and A has finite dimension as a vector space.) For this reason, spaces of derivations are a natural way to construct Lie algebras: they are the "infinitesimal automorphisms" of A. Indeed, writing out the condition that ( 1 + ϵ D ) ( x y ) ≡ ( 1 + ϵ D ) ( x ) ⋅ ( 1 + ϵ D ) ( y ) ( mod ϵ 2 ) {\displaystyle (1+\epsilon D)(xy)\equiv (1+\epsilon D)(x)\cdot (1+\epsilon D)(y){\pmod {\epsilon ^{2}}}} (where 1 denotes the identity map on A) gives exactly the definition of D being a derivation. Example: the Lie algebra of vector fields. Let A be the ring C ∞ ( X ) {\displaystyle C^{\infty }(X)} of smooth functions on a smooth manifold X. Then a derivation of A over R {\displaystyle \mathbb {R} } is equivalent to a vector field on X. (A vector field v gives a derivation of the space of smooth functions by differentiating functions in the direction of v.) This makes the space Vect ( X ) {\displaystyle {\text{Vect}}(X)} of vector fields into a Lie algebra (see Lie bracket of vector fields). Informally speaking, Vect ( X ) {\displaystyle {\text{Vect}}(X)} is the Lie algebra of the diffeomorphism group of X. So the Lie bracket of vector fields describes the non-commutativity of the diffeomorphism group. An action of a Lie group G on a manifold X determines a homomorphism of Lie algebras g → Vect ( X ) {\displaystyle {\mathfrak {g}}\to {\text{Vect}}(X)} . (An example is illustrated below.)
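The statement above that the commutator of two derivations is again a derivation can be checked in a small example. The sketch below, assuming the sympy library, takes the derivations D1 = d/dx and D2 = x·d/dx on polynomials in one variable (vector fields on the line) and verifies the Leibniz rule for their commutator on illustrative test polynomials f and g:

```python
# Sketch: the commutator of two derivations is again a derivation.
# D1 = d/dx and D2 = x*d/dx act on polynomials in x; f and g are
# arbitrary illustrative test polynomials.
from sympy import symbols, diff, expand

x = symbols('x')
D1 = lambda h: diff(h, x)
D2 = lambda h: x * diff(h, x)
C  = lambda h: D1(D2(h)) - D2(D1(h))   # the commutator [D1, D2]

f = x**3 + 2*x
g = x**2 - 5

# Leibniz rule for the commutator: C(fg) = C(f) g + f C(g)
print(expand(C(f * g)) == expand(C(f) * g + f * C(g)))  # True
```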
A Lie algebra can be viewed as a non-associative algebra, and so each Lie algebra g {\displaystyle {\mathfrak {g}}} over a field F determines its Lie algebra of derivations, Der F ( g ) {\displaystyle {\text{Der}}_{F}({\mathfrak {g}})} . That is, a derivation of g {\displaystyle {\mathfrak {g}}} is a linear map D : g → g {\displaystyle D\colon {\mathfrak {g}}\to {\mathfrak {g}}} such that D ( [ x , y ] ) = [ D ( x ) , y ] + [ x , D ( y ) ] {\displaystyle D([x,y])=[D(x),y]+[x,D(y)]} . The inner derivation associated to any x ∈ g {\displaystyle x\in {\mathfrak {g}}} is the adjoint mapping a d x {\displaystyle \mathrm {ad} _{x}} defined by a d x ( y ) := [ x , y ] {\displaystyle \mathrm {ad} _{x}(y):=[x,y]} . (This is a derivation as a consequence of the Jacobi identity.) That gives a homomorphism of Lie algebras, ad : g → Der F ( g ) {\displaystyle \operatorname {ad} \colon {\mathfrak {g}}\to {\text{Der}}_{F}({\mathfrak {g}})} . The image Inn F ( g ) {\displaystyle {\text{Inn}}_{F}({\mathfrak {g}})} is an ideal in Der F ( g ) {\displaystyle {\text{Der}}_{F}({\mathfrak {g}})} , and the Lie algebra of outer derivations is defined as the quotient Lie algebra, Out F ( g ) = Der F ( g ) / Inn F ( g ) {\displaystyle {\text{Out}}_{F}({\mathfrak {g}})={\text{Der}}_{F}({\mathfrak {g}})/{\text{Inn}}_{F}({\mathfrak {g}})} . (This is exactly analogous to the outer automorphism group of a group.) For a semisimple Lie algebra (defined below) over a field of characteristic zero, every derivation is inner. This is related to the theorem that the outer automorphism group of a semisimple Lie group is finite. In contrast, an abelian Lie algebra has many outer derivations. Namely, for a vector space V {\displaystyle V} with Lie bracket zero, the Lie algebra Out F ( V ) {\displaystyle {\text{Out}}_{F}(V)} can be identified with g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} . == Examples == === Matrix Lie algebras === A matrix group is a Lie group consisting of invertible matrices, G ⊂ G L ( n , R ) {\displaystyle G\subset \mathrm {GL} (n,\mathbb {R} )} , where the group operation of G is matrix multiplication. The corresponding Lie algebra g {\displaystyle {\mathfrak {g}}} is the space of matrices which are tangent vectors to G inside the linear space M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} : this consists of derivatives of smooth curves in G at the identity matrix I {\displaystyle I} : g = { X = c ′ ( 0 ) ∈ M n ( R ) : smooth c : R → G , c ( 0 ) = I } . {\displaystyle {\mathfrak {g}}=\{X=c'(0)\in M_{n}(\mathbb {R} ):{\text{ smooth }}c:\mathbb {R} \to G,\ c(0)=I\}.} The Lie bracket of g {\displaystyle {\mathfrak {g}}} is given by the commutator of matrices, [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . Given a Lie algebra g ⊂ g l ( n , R ) {\displaystyle {\mathfrak {g}}\subset {\mathfrak {gl}}(n,\mathbb {R} )} , one can recover the Lie group as the subgroup generated by the matrix exponential of elements of g {\displaystyle {\mathfrak {g}}} . (To be precise, this gives the identity component of G, if G is not connected.) Here the exponential mapping exp : M n ( R ) → M n ( R ) {\displaystyle \exp :M_{n}(\mathbb {R} )\to M_{n}(\mathbb {R} )} is defined by exp ⁡ ( X ) = I + X + 1 2 ! X 2 + 1 3 ! X 3 + ⋯ {\displaystyle \exp(X)=I+X+{\tfrac {1}{2!}}X^{2}+{\tfrac {1}{3!}}X^{3}+\cdots } , which converges for every matrix X {\displaystyle X} . 
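As a small numerical illustration of this exponential correspondence, the following sketch, assuming NumPy and SciPy are available, exponentiates a skew-symmetric matrix (an element of the Lie algebra so(2), in the notation of the orthogonal-group example below) and checks that the result is orthogonal with determinant 1:

```python
# Sketch: exponentiating a Lie algebra element recovers a group element.
# A skew-symmetric matrix exponentiates to an orthogonal matrix of
# determinant 1, i.e. a rotation. The entry 1.3 is an arbitrary angle.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.3],
              [1.3,  0.0]])           # skew-symmetric: A.T == -A

R = expm(A)                           # matrix exponential of A
print(np.allclose(R.T @ R, np.eye(2)))   # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1))   # True: determinant 1
```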
The same comments apply to complex Lie subgroups of G L ( n , C ) {\displaystyle GL(n,\mathbb {C} )} and the complex matrix exponential, exp : M n ( C ) → M n ( C ) {\displaystyle \exp :M_{n}(\mathbb {C} )\to M_{n}(\mathbb {C} )} (defined by the same formula). Here are some matrix Lie groups and their Lie algebras. For a positive integer n, the special linear group S L ( n , R ) {\displaystyle \mathrm {SL} (n,\mathbb {R} )} consists of all real n × n matrices with determinant 1. This is the group of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to itself that preserve volume and orientation. More abstractly, S L ( n , R ) {\displaystyle \mathrm {SL} (n,\mathbb {R} )} is the commutator subgroup of the general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} . Its Lie algebra s l ( n , R ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {R} )} consists of all real n × n matrices with trace 0. Similarly, one can define the analogous complex Lie group S L ( n , C ) {\displaystyle {\rm {SL}}(n,\mathbb {C} )} and its Lie algebra s l ( n , C ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )} . The orthogonal group O ( n ) {\displaystyle \mathrm {O} (n)} plays a basic role in geometry: it is the group of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to itself that preserve the length of vectors. For example, rotations and reflections belong to O ( n ) {\displaystyle \mathrm {O} (n)} . Equivalently, this is the group of n x n orthogonal matrices, meaning that A T = A − 1 {\displaystyle A^{\mathrm {T} }=A^{-1}} , where A T {\displaystyle A^{\mathrm {T} }} denotes the transpose of a matrix. The orthogonal group has two connected components; the identity component is called the special orthogonal group S O ( n ) {\displaystyle \mathrm {SO} (n)} , consisting of the orthogonal matrices with determinant 1. Both groups have the same Lie algebra s o ( n ) {\displaystyle {\mathfrak {so}}(n)} , the subspace of skew-symmetric matrices in g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} ( X T = − X {\displaystyle X^{\rm {T}}=-X} ). See also infinitesimal rotations with skew-symmetric matrices. The complex orthogonal group O ( n , C ) {\displaystyle \mathrm {O} (n,\mathbb {C} )} , its identity component S O ( n , C ) {\displaystyle \mathrm {SO} (n,\mathbb {C} )} , and the Lie algebra s o ( n , C ) {\displaystyle {\mathfrak {so}}(n,\mathbb {C} )} are given by the same formulas applied to n x n complex matrices. Equivalently, O ( n , C ) {\displaystyle \mathrm {O} (n,\mathbb {C} )} is the subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} that preserves the standard symmetric bilinear form on C n {\displaystyle \mathbb {C} ^{n}} . The unitary group U ( n ) {\displaystyle \mathrm {U} (n)} is the subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} that preserves the length of vectors in C n {\displaystyle \mathbb {C} ^{n}} (with respect to the standard Hermitian inner product). Equivalently, this is the group of n × n unitary matrices (satisfying A ∗ = A − 1 {\displaystyle A^{*}=A^{-1}} , where A ∗ {\displaystyle A^{*}} denotes the conjugate transpose of a matrix). Its Lie algebra u ( n ) {\displaystyle {\mathfrak {u}}(n)} consists of the skew-hermitian matrices in g l ( n , C ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {C} )} ( X ∗ = − X {\displaystyle X^{*}=-X} ). This is a Lie algebra over R {\displaystyle \mathbb {R} } , not over C {\displaystyle \mathbb {C} } . 
(Indeed, i times a skew-hermitian matrix is hermitian, rather than skew-hermitian.) Likewise, the unitary group U ( n ) {\displaystyle \mathrm {U} (n)} is a real Lie subgroup of the complex Lie group G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} . For example, U ( 1 ) {\displaystyle \mathrm {U} (1)} is the circle group, and its Lie algebra (from this point of view) is i R ⊂ C = g l ( 1 , C ) {\displaystyle i\mathbb {R} \subset \mathbb {C} ={\mathfrak {gl}}(1,\mathbb {C} )} . The special unitary group S U ( n ) {\displaystyle \mathrm {SU} (n)} is the subgroup of matrices with determinant 1 in U ( n ) {\displaystyle \mathrm {U} (n)} . Its Lie algebra s u ( n ) {\displaystyle {\mathfrak {su}}(n)} consists of the skew-hermitian matrices with trace zero. The symplectic group S p ( 2 n , R ) {\displaystyle \mathrm {Sp} (2n,\mathbb {R} )} is the subgroup of G L ( 2 n , R ) {\displaystyle \mathrm {GL} (2n,\mathbb {R} )} that preserves the standard alternating bilinear form on R 2 n {\displaystyle \mathbb {R} ^{2n}} . Its Lie algebra is the symplectic Lie algebra s p ( 2 n , R ) {\displaystyle {\mathfrak {sp}}(2n,\mathbb {R} )} . The classical Lie algebras are those listed above, along with variants over any field. === Two dimensions === Some Lie algebras of low dimension are described here. See the classification of low-dimensional real Lie algebras for further examples. There is a unique nonabelian Lie algebra g {\displaystyle {\mathfrak {g}}} of dimension 2 over any field F, up to isomorphism. Here g {\displaystyle {\mathfrak {g}}} has a basis X , Y {\displaystyle X,Y} for which the bracket is given by [ X , Y ] = Y {\displaystyle \left[X,Y\right]=Y} . (This determines the Lie bracket completely, because the axioms imply that [ X , X ] = 0 {\displaystyle [X,X]=0} and [ Y , Y ] = 0 {\displaystyle [Y,Y]=0} .) Over the real numbers, g {\displaystyle {\mathfrak {g}}} can be viewed as the Lie algebra of the Lie group G = A f f ( 1 , R ) {\displaystyle G=\mathrm {Aff} (1,\mathbb {R} )} of affine transformations of the real line, x ↦ a x + b {\displaystyle x\mapsto ax+b} . The affine group G can be identified with the group of matrices ( a b 0 1 ) {\displaystyle \left({\begin{array}{cc}a&b\\0&1\end{array}}\right)} under matrix multiplication, with a , b ∈ R {\displaystyle a,b\in \mathbb {R} } , a ≠ 0 {\displaystyle a\neq 0} . Its Lie algebra is the Lie subalgebra g {\displaystyle {\mathfrak {g}}} of g l ( 2 , R ) {\displaystyle {\mathfrak {gl}}(2,\mathbb {R} )} consisting of all matrices ( c d 0 0 ) . {\displaystyle \left({\begin{array}{cc}c&d\\0&0\end{array}}\right).} In these terms, the basis above for g {\displaystyle {\mathfrak {g}}} is given by the matrices X = ( 1 0 0 0 ) , Y = ( 0 1 0 0 ) . {\displaystyle X=\left({\begin{array}{cc}1&0\\0&0\end{array}}\right),\qquad Y=\left({\begin{array}{cc}0&1\\0&0\end{array}}\right).} For any field F {\displaystyle F} , the 1-dimensional subspace F ⋅ Y {\displaystyle F\cdot Y} is an ideal in the 2-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , by the formula [ X , Y ] = Y ∈ F ⋅ Y {\displaystyle [X,Y]=Y\in F\cdot Y} . Both of the Lie algebras F ⋅ Y {\displaystyle F\cdot Y} and g / ( F ⋅ Y ) {\displaystyle {\mathfrak {g}}/(F\cdot Y)} are abelian (because 1-dimensional). In this sense, g {\displaystyle {\mathfrak {g}}} can be broken into abelian "pieces", meaning that it is solvable (though not nilpotent), in the terminology below. 
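A quick spot-check of this matrix realization, assuming NumPy, confirms the defining relation [X, Y] = Y for the basis matrices given above:

```python
# Spot-check of [X, Y] = Y for the matrix realization of the
# two-dimensional nonabelian Lie algebra described above.
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 0.0]])
Y = np.array([[0.0, 1.0],
              [0.0, 0.0]])

bracket = X @ Y - Y @ X   # commutator of matrices
print(np.allclose(bracket, Y))  # True: [X, Y] = Y
```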
=== Three dimensions === The Heisenberg algebra h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} over a field F is the three-dimensional Lie algebra with a basis X , Y , Z {\displaystyle X,Y,Z} such that [ X , Y ] = Z , [ X , Z ] = 0 , [ Y , Z ] = 0 {\displaystyle [X,Y]=Z,\quad [X,Z]=0,\quad [Y,Z]=0} . It can be viewed as the Lie algebra of 3×3 strictly upper-triangular matrices, with the commutator Lie bracket and the basis X = ( 0 1 0 0 0 0 0 0 0 ) , Y = ( 0 0 0 0 0 1 0 0 0 ) , Z = ( 0 0 1 0 0 0 0 0 0 ) . {\displaystyle X=\left({\begin{array}{ccc}0&1&0\\0&0&0\\0&0&0\end{array}}\right),\quad Y=\left({\begin{array}{ccc}0&0&0\\0&0&1\\0&0&0\end{array}}\right),\quad Z=\left({\begin{array}{ccc}0&0&1\\0&0&0\\0&0&0\end{array}}\right)~.\quad } Over the real numbers, h 3 ( R ) {\displaystyle {\mathfrak {h}}_{3}(\mathbb {R} )} is the Lie algebra of the Heisenberg group H 3 ( R ) {\displaystyle \mathrm {H} _{3}(\mathbb {R} )} , that is, the group of matrices ( 1 a c 0 1 b 0 0 1 ) {\displaystyle \left({\begin{array}{ccc}1&a&c\\0&1&b\\0&0&1\end{array}}\right)} under matrix multiplication. For any field F, the center of h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} is the 1-dimensional ideal F ⋅ Z {\displaystyle F\cdot Z} , and the quotient h 3 ( F ) / ( F ⋅ Z ) {\displaystyle {\mathfrak {h}}_{3}(F)/(F\cdot Z)} is abelian, isomorphic to F 2 {\displaystyle F^{2}} . In the terminology below, it follows that h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} is nilpotent (though not abelian). The Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of the rotation group SO(3) is the space of skew-symmetric 3 × 3 matrices over R {\displaystyle \mathbb {R} } . A basis is given by the three matrices F 1 = ( 0 0 0 0 0 − 1 0 1 0 ) , F 2 = ( 0 0 1 0 0 0 − 1 0 0 ) , F 3 = ( 0 − 1 0 1 0 0 0 0 0 ) . {\displaystyle F_{1}=\left({\begin{array}{ccc}0&0&0\\0&0&-1\\0&1&0\end{array}}\right),\quad F_{2}=\left({\begin{array}{ccc}0&0&1\\0&0&0\\-1&0&0\end{array}}\right),\quad F_{3}=\left({\begin{array}{ccc}0&-1&0\\1&0&0\\0&0&0\end{array}}\right)~.\quad } The commutation relations among these generators are [ F 1 , F 2 ] = F 3 , {\displaystyle [F_{1},F_{2}]=F_{3},} [ F 2 , F 3 ] = F 1 , {\displaystyle [F_{2},F_{3}]=F_{1},} [ F 3 , F 1 ] = F 2 . {\displaystyle [F_{3},F_{1}]=F_{2}.} The cross product of vectors in R 3 {\displaystyle \mathbb {R} ^{3}} is given by the same formula in terms of the standard basis; so that Lie algebra is isomorphic to s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . Also, s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} is equivalent to the spin angular-momentum component operators for spin-1 particles in quantum mechanics. The Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} cannot be broken into pieces in the way that the previous examples can: it is simple, meaning that it is not abelian and its only ideals are 0 and all of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . Another simple Lie algebra of dimension 3, in this case over C {\displaystyle \mathbb {C} } , is the space s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} of 2 × 2 matrices of trace zero. A basis is given by the three matrices H = ( 1 0 0 − 1 ) , E = ( 0 1 0 0 ) , F = ( 0 0 1 0 ) .
{\displaystyle H=\left({\begin{array}{cc}1&0\\0&-1\end{array}}\right),\ E=\left({\begin{array}{cc}0&1\\0&0\end{array}}\right),\ F=\left({\begin{array}{cc}0&0\\1&0\end{array}}\right).} The Lie bracket is given by: [ H , E ] = 2 E , {\displaystyle [H,E]=2E,} [ H , F ] = − 2 F , {\displaystyle [H,F]=-2F,} [ E , F ] = H . {\displaystyle [E,F]=H.} Using these formulas, one can show that the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} is simple, and classify its finite-dimensional representations (defined below). In the terminology of quantum mechanics, one can think of E and F as raising and lowering operators. Indeed, for any representation of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} , the relations above imply that E maps the c-eigenspace of H (for a complex number c) into the ( c + 2 ) {\displaystyle (c+2)} -eigenspace, while F maps the c-eigenspace into the ( c − 2 ) {\displaystyle (c-2)} -eigenspace. The Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} is isomorphic to the complexification of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , meaning the tensor product s o ( 3 ) ⊗ R C {\displaystyle {\mathfrak {so}}(3)\otimes _{\mathbb {R} }\mathbb {C} } . The formulas for the Lie bracket are easier to analyze in the case of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} . As a result, it is common to analyze complex representations of the group S O ( 3 ) {\displaystyle \mathrm {SO} (3)} by relating them to representations of the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} . === Infinite dimensions === The Lie algebra of vector fields on a smooth manifold of positive dimension is an infinite-dimensional Lie algebra over R {\displaystyle \mathbb {R} } . The Kac–Moody algebras are a large class of infinite-dimensional Lie algebras, say over C {\displaystyle \mathbb {C} } , with structure much like that of the finite-dimensional simple Lie algebras (such as s l ( n , C ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )} ). The Moyal algebra is an infinite-dimensional Lie algebra that contains all the classical Lie algebras as subalgebras. The Virasoro algebra is important in string theory. The functor that takes a Lie algebra over a field F to the underlying vector space has a left adjoint V ↦ L ( V ) {\displaystyle V\mapsto L(V)} , called the free Lie algebra on a vector space V. It is spanned by all iterated Lie brackets of elements of V, modulo only the relations coming from the definition of a Lie algebra. The free Lie algebra L ( V ) {\displaystyle L(V)} is infinite-dimensional for V of dimension at least 2. == Representations == === Definitions === Given a vector space V, let g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} denote the Lie algebra consisting of all linear maps from V to itself, with bracket given by [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . A representation of a Lie algebra g {\displaystyle {\mathfrak {g}}} on V is a Lie algebra homomorphism π : g → g l ( V ) . {\displaystyle \pi \colon {\mathfrak {g}}\to {\mathfrak {gl}}(V).} That is, π {\displaystyle \pi } sends each element of g {\displaystyle {\mathfrak {g}}} to a linear map from V to itself, in such a way that the Lie bracket on g {\displaystyle {\mathfrak {g}}} corresponds to the commutator of linear maps. A representation is said to be faithful if its kernel is zero. 
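The basis H, E, F above consists of 2 × 2 matrices, so it already gives a faithful two-dimensional representation of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} , the defining representation. The following sketch, assuming NumPy, verifies the three bracket relations numerically:

```python
# Spot-check of the sl(2, C) commutation relations stated above for the
# basis H, E, F of trace-zero 2 x 2 matrices.
import numpy as np

H = np.array([[1, 0], [0, -1]])
E = np.array([[0, 1], [0, 0]])
F = np.array([[0, 0], [1, 0]])

brk = lambda A, B: A @ B - B @ A   # commutator bracket

print(np.array_equal(brk(H, E), 2 * E))   # [H, E] = 2E
print(np.array_equal(brk(H, F), -2 * F))  # [H, F] = -2F
print(np.array_equal(brk(E, F), H))       # [E, F] = H
```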
Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero has a faithful representation on a finite-dimensional vector space. Kenkichi Iwasawa extended this result to finite-dimensional Lie algebras over a field of any characteristic. Equivalently, every finite-dimensional Lie algebra over a field F is isomorphic to a Lie subalgebra of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} for some positive integer n. === Adjoint representation === For any Lie algebra g {\displaystyle {\mathfrak {g}}} , the adjoint representation is the representation ad : g → g l ( g ) {\displaystyle \operatorname {ad} \colon {\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}})} given by ad ⁡ ( x ) ( y ) = [ x , y ] {\displaystyle \operatorname {ad} (x)(y)=[x,y]} . (This is a representation of g {\displaystyle {\mathfrak {g}}} by the Jacobi identity.) === Goals of representation theory === One important aspect of the study of Lie algebras (especially semisimple Lie algebras, as defined below) is the study of their representations. Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra g {\displaystyle {\mathfrak {g}}} . Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand all possible representations of g {\displaystyle {\mathfrak {g}}} . For a semisimple Lie algebra over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The finite-dimensional irreducible representations are well understood from several points of view; see the representation theory of semisimple Lie algebras and the Weyl character formula. === Universal enveloping algebra === The functor that takes an associative algebra A over a field F to A as a Lie algebra (by [ X , Y ] := X Y − Y X {\displaystyle [X,Y]:=XY-YX} ) has a left adjoint g ↦ U ( g ) {\displaystyle {\mathfrak {g}}\mapsto U({\mathfrak {g}})} , called the universal enveloping algebra. To construct this: given a Lie algebra g {\displaystyle {\mathfrak {g}}} over F, let T ( g ) = F ⊕ g ⊕ ( g ⊗ g ) ⊕ ( g ⊗ g ⊗ g ) ⊕ ⋯ {\displaystyle T({\mathfrak {g}})=F\oplus {\mathfrak {g}}\oplus ({\mathfrak {g}}\otimes {\mathfrak {g}})\oplus ({\mathfrak {g}}\otimes {\mathfrak {g}}\otimes {\mathfrak {g}})\oplus \cdots } be the tensor algebra on g {\displaystyle {\mathfrak {g}}} , also called the free associative algebra on the vector space g {\displaystyle {\mathfrak {g}}} . Here ⊗ {\displaystyle \otimes } denotes the tensor product of F-vector spaces. Let I be the two-sided ideal in T ( g ) {\displaystyle T({\mathfrak {g}})} generated by the elements X Y − Y X − [ X , Y ] {\displaystyle XY-YX-[X,Y]} for X , Y ∈ g {\displaystyle X,Y\in {\mathfrak {g}}} ; then the universal enveloping algebra is the quotient ring U ( g ) = T ( g ) / I {\displaystyle U({\mathfrak {g}})=T({\mathfrak {g}})/I} . It satisfies the Poincaré–Birkhoff–Witt theorem: if e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} is a basis for g {\displaystyle {\mathfrak {g}}} as an F-vector space, then a basis for U ( g ) {\displaystyle U({\mathfrak {g}})} is given by all ordered products e 1 i 1 ⋯ e n i n {\displaystyle e_{1}^{i_{1}}\cdots e_{n}^{i_{n}}} with i 1 , … , i n {\displaystyle i_{1},\ldots ,i_{n}} natural numbers. 
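As a concrete illustration of the Poincaré–Birkhoff–Witt theorem, the following sketch normal-orders words in U ( h 3 ) {\displaystyle U({\mathfrak {h}}_{3})} for the Heisenberg algebra of the earlier example, using only the rewriting rule YX = XY − Z together with the centrality of Z (plain Python; the function name normal_order is illustrative, not standard):

```python
def normal_order(word):
    """Expand a word in the generators X, Y, Z of U(h3) as a linear
    combination of PBW monomials X^i Y^j Z^k, using Y X = X Y - Z
    and the fact that Z is central in U(h3)."""
    rank = {"X": 0, "Y": 1, "Z": 2}
    result = {}

    def add(w, c):
        for p in range(len(w) - 1):
            if rank[w[p]] > rank[w[p + 1]]:
                # Swap the out-of-order adjacent pair ...
                add(w[:p] + [w[p + 1], w[p]] + w[p + 2:], c)
                # ... and, for the pair Y X, add the correction -Z.
                if {w[p], w[p + 1]} == {"X", "Y"}:
                    add(w[:p] + ["Z"] + w[p + 2:], -c)
                return
        key = (w.count("X"), w.count("Y"), w.count("Z"))
        result[key] = result.get(key, 0) + c

    add(list(word), 1)
    return {k: c for k, c in result.items() if c != 0}

print(normal_order("YX"))    # {(1, 1, 0): 1, (0, 0, 1): -1}: YX = XY - Z
print(normal_order("YYX"))   # {(1, 2, 0): 1, (0, 1, 1): -2}: YYX = XY^2 - 2YZ
```

The rewriting always terminates, so every element of U ( h 3 ) {\displaystyle U({\mathfrak {h}}_{3})} is a combination of the ordered monomials, which by the theorem form a basis.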
In particular, the map g → U ( g ) {\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})} is injective. Representations of g {\displaystyle {\mathfrak {g}}} are equivalent to modules over the universal enveloping algebra. The fact that g → U ( g ) {\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})} is injective implies that every Lie algebra (possibly of infinite dimension) has a faithful representation (of infinite dimension), namely its representation on U ( g ) {\displaystyle U({\mathfrak {g}})} . This also shows that every Lie algebra is contained in the Lie algebra associated to some associative algebra. === Representation theory in physics === The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem—specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example is the angular momentum operators, whose commutation relations are those of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of the rotation group S O ( 3 ) {\displaystyle \mathrm {SO} (3)} . Typically, the space of states is far from being irreducible under the pertinent operators, but one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the hydrogen atom, for example, quantum mechanics textbooks classify (more or less explicitly) the finite-dimensional irreducible representations of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . == Structure theory and classification == Lie algebras can be classified to some extent. This is a powerful approach to the classification of Lie groups. === Abelian, nilpotent, and solvable === Analogously to abelian, nilpotent, and solvable groups, one can define abelian, nilpotent, and solvable Lie algebras. A Lie algebra g {\displaystyle {\mathfrak {g}}} is abelian if the Lie bracket vanishes; that is, [x,y] = 0 for all x and y in g {\displaystyle {\mathfrak {g}}} . In particular, the Lie algebra of an abelian Lie group (such as the group R n {\displaystyle \mathbb {R} ^{n}} under addition or the torus group T n {\displaystyle \mathbb {T} ^{n}} ) is abelian. Every finite-dimensional abelian Lie algebra over a field F {\displaystyle F} is isomorphic to F n {\displaystyle F^{n}} for some n ≥ 0 {\displaystyle n\geq 0} , meaning an n-dimensional vector space with Lie bracket zero. A more general class of Lie algebras is defined by the vanishing of all commutators of given length. First, the commutator subalgebra (or derived subalgebra) of a Lie algebra g {\displaystyle {\mathfrak {g}}} is [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} , meaning the linear subspace spanned by all brackets [ x , y ] {\displaystyle [x,y]} with x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} . The commutator subalgebra is an ideal in g {\displaystyle {\mathfrak {g}}} , in fact the smallest ideal such that the quotient Lie algebra is abelian. It is analogous to the commutator subgroup of a group. 
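In small matrix examples the commutator subalgebra can be computed mechanically. A minimal sketch, assuming nothing beyond NumPy's rank computation: flatten all brackets of basis elements and measure the dimension of their span.

```python
import numpy as np

def derived_dim(basis):
    """Dimension of [g, g] for a Lie algebra of matrices with the
    given basis: the rank of the span of all pairwise brackets."""
    rows = [(a @ b - b @ a).ravel() for a in basis for b in basis]
    return np.linalg.matrix_rank(np.array(rows))

# Heisenberg algebra: [h3, h3] is the 1-dimensional center F.Z,
# so the quotient by it is abelian.
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float)
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], float)
print(derived_dim([X, Y, Z]))         # 1

# Upper-triangular 2x2 matrices: the commutator subalgebra is spanned
# by the single strictly upper-triangular matrix E12.
E11 = np.array([[1, 0], [0, 0]], float)
E22 = np.array([[0, 0], [0, 1]], float)
E12 = np.array([[0, 1], [0, 0]], float)
print(derived_dim([E11, E22, E12]))   # 1
```

For s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , by contrast, the same computation returns 3: the algebra equals its own commutator subalgebra, consistent with its simplicity.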
A Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if the lower central series g ⊇ [ g , g ] ⊇ [ [ g , g ] , g ] ⊇ [ [ [ g , g ] , g ] , g ] ⊇ ⋯ {\displaystyle {\mathfrak {g}}\supseteq [{\mathfrak {g}},{\mathfrak {g}}]\supseteq [[{\mathfrak {g}},{\mathfrak {g}}],{\mathfrak {g}}]\supseteq [[[{\mathfrak {g}},{\mathfrak {g}}],{\mathfrak {g}}],{\mathfrak {g}}]\supseteq \cdots } becomes zero after finitely many steps. Equivalently, g {\displaystyle {\mathfrak {g}}} is nilpotent if there is a finite sequence of ideals in g {\displaystyle {\mathfrak {g}}} , 0 = a 0 ⊆ a 1 ⊆ ⋯ ⊆ a r = g , {\displaystyle 0={\mathfrak {a}}_{0}\subseteq {\mathfrak {a}}_{1}\subseteq \cdots \subseteq {\mathfrak {a}}_{r}={\mathfrak {g}},} such that a j / a j − 1 {\displaystyle {\mathfrak {a}}_{j}/{\mathfrak {a}}_{j-1}} is central in g / a j − 1 {\displaystyle {\mathfrak {g}}/{\mathfrak {a}}_{j-1}} for each j. By Engel's theorem, a Lie algebra over any field is nilpotent if and only if for every u in g {\displaystyle {\mathfrak {g}}} the adjoint endomorphism ad ⁡ ( u ) : g → g , ad ⁡ ( u ) v = [ u , v ] {\displaystyle \operatorname {ad} (u):{\mathfrak {g}}\to {\mathfrak {g}},\quad \operatorname {ad} (u)v=[u,v]} is nilpotent. More generally, a Lie algebra g {\displaystyle {\mathfrak {g}}} is said to be solvable if the derived series: g ⊇ [ g , g ] ⊇ [ [ g , g ] , [ g , g ] ] ⊇ [ [ [ g , g ] , [ g , g ] ] , [ [ g , g ] , [ g , g ] ] ] ⊇ ⋯ {\displaystyle {\mathfrak {g}}\supseteq [{\mathfrak {g}},{\mathfrak {g}}]\supseteq [[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]\supseteq [[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]],[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]]\supseteq \cdots } becomes zero after finitely many steps. Equivalently, g {\displaystyle {\mathfrak {g}}} is solvable if there is a finite sequence of Lie subalgebras, 0 = m 0 ⊆ m 1 ⊆ ⋯ ⊆ m r = g , {\displaystyle 0={\mathfrak {m}}_{0}\subseteq {\mathfrak {m}}_{1}\subseteq \cdots \subseteq {\mathfrak {m}}_{r}={\mathfrak {g}},} such that m j − 1 {\displaystyle {\mathfrak {m}}_{j-1}} is an ideal in m j {\displaystyle {\mathfrak {m}}_{j}} with m j / m j − 1 {\displaystyle {\mathfrak {m}}_{j}/{\mathfrak {m}}_{j-1}} abelian for each j. Every finite-dimensional Lie algebra over a field has a unique maximal solvable ideal, called its radical. Under the Lie correspondence, nilpotent (respectively, solvable) Lie groups correspond to nilpotent (respectively, solvable) Lie algebras over R {\displaystyle \mathbb {R} } . For example, for a positive integer n and a field F of characteristic zero, the radical of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is its center, the 1-dimensional subspace spanned by the identity matrix. An example of a solvable Lie algebra is the space b n {\displaystyle {\mathfrak {b}}_{n}} of upper-triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; this is not nilpotent when n ≥ 2 {\displaystyle n\geq 2} . An example of a nilpotent Lie algebra is the space u n {\displaystyle {\mathfrak {u}}_{n}} of strictly upper-triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; this is not abelian when n ≥ 3 {\displaystyle n\geq 3} . === Simple and semisimple === A Lie algebra g {\displaystyle {\mathfrak {g}}} is called simple if it is not abelian and the only ideals in g {\displaystyle {\mathfrak {g}}} are 0 and g {\displaystyle {\mathfrak {g}}} . 
(In particular, a one-dimensional—necessarily abelian—Lie algebra g {\displaystyle {\mathfrak {g}}} is by definition not simple, even though its only ideals are 0 and g {\displaystyle {\mathfrak {g}}} .) A finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} is called semisimple if the only solvable ideal in g {\displaystyle {\mathfrak {g}}} is 0. In characteristic zero, a Lie algebra g {\displaystyle {\mathfrak {g}}} is semisimple if and only if it is isomorphic to a product of simple Lie algebras, g ≅ g 1 × ⋯ × g r {\displaystyle {\mathfrak {g}}\cong {\mathfrak {g}}_{1}\times \cdots \times {\mathfrak {g}}_{r}} . For example, the Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} is simple for every n ≥ 2 {\displaystyle n\geq 2} and every field F of characteristic zero (or just of characteristic not dividing n). The Lie algebra s u ( n ) {\displaystyle {\mathfrak {su}}(n)} over R {\displaystyle \mathbb {R} } is simple for every n ≥ 2 {\displaystyle n\geq 2} . The Lie algebra s o ( n ) {\displaystyle {\mathfrak {so}}(n)} over R {\displaystyle \mathbb {R} } is simple if n = 3 {\displaystyle n=3} or n ≥ 5 {\displaystyle n\geq 5} . (There are "exceptional isomorphisms" s o ( 3 ) ≅ s u ( 2 ) {\displaystyle {\mathfrak {so}}(3)\cong {\mathfrak {su}}(2)} and s o ( 4 ) ≅ s u ( 2 ) × s u ( 2 ) {\displaystyle {\mathfrak {so}}(4)\cong {\mathfrak {su}}(2)\times {\mathfrak {su}}(2)} .) The concept of semisimplicity for Lie algebras is closely related with the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, every finite-dimensional representation of a semisimple Lie algebra is semisimple (that is, a direct sum of irreducible representations). A finite-dimensional Lie algebra over a field of characteristic zero is called reductive if its adjoint representation is semisimple. Every reductive Lie algebra is isomorphic to the product of an abelian Lie algebra and a semisimple Lie algebra. For example, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is reductive for F of characteristic zero: for n ≥ 2 {\displaystyle n\geq 2} , it is isomorphic to the product g l ( n , F ) ≅ F × s l ( n , F ) , {\displaystyle {\mathfrak {gl}}(n,F)\cong F\times {\mathfrak {sl}}(n,F),} where F denotes the center of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} , the 1-dimensional subspace spanned by the identity matrix. Since the special linear Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} is simple, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} contains few ideals: only 0, the center F, s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} , and all of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} . === Cartan's criterion === Cartan's criterion (by Élie Cartan) gives conditions for a finite-dimensional Lie algebra of characteristic zero to be solvable or semisimple. It is expressed in terms of the Killing form, the symmetric bilinear form on g {\displaystyle {\mathfrak {g}}} defined by K ( u , v ) = tr ⁡ ( ad ⁡ ( u ) ad ⁡ ( v ) ) , {\displaystyle K(u,v)=\operatorname {tr} (\operatorname {ad} (u)\operatorname {ad} (v)),} where tr denotes the trace of a linear operator. Namely: a Lie algebra g {\displaystyle {\mathfrak {g}}} is semisimple if and only if the Killing form is nondegenerate. A Lie algebra g {\displaystyle {\mathfrak {g}}} is solvable if and only if K ( g , [ g , g ] ) = 0. 
{\displaystyle K({\mathfrak {g}},[{\mathfrak {g}},{\mathfrak {g}}])=0.} === Classification === The Levi decomposition asserts that every finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its solvable radical and a semisimple Lie algebra. Moreover, a semisimple Lie algebra in characteristic zero is a product of simple Lie algebras, as mentioned above. This focuses attention on the problem of classifying the simple Lie algebras. The simple Lie algebras of finite dimension over an algebraically closed field F of characteristic zero were classified by Killing and Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. Here the simple Lie algebra of type An is s l ( n + 1 , F ) {\displaystyle {\mathfrak {sl}}(n+1,F)} , Bn is s o ( 2 n + 1 , F ) {\displaystyle {\mathfrak {so}}(2n+1,F)} , Cn is s p ( 2 n , F ) {\displaystyle {\mathfrak {sp}}(2n,F)} , and Dn is s o ( 2 n , F ) {\displaystyle {\mathfrak {so}}(2n,F)} . The other five are known as the exceptional Lie algebras. The classification of finite-dimensional simple Lie algebras over R {\displaystyle \mathbb {R} } is more complicated, but it was also solved by Cartan (see simple Lie group for an equivalent classification). One can analyze a Lie algebra g {\displaystyle {\mathfrak {g}}} over R {\displaystyle \mathbb {R} } by considering its complexification g ⊗ R C {\displaystyle {\mathfrak {g}}\otimes _{\mathbb {R} }\mathbb {C} } . In the years leading up to 2004, the finite-dimensional simple Lie algebras over an algebraically closed field of characteristic p > 3 {\displaystyle p>3} were classified by Richard Earl Block, Robert Lee Wilson, Alexander Premet, and Helmut Strade. (See restricted Lie algebra#Classification of simple Lie algebras.) It turns out that there are many more simple Lie algebras in positive characteristic than in characteristic zero. == Relation to Lie groups == Although Lie algebras can be studied in their own right, historically they arose as a means to study Lie groups. The relationship between Lie groups and Lie algebras can be summarized as follows. Each Lie group determines a Lie algebra over R {\displaystyle \mathbb {R} } (concretely, the tangent space at the identity). Conversely, for every finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , there is a connected Lie group G {\displaystyle G} with Lie algebra g {\displaystyle {\mathfrak {g}}} . This is Lie's third theorem; see the Baker–Campbell–Hausdorff formula. This Lie group is not determined uniquely; however, any two Lie groups with the same Lie algebra are locally isomorphic, and more strongly, they have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitary group SU(2) have isomorphic Lie algebras, but SU(2) is a simply connected double cover of SO(3). For simply connected Lie groups, there is a complete correspondence: taking the Lie algebra gives an equivalence of categories from simply connected Lie groups to Lie algebras of finite dimension over R {\displaystyle \mathbb {R} } . The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of Lie groups and the representation theory of Lie groups. For finite-dimensional representations, there is an equivalence of categories between representations of a real Lie algebra and representations of the corresponding simply connected Lie group. 
This simplifies the representation theory of Lie groups: it is often easier to classify the representations of a Lie algebra, using linear algebra. Every connected Lie group is isomorphic to its universal cover modulo a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the discrete subgroups of the center, once the Lie algebra is known. For example, the real semisimple Lie algebras were classified by Cartan, and so the classification of semisimple Lie groups is well understood. For infinite-dimensional Lie algebras, Lie theory works less well. The exponential map need not be a local homeomorphism (for example, in the diffeomorphism group of the circle, there are diffeomorphisms arbitrarily close to the identity that are not in the image of the exponential map). Moreover, in terms of the existing notions of infinite-dimensional Lie groups, some infinite-dimensional Lie algebras do not come from any group. Lie theory also does not work so neatly for infinite-dimensional representations of a finite-dimensional group. Even for the additive group G = R {\displaystyle G=\mathbb {R} } , an infinite-dimensional representation of G {\displaystyle G} can usually not be differentiated to produce a representation of its Lie algebra on the same space, or vice versa. The theory of Harish-Chandra modules is a more subtle relation between infinite-dimensional representations for groups and Lie algebras. == Real form and complexification == Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a real Lie algebra g 0 {\displaystyle {\mathfrak {g}}_{0}} is said to be a real form of g {\displaystyle {\mathfrak {g}}} if the complexification g 0 ⊗ R C {\displaystyle {\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} } is isomorphic to g {\displaystyle {\mathfrak {g}}} . A real form need not be unique; for example, s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} has two real forms up to isomorphism, s l ( 2 , R ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} and s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} . Given a semisimple complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a split form of it is a real form that splits; i.e., it has a Cartan subalgebra which acts via an adjoint representation with real eigenvalues. A split form exists and is unique (up to isomorphism). A compact form is a real form that is the Lie algebra of a compact Lie group. A compact form exists and is also unique up to isomorphism. == Lie algebra with additional structures == A Lie algebra may be equipped with additional structures that are compatible with the Lie bracket. For example, a graded Lie algebra is a Lie algebra (or more generally a Lie superalgebra) with a compatible grading. A differential graded Lie algebra also comes with a differential, making the underlying vector space a chain complex. For example, the homotopy groups of a simply connected topological space form a graded Lie algebra, using the Whitehead product. In a related construction, Daniel Quillen used differential graded Lie algebras over the rational numbers Q {\displaystyle \mathbb {Q} } to describe rational homotopy theory in algebraic terms. == Lie ring == The definition of a Lie algebra over a field extends to define a Lie algebra over any commutative ring R. 
Namely, a Lie algebra g {\displaystyle {\mathfrak {g}}} over R is an R-module with an alternating R-bilinear map [ , ] : g × g → g {\displaystyle [\ ,\ ]\colon {\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} that satisfies the Jacobi identity. A Lie algebra over the ring Z {\displaystyle \mathbb {Z} } of integers is sometimes called a Lie ring. (This is not directly related to the notion of a Lie group.) Lie rings are used in the study of finite p-groups (for a prime number p) through the Lazard correspondence. The lower central factors of a finite p-group are finite abelian p-groups. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives; see the example below. p-adic Lie groups are related to Lie algebras over the field Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers as well as over the ring Z p {\displaystyle \mathbb {Z} _{p}} of p-adic integers. Part of Claude Chevalley's construction of the finite groups of Lie type involves showing that a simple Lie algebra over the complex numbers comes from a Lie algebra over the integers, and then (with more care) a group scheme over the integers. === Examples === Here is a construction of Lie rings arising from the study of abstract groups. For elements x , y {\displaystyle x,y} of a group, define the commutator [ x , y ] = x − 1 y − 1 x y {\displaystyle [x,y]=x^{-1}y^{-1}xy} . Let G = G 1 ⊇ G 2 ⊇ G 3 ⊇ ⋯ ⊇ G n ⊇ ⋯ {\displaystyle G=G_{1}\supseteq G_{2}\supseteq G_{3}\supseteq \cdots \supseteq G_{n}\supseteq \cdots } be a filtration of a group G {\displaystyle G} , that is, a chain of subgroups such that [ G i , G j ] {\displaystyle [G_{i},G_{j}]} is contained in G i + j {\displaystyle G_{i+j}} for all i , j {\displaystyle i,j} . (For the Lazard correspondence, one takes the filtration to be the lower central series of G.) Then L = ⨁ i ≥ 1 G i / G i + 1 {\displaystyle L=\bigoplus _{i\geq 1}G_{i}/G_{i+1}} is a Lie ring, with addition given by the group multiplication (which is abelian on each quotient group G i / G i + 1 {\displaystyle G_{i}/G_{i+1}} ), and with Lie bracket G i / G i + 1 × G j / G j + 1 → G i + j / G i + j + 1 {\displaystyle G_{i}/G_{i+1}\times G_{j}/G_{j+1}\to G_{i+j}/G_{i+j+1}} given by commutators in the group: [ x G i + 1 , y G j + 1 ] := [ x , y ] G i + j + 1 . {\displaystyle [xG_{i+1},yG_{j+1}]:=[x,y]G_{i+j+1}.} For example, the Lie ring associated to the lower central series on the dihedral group of order 8 is the Heisenberg Lie algebra of dimension 3 over the field Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } . == Definition using category-theoretic notation == The definition of a Lie algebra can be reformulated more abstractly in the language of category theory. Namely, one can define a Lie algebra in terms of linear maps—that is, morphisms in the category of vector spaces—without considering individual elements. (In this section, the field over which the algebra is defined is assumed to be of characteristic different from 2.) For the category-theoretic definition of Lie algebras, two braiding isomorphisms are needed. If A is a vector space, the interchange isomorphism τ : A ⊗ A → A ⊗ A {\displaystyle \tau :A\otimes A\to A\otimes A} is defined by τ ( x ⊗ y ) = y ⊗ x . 
{\displaystyle \tau (x\otimes y)=y\otimes x.} The cyclic-permutation braiding σ : A ⊗ A ⊗ A → A ⊗ A ⊗ A {\displaystyle \sigma :A\otimes A\otimes A\to A\otimes A\otimes A} is defined as σ = ( i d ⊗ τ ) ∘ ( τ ⊗ i d ) , {\displaystyle \sigma =(\mathrm {id} \otimes \tau )\circ (\tau \otimes \mathrm {id} ),} where i d {\displaystyle \mathrm {id} } is the identity morphism. Equivalently, σ {\displaystyle \sigma } is defined by σ ( x ⊗ y ⊗ z ) = y ⊗ z ⊗ x . {\displaystyle \sigma (x\otimes y\otimes z)=y\otimes z\otimes x.} With this notation, a Lie algebra can be defined as an object A {\displaystyle A} in the category of vector spaces together with a morphism [ ⋅ , ⋅ ] : A ⊗ A → A {\displaystyle [\cdot ,\cdot ]\colon A\otimes A\rightarrow A} that satisfies the two morphism equalities [ ⋅ , ⋅ ] ∘ ( i d + τ ) = 0 , {\displaystyle [\cdot ,\cdot ]\circ (\mathrm {id} +\tau )=0,} and [ ⋅ , ⋅ ] ∘ ( [ ⋅ , ⋅ ] ⊗ i d ) ∘ ( i d + σ + σ 2 ) = 0. {\displaystyle [\cdot ,\cdot ]\circ ([\cdot ,\cdot ]\otimes \mathrm {id} )\circ (\mathrm {id} +\sigma +\sigma ^{2})=0.} == Generalization == Several generalizations of a Lie algebra have been proposed, many from physics. Among them are graded Lie algebras, Lie superalgebras, and Lie n-algebras. == Sources == Bourbaki, Nicolas (1989). Lie Groups and Lie Algebras: Chapters 1-3. Springer. ISBN 978-3-540-64242-8. MR 1728312. Erdmann, Karin; Wildon, Mark (2006). Introduction to Lie Algebras. Springer. ISBN 1-84628-040-0. MR 2218355. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Hall, Brian C. (2015). Lie groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. doi:10.1007/978-3-319-13467-3. ISBN 978-3319134666. ISSN 0072-5285. MR 3331229. Humphreys, James E. (1978). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90053-7. MR 0499562. Jacobson, Nathan (1979) [1962]. Lie Algebras. Dover. ISBN 978-0-486-63832-4. MR 0559927. Khukhro, E. I. (1998). p-Automorphisms of Finite p-Groups. Cambridge University Press. doi:10.1017/CBO9780511526008. ISBN 0-521-59717-X. MR 1615819. Knapp, Anthony W. (2001) [1986]. Representation Theory of Semisimple Groups: an Overview Based on Examples. Princeton University Press. ISBN 0-691-09089-0. MR 1880691. Milnor, John (2010) [1986]. "Remarks on infinite-dimensional Lie groups". Collected Papers of John Milnor. Vol. 5. American Mathematical Society. pp. 91–141. ISBN 978-0-8218-4876-0. MR 0830252. O'Connor, J. J.; Robertson, E. F. (2000). "Marius Sophus Lie". MacTutor History of Mathematics Archive. O'Connor, J. J.; Robertson, E. F. (2005). "Wilhelm Karl Joseph Killing". MacTutor History of Mathematics Archive. Quillen, Daniel (1969). "Rational homotopy theory". Annals of Mathematics. 90 (2): 205–295. doi:10.2307/1970725. JSTOR 1970725. MR 0258031. Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups (2nd ed.). Springer. ISBN 978-3-540-55008-2. MR 2179691. Varadarajan, Veeravalli S. (1984) [1974]. Lie Groups, Lie Algebras, and Their Representations. Springer. ISBN 978-0-387-90969-1. MR 0746308. Wigner, Eugene (1959). Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. Translated by J. J. Griffin. Academic Press. 
ISBN 978-0127505503. MR 0106711. == External links == Kac, Victor G.; et al. Course notes for MIT 18.745: Introduction to Lie Algebras. Archived from the original on 2010-04-20. "Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. McKenzie, Douglas (2015). "An Elementary Introduction to Lie Algebras for Physicists".
Quantum mechanics is the fundamental physical theory that describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale of atoms.: 1.1  It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science. Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales. Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle). Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. == Overview and fundamental concepts == Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and subatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 10¹² when predicting the magnetic properties of an electron. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. 
Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.: 67–87  One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.: 427–435  Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.: 102–111 : 1.1–1.8  The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave).: 109  However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit. Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor. When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem. 
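Although the state-vector formalism is only introduced below, entanglement can be made concrete in a short two-qubit sketch (Python with NumPy; the helper reduced_A is illustrative): a product state factorizes, while the Bell state below does not, and each of its subsystems taken alone looks maximally random.

```python
import numpy as np

zero = np.array([1, 0], complex)
one = np.array([0, 1], complex)

product = np.kron(zero, one)                          # separable: |0>|1>
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

def reduced_A(psi):
    """State of subsystem A alone (its reduced density matrix) for a
    two-qubit pure state psi with amplitudes m[a, b]."""
    m = psi.reshape(2, 2)
    return m @ m.conj().T

# The purity Tr(rho^2) is 1 for a definite subsystem state, and 1/2
# when the subsystem by itself carries no information at all.
for psi in (product, bell):
    rho = reduced_A(psi)
    print(np.trace(rho @ rho).real)                   # 1.0, then 0.5
```

Locally, the entangled subsystem is indistinguishable from a fair coin no matter what is done to its partner, which is the intuition behind the no-communication theorem.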
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables. It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. == Mathematical formulation == In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ {\displaystyle \psi } belonging to a (separable) complex Hilbert space H {\displaystyle {\mathcal {H}}} . This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} , and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ {\displaystyle \psi } and e i α ψ {\displaystyle e^{i\alpha }\psi } represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L 2 ( C ) {\displaystyle L^{2}(\mathbb {C} )} , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors C 2 {\displaystyle \mathbb {C} ^{2}} with the usual inner product. Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ {\displaystyle \lambda } is non-degenerate and the probability is given by | ⟨ λ → , ψ ⟩ | 2 {\displaystyle |\langle {\vec {\lambda }},\psi \rangle |^{2}} , where λ → {\displaystyle {\vec {\lambda }}} is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ ψ , P λ ψ ⟩ {\displaystyle \langle \psi ,P_{\lambda }\psi \rangle } , where P λ {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. 
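A minimal numerical rendering of these formulas (Python with NumPy; the observable chosen here is arbitrary): diagonalize a Hermitian operator, build the projector onto each eigenspace, and evaluate ⟨ψ, P_λψ⟩.

```python
import numpy as np

A = np.diag([1.0, -1.0, -1.0]).astype(complex)   # Hermitian; -1 is degenerate
psi = np.array([1, 1, 1], complex)
psi = psi / np.linalg.norm(psi)                  # the normalization postulate

vals, vecs = np.linalg.eigh(A)                   # spectral decomposition
for lam in np.unique(vals):
    V = vecs[:, np.isclose(vals, lam)]           # orthonormal eigenbasis
    P = V @ V.conj().T                           # projector P_lambda
    print(lam, np.vdot(psi, P @ psi).real)       # Born-rule probability
# Output: -1.0 with probability 2/3, 1.0 with probability 1/3; sums to 1.
```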
In the continuous case, these formulas give instead the probability density. After the measurement, if result λ {\displaystyle \lambda } was obtained, the quantum state is postulated to collapse to λ → {\displaystyle {\vec {\lambda }}} , in the non-degenerate case, or to P λ ψ / ⟨ ψ , P λ ψ ⟩ {\textstyle P_{\lambda }\psi {\big /}\!{\sqrt {\langle \psi ,P_{\lambda }\psi \rangle }}} , in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics). === Time evolution of a quantum state === The time evolution of a quantum state is described by the Schrödinger equation: i ℏ ∂ ∂ t ψ ( t ) = H ψ ( t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (t)=H\psi (t).} Here H {\displaystyle H} denotes the Hamiltonian, the observable corresponding to the total energy of the system, and ℏ {\displaystyle \hbar } is the reduced Planck constant. The constant i ℏ {\displaystyle i\hbar } is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle. The solution of this differential equation is given by ψ ( t ) = e − i H t / ℏ ψ ( 0 ) . {\displaystyle \psi (t)=e^{-iHt/\hbar }\psi (0).} The operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state ψ ( 0 ) {\displaystyle \psi (0)} – it makes a definite prediction of what the quantum state ψ ( t ) {\displaystyle \psi (t)} will be at any later time. Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian.: 133–137  Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1). Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form. 
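For a finite-dimensional system, by contrast, the Schrödinger equation can be solved outright with a matrix exponential. A sketch using SciPy (the two-level Hamiltonian is chosen arbitrarily, and ħ is set to 1):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[0, 1], [1, 0]], complex)        # an arbitrary 2x2 Hamiltonian

def U(t):
    """Time-evolution operator exp(-iHt/hbar)."""
    return expm(-1j * H * t / hbar)

t = 0.7
assert np.allclose(U(t) @ U(t).conj().T, np.eye(2))   # U(t) is unitary

psi0 = np.array([1, 0], complex)
psi_t = U(t) @ psi0                             # deterministic evolution
assert np.isclose(np.linalg.norm(psi_t), 1.0)   # the norm is preserved

# An eigenstate of H acquires only a global phase, so its measurement
# statistics are "static", as described above.
evals, evecs = np.linalg.eigh(H)
phi = evecs[:, 0]
assert np.allclose(U(t) @ phi, np.exp(-1j * evals[0] * t / hbar) * phi)
```

Such closed-form evolution is the exception; for most realistic Hamiltonians no comparably explicit solution is available.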
However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy.: 793  Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion.: 849  === Uncertainty principle === One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator X ^ {\displaystyle {\hat {X}}} and momentum operator P ^ {\displaystyle {\hat {P}}} do not commute, but rather satisfy the canonical commutation relation: [ X ^ , P ^ ] = i ℏ . {\displaystyle [{\hat {X}},{\hat {P}}]=i\hbar .} Given a quantum state, the Born rule lets us compute expectation values for both X {\displaystyle X} and P {\displaystyle P} , and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have σ X = ⟨ X 2 ⟩ − ⟨ X ⟩ 2 , {\displaystyle \sigma _{X}={\textstyle {\sqrt {\left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}}}},} and likewise for the momentum: σ P = ⟨ P 2 ⟩ − ⟨ P ⟩ 2 . {\displaystyle \sigma _{P}={\sqrt {\left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}}}.} The uncertainty principle states that σ X σ P ≥ ℏ 2 . {\displaystyle \sigma _{X}\sigma _{P}\geq {\frac {\hbar }{2}}.} Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators A {\displaystyle A} and B {\displaystyle B} . The commutator of these two operators is [ A , B ] = A B − B A , {\displaystyle [A,B]=AB-BA,} and this provides the lower bound on the product of standard deviations: σ A σ B ≥ 1 2 | ⟨ [ A , B ] ⟩ | . {\displaystyle \sigma _{A}\sigma _{B}\geq {\tfrac {1}{2}}\left|{\bigl \langle }[A,B]{\bigr \rangle }\right|.} Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an i / ℏ {\displaystyle i/\hbar } factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum p i {\displaystyle p_{i}} is replaced by − i ℏ ∂ ∂ x {\displaystyle -i\hbar {\frac {\partial }{\partial x}}} , and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times − ℏ 2 {\displaystyle -\hbar ^{2}} . === Composite systems and entanglement === When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. 
For example, let A and B be two quantum systems, with Hilbert spaces H A {\displaystyle {\mathcal {H}}_{A}} and H B {\displaystyle {\mathcal {H}}_{B}} , respectively. The Hilbert space of the composite system is then H A B = H A ⊗ H B . {\displaystyle {\mathcal {H}}_{AB}={\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}.} If the state for the first system is the vector ψ A {\displaystyle \psi _{A}} and the state for the second system is ψ B {\displaystyle \psi _{B}} , then the state of the composite system is ψ A ⊗ ψ B . {\displaystyle \psi _{A}\otimes \psi _{B}.} Not all states in the joint Hilbert space H A B {\displaystyle {\mathcal {H}}_{AB}} can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if ψ A {\displaystyle \psi _{A}} and ϕ A {\displaystyle \phi _{A}} are both possible states for system A {\displaystyle A} , and likewise ψ B {\displaystyle \psi _{B}} and ϕ B {\displaystyle \phi _{B}} are both possible states for system B {\displaystyle B} , then 1 2 ( ψ A ⊗ ψ B + ϕ A ⊗ ϕ B ) {\displaystyle {\tfrac {1}{\sqrt {2}}}\left(\psi _{A}\otimes \psi _{B}+\phi _{A}\otimes \phi _{B}\right)} is a valid joint state that is not separable. States that are not separable are called entangled. If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. === Equivalence between formulations === There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. === Symmetries and conservation laws === The Hamiltonian H {\displaystyle H} is known as the generator of time evolution, since it defines a unitary time-evolution operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} for each value of t {\displaystyle t} . 
From this relation between U ( t ) {\displaystyle U(t)} and H {\displaystyle H} , it follows that any observable A {\displaystyle A} that commutes with H {\displaystyle H} will be conserved: its expectation value will not change over time.: 471  This statement generalizes, as mathematically, any Hermitian operator A {\displaystyle A} can generate a family of unitary operators parameterized by a variable t {\displaystyle t} . Under the evolution generated by A {\displaystyle A} , any observable B {\displaystyle B} that commutes with A {\displaystyle A} will be conserved. Moreover, if B {\displaystyle B} is conserved by evolution under A {\displaystyle A} , then A {\displaystyle A} is conserved under the evolution generated by B {\displaystyle B} . This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. == Examples == === Free particle === The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: H = 1 2 m P 2 = − ℏ 2 2 m d 2 d x 2 . {\displaystyle H={\frac {1}{2m}}P^{2}=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}.} The general solution of the Schrödinger equation is given by ψ ( x , t ) = 1 2 π ∫ − ∞ ∞ ψ ^ ( k , 0 ) e i ( k x − ℏ k 2 2 m t ) d k , {\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\hat {\psi }}(k,0)e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}\mathrm {d} k,} which is a superposition of all possible plane waves e i ( k x − ℏ k 2 2 m t ) {\displaystyle e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}} , which are eigenstates of the momentum operator with momentum p = ℏ k {\displaystyle p=\hbar k} . The coefficients of the superposition are ψ ^ ( k , 0 ) {\displaystyle {\hat {\psi }}(k,0)} , which is the Fourier transform of the initial quantum state ψ ( x , 0 ) {\displaystyle \psi (x,0)} . It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet: ψ ( x , 0 ) = 1 π a 4 e − x 2 2 a {\displaystyle \psi (x,0)={\frac {1}{\sqrt[{4}]{\pi a}}}e^{-{\frac {x^{2}}{2a}}}} which has Fourier transform, and therefore momentum distribution ψ ^ ( k , 0 ) = a π 4 e − a k 2 2 . {\displaystyle {\hat {\psi }}(k,0)={\sqrt[{4}]{\frac {a}{\pi }}}e^{-{\frac {ak^{2}}{2}}}.} We see that as we make a {\displaystyle a} smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making a {\displaystyle a} larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. === Particle in a box === The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. 
The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.: 77–78  For the one-dimensional case in the x {\displaystyle x} direction, the time-independent Schrödinger equation may be written − ℏ 2 2 m d 2 ψ d x 2 = E ψ . {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined by p ^ x = − i ℏ d d x {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}} the previous equation is evocative of the classic kinetic energy analogue, 1 2 m p ^ x 2 = E , {\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,} with state ψ {\displaystyle \psi } in this case having energy E {\displaystyle E} coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are ψ ( x ) = A e i k x + B e − i k x E = ℏ 2 k 2 2 m {\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}} or, from Euler's formula, ψ ( x ) = C sin ⁡ ( k x ) + D cos ⁡ ( k x ) . {\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).\!} The infinite potential walls of the box determine the values of C , D , {\displaystyle C,D,} and k {\displaystyle k} at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} where ψ {\displaystyle \psi } must be zero. Thus, at x = 0 {\displaystyle x=0} , ψ ( 0 ) = 0 = C sin ⁡ ( 0 ) + D cos ⁡ ( 0 ) = D {\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D} and D = 0 {\displaystyle D=0} . At x = L {\displaystyle x=L} , ψ ( L ) = 0 = C sin ⁡ ( k L ) , {\displaystyle \psi (L)=0=C\sin(kL),} in which C {\displaystyle C} cannot be zero as this would conflict with the postulate that ψ {\displaystyle \psi } has norm 1. Therefore, since sin ⁡ ( k L ) = 0 {\displaystyle \sin(kL)=0} , k L {\displaystyle kL} must be an integer multiple of π {\displaystyle \pi } , k = n π L n = 1 , 2 , 3 , … . {\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint on k {\displaystyle k} implies a constraint on the energy levels, yielding E n = ℏ 2 π 2 n 2 2 m L 2 = n 2 h 2 8 m L 2 . {\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. === Harmonic oscillator === As in the classical case, the potential for the quantum harmonic oscillator is given by: 234  V ( x ) = 1 2 m ω 2 x 2 . {\displaystyle V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.} This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by ψ n ( x ) = 1 2 n n ! 
⋅ ( m ω π ℏ ) 1 / 4 ⋅ e − m ω x 2 2 ℏ ⋅ H n ( m ω ℏ x ) , {\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad } n = 0 , 1 , 2 , … . {\displaystyle n=0,1,2,\ldots .} where Hn are the Hermite polynomials H n ( x ) = ( − 1 ) n e x 2 d n d x n ( e − x 2 ) , {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right),} and the corresponding energy levels are E n = ℏ ω ( n + 1 2 ) . {\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right).} This is another example illustrating the discretization of energy for bound states. === Mach–Zehnder interferometer === The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector ψ ∈ C 2 {\displaystyle \psi \in \mathbb {C} ^{2}} that is a superposition of the "lower" path ψ l = ( 1 0 ) {\displaystyle \psi _{l}={\begin{pmatrix}1\\0\end{pmatrix}}} and the "upper" path ψ u = ( 0 1 ) {\displaystyle \psi _{u}={\begin{pmatrix}0\\1\end{pmatrix}}} , that is, ψ = α ψ l + β ψ u {\displaystyle \psi =\alpha \psi _{l}+\beta \psi _{u}} for complex α , β {\displaystyle \alpha ,\beta } . In order to respect the postulate that ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} we require that | α | 2 + | β | 2 = 1 {\displaystyle |\alpha |^{2}+|\beta |^{2}=1} . Both beam splitters are modelled as the unitary matrix B = 1 2 ( 1 i i 1 ) {\displaystyle B={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&i\\i&1\end{pmatrix}}} , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1 / 2 {\displaystyle 1/{\sqrt {2}}} , or be reflected to the other path with a probability amplitude of i / 2 {\displaystyle i/{\sqrt {2}}} . The phase shifter on the upper arm is modelled as the unitary matrix P = ( 1 0 0 e i Δ Φ ) {\displaystyle P={\begin{pmatrix}1&0\\0&e^{i\Delta \Phi }\end{pmatrix}}} , which means that if the photon is on the "upper" path it will gain a relative phase of Δ Φ {\displaystyle \Delta \Phi } , and it will stay unchanged if it is in the lower path. 
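These two matrices already determine the interference pattern. As a check, the following sketch (Python with NumPy; the function name is illustrative) composes them numerically and reproduces the detection probabilities derived in the next paragraph:

```python
import numpy as np

def detection_probabilities(dphi):
    """Send the 'lower' input state through B, then P, then B, and
    return [p(l), p(u)] via the Born rule."""
    B = np.array([[1, 1j], [1j, 1]], complex) / np.sqrt(2)
    P = np.array([[1, 0], [0, np.exp(1j * dphi)]], complex)
    psi_l = np.array([1, 0], complex)
    return np.abs(B @ P @ B @ psi_l) ** 2

for dphi in (0.0, np.pi / 3, np.pi):
    p_l, p_u = detection_probabilities(dphi)
    assert np.isclose(p_u, np.cos(dphi / 2) ** 2)   # p(u) = cos^2(dPhi/2)
    assert np.isclose(p_l, np.sin(dphi / 2) ** 2)   # p(l) = sin^2(dPhi/2)
```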
A photon that enters the interferometer from the left will then be acted upon with a beam splitter B {\displaystyle B} , a phase shifter P {\displaystyle P} , and another beam splitter B {\displaystyle B} , and so end up in the state B P B ψ l = i e i Δ Φ / 2 ( − sin ⁡ ( Δ Φ / 2 ) cos ⁡ ( Δ Φ / 2 ) ) , {\displaystyle BPB\psi _{l}=ie^{i\Delta \Phi /2}{\begin{pmatrix}-\sin(\Delta \Phi /2)\\\cos(\Delta \Phi /2)\end{pmatrix}},} and the probabilities that it will be detected at the right or at the top are given respectively by p ( u ) = | ⟨ ψ u , B P B ψ l ⟩ | 2 = cos 2 ⁡ Δ Φ 2 , {\displaystyle p(u)=|\langle \psi _{u},BPB\psi _{l}\rangle |^{2}=\cos ^{2}{\frac {\Delta \Phi }{2}},} p ( l ) = | ⟨ ψ l , B P B ψ l ⟩ | 2 = sin 2 ⁡ Δ Φ 2 . {\displaystyle p(l)=|\langle \psi _{l},BPB\psi _{l}\rangle |^{2}=\sin ^{2}{\frac {\Delta \Phi }{2}}.} One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities. It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by p ( u ) = p ( l ) = 1 / 2 {\displaystyle p(u)=p(l)=1/2} , independently of the phase Δ Φ {\displaystyle \Delta \Phi } . From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. == Applications == Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. == Relation to other scientific theories == === Classical mechanics === The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. 
One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.: 299  When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.: 234  Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.: 353  Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.: 687–730  Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. === Special relativity and electrodynamics === Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical − e 2 / ( 4 π ϵ 0 r ) {\displaystyle \textstyle -e^{2}/(4\pi \epsilon _{_{0}}r)} Coulomb potential.: 285  Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.: 26  This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. 
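As a worked illustration of the semi-classical hydrogen model mentioned above: the Coulomb potential −e²/(4πε₀r) leads to the familiar Bohr energy levels Eₙ = −me⁴/(2(4πε₀)²ℏ²n²). This is a standard textbook result rather than a formula quoted in this article; the constants below are rounded CODATA values, an assumption of this sketch:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def bohr_level(n):
    """Bound-state energy E_n of the Coulomb potential -e^2/(4*pi*eps0*r)."""
    return -m_e * e**4 / (2 * (4 * math.pi * eps0)**2 * hbar**2 * n**2)

for n in (1, 2, 3):
    print(f"E_{n} = {bohr_level(n) / e:8.3f} eV")  # E_1 ≈ -13.606 eV
```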
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. === Relation to general relativity === Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG. == Philosophical implications == Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". 
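The Planck length quoted above follows directly from the constants ℏ, G and c via l_P = √(ℏG/c³); a one-line check (the rounded constant values are an assumption of this sketch):

```python
import math

G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c    = 2.99792458e8     # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)  # Planck length
print(f"l_P = {l_P:.4e} m")       # ≈ 1.616e-35 m, as quoted above
```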
According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations. Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to test these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful. 
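Returning to the Bell inequalities mentioned above: for a maximally entangled pair, the quantum correlations violate the classical CHSH bound of 2, reaching 2√2. The sketch below is a standard textbook computation rather than anything from this article; numpy and the usual CHSH angle choices are this edit's assumptions. It evaluates the correlations directly from the singlet state:

```python
import numpy as np

def correlation(a, b):
    """Correlation E(a, b) for spin measurements on the singlet state."""
    def spin(theta):
        # Spin observable along angle theta in the x-z plane.
        return np.array([[np.cos(theta),  np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)
    return np.real(psi @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH measurement settings.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (correlation(a, b) - correlation(a, b2)
     + correlation(a2, b) + correlation(a2, b2))
print(abs(S))  # ≈ 2*sqrt(2) ≈ 2.83 > 2, violating the CHSH inequality
```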
Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. == History == Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light. During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): E = h ν {\displaystyle E=h\nu \ } , where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. 
Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser. This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects. In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids. == External links == Introduction to Quantum Theory at Quantiki. Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe. Course material Quantum Cook Book and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware. Modern Physics: With waves, thermodynamics, and optics – an online textbook. MIT OpenCourseWare: Chemistry and Physics. See 8.04, 8.05 and 8.06. 5½ Examples in Quantum Mechanics. Philosophy Ismael, Jenann. "Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Zalta, Edward N. (ed.). "Philosophical Issues in Quantum Theory". Stanford Encyclopedia of Philosophy.
Wikipedia/Quantum_mechanics
In mathematics, an algebraic equation or polynomial equation is an equation of the form P = 0 {\displaystyle P=0} , where P is a polynomial with coefficients in some field, often the field of the rational numbers. For example, x 5 − 3 x + 1 = 0 {\displaystyle x^{5}-3x+1=0} is an algebraic equation with integer coefficients and y 4 + x y 2 − x 3 3 + x y 2 + y 2 + 1 7 = 0 {\displaystyle y^{4}+{\frac {xy}{2}}-{\frac {x^{3}}{3}}+xy^{2}+y^{2}+{\frac {1}{7}}=0} is a multivariate polynomial equation over the rationals. For many authors, the term algebraic equation refers only to the univariate case, that is, polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables (the multivariate case), in which case the term polynomial equation is usually preferred. Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to computing efficient and accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). == Terminology == The term "algebraic equation" dates from the time when the main problem of algebra was to solve univariate polynomial equations. This problem was completely solved during the 19th century; see Fundamental theorem of algebra, Abel–Ruffini theorem and Galois theory. Since then, the scope of algebra has been dramatically enlarged. In particular, it includes the study of equations that involve nth roots and, more generally, algebraic expressions. This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, especially when considering multivariate equations. == History == The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC, could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets). Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like x = 1 + 5 2 {\displaystyle x={\frac {1+{\sqrt {5}}}{2}}} for the positive solution of x 2 − x − 1 = 0 {\displaystyle x^{2}-x-1=0} . The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. 
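For the quintic example x⁵ − 3x + 1 = 0 given at the start of this article, no general radical formula exists (see the Abel–Ruffini theorem mentioned above), but numerical approximations of the kind discussed under root-finding are immediate. A minimal sketch, with numpy as this edit's choice of tool:

```python
import numpy as np

# Coefficients of x^5 - 3x + 1, highest degree first.
roots = np.roots([1, 0, 0, 0, -3, 1])
for r in sorted(roots, key=lambda z: z.real):
    kind = "real" if abs(r.imag) < 1e-9 else "complex"
    print(f"{r: .6f}  ({kind})")
# A degree-5 polynomial has exactly 5 complex roots, counted with multiplicity.
```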
Finally, Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of degree at least 5 cannot be solved by radicals at all, and gave criteria for deciding if an equation is in fact solvable using radicals. == Areas of study == The algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations. Two equations are equivalent if they have the same set of solutions. In particular the equation P = Q {\displaystyle P=Q} is equivalent to P − Q = 0 {\displaystyle P-Q=0} . It follows that the study of algebraic equations is equivalent to the study of polynomials. A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms on the left-hand side, the previously mentioned polynomial equation y 4 + x y 2 = x 3 3 − x y 2 + y 2 − 1 7 {\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}} becomes 42 y 4 + 21 x y − 14 x 3 + 42 x y 2 − 42 y 2 + 6 = 0. {\displaystyle 42y^{4}+21xy-14x^{3}+42xy^{2}-42y^{2}+6=0.} Because sine, exponentiation, and 1/T are not polynomial functions, e T x 2 + 1 T x y + sin ⁡ ( T ) z − 2 = 0 {\displaystyle e^{T}x^{2}+{\frac {1}{T}}xy+\sin(T)z-2=0} is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T. == Theory == === Polynomials === Given an equation in unknown x ( E ) a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = 0 {\displaystyle (\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0} , with coefficients in a field K, one can equivalently say that the solutions of (E) in K are the roots in K of the polynomial P = a n X n + a n − 1 X n − 1 + ⋯ + a 1 X + a 0 ∈ K [ X ] {\displaystyle P=a_{n}X^{n}+a_{n-1}X^{n-1}+\dots +a_{1}X+a_{0}\quad \in K[X]} . It can be shown that a polynomial of degree n in a field has at most n roots. The equation (E) therefore has at most n solutions. If K' is a field extension of K, one may consider (E) to be an equation with coefficients in K, and the solutions of (E) in K are also solutions in K' (the converse does not hold in general). It is always possible to find a field extension of K known as the rupture field of the polynomial P, in which (E) has at least one solution. 
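The denominator-clearing computation above (multiplying through by 42 = 2·3·7) is easy to verify symbolically. A small check using sympy (the choice of library is an assumption of this sketch):

```python
import sympy as sp

x, y = sp.symbols("x y")

# The rational-coefficient equation, written as lhs = rhs as above.
lhs = y**4 + x*y/2
rhs = x**3/3 - x*y**2 + y**2 - sp.Rational(1, 7)

# Multiply through by 42 and move every term to the left-hand side.
integer_form = sp.expand(42 * (lhs - rhs))
expected = sp.expand(42*y**4 + 21*x*y - 14*x**3 + 42*x*y**2 - 42*y**2 + 6)
assert integer_form == expected
print(integer_form)
```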
=== Existence of solutions to real and complex equations === The fundamental theorem of algebra states that the field of the complex numbers is algebraically closed, that is, all polynomial equations with complex coefficients and degree at least one have a solution. It follows that all polynomial equations of degree 1 or more with real coefficients have a complex solution. On the other hand, an equation such as x 2 + 1 = 0 {\displaystyle x^{2}+1=0} does not have a solution in R {\displaystyle \mathbb {R} } (the solutions are the imaginary units i and −i). While the real solutions of real equations are intuitive (they are the x-coordinates of the points where the curve y = P(x) intersects the x-axis), the existence of complex solutions to real equations can be surprising and less easy to visualize. However, a monic polynomial of odd degree must necessarily have a real root. The associated polynomial function in x is continuous, and it approaches − ∞ {\displaystyle -\infty } as x approaches − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as x approaches + ∞ {\displaystyle +\infty } . By the intermediate value theorem, it must therefore assume the value zero at some real x, which is then a solution of the polynomial equation. === Connection to Galois theory === There exist formulas giving the solutions of real or complex polynomials of degree less than or equal to four as a function of their coefficients. Abel showed that it is not possible to find such a formula in general (using only the four arithmetic operations and taking roots) for equations of degree five or higher. Galois theory provides a criterion which allows one to determine whether the solution to a given polynomial equation can be expressed using radicals. == Explicit solution of numerical equations == === Approach === The explicit solution of a real or complex equation of degree 1 is trivial. Solving an equation of higher degree n reduces to factoring the associated polynomial, that is, rewriting (E) in the form a n ( x − z 1 ) … ( x − z n ) = 0 {\displaystyle a_{n}(x-z_{1})\dots (x-z_{n})=0} , where the solutions are then the z 1 , … , z n {\displaystyle z_{1},\dots ,z_{n}} . The problem is then to express the z i {\displaystyle z_{i}} in terms of the a i {\displaystyle a_{i}} . This approach applies more generally if the coefficients and solutions belong to an integral domain. === General techniques === ==== Factoring ==== If an equation P(x) = 0 of degree n has a rational root α, the associated polynomial can be factored to give the form P(X) = (X − α)Q(X) (by dividing P(X) by X − α, or by writing P(X) − P(α) as a linear combination of terms of the form X^k − α^k and factoring out X − α). Solving P(x) = 0 thus reduces to solving the degree n − 1 equation Q(x) = 0. See for example the case n = 3. ==== Elimination of the sub-dominant term ==== To solve an equation of degree n, ( E ) a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = 0 {\displaystyle (\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0} , a common preliminary step is to eliminate the degree n − 1 term: by setting x = y − a n − 1 n a n {\displaystyle x=y-{\frac {a_{n-1}}{n\,a_{n}}}} , equation (E) becomes a n y n + b n − 2 y n − 2 + ⋯ + b 1 y + b 0 = 0 {\displaystyle a_{n}y^{n}+b_{n-2}y^{n-2}+\dots +b_{1}y+b_{0}=0} . Leonhard Euler developed this technique for the case n = 3, but it is also applicable to the case n = 4, for example. 
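The substitution x = y − aₙ₋₁/(n aₙ) can be checked symbolically for a generic cubic: the y² coefficient vanishes identically. A sketch with sympy (this edit's choice of tool):

```python
import sympy as sp

x, y = sp.symbols("x y")
a3, a2, a1, a0 = sp.symbols("a3 a2 a1 a0")

p = a3*x**3 + a2*x**2 + a1*x + a0
# Eliminate the degree n - 1 term with x = y - a_{n-1}/(n*a_n); here n = 3.
depressed = sp.expand(p.subs(x, y - a2 / (3 * a3)))

assert sp.simplify(depressed.coeff(y, 2)) == 0  # the y^2 term is gone
print(sp.collect(depressed, y))
```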
=== Quadratic equations === To solve a quadratic equation of the form a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} one calculates the discriminant Δ defined by Δ = b 2 − 4 a c {\displaystyle \Delta =b^{2}-4ac} . If the polynomial has real coefficients, it has: two distinct real roots if Δ > 0 {\displaystyle \Delta >0} ; one real double root if Δ = 0 {\displaystyle \Delta =0} ; no real root if Δ < 0 {\displaystyle \Delta <0} , but two complex conjugate roots. === Cubic equations === The best-known method for solving cubic equations, by writing roots in terms of radicals, is Cardano's formula. === Quartic equations === For detailed discussions of some solution methods see: Tschirnhaus transformation (general method, not guaranteed to succeed); Bezout method (general method, not guaranteed to succeed); Ferrari method (solutions for degree 4); Euler method (solutions for degree 4); Lagrange method (solutions for degree 4); Descartes method (solutions for degree 2 or 4); A quartic equation a x 4 + b x 3 + c x 2 + d x + e = 0 {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0} with a ≠ 0 {\displaystyle a\neq 0} may be reduced to a quadratic equation by a change of variable provided it is either biquadratic (b = d = 0) or quasi-palindromic (e = a, d = b). Some cubic and quartic equations can be solved using trigonometry or hyperbolic functions. === Higher-degree equations === Évariste Galois and Niels Henrik Abel showed independently that in general a polynomial of degree 5 or higher is not solvable using radicals. Some particular equations do have solutions, such as those associated with the cyclotomic polynomials of degrees 5 and 17. Charles Hermite, on the other hand, showed that polynomials of degree 5 are solvable using elliptic functions. Otherwise, one may find numerical approximations to the roots using root-finding algorithms, such as Newton's method. == See also == Algebraic function Algebraic number Root finding Linear equation (degree = 1) Quadratic equation (degree = 2) Cubic equation (degree = 3) Quartic equation (degree = 4) Quintic equation (degree = 5) Sextic equation (degree = 6) Septic equation (degree = 7) System of linear equations System of polynomial equations Linear Diophantine equation Linear equation over a ring Cramer's theorem (algebraic curves), on the number of points usually sufficient to determine a bivariate n-th degree curve
Wikipedia/Polynomial_equations
In mathematics, the exterior algebra or Grassmann algebra of a vector space V {\displaystyle V} is an associative algebra that contains V , {\displaystyle V,} which has a product, called exterior product or wedge product and denoted with ∧ {\displaystyle \wedge } , such that v ∧ v = 0 {\displaystyle v\wedge v=0} for every vector v {\displaystyle v} in V . {\displaystyle V.} The exterior algebra is named after Hermann Grassmann, and the names of the product come from the "wedge" symbol ∧ {\displaystyle \wedge } and the fact that the product of two elements of V {\displaystyle V} is "outside" V . {\displaystyle V.} The wedge product of k {\displaystyle k} vectors v 1 ∧ v 2 ∧ ⋯ ∧ v k {\displaystyle v_{1}\wedge v_{2}\wedge \dots \wedge v_{k}} is called a blade of degree k {\displaystyle k} or k {\displaystyle k} -blade. The wedge product was introduced originally as an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues: the magnitude of a 2-blade v ∧ w {\displaystyle v\wedge w} is the area of the parallelogram defined by v {\displaystyle v} and w , {\displaystyle w,} and, more generally, the magnitude of a k {\displaystyle k} -blade is the (hyper)volume of the parallelotope defined by the constituent vectors. The alternating property that v ∧ v = 0 {\displaystyle v\wedge v=0} implies a skew-symmetric property that v ∧ w = − w ∧ v , {\displaystyle v\wedge w=-w\wedge v,} and more generally any blade flips sign whenever two of its constituent vectors are exchanged, corresponding to a parallelotope of opposite orientation. The full exterior algebra contains objects that are not themselves blades, but linear combinations of blades; a sum of blades of homogeneous degree k {\displaystyle k} is called a k-vector, while a more general sum of blades of arbitrary degree is called a multivector. The linear span of the k {\displaystyle k} -blades is called the k {\displaystyle k} -th exterior power of V . {\displaystyle V.} The exterior algebra is the direct sum of the k {\displaystyle k} -th exterior powers of V , {\displaystyle V,} and this makes the exterior algebra a graded algebra. The exterior algebra is universal in the sense that every equation that relates elements of V {\displaystyle V} in the exterior algebra is also valid in every associative algebra that contains V {\displaystyle V} and in which the square of every element of V {\displaystyle V} is zero. The definition of the exterior algebra can be extended for spaces built from vector spaces, such as vector fields and functions whose domain is a vector space. Moreover, the field of scalars may be any field. More generally, the exterior algebra can be defined for modules over a commutative ring. In particular, the algebra of differential forms in k {\displaystyle k} variables is an exterior algebra over the ring of the smooth functions in k {\displaystyle k} variables. == Motivating examples == === Areas in the plane === The two-dimensional Euclidean vector space R 2 {\displaystyle \mathbf {R} ^{2}} is a real vector space equipped with a basis consisting of a pair of orthogonal unit vectors e 1 = [ 1 0 ] , e 2 = [ 0 1 ] . 
{\displaystyle \mathbf {e} _{1}={\begin{bmatrix}1\\0\end{bmatrix}},\quad \mathbf {e} _{2}={\begin{bmatrix}0\\1\end{bmatrix}}.} Suppose that v = [ a b ] = a e 1 + b e 2 , w = [ c d ] = c e 1 + d e 2 {\displaystyle \mathbf {v} ={\begin{bmatrix}a\\b\end{bmatrix}}=a\mathbf {e} _{1}+b\mathbf {e} _{2},\quad \mathbf {w} ={\begin{bmatrix}c\\d\end{bmatrix}}=c\mathbf {e} _{1}+d\mathbf {e} _{2}} are a pair of given vectors in ⁠ R 2 {\displaystyle \mathbf {R} ^{2}} ⁠, written in components. There is a unique parallelogram having v {\displaystyle \mathbf {v} } and w {\displaystyle \mathbf {w} } as two of its sides. The area of this parallelogram is given by the standard determinant formula: Area = | det [ v w ] | = | det [ a c b d ] | = | a d − b c | . {\displaystyle {\text{Area}}=\left|\det {\begin{bmatrix}\mathbf {v} &\mathbf {w} \end{bmatrix}}\right|=\left|\det {\begin{bmatrix}a&c\\b&d\end{bmatrix}}\right|=\left|ad-bc\right|.} Consider now the exterior product of v {\displaystyle \mathbf {v} } and ⁠ w {\displaystyle \mathbf {w} } ⁠: v ∧ w = ( a e 1 + b e 2 ) ∧ ( c e 1 + d e 2 ) = a c e 1 ∧ e 1 + a d e 1 ∧ e 2 + b c e 2 ∧ e 1 + b d e 2 ∧ e 2 = ( a d − b c ) e 1 ∧ e 2 , {\displaystyle {\begin{aligned}\mathbf {v} \wedge \mathbf {w} &=(a\mathbf {e} _{1}+b\mathbf {e} _{2})\wedge (c\mathbf {e} _{1}+d\mathbf {e} _{2})\\&=ac\mathbf {e} _{1}\wedge \mathbf {e} _{1}+ad\mathbf {e} _{1}\wedge \mathbf {e} _{2}+bc\mathbf {e} _{2}\wedge \mathbf {e} _{1}+bd\mathbf {e} _{2}\wedge \mathbf {e} _{2}\\&=\left(ad-bc\right)\mathbf {e} _{1}\wedge \mathbf {e} _{2},\end{aligned}}} where the first step uses the distributive law for the exterior product, and the last uses the fact that the exterior product is an alternating map, and in particular e 2 ∧ e 1 = − ( e 1 ∧ e 2 ) . {\displaystyle \mathbf {e} _{2}\wedge \mathbf {e} _{1}=-(\mathbf {e} _{1}\wedge \mathbf {e} _{2}).} (The fact that the exterior product is an alternating map also forces e 1 ∧ e 1 = e 2 ∧ e 2 = 0. {\displaystyle \mathbf {e} _{1}\wedge \mathbf {e} _{1}=\mathbf {e} _{2}\wedge \mathbf {e} _{2}=0.} ) Note that the coefficient in this last expression is precisely the determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram: the absolute value of the signed area is the ordinary area, and the sign determines its orientation. The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if A(v, w) denotes the signed area of the parallelogram of which the pair of vectors v and w form two adjacent sides, then A must satisfy the following properties: A(rv, sw) = rsA(v, w) for any real numbers r and s, since rescaling either of the sides rescales the area by the same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram). A(v, v) = 0, since the area of the degenerate parallelogram determined by v (i.e., a line segment) is zero. A(w, v) = −A(v, w), since interchanging the roles of v and w reverses the orientation of the parallelogram. A(v + rw, w) = A(v, w) for any real number r, since adding a multiple of w to v affects neither the base nor the height of the parallelogram and consequently preserves its area. 
A(e1, e2) = 1, since the area of the unit square is one. With the exception of the last property, the exterior product of two vectors satisfies the same properties as the area. In a certain sense, the exterior product generalizes the final property by allowing the area of a parallelogram to be compared to that of any chosen parallelogram in a parallel plane (here, the one with sides e1 and e2). In other words, the exterior product provides a basis-independent formulation of area. === Cross and triple products === For vectors in R3, the exterior algebra is closely related to the cross product and triple product. Using the standard basis {e1, e2, e3}, the exterior product of a pair of vectors u = u 1 e 1 + u 2 e 2 + u 3 e 3 {\displaystyle \mathbf {u} =u_{1}\mathbf {e} _{1}+u_{2}\mathbf {e} _{2}+u_{3}\mathbf {e} _{3}} and v = v 1 e 1 + v 2 e 2 + v 3 e 3 {\displaystyle \mathbf {v} =v_{1}\mathbf {e} _{1}+v_{2}\mathbf {e} _{2}+v_{3}\mathbf {e} _{3}} is u ∧ v = ( u 1 v 2 − u 2 v 1 ) ( e 1 ∧ e 2 ) {\displaystyle \mathbf {u} \wedge \mathbf {v} =(u_{1}v_{2}-u_{2}v_{1})(\mathbf {e} _{1}\wedge \mathbf {e} _{2})} u ∧ v + ( u 3 v 1 − u 1 v 3 ) ( e 3 ∧ e 1 ) {\displaystyle {\phantom {\mathbf {u} \wedge \mathbf {v} }}+(u_{3}v_{1}-u_{1}v_{3})(\mathbf {e} _{3}\wedge \mathbf {e} _{1})} u ∧ v + ( u 2 v 3 − u 3 v 2 ) ( e 2 ∧ e 3 ) {\displaystyle {\phantom {\mathbf {u} \wedge \mathbf {v} }}+(u_{2}v_{3}-u_{3}v_{2})(\mathbf {e} _{2}\wedge \mathbf {e} _{3})} where {e1 ∧ e2, e3 ∧ e1, e2 ∧ e3} is the basis for the three-dimensional space ⋀2(R3). The coefficients above are the same as those in the usual definition of the cross product of vectors in three dimensions, the only difference being that the exterior product is not an ordinary vector, but instead is a bivector. Bringing in a third vector w = w 1 e 1 + w 2 e 2 + w 3 e 3 , {\displaystyle \mathbf {w} =w_{1}\mathbf {e} _{1}+w_{2}\mathbf {e} _{2}+w_{3}\mathbf {e} _{3},} the exterior product of three vectors is u ∧ v ∧ w = ( u 1 v 2 w 3 + u 2 v 3 w 1 + u 3 v 1 w 2 − u 1 v 3 w 2 − u 2 v 1 w 3 − u 3 v 2 w 1 ) ( e 1 ∧ e 2 ∧ e 3 ) {\displaystyle \mathbf {u} \wedge \mathbf {v} \wedge \mathbf {w} =(u_{1}v_{2}w_{3}+u_{2}v_{3}w_{1}+u_{3}v_{1}w_{2}-u_{1}v_{3}w_{2}-u_{2}v_{1}w_{3}-u_{3}v_{2}w_{1})(\mathbf {e} _{1}\wedge \mathbf {e} _{2}\wedge \mathbf {e} _{3})} where e1 ∧ e2 ∧ e3 is the basis vector for the one-dimensional space ⋀3(R3). The scalar coefficient is the triple product of the three vectors. The cross product and triple product in three dimensions each admit both geometric and algebraic interpretations. The cross product u × v can be interpreted as a vector which is perpendicular to both u and v and whose magnitude is equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector consisting of the minors of the matrix with columns u and v. The triple product of u, v, and w is geometrically a (signed) volume. Algebraically, it is the determinant of the matrix with columns u, v, and w. The exterior product in three dimensions allows for similar interpretations. In fact, in the presence of a positively oriented orthonormal basis, the exterior product generalizes these notions to higher dimensions. 
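The correspondence described above between the coefficients of u ∧ v and the components of u × v can be checked numerically. In the sketch below (numpy is this edit's assumption), the wedge coefficients are computed as 2×2 minors in the basis order e1∧e2, e3∧e1, e2∧e3 used in the display above:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

# Coefficients of u ^ v in the basis (e1^e2, e3^e1, e2^e3);
# each is a 2x2 minor of the matrix with columns u and v.
wedge = np.array([
    u[0]*v[1] - u[1]*v[0],   # e1 ^ e2
    u[2]*v[0] - u[0]*v[2],   # e3 ^ e1
    u[1]*v[2] - u[2]*v[1],   # e2 ^ e3
])

# np.cross returns (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1),
# i.e. the same three minors listed in the opposite order.
assert np.allclose(wedge, np.cross(u, v)[::-1])
print(wedge)
```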
== Formal definition == The exterior algebra ⋀ ( V ) {\displaystyle \bigwedge (V)} of a vector space V {\displaystyle V} over a field K {\displaystyle K} is defined as the quotient algebra of the tensor algebra T(V), where T ( V ) = ⨁ k = 0 ∞ T k V = K ⊕ V ⊕ ( V ⊗ V ) ⊕ ( V ⊗ V ⊗ V ) ⊕ ⋯ , {\displaystyle T(V)=\bigoplus _{k=0}^{\infty }T^{k}V=K\oplus V\oplus (V\otimes V)\oplus (V\otimes V\otimes V)\oplus \cdots ,} by the two-sided ideal I {\displaystyle I} generated by all elements of the form x ⊗ x {\displaystyle x\otimes x} such that x ∈ V {\displaystyle x\in V} . Symbolically, ⋀ ( V ) := T ( V ) / I . {\displaystyle \bigwedge (V):=T(V)/I.\,} The exterior product ∧ {\displaystyle \wedge } of two elements of ⋀ ( V ) {\displaystyle \bigwedge (V)} is defined by α ∧ β = α ⊗ β ( mod I ) . {\displaystyle \alpha \wedge \beta =\alpha \otimes \beta {\pmod {I}}.} == Algebraic properties == === Alternating product === The exterior product is by construction alternating on elements of ⁠ V {\displaystyle V} ⁠, which means that x ∧ x = 0 {\displaystyle x\wedge x=0} for all x ∈ V , {\displaystyle x\in V,} by the above construction. It follows that the product is also anticommutative on elements of ⁠ V {\displaystyle V} ⁠, for supposing that ⁠ x , y ∈ V {\displaystyle x,y\in V} ⁠, 0 = ( x + y ) ∧ ( x + y ) = x ∧ x + x ∧ y + y ∧ x + y ∧ y = x ∧ y + y ∧ x {\displaystyle 0=(x+y)\wedge (x+y)=x\wedge x+x\wedge y+y\wedge x+y\wedge y=x\wedge y+y\wedge x} hence x ∧ y = − ( y ∧ x ) . {\displaystyle x\wedge y=-(y\wedge x).} More generally, if σ {\displaystyle \sigma } is a permutation of the integers ⁠ [ 1 , … , k ] {\displaystyle [1,\dots ,k]} ⁠, and ⁠ x 1 {\displaystyle x_{1}} ⁠, ⁠ x 2 {\displaystyle x_{2}} ⁠, ..., ⁠ x k {\displaystyle x_{k}} ⁠ are elements of ⁠ V {\displaystyle V} ⁠, it follows that x σ ( 1 ) ∧ x σ ( 2 ) ∧ ⋯ ∧ x σ ( k ) = sgn ⁡ ( σ ) x 1 ∧ x 2 ∧ ⋯ ∧ x k , {\displaystyle x_{\sigma (1)}\wedge x_{\sigma (2)}\wedge \cdots \wedge x_{\sigma (k)}=\operatorname {sgn}(\sigma )x_{1}\wedge x_{2}\wedge \cdots \wedge x_{k},} where sgn ⁡ ( σ ) {\displaystyle \operatorname {sgn}(\sigma )} is the signature of the permutation ⁠ σ {\displaystyle \sigma } ⁠. In particular, if x i = x j {\displaystyle x_{i}=x_{j}} for some ⁠ i ≠ j {\displaystyle i\neq j} ⁠, then the following generalization of the alternating property also holds: x 1 ∧ x 2 ∧ ⋯ ∧ x k = 0. {\displaystyle x_{1}\wedge x_{2}\wedge \cdots \wedge x_{k}=0.} Together with the distributive property of the exterior product, one further generalization is that a necessary and sufficient condition for { x 1 , x 2 , … , x k } {\displaystyle \{x_{1},x_{2},\dots ,x_{k}\}} to be a linearly dependent set of vectors is that x 1 ∧ x 2 ∧ ⋯ ∧ x k = 0. {\displaystyle x_{1}\wedge x_{2}\wedge \cdots \wedge x_{k}=0.} === Exterior power === The kth exterior power of ⁠ V {\displaystyle V} ⁠, denoted ⁠ ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} ⁠, is the vector subspace of ⁠ ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} ⁠ spanned by elements of the form x 1 ∧ x 2 ∧ ⋯ ∧ x k , x i ∈ V , i = 1 , 2 , … , k . {\displaystyle x_{1}\wedge x_{2}\wedge \cdots \wedge x_{k},\quad x_{i}\in V,i=1,2,\dots ,k.} If ⁠ α ∈ ⋀ k ( V ) {\displaystyle \alpha \in {\textstyle \bigwedge }^{\!k}(V)} ⁠, then α {\displaystyle \alpha } is said to be a k-vector. 
If, furthermore, α {\displaystyle \alpha } can be expressed as an exterior product of k {\displaystyle k} elements of ⁠ V {\displaystyle V} ⁠, then α {\displaystyle \alpha } is said to be decomposable (or simple, by some authors; or a blade, by others). Although decomposable ⁠ k {\displaystyle k} ⁠-vectors span ⁠ ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} ⁠, not every element of ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} is decomposable. For example, given ⁠ R 4 {\displaystyle \mathbf {R} ^{4}} ⁠ with a basis ⁠ { e 1 , e 2 , e 3 , e 4 } {\displaystyle \{e_{1},e_{2},e_{3},e_{4}\}} ⁠, the following 2-vector is not decomposable: α = e 1 ∧ e 2 + e 3 ∧ e 4 . {\displaystyle \alpha =e_{1}\wedge e_{2}+e_{3}\wedge e_{4}.} ==== Basis and dimension ==== If the dimension of V {\displaystyle V} is n {\displaystyle n} and { e 1 , … , e n } {\displaystyle \{e_{1},\dots ,e_{n}\}} is a basis for V {\displaystyle V} , then the set { e i 1 ∧ e i 2 ∧ ⋯ ∧ e i k | 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n } {\displaystyle \{\,e_{i_{1}}\wedge e_{i_{2}}\wedge \cdots \wedge e_{i_{k}}~{\big |}~~1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n\,\}} is a basis for ⁠ ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} ⁠. The reason is the following: given any exterior product of the form v 1 ∧ ⋯ ∧ v k , {\displaystyle v_{1}\wedge \cdots \wedge v_{k},} every vector v j {\displaystyle v_{j}} can be written as a linear combination of the basis vectors ⁠ e i {\displaystyle e_{i}} ⁠; using the bilinearity of the exterior product, this can be expanded to a linear combination of exterior products of those basis vectors. Any exterior product in which the same basis vector appears more than once is zero; any exterior product in which the basis vectors do not appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general, the resulting coefficients of the basis k-vectors can be computed as the minors of the matrix that describes the vectors v j {\displaystyle v_{j}} in terms of the basis ⁠ e i {\displaystyle e_{i}} ⁠. By counting the basis elements, the dimension of ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} is equal to a binomial coefficient: dim ⁡ ⋀ k ( V ) = ( n k ) , {\displaystyle \dim {\textstyle \bigwedge }^{\!k}(V)={\binom {n}{k}},} where ⁠ n {\displaystyle n} ⁠ is the dimension of the vectors, and ⁠ k {\displaystyle k} ⁠ is the number of vectors in the product. The binomial coefficient produces the correct result, even for exceptional cases; in particular, ⋀ k ( V ) = { 0 } {\displaystyle {\textstyle \bigwedge }^{\!k}(V)=\{0\}} for ⁠ k > n {\displaystyle k>n} ⁠. Any element of the exterior algebra can be written as a sum of k-vectors. Hence, as a vector space the exterior algebra is a direct sum ⋀ ( V ) = ⋀ 0 ( V ) ⊕ ⋀ 1 ( V ) ⊕ ⋀ 2 ( V ) ⊕ ⋯ ⊕ ⋀ n ( V ) {\displaystyle {\textstyle \bigwedge }(V)={\textstyle \bigwedge }^{\!0}(V)\oplus {\textstyle \bigwedge }^{\!1}(V)\oplus {\textstyle \bigwedge }^{\!2}(V)\oplus \cdots \oplus {\textstyle \bigwedge }^{\!n}(V)} (where, by convention, ⁠ ⋀ 0 ( V ) = K {\displaystyle {\textstyle \bigwedge }^{\!0}(V)=K} ⁠, the field underlying ⁠ V {\displaystyle V} ⁠, and ⁠ ⋀ 1 ( V ) = V {\displaystyle {\textstyle \bigwedge }^{\!1}(V)=V} ⁠), and therefore its dimension is equal to the sum of the binomial coefficients, which is ⁠ 2 n {\displaystyle 2^{n}} ⁠. 
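The basis count above is easy to reproduce: strictly increasing index tuples enumerate the basis k-blades, giving the binomial coefficient in each degree and 2ⁿ in total. A short sketch using only the Python standard library:

```python
from itertools import combinations
from math import comb

n = 4  # dim V
total = 0
for k in range(n + 1):
    # Basis k-blades e_{i1} ^ ... ^ e_{ik} with i1 < i2 < ... < ik.
    blades = list(combinations(range(1, n + 1), k))
    assert len(blades) == comb(n, k)  # dimension of the k-th exterior power
    total += len(blades)
    print(f"k = {k}: dimension {len(blades)}")

assert total == 2 ** n  # dimension of the whole exterior algebra
```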
==== Rank of a k-vector ==== If ⁠ α ∈ ⋀ k ( V ) {\displaystyle \alpha \in {\textstyle \bigwedge }^{\!k}(V)} ⁠, then it is possible to express α {\displaystyle \alpha } as a linear combination of decomposable k-vectors: α = α ( 1 ) + α ( 2 ) + ⋯ + α ( s ) {\displaystyle \alpha =\alpha ^{(1)}+\alpha ^{(2)}+\cdots +\alpha ^{(s)}} where each α ( i ) {\displaystyle \alpha ^{(i)}} is decomposable, say α ( i ) = α 1 ( i ) ∧ ⋯ ∧ α k ( i ) , i = 1 , 2 , … , s . {\displaystyle \alpha ^{(i)}=\alpha _{1}^{(i)}\wedge \cdots \wedge \alpha _{k}^{(i)},\quad i=1,2,\ldots ,s.} The rank of the k-vector α {\displaystyle \alpha } is the minimal number of decomposable k-vectors in such an expansion of ⁠ α {\displaystyle \alpha } ⁠. This is similar to the notion of tensor rank. Rank is particularly important in the study of 2-vectors (Sternberg 1964, §III.6) (Bryant et al. 1991). The rank of a 2-vector α {\displaystyle \alpha } can be identified with half the rank of the matrix of coefficients of α {\displaystyle \alpha } in a basis. Thus if e i {\displaystyle e_{i}} is a basis for ⁠ V {\displaystyle V} ⁠, then α {\displaystyle \alpha } can be expressed uniquely as α = ∑ i , j a i j e i ∧ e j {\displaystyle \alpha =\sum _{i,j}a_{ij}e_{i}\wedge e_{j}} where a i j = − a j i {\displaystyle a_{ij}=-a_{ji}} (the matrix of coefficients is skew-symmetric). The rank of the matrix a i j {\displaystyle a_{ij}} is therefore even, and is twice the rank of the form α {\displaystyle \alpha } . In characteristic 0, the 2-vector α {\displaystyle \alpha } has rank p {\displaystyle p} if and only if α ∧ ⋯ ∧ α ⏟ p ≠ 0 {\displaystyle {\underset {p}{\underbrace {\alpha \wedge \cdots \wedge \alpha } }}\neq 0\ } and α ∧ ⋯ ∧ α ⏟ p + 1 = 0. {\displaystyle \ {\underset {p+1}{\underbrace {\alpha \wedge \cdots \wedge \alpha } }}=0.} === Graded structure === The exterior product of a k-vector with a p-vector is a ( k + p ) {\displaystyle (k+p)} -vector, once again invoking bilinearity. As a consequence, the direct sum decomposition of the preceding section ⋀ ( V ) = ⋀ 0 ( V ) ⊕ ⋀ 1 ( V ) ⊕ ⋀ 2 ( V ) ⊕ ⋯ ⊕ ⋀ n ( V ) {\displaystyle {\textstyle \bigwedge }(V)={\textstyle \bigwedge }^{\!0}(V)\oplus {\textstyle \bigwedge }^{\!1}(V)\oplus {\textstyle \bigwedge }^{\!2}(V)\oplus \cdots \oplus {\textstyle \bigwedge }^{\!n}(V)} gives the exterior algebra the additional structure of a graded algebra, that is ⋀ k ( V ) ∧ ⋀ p ( V ) ⊂ ⋀ k + p ( V ) . {\displaystyle {\textstyle \bigwedge }^{\!k}(V)\wedge {\textstyle \bigwedge }^{\!p}(V)\subset {\textstyle \bigwedge }^{\!k+p}(V).} Moreover, if K is the base field, we have ⋀ 0 ( V ) = K {\displaystyle {\textstyle \bigwedge }^{\!0}(V)=K} and ⋀ 1 ( V ) = V . {\displaystyle {\textstyle \bigwedge }^{\!1}(V)=V.} The exterior product is graded anticommutative, meaning that if α ∈ ⋀ k ( V ) {\displaystyle \alpha \in {\textstyle \bigwedge }^{\!k}(V)} and ⁠ β ∈ ⋀ p ( V ) {\displaystyle \beta \in {\textstyle \bigwedge }^{\!p}(V)} ⁠, then α ∧ β = ( − 1 ) k p β ∧ α . {\displaystyle \alpha \wedge \beta =(-1)^{kp}\beta \wedge \alpha .} In addition to studying the graded structure on the exterior algebra, Bourbaki (1989) studies additional graded structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already carries its own gradation). === Universal property === Let V be a vector space over the field K. 
Informally, multiplication in ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} is performed by manipulating symbols and imposing a distributive law, an associative law, and using the identity v ∧ v = 0 {\displaystyle v\wedge v=0} for v ∈ V. Formally, ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} is the "most general" algebra in which these rules hold for the multiplication, in the sense that any unital associative K-algebra containing V with alternating multiplication on V must contain a homomorphic image of ⁠ ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} ⁠. In other words, the exterior algebra has the following universal property: given any unital associative K-algebra A and any K-linear map j : V → A such that j(v)j(v) = 0 for every v in V, there exists precisely one unital algebra homomorphism f : ⋀(V) → A such that j(v) = f(i(v)) for every v in V, where i denotes the natural inclusion of V in ⋀(V). To construct the most general algebra that contains V and whose multiplication is alternating on V, it is natural to start with the most general associative algebra that contains V, the tensor algebra T(V), and then enforce the alternating property by taking a suitable quotient. We thus take the two-sided ideal I in T(V) generated by all elements of the form v ⊗ v for v in V, and define ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} as the quotient ⋀ ( V ) = T ( V ) / I {\displaystyle {\textstyle \bigwedge }(V)=T(V)\,/\,I} (and use ∧ as the symbol for multiplication in ⁠ ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} ⁠). It is then straightforward to show that ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} contains V and satisfies the above universal property. As a consequence of this construction, the operation of assigning to a vector space V its exterior algebra ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} is a functor from the category of vector spaces to the category of algebras. Rather than defining ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} first and then identifying the exterior powers ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} as certain subspaces, one may alternatively define the spaces ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} first and then combine them to form the algebra ⁠ ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} ⁠. This approach is often used in differential geometry and is described in the next section. === Generalizations === Given a commutative ring R {\displaystyle R} and an R {\displaystyle R} -module ⁠ M {\displaystyle M} ⁠, we can define the exterior algebra ⋀ ( M ) {\displaystyle {\textstyle \bigwedge }(M)} just as above, as a suitable quotient of the tensor algebra ⁠ T ( M ) {\displaystyle \mathrm {T} (M)} ⁠. It will satisfy the analogous universal property. Many of the properties of ⋀ ( M ) {\displaystyle {\textstyle \bigwedge }(M)} also require that M {\displaystyle M} be a projective module. Where finite dimensionality is used, the properties further require that M {\displaystyle M} be finitely generated and projective. Generalizations to the most common situations can be found in Bourbaki (1989). Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of the exterior algebra of finitely generated projective modules, by the Serre–Swan theorem. More general exterior algebras can be defined for sheaves of modules. 
== Alternating tensor algebra == For a field of characteristic not 2, the exterior algebra of a vector space V {\displaystyle V} over K {\displaystyle K} can be canonically identified with the vector subspace of T ( V ) {\displaystyle \mathrm {T} (V)} that consists of antisymmetric tensors. For characteristic 0 (or higher than ⁠ dim ⁡ V {\displaystyle \dim V} ⁠), the vector space of k {\displaystyle k} -linear antisymmetric tensors is transversal to the ideal ⁠ I {\displaystyle I} ⁠, hence, a good choice to represent the quotient. But for nonzero characteristic, the vector space of k {\displaystyle k} -linear antisymmetric tensors might not be transversal to the ideal (actually, for ⁠ k ≥ char ⁡ K {\displaystyle k\geq \operatorname {char} K} ⁠, the vector space of k {\displaystyle k} -linear antisymmetric tensors is contained in I {\displaystyle I} ); nevertheless, transversal or not, a product can be defined on this space such that the resulting algebra is isomorphic to the exterior algebra: in the first case the natural choice for the product is just the quotient product (using the available projection); in the second case, this product must be slightly modified as given below (following Arnold's setting), but such that the algebra stays isomorphic with the exterior algebra, i.e. the quotient of T ( V ) {\displaystyle \mathrm {T} (V)} by the ideal I {\displaystyle I} generated by elements of the form ⁠ x ⊗ x {\displaystyle x\otimes x} ⁠. Of course, for characteristic ⁠ 0 {\displaystyle 0} ⁠ (or higher than the dimension of the vector space), one or the other definition of the product could be used, as the two algebras are isomorphic (see V. I. Arnold or Kobayashi-Nomizu). Let T r ( V ) {\displaystyle \mathrm {T} ^{r}(V)} be the space of homogeneous tensors of degree r {\displaystyle r} . This is spanned by decomposable tensors v 1 ⊗ ⋯ ⊗ v r , v i ∈ V . {\displaystyle v_{1}\otimes \cdots \otimes v_{r},\quad v_{i}\in V.} The antisymmetrization (or sometimes the skew-symmetrization) of a decomposable tensor is defined by A ( r ) ⁡ ( v 1 ⊗ ⋯ ⊗ v r ) = ∑ σ ∈ S r sgn ⁡ ( σ ) v σ ( 1 ) ⊗ ⋯ ⊗ v σ ( r ) {\displaystyle \operatorname {{\mathcal {A}}^{(r)}} (v_{1}\otimes \cdots \otimes v_{r})=\sum _{\sigma \in {\mathfrak {S}}_{r}}\operatorname {sgn} (\sigma )v_{\sigma (1)}\otimes \cdots \otimes v_{\sigma (r)}} and, when r ! ≠ 0 {\displaystyle r!\neq 0} (for a field of nonzero characteristic, r ! {\displaystyle r!} might be 0): Alt ( r ) ⁡ ( v 1 ⊗ ⋯ ⊗ v r ) = 1 r ! A ( r ) ⁡ ( v 1 ⊗ ⋯ ⊗ v r ) {\displaystyle \operatorname {Alt} ^{(r)}(v_{1}\otimes \cdots \otimes v_{r})={\frac {1}{r!}}\operatorname {{\mathcal {A}}^{(r)}} (v_{1}\otimes \cdots \otimes v_{r})} where the sum is taken over the symmetric group of permutations on the symbols ⁠ { 1 , … , r } {\displaystyle \{1,\dots ,r\}} ⁠. This extends by linearity and homogeneity to an operation, also denoted by A {\displaystyle {\mathcal {A}}} and A l t {\displaystyle {\rm {Alt}}} , on the full tensor algebra ⁠ T ( V ) {\displaystyle \mathrm {T} (V)} ⁠. Note that A ( r ) ⁡ A ( r ) = r ! A ( r ) . {\displaystyle \operatorname {{\mathcal {A}}^{(r)}} \operatorname {{\mathcal {A}}^{(r)}} =r!\operatorname {{\mathcal {A}}^{(r)}} .} Thus, when it is defined, Alt ( r ) {\displaystyle \operatorname {Alt} ^{(r)}} is the projection of the degree-r tensors onto the r-homogeneous alternating tensor subspace, which represents the exterior (quotient) algebra. 
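The antisymmetrization operator defined above can be implemented directly on coordinate tensors, with permutations acting by transposing axes; the identity 𝒜⁽ʳ⁾𝒜⁽ʳ⁾ = r!𝒜⁽ʳ⁾ then becomes a numerical assertion. A characteristic-zero sketch over the reals (numpy is this edit's assumption):

```python
import numpy as np
from itertools import permutations
from math import factorial

def antisymmetrize(t):
    """A^(r): sum of sgn(sigma) times the axes-permuted copies of t."""
    r = t.ndim
    out = np.zeros_like(t)
    for sigma in permutations(range(r)):
        # Sign of the permutation, computed by counting inversions.
        sgn = (-1) ** sum(sigma[i] > sigma[j]
                          for i in range(r) for j in range(i + 1, r))
        out += sgn * np.transpose(t, sigma)
    return out

rng = np.random.default_rng(0)
t = rng.standard_normal((3, 3, 3))  # a degree-3 tensor over R^3

A_t = antisymmetrize(t)
assert np.allclose(antisymmetrize(A_t), factorial(3) * A_t)  # A A = r! A
# Hence Alt = A / r! is a projection onto the alternating tensors.
Alt_t = A_t / factorial(3)
assert np.allclose(antisymmetrize(Alt_t) / factorial(3), Alt_t)
```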
On the other hand, the image A ( T ( V ) ) {\displaystyle {\mathcal {A}}(\mathrm {T} (V))} is always the alternating tensor graded subspace (not yet an algebra, as the product is not yet defined), denoted A ( V ) {\displaystyle A(V)} . This is a vector subspace of T ( V ) {\displaystyle \mathrm {T} (V)} , and it inherits the structure of a graded vector space from that on T ( V ) {\displaystyle \mathrm {T} (V)} . Moreover, the kernel of A ( r ) {\displaystyle {\mathcal {A}}^{(r)}} is precisely I ( r ) {\displaystyle I^{(r)}} , the homogeneous degree-r component of the ideal I {\displaystyle I} ; equivalently, the kernel of A {\displaystyle {\mathcal {A}}} is I {\displaystyle I} . When Alt {\displaystyle \operatorname {Alt} } is defined, A ( V ) {\displaystyle A(V)} carries an associative graded product ⊗ ^ {\displaystyle {\widehat {\otimes }}} , agreeing with the wedge product, defined by t ∧ s = t ⊗ ^ s = Alt ⁡ ( t ⊗ s ) . {\displaystyle t\wedge s=t~{\widehat {\otimes }}~s=\operatorname {Alt} (t\otimes s).} Assuming K {\displaystyle K} has characteristic 0, A ( V ) {\displaystyle A(V)} is a supplement of I {\displaystyle I} in T ( V ) {\displaystyle \mathrm {T} (V)} , and with the product given above there is a canonical isomorphism A ( V ) ≅ ⋀ ( V ) . {\displaystyle A(V)\cong {\textstyle \bigwedge }(V).} When the characteristic of the field is nonzero, A {\displaystyle {\mathcal {A}}} plays the role that A l t {\displaystyle {\rm {Alt}}} played before, but the product cannot be defined as above. In such a case, the isomorphism A ( V ) ≅ ⋀ ( V ) {\displaystyle A(V)\cong {\textstyle \bigwedge }(V)} still holds, even though A ( V ) {\displaystyle A(V)} is not a supplement of the ideal I {\displaystyle I} , but then the product must be modified as given below (the ∧ ˙ {\displaystyle {\dot {\wedge }}} product, in Arnold's setting). Finally, A ( V ) {\displaystyle A(V)} is always isomorphic with ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} , but the product can be chosen in two ways (or in only one, when the characteristic is nonzero). In fact, the product can be chosen in many ways, rescaled on homogeneous spaces by c ( r + p ) / c ( r ) c ( p ) {\displaystyle c(r+p)/c(r)c(p)} for an arbitrary sequence c ( r ) {\displaystyle c(r)} in the field, as long as the division makes sense (such rescalings keep the product associative, i.e. still define an algebra on A ( V ) {\displaystyle A(V)} ). Note also that the definition of the interior product must be changed accordingly, in order to preserve its skew-derivation property. === Index notation === Suppose that V has finite dimension n, and that a basis e1, ..., en of V is given. Then any alternating tensor t ∈ Ar(V) ⊂ Tr(V) can be written in index notation with the Einstein summation convention as t = t i 1 i 2 ⋯ i r e i 1 ⊗ e i 2 ⊗ ⋯ ⊗ e i r , {\displaystyle t=t^{i_{1}i_{2}\cdots i_{r}}\,{\mathbf {e} }_{i_{1}}\otimes {\mathbf {e} }_{i_{2}}\otimes \cdots \otimes {\mathbf {e} }_{i_{r}},} where ti1⋅⋅⋅ir is completely antisymmetric in its indices. The exterior product of two alternating tensors t and s of ranks r and p is given by t ⊗ ^ s = 1 ( r + p ) ! ∑ σ ∈ S r + p sgn ⁡ ( σ ) t i σ ( 1 ) ⋯ i σ ( r ) s i σ ( r + 1 ) ⋯ i σ ( r + p ) e i 1 ⊗ e i 2 ⊗ ⋯ ⊗ e i r + p . 
{\displaystyle t~{\widehat {\otimes }}~s={\frac {1}{(r+p)!}}\sum _{\sigma \in {\mathfrak {S}}_{r+p}}\operatorname {sgn} (\sigma )t^{i_{\sigma (1)}\cdots i_{\sigma (r)}}s^{i_{\sigma (r+1)}\cdots i_{\sigma (r+p)}}{\mathbf {e} }_{i_{1}}\otimes {\mathbf {e} }_{i_{2}}\otimes \cdots \otimes {\mathbf {e} }_{i_{r+p}}.} The components of this tensor are precisely the skew part of the components of the tensor product t ⊗ s, indicated by square brackets on the indices: ( t ⊗ ^ s ) i 1 ⋯ i r + p = t [ i 1 ⋯ i r s i r + 1 ⋯ i r + p ] . {\displaystyle (t~{\widehat {\otimes }}~s)^{i_{1}\cdots i_{r+p}}=t^{[i_{1}\cdots i_{r}}s^{i_{r+1}\cdots i_{r+p}]}.} The interior product may also be described in index notation as follows. Let t = t i 0 i 1 ⋯ i r − 1 {\displaystyle t=t^{i_{0}i_{1}\cdots i_{r-1}}} be an antisymmetric tensor of rank r {\displaystyle r} . Then, for α ∈ V∗, ι α t {\displaystyle \iota _{\alpha }t} is an alternating tensor of rank r − 1 {\displaystyle r-1} , given by ( ι α t ) i 1 ⋯ i r − 1 = r ∑ j = 0 n α j t j i 1 ⋯ i r − 1 , {\displaystyle (\iota _{\alpha }t)^{i_{1}\cdots i_{r-1}}=r\sum _{j=0}^{n}\alpha _{j}t^{ji_{1}\cdots i_{r-1}},} where n is the dimension of V. == Duality == === Alternating operators === Given two vector spaces V and X and a natural number k, an alternating operator from Vk to X is a multilinear map f : V k → X {\displaystyle f:V^{k}\to X} such that whenever v1, ..., vk are linearly dependent vectors in V, f ( v 1 , … , v k ) = 0. {\displaystyle f(v_{1},\ldots ,v_{k})=0.} The map w : V k → ⋀ k ( V ) , {\displaystyle w:V^{k}\to {\textstyle \bigwedge }^{\!k}(V),} which associates to k {\displaystyle k} vectors from V {\displaystyle V} their exterior product, i.e. their corresponding k {\displaystyle k} -vector, is also alternating. In fact, this map is the "most general" alternating operator defined on V k ; {\displaystyle V^{k};} given any other alternating operator f : V k → X , {\displaystyle f:V^{k}\rightarrow X,} there exists a unique linear map ϕ : ⋀ k ( V ) → X {\displaystyle \phi :{\textstyle \bigwedge }^{\!k}(V)\rightarrow X} with f = ϕ ∘ w . {\displaystyle f=\phi \circ w.} This universal property characterizes the space of alternating operators on V k {\displaystyle V^{k}} and can serve as its definition. === Alternating multilinear forms === The above discussion specializes to the case when X = K {\displaystyle X=K} , the base field. In this case an alternating multilinear function f : V k → K {\displaystyle f:V^{k}\to K} is called an alternating multilinear form. The set of all alternating multilinear forms is a vector space, as the sum of two such maps, or the product of such a map with a scalar, is again alternating. By the universal property of the exterior power, the space of alternating forms of degree k {\displaystyle k} on V {\displaystyle V} is naturally isomorphic with the dual vector space ( ⋀ k ( V ) ) ∗ {\displaystyle {\bigl (}{\textstyle \bigwedge }^{\!k}(V){\bigr )}^{*}} . If V {\displaystyle V} is finite-dimensional, then the latter is naturally isomorphic to ⋀ k ( V ∗ ) {\displaystyle {\textstyle \bigwedge }^{\!k}\left(V^{*}\right)} . In particular, if V {\displaystyle V} is n {\displaystyle n} -dimensional, the dimension of the space of alternating maps from V k {\displaystyle V^{k}} to K {\displaystyle K} is the binomial coefficient ( n k ) {\displaystyle \textstyle {\binom {n}{k}}} . Under this identification, the exterior product takes a concrete form: it produces a new anti-symmetric map from two given ones.
Suppose ω : Vk → K and η : Vm → K are two anti-symmetric maps. As in the case of tensor products of multilinear maps, the number of variables of their exterior product is the sum of the numbers of their variables. Depending on the choice of identification of elements of exterior power with multilinear forms, the exterior product is defined as ω ∧ η = Alt ⁡ ( ω ⊗ η ) {\displaystyle \omega \wedge \eta =\operatorname {Alt} (\omega \otimes \eta )} or as ω ∧ ˙ η = ( k + m ) ! k ! m ! Alt ⁡ ( ω ⊗ η ) , {\displaystyle \omega {\dot {\wedge }}\eta ={\frac {(k+m)!}{k!\,m!}}\operatorname {Alt} (\omega \otimes \eta ),} where, if the characteristic of the base field K {\displaystyle K} is 0, the alternation Alt of a multilinear map is defined to be the average of the sign-adjusted values over all the permutations of its variables: Alt ⁡ ( ω ) ( x 1 , … , x k ) = 1 k ! ∑ σ ∈ S k sgn ⁡ ( σ ) ω ( x σ ( 1 ) , … , x σ ( k ) ) . {\displaystyle \operatorname {Alt} (\omega )(x_{1},\ldots ,x_{k})={\frac {1}{k!}}\sum _{\sigma \in S_{k}}\operatorname {sgn} (\sigma )\,\omega (x_{\sigma (1)},\ldots ,x_{\sigma (k)}).} When the field K {\displaystyle K} has nonzero characteristic, an equivalent version of the second expression without any factorials or any constants is well-defined: ω ∧ ˙ η ( x 1 , … , x k + m ) = ∑ σ ∈ S h k , m sgn ⁡ ( σ ) ω ( x σ ( 1 ) , … , x σ ( k ) ) η ( x σ ( k + 1 ) , … , x σ ( k + m ) ) , {\displaystyle {\omega {\dot {\wedge }}\eta (x_{1},\ldots ,x_{k+m})}=\sum _{\sigma \in \mathrm {Sh} _{k,m}}\operatorname {sgn} (\sigma )\,\omega (x_{\sigma (1)},\ldots ,x_{\sigma (k)})\,\eta (x_{\sigma (k+1)},\ldots ,x_{\sigma (k+m)}),} where here Shk,m ⊂ Sk+m is the subset of (k, m) shuffles: permutations σ of the set {1, 2, ..., k + m} such that σ(1) < σ(2) < ⋯ < σ(k), and σ(k + 1) < σ(k + 2) < ... < σ(k + m). Though this might look very specific and fine-tuned, an equivalent raw version is to sum the above formula over permutations in left cosets of Sk+m / (Sk × Sm). === Interior product === Suppose that V {\displaystyle V} is finite-dimensional. If V ∗ {\displaystyle V^{*}} denotes the dual space to the vector space V {\displaystyle V} , then for each α ∈ V ∗ {\displaystyle \alpha \in V^{*}} , it is possible to define an antiderivation on the algebra ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} , ι α : ⋀ k ( V ) → ⋀ k − 1 ( V ) . {\displaystyle \iota _{\alpha }:{\textstyle \bigwedge }^{\!k}(V)\rightarrow {\textstyle \bigwedge }^{\!k-1}(V).} This antiderivation is called the interior product with α {\displaystyle \alpha } , or sometimes the insertion operator, or contraction by α {\displaystyle \alpha } . Suppose that w ∈ ⋀ k ( V ) {\displaystyle w\in {\textstyle \bigwedge }^{\!k}(V)} . Then w {\displaystyle w} is a multilinear mapping on V ∗ {\displaystyle V^{*}} with values in K {\displaystyle K} , so it is defined by its values on the k-fold Cartesian product V ∗ × V ∗ × ⋯ × V ∗ {\displaystyle V^{*}\times V^{*}\times \dots \times V^{*}} . If u1, u2, ..., uk−1 are k − 1 {\displaystyle k-1} elements of V ∗ {\displaystyle V^{*}} , then define ( ι α w ) ( u 1 , u 2 , … , u k − 1 ) = w ( α , u 1 , u 2 , … , u k − 1 ) . {\displaystyle (\iota _{\alpha }w)(u_{1},u_{2},\ldots ,u_{k-1})=w(\alpha ,u_{1},u_{2},\ldots ,u_{k-1}).} Additionally, let ι α f = 0 {\displaystyle \iota _{\alpha }f=0} whenever f {\displaystyle f} is a pure scalar (i.e., belonging to ⋀ 0 ( V ) {\displaystyle {\textstyle \bigwedge }^{\!0}(V)} ).
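As a small numerical check of this definition, one can represent a 2-vector w = v1 ∧ v2 by its action on pairs of covectors and verify that inserting α into the first slot reproduces ιαw = α(v1)v2 − α(v2)v1. The vectors below are made up for the check:

```python
import numpy as np

# w(alpha, beta) = det [[alpha(v1), alpha(v2)], [beta(v1), beta(v2)]]
# is the 2-vector v1 ^ v2 viewed as an alternating map on covectors.
v1, v2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 1.0])
alpha, beta = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])

w = lambda a, b: np.linalg.det(np.array([[a @ v1, a @ v2],
                                         [b @ v1, b @ v2]]))

iota_alpha_w = (alpha @ v1) * v2 - (alpha @ v2) * v1   # a 1-vector

# Evaluating the contracted 1-vector on beta agrees with w(alpha, beta):
print(np.isclose(beta @ iota_alpha_w, w(alpha, beta)))  # True
```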
==== Axiomatic characterization and properties ==== The interior product satisfies the following properties: For each ⁠ k {\displaystyle k} ⁠ and each ⁠ α ∈ V ∗ {\displaystyle \alpha \in V^{*}} ⁠ (where by convention Λ − 1 ( V ) = { 0 } {\displaystyle \Lambda ^{-1}(V)=\{0\}} ), ι α : ⋀ k ( V ) → ⋀ k − 1 ( V ) . {\displaystyle \iota _{\alpha }:{\textstyle \bigwedge }^{\!k}(V)\rightarrow {\textstyle \bigwedge }^{\!k-1}(V).} If v {\displaystyle v} is an element of V {\displaystyle V} (⁠ = ⋀ 1 ( V ) {\displaystyle ={\textstyle \bigwedge }^{\!1}(V)} ⁠), then ⁠ ι α v = α ( v ) {\displaystyle \iota _{\alpha }v=\alpha (v)} ⁠ is the dual pairing between elements of V {\displaystyle V} and elements of ⁠ V ∗ {\displaystyle V^{*}} ⁠. For each ⁠ α ∈ V ∗ {\displaystyle \alpha \in V^{*}} ⁠, ι α {\displaystyle \iota _{\alpha }} is a graded derivation of degree −1: ι α ( a ∧ b ) = ( ι α a ) ∧ b + ( − 1 ) deg ⁡ a a ∧ ( ι α b ) . {\displaystyle \iota _{\alpha }(a\wedge b)=(\iota _{\alpha }a)\wedge b+(-1)^{\deg a}a\wedge (\iota _{\alpha }b).} These three properties are sufficient to characterize the interior product as well as define it in the general infinite-dimensional case. Further properties of the interior product include: ι α ∘ ι α = 0. {\displaystyle \iota _{\alpha }\circ \iota _{\alpha }=0.} ι α ∘ ι β = − ι β ∘ ι α . {\displaystyle \iota _{\alpha }\circ \iota _{\beta }=-\iota _{\beta }\circ \iota _{\alpha }.} === Hodge duality === Suppose that V {\displaystyle V} has finite dimension ⁠ n {\displaystyle n} ⁠. Then the interior product induces a canonical isomorphism of vector spaces ⋀ k ( V ∗ ) ⊗ ⋀ n ( V ) → ⋀ n − k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V^{*})\otimes {\textstyle \bigwedge }^{\!n}(V)\to {\textstyle \bigwedge }^{\!n-k}(V)} by the recursive definition ι α ∧ β = ι β ∘ ι α . {\displaystyle \iota _{\alpha \wedge \beta }=\iota _{\beta }\circ \iota _{\alpha }.} In the geometrical setting, a non-zero element of the top exterior power ⋀ n ( V ) {\displaystyle {\textstyle \bigwedge }^{\!n}(V)} (which is a one-dimensional vector space) is sometimes called a volume form (or orientation form, although this term may sometimes lead to ambiguity). The name orientation form comes from the fact that a choice of preferred top element determines an orientation of the whole exterior algebra, since it is tantamount to fixing an ordered basis of the vector space. Relative to the preferred volume form ⁠ σ {\displaystyle \sigma } ⁠, the isomorphism is given explicitly by ⋀ k ( V ∗ ) → ⋀ n − k ( V ) : α ↦ ι α σ . {\displaystyle {\textstyle \bigwedge }^{\!k}(V^{*})\to {\textstyle \bigwedge }^{\!n-k}(V):\alpha \mapsto \iota _{\alpha }\sigma .} If, in addition to a volume form, the vector space V is equipped with an inner product identifying V {\displaystyle V} with ⁠ V ∗ {\displaystyle V^{*}} ⁠, then the resulting isomorphism is called the Hodge star operator, which maps an element to its Hodge dual: ⋆ : ⋀ k ( V ) → ⋀ n − k ( V ) . {\displaystyle \star :{\textstyle \bigwedge }^{\!k}(V)\rightarrow {\textstyle \bigwedge }^{\!n-k}(V).} The composition of ⋆ {\displaystyle \star } with itself maps ⋀ k ( V ) → ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)\to {\textstyle \bigwedge }^{\!k}(V)} and is always a scalar multiple of the identity map. In most applications, the volume form is compatible with the inner product in the sense that it is an exterior product of an orthonormal basis of ⁠ V {\displaystyle V} ⁠. 
In this case, ⋆ ∘ ⋆ = ( − 1 ) k ( n − k ) + q i d {\displaystyle \star \circ \star =(-1)^{k(n-k)+q}\,\mathrm {id} } on ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} , where id is the identity mapping, and the inner product has metric signature (p, q), with p pluses and q minuses. === Inner product === For V {\displaystyle V} a finite-dimensional space, an inner product (or a pseudo-Euclidean inner product) on V {\displaystyle V} defines an isomorphism of V {\displaystyle V} with V ∗ {\displaystyle V^{*}} , and so also an isomorphism of ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} with ( ⋀ k V ) ∗ {\displaystyle {\bigl (}{\textstyle \bigwedge }^{\!k}V{\bigr )}^{*}} . The pairing between these two spaces also takes the form of an inner product. On decomposable k {\displaystyle k} -vectors, ⟨ v 1 ∧ ⋯ ∧ v k , w 1 ∧ ⋯ ∧ w k ⟩ = det ( ⟨ v i , w j ⟩ ) , {\displaystyle \left\langle v_{1}\wedge \cdots \wedge v_{k},w_{1}\wedge \cdots \wedge w_{k}\right\rangle =\det {\bigl (}\langle v_{i},w_{j}\rangle {\bigr )},} the determinant of the matrix of inner products. In the special case vi = wi, the inner product is the squared norm of the k-vector, given by the determinant of the Gramian matrix (⟨vi, vj⟩). This is then extended bilinearly (or sesquilinearly in the complex case) to a non-degenerate inner product on ⋀ k ( V ) . {\displaystyle {\textstyle \bigwedge }^{\!k}(V).} If ei, i = 1, 2, ..., n, form an orthonormal basis of V {\displaystyle V} , then the vectors of the form e i 1 ∧ ⋯ ∧ e i k , i 1 < ⋯ < i k , {\displaystyle e_{i_{1}}\wedge \cdots \wedge e_{i_{k}},\quad i_{1}<\cdots <i_{k},} constitute an orthonormal basis for ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} , a statement equivalent to the Cauchy–Binet formula. With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically, for v ∈ ⋀ k − 1 ( V ) {\displaystyle \mathbf {v} \in {\textstyle \bigwedge }^{\!k-1}(V)} , w ∈ ⋀ k ( V ) {\displaystyle \mathbf {w} \in {\textstyle \bigwedge }^{\!k}(V)} , and x ∈ V {\displaystyle x\in V} , ⟨ x ∧ v , w ⟩ = ⟨ v , ι x ♭ w ⟩ {\displaystyle \langle x\wedge \mathbf {v} ,\mathbf {w} \rangle =\langle \mathbf {v} ,\iota _{x^{\flat }}\mathbf {w} \rangle } where x♭ ∈ V∗ is the musical isomorphism, the linear functional defined by x ♭ ( y ) = ⟨ x , y ⟩ {\displaystyle x^{\flat }(y)=\langle x,y\rangle } for all y ∈ V {\displaystyle y\in V} . This property completely characterizes the inner product on the exterior algebra.
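As a quick numerical illustration of the Gram-determinant formula: in R3 the squared norm of a 2-blade u ∧ v equals the squared area |u × v|2 of the parallelogram spanned by u and v (Lagrange's identity). The vectors below are arbitrary:

```python
import numpy as np

u, v = np.random.default_rng(1).random(3), np.random.default_rng(2).random(3)

# <u ^ v, u ^ v> = det of the 2x2 Gram matrix of inner products
gram = np.array([[u @ u, u @ v],
                 [v @ u, v @ v]])
lhs = np.linalg.det(gram)

# In R^3 this is also |u x v|^2, the squared area of the parallelogram.
rhs = np.linalg.norm(np.cross(u, v)) ** 2
print(np.isclose(lhs, rhs))   # True
```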
Indeed, more generally for v ∈ ⋀ k − l ( V ) {\displaystyle \mathbf {v} \in {\textstyle \bigwedge }^{\!k-l}(V)} , w ∈ ⋀ k ( V ) {\displaystyle \mathbf {w} \in {\textstyle \bigwedge }^{\!k}(V)} , and x ∈ ⋀ l ( V ) {\displaystyle \mathbf {x} \in {\textstyle \bigwedge }^{\!l}(V)} , iteration of the above adjoint properties gives ⟨ x ∧ v , w ⟩ = ⟨ v , ι x ♭ w ⟩ {\displaystyle \langle \mathbf {x} \wedge \mathbf {v} ,\mathbf {w} \rangle =\langle \mathbf {v} ,\iota _{\mathbf {x} ^{\flat }}\mathbf {w} \rangle } where now x ♭ ∈ ⋀ l ( V ∗ ) ≃ ( ⋀ l ( V ) ) ∗ {\displaystyle \mathbf {x} ^{\flat }\in {\textstyle \bigwedge }^{\!l}\left(V^{*}\right)\simeq {\bigl (}{\textstyle \bigwedge }^{\!l}(V){\bigr )}^{*}} is the dual l {\displaystyle l} -vector defined by x ♭ ( y ) = ⟨ x , y ⟩ {\displaystyle \mathbf {x} ^{\flat }(\mathbf {y} )=\langle \mathbf {x} ,\mathbf {y} \rangle } for all y ∈ ⋀ l ( V ) {\displaystyle \mathbf {y} \in {\textstyle \bigwedge }^{\!l}(V)} . === Bialgebra structure === There is a correspondence between the graded dual of the graded algebra ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} and alternating multilinear forms on V {\displaystyle V} . The exterior algebra (as well as the symmetric algebra) inherits a bialgebra structure, and, indeed, a Hopf algebra structure, from the tensor algebra. See the article on tensor algebras for a detailed treatment of the topic. The exterior product of multilinear forms defined above is dual to a coproduct defined on ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} , giving the structure of a coalgebra. The coproduct is a linear function Δ : ⋀ ( V ) → ⋀ ( V ) ⊗ ⋀ ( V ) {\displaystyle \Delta :{\textstyle \bigwedge }(V)\to {\textstyle \bigwedge }(V)\otimes {\textstyle \bigwedge }(V)} , which is given by Δ ( v ) = 1 ⊗ v + v ⊗ 1 {\displaystyle \Delta (v)=1\otimes v+v\otimes 1} on elements v ∈ V {\displaystyle v\in V} . The symbol 1 {\displaystyle 1} stands for the unit element of the field K {\displaystyle K} . Recall that K ≃ ⋀ 0 ( V ) ⊆ ⋀ ( V ) {\displaystyle K\simeq {\textstyle \bigwedge }^{\!0}(V)\subseteq {\textstyle \bigwedge }(V)} , so that the above really does lie in ⋀ ( V ) ⊗ ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)\otimes {\textstyle \bigwedge }(V)} . This definition of the coproduct is lifted to the full space ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} by (linear) homomorphism. The correct form of this homomorphism is not what one might naively write, but has to be the one carefully defined in the coalgebra article. In this case, one obtains Δ ( v ∧ w ) = 1 ⊗ ( v ∧ w ) + v ⊗ w − w ⊗ v + ( v ∧ w ) ⊗ 1. {\displaystyle \Delta (v\wedge w)=1\otimes (v\wedge w)+v\otimes w-w\otimes v+(v\wedge w)\otimes 1.} Expanding this out in detail, one obtains the following expression on decomposable elements: Δ ( x 1 ∧ ⋯ ∧ x k ) = ∑ p = 0 k ∑ σ ∈ S h ( p , k − p ) sgn ⁡ ( σ ) ( x σ ( 1 ) ∧ ⋯ ∧ x σ ( p ) ) ⊗ ( x σ ( p + 1 ) ∧ ⋯ ∧ x σ ( k ) ) . {\displaystyle \Delta (x_{1}\wedge \cdots \wedge x_{k})=\sum _{p=0}^{k}\;\sum _{\sigma \in Sh(p,k-p)}\;\operatorname {sgn} (\sigma )(x_{\sigma (1)}\wedge \cdots \wedge x_{\sigma (p)})\otimes (x_{\sigma (p+1)}\wedge \cdots \wedge x_{\sigma (k)}).} where the second summation is taken over all (p, k−p)-shuffles. By convention, one takes Sh(k,0) and Sh(0,k) to equal {id: {1, ..., k} → {1, ..., k}}.
It is also convenient to take the pure wedge products v σ ( 1 ) ∧ ⋯ ∧ v σ ( p ) {\displaystyle v_{\sigma (1)}\wedge \dots \wedge v_{\sigma (p)}} and v σ ( p + 1 ) ∧ ⋯ ∧ v σ ( k ) {\displaystyle v_{\sigma (p+1)}\wedge \dots \wedge v_{\sigma (k)}} to equal 1 for p = 0 and p = k, respectively (the empty product in ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} ). The shuffle follows directly from the first axiom of a coalgebra: the relative order of the elements x k {\displaystyle x_{k}} is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right. Observe that the coproduct preserves the grading of the algebra. Extending to the full space ⋀ ( V ) , {\textstyle {\textstyle \bigwedge }(V),} one has Δ : ⋀ k ( V ) → ⨁ p = 0 k ⋀ p ( V ) ⊗ ⋀ k − p ( V ) {\displaystyle \Delta :{\textstyle \bigwedge }^{k}(V)\to \bigoplus _{p=0}^{k}{\textstyle \bigwedge }^{p}(V)\otimes {\textstyle \bigwedge }^{k-p}(V)} The tensor symbol ⊗ used in this section should be understood with some caution: it is not the same tensor symbol as the one being used in the definition of the alternating product. Intuitively, it is perhaps easiest to think of it as just another, but different, tensor product: it is still (bi-)linear, as tensor products should be, but it is the product that is appropriate for the definition of a bialgebra, that is, for creating the object ⋀ ( V ) ⊗ ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)\otimes {\textstyle \bigwedge }(V)} . Any lingering doubt can be shaken by pondering the equalities (1 ⊗ v) ∧ (1 ⊗ w) = 1 ⊗ (v ∧ w) and (v ⊗ 1) ∧ (1 ⊗ w) = v ⊗ w, which follow from the definition of the coalgebra, as opposed to naive manipulations involving the tensor and wedge symbols. This distinction is developed in greater detail in the article on tensor algebras. Here, there is much less of a problem, in that the alternating product ∧ {\displaystyle \wedge } clearly corresponds to multiplication in the exterior algebra, leaving the symbol ⊗ {\displaystyle \otimes } free for use in the definition of the bialgebra. In practice, this presents no particular problem, as long as one avoids the fatal trap of replacing alternating sums of ⊗ {\displaystyle \otimes } by the wedge symbol, with one exception. One can construct an alternating product from ⊗ {\displaystyle \otimes } , with the understanding that it works in a different space. Immediately below, an example is given: the alternating product for the dual space can be given in terms of the coproduct. The construction of the bialgebra here parallels the construction in the tensor algebra article almost exactly, except for the need to correctly track the alternating signs for the exterior algebra. In terms of the coproduct, the exterior product on the dual space is just the graded dual of the coproduct: ( α ∧ β ) ( x 1 ∧ ⋯ ∧ x k ) = ( α ⊗ β ) ( Δ ( x 1 ∧ ⋯ ∧ x k ) ) {\displaystyle (\alpha \wedge \beta )(x_{1}\wedge \cdots \wedge x_{k})=(\alpha \otimes \beta )\left(\Delta (x_{1}\wedge \cdots \wedge x_{k})\right)} where the tensor product on the right-hand side is of multilinear maps (extended by zero on elements of incompatible homogeneous degree: more precisely, α ∧ β = ε ∘ (α ⊗ β) ∘ Δ, where ε {\displaystyle \varepsilon } is the counit, as defined presently). The counit is the homomorphism ε : ⋀ ( V ) → K {\displaystyle \varepsilon :{\textstyle \bigwedge }(V)\to K} that returns the 0-graded component of its argument.
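As a sanity check of this duality formula in the smallest nontrivial case, one can evaluate (α ∧ β)(x1 ∧ x2) both directly and through the coproduct; extension by zero kills the two terms of Δ(x1 ∧ x2) of incompatible degree. The data below are made up for the check:

```python
import numpy as np

x1, x2 = np.array([2.0, 1.0]), np.array([0.5, 3.0])
alpha  = lambda v: v[0]            # coordinate 1-forms on R^2
beta   = lambda v: v[1]

# Degree-(1,1) part of the coproduct of x1 ^ x2:  x1 (x) x2 - x2 (x) x1.
# The terms 1 (x) (x1 ^ x2) and (x1 ^ x2) (x) 1 are killed by
# alpha (x) beta, which vanishes on arguments of the wrong degree.
via_coproduct = alpha(x1) * beta(x2) - alpha(x2) * beta(x1)

# Direct evaluation of the wedge of the two forms on (x1, x2),
# using the determinant pairing from the alternating-forms section:
via_forms = np.linalg.det(np.array([[alpha(x1), alpha(x2)],
                                    [beta(x1),  beta(x2)]]))
print(np.isclose(via_coproduct, via_forms))   # True
```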
The coproduct and counit, along with the exterior product, define the structure of a bialgebra on the exterior algebra. With an antipode defined on homogeneous elements by S ( x ) = ( − 1 ) ( deg x + 1 2 ) x {\displaystyle S(x)=(-1)^{\binom {{\text{deg}}\,x\,+1}{2}}x} , the exterior algebra is furthermore a Hopf algebra. == Functoriality == Suppose that V {\displaystyle V} and W {\displaystyle W} are a pair of vector spaces and f : V → W {\displaystyle f:V\to W} is a linear map. Then, by the universal property, there exists a unique homomorphism of graded algebras ⋀ ( f ) : ⋀ ( V ) → ⋀ ( W ) {\displaystyle {\textstyle \bigwedge }(f):{\textstyle \bigwedge }(V)\rightarrow {\textstyle \bigwedge }(W)} such that ⋀ ( f ) | ⋀ 1 ( V ) = f : V = ⋀ 1 ( V ) → W = ⋀ 1 ( W ) . {\displaystyle {\textstyle \bigwedge }(f)\left|_{{\textstyle \bigwedge }^{\!1}(V)}\right.=f:V={\textstyle \bigwedge }^{\!1}(V)\rightarrow W={\textstyle \bigwedge }^{\!1}(W).} In particular, ⋀ ( f ) {\displaystyle {\textstyle \bigwedge }(f)} preserves homogeneous degree. The k-graded components of ⋀ ( f ) {\textstyle \bigwedge \left(f\right)} are given on decomposable elements by ⋀ ( f ) ( x 1 ∧ ⋯ ∧ x k ) = f ( x 1 ) ∧ ⋯ ∧ f ( x k ) . {\displaystyle {\textstyle \bigwedge }(f)(x_{1}\wedge \cdots \wedge x_{k})=f(x_{1})\wedge \cdots \wedge f(x_{k}).} Let ⋀ k ( f ) = ⋀ ( f ) | ⋀ k ( V ) : ⋀ k ( V ) → ⋀ k ( W ) . {\displaystyle {\textstyle \bigwedge }^{\!k}(f)={\textstyle \bigwedge }(f)\left|_{{\textstyle \bigwedge }^{\!k}(V)}\right.:{\textstyle \bigwedge }^{\!k}(V)\rightarrow {\textstyle \bigwedge }^{\!k}(W).} Relative to bases of V {\displaystyle V} and W {\displaystyle W} , the components of the transformation ⋀ k ( f ) {\displaystyle {\textstyle \bigwedge }^{\!k}(f)} form the matrix of k × k {\displaystyle k\times k} minors of f {\displaystyle f} . In particular, if V = W {\displaystyle V=W} and V {\displaystyle V} is of finite dimension n {\displaystyle n} , then ⋀ n ( f ) {\displaystyle {\textstyle \bigwedge }^{\!n}(f)} is a mapping of a one-dimensional vector space ⋀ n ( V ) {\displaystyle {\textstyle \bigwedge }^{\!n}(V)} to itself, and is therefore given by a scalar: the determinant of f {\displaystyle f} . === Exactness === If 0 → U → V → W → 0 {\displaystyle 0\to U\to V\to W\to 0} is a short exact sequence of vector spaces, then 0 → ⋀ 1 ( U ) ∧ ⋀ ( V ) → ⋀ ( V ) → ⋀ ( W ) → 0 {\displaystyle 0\to {\textstyle \bigwedge }^{\!1}(U)\wedge {\textstyle \bigwedge }(V)\to {\textstyle \bigwedge }(V)\to {\textstyle \bigwedge }(W)\to 0} is an exact sequence of graded vector spaces, as is 0 → ⋀ ( U ) → ⋀ ( V ) . {\displaystyle 0\to {\textstyle \bigwedge }(U)\to {\textstyle \bigwedge }(V).} === Direct sums === In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras: ⋀ ( V ⊕ W ) ≅ ⋀ ( V ) ⊗ ⋀ ( W ) . {\displaystyle {\textstyle \bigwedge }(V\oplus W)\cong {\textstyle \bigwedge }(V)\otimes {\textstyle \bigwedge }(W).} This is a graded isomorphism; i.e., ⋀ k ( V ⊕ W ) ≅ ⨁ p + q = k ⋀ p ( V ) ⊗ ⋀ q ( W ) . 
{\displaystyle {\textstyle \bigwedge }^{\!k}(V\oplus W)\cong \bigoplus _{p+q=k}{\textstyle \bigwedge }^{\!p}(V)\otimes {\textstyle \bigwedge }^{\!q}(W).} In greater generality, for a short exact sequence of vector spaces 0 → U → f V → g W → 0 , {\textstyle 0\to U\mathrel {\overset {f}{\to }} V\mathrel {\overset {g}{\to }} W\to 0,} there is a natural filtration 0 = F 0 ⊆ F 1 ⊆ ⋯ ⊆ F k ⊆ F k + 1 = ⋀ k ( V ) {\displaystyle 0=F^{0}\subseteq F^{1}\subseteq \cdots \subseteq F^{k}\subseteq F^{k+1}={\textstyle \bigwedge }^{\!k}(V)} where F p {\displaystyle F^{p}} for p ≥ 1 {\displaystyle p\geq 1} is spanned by elements of the form u 1 ∧ … ∧ u k + 1 − p ∧ v 1 ∧ … ∧ v p − 1 {\displaystyle u_{1}\wedge \ldots \wedge u_{k+1-p}\wedge v_{1}\wedge \ldots \wedge v_{p-1}} for u i ∈ U {\displaystyle u_{i}\in U} and v i ∈ V . {\displaystyle v_{i}\in V.} The corresponding quotients admit a natural isomorphism F p + 1 / F p ≅ ⋀ k − p ( U ) ⊗ ⋀ p ( W ) {\displaystyle F^{p+1}/F^{p}\cong {\textstyle \bigwedge }^{\!k-p}(U)\otimes {\textstyle \bigwedge }^{\!p}(W)} given by u 1 ∧ … ∧ u k − p ∧ v 1 ∧ … ∧ v p ↦ u 1 ∧ … ∧ u k − p ⊗ g ( v 1 ) ∧ … ∧ g ( v p ) . {\displaystyle u_{1}\wedge \ldots \wedge u_{k-p}\wedge v_{1}\wedge \ldots \wedge v_{p}\mapsto u_{1}\wedge \ldots \wedge u_{k-p}\otimes g(v_{1})\wedge \ldots \wedge g(v_{p}).} In particular, if U is 1-dimensional then 0 → U ⊗ ⋀ k − 1 ( W ) → ⋀ k ( V ) → ⋀ k ( W ) → 0 {\displaystyle 0\to U\otimes {\textstyle \bigwedge }^{\!k-1}(W)\to {\textstyle \bigwedge }^{\!k}(V)\to {\textstyle \bigwedge }^{\!k}(W)\to 0} is exact, and if W is 1-dimensional then 0 → ⋀ k ( U ) → ⋀ k ( V ) → ⋀ k − 1 ( U ) ⊗ W → 0 {\displaystyle 0\to {\textstyle \bigwedge }^{k}(U)\to {\textstyle \bigwedge }^{\!k}(V)\to {\textstyle \bigwedge }^{\!k-1}(U)\otimes W\to 0} is exact. == Applications == === Oriented volume in affine space === The natural setting for (oriented) k {\displaystyle k} -dimensional volume and exterior algebra is affine space. This is also the intimate connection between exterior algebra and differential forms, as to integrate we need a 'differential' object to measure infinitesimal volume. If A {\displaystyle \mathbb {A} } is an affine space over the vector space V {\displaystyle V} , and we are given a collection of k + 1 {\displaystyle k+1} ordered points A 0 , A 1 , . . . , A k {\displaystyle A_{0},A_{1},...,A_{k}} (a simplex), we can define its oriented k {\displaystyle k} -dimensional volume as the exterior product of vectors A 0 A 1 ∧ A 0 A 2 ∧ ⋯ ∧ A 0 A k = {\displaystyle A_{0}A_{1}\wedge A_{0}A_{2}\wedge \cdots \wedge A_{0}A_{k}={}} ( − 1 ) j A j A 0 ∧ A j A 1 ∧ A j A 2 ∧ ⋯ ∧ A j A k {\displaystyle (-1)^{j}A_{j}A_{0}\wedge A_{j}A_{1}\wedge A_{j}A_{2}\wedge \cdots \wedge A_{j}A_{k}} (where the vanishing factor A j A j {\displaystyle A_{j}A_{j}} is omitted, and using concatenation P Q {\displaystyle PQ} to mean the displacement vector from point P {\displaystyle P} to Q {\displaystyle Q} ); if the order of the points is changed, the oriented volume changes by a sign, according to the parity of the permutation. In n {\displaystyle n} -dimensional space, the volume of any n {\displaystyle n} -dimensional simplex is a scalar multiple of any other. The sum of the ( k − 1 ) {\displaystyle (k-1)} -dimensional oriented areas of the boundary simplexes of a k {\displaystyle k} -dimensional simplex is zero, as for the sum of vectors around a triangle or the oriented triangles bounding the tetrahedron in the previous section.
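For a full-dimensional simplex this oriented volume is computed by a determinant, and reordering the points flips its sign, as described above. A minimal sketch (the helper name is illustrative):

```python
import numpy as np
from math import factorial

def oriented_volume(points):
    """Oriented n-volume of the simplex on n+1 points of R^n:
    det of the edge vectors A0A1, ..., A0An, divided by n!."""
    p = np.asarray(points, dtype=float)
    edges = p[1:] - p[0]                    # rows A0A1, ..., A0An
    return np.linalg.det(edges) / factorial(len(p) - 1)

tri = [(0, 0), (1, 0), (0, 1)]
print(oriented_volume(tri))                        # 0.5
print(oriented_volume([tri[0], tri[2], tri[1]]))   # -0.5: a transposition flips the sign
```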
The vector space structure on ⋀ ( V ) {\displaystyle {\textstyle \bigwedge }(V)} generalises addition of vectors in V {\displaystyle V} : we have ( u 1 + u 2 ) ∧ v = u 1 ∧ v + u 2 ∧ v {\displaystyle (u_{1}+u_{2})\wedge v=u_{1}\wedge v+u_{2}\wedge v} and similarly a k-blade v 1 ∧ ⋯ ∧ v k {\displaystyle v_{1}\wedge \dots \wedge v_{k}} is linear in each factor. === Linear algebra === In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the determinant and the minors of a matrix. For instance, it is well known that the determinant of a square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix (with a sign to track orientation). This suggests that the determinant can be defined in terms of the exterior product of the column vectors. Likewise, the k × k minors of a matrix can be defined by looking at the exterior products of column vectors chosen k at a time. These ideas can be extended not just to matrices but to linear transformations as well: the determinant of a linear transformation is the factor by which it scales the oriented volume of any given reference parallelotope. So the determinant of a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the transformation. === Physics === In physics, many quantities are naturally represented by alternating operators. For example, if the motion of a charged particle is described by velocity and acceleration vectors in four-dimensional spacetime, then normalization of the velocity vector requires that the electromagnetic force be an alternating operator on the velocity. Its six degrees of freedom are identified with the electric and magnetic fields. === Electromagnetic field === In Einstein's theories of relativity, the electromagnetic field is generally given as a differential 2-form F = d A {\displaystyle F=dA} in 4-space or as the equivalent alternating tensor field F i j = A [ i , j ] = A [ i ; j ] , {\displaystyle F_{ij}=A_{[i,j]}=A_{[i;j]},} the electromagnetic tensor. Then d F = d d A = 0 {\displaystyle dF=ddA=0} or the equivalent Bianchi identity F [ i j , k ] = F [ i j ; k ] = 0. {\displaystyle F_{[ij,k]}=F_{[ij;k]}=0.} None of this requires a metric. Adding the Lorentz metric and an orientation provides the Hodge star operator ⋆ {\displaystyle \star } and thus makes it possible to define J = ⋆ d ⋆ F {\displaystyle J={\star }d{\star }F} or the equivalent tensor divergence J i = F , j i j = F ; j i j {\displaystyle J^{i}=F_{,j}^{ij}=F_{;j}^{ij}} where F i j = g i k g j l F k l . {\displaystyle F^{ij}=g^{ik}g^{jl}F_{kl}.} === Linear geometry === The decomposable k-vectors have geometric interpretations: the bivector u ∧ v {\displaystyle u\wedge v} represents the plane spanned by the vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides u {\displaystyle u} and v {\displaystyle v} . Analogously, the 3-vector u ∧ v ∧ w {\displaystyle u\wedge v\wedge w} represents the spanned 3-space weighted by the volume of the oriented parallelepiped with edges u {\displaystyle u} , v {\displaystyle v} , and w {\displaystyle w} . === Projective geometry === Decomposable k-vectors in ⋀ k ( V ) {\displaystyle {\textstyle \bigwedge }^{\!k}(V)} correspond to weighted k-dimensional linear subspaces of V {\displaystyle V} .
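As a concrete sketch with made-up vectors: the coordinates of a decomposable 2-vector in R4 are the 2 × 2 minors of the 4 × 2 matrix of its factors, and they satisfy the quadratic relation that the next paragraph names:

```python
import numpy as np
from itertools import combinations

# Two vectors spanning a 2-plane in R^4; the coordinates of their wedge
# are the 2x2 minors of the 4x2 matrix having them as columns.
u = np.array([1.0, 2.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 3.0, 1.0])
M = np.column_stack([u, v])

p = {(i, j): np.linalg.det(M[[i, j], :])        # coordinate p_ij
     for i, j in combinations(range(4), 2)}

# Decomposability in R^4 is detected by the single quadratic relation
#   p01*p23 - p02*p13 + p03*p12 = 0
rel = (p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)]
       + p[(0, 3)] * p[(1, 2)])
print(np.isclose(rel, 0.0))   # True
```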
In particular, the Grassmannian of k-dimensional subspaces of V {\displaystyle V} , denoted Gr k ⁡ ( V ) {\displaystyle \operatorname {Gr} _{k}(V)} , can be naturally identified with an algebraic subvariety of the projective space P ( ⋀ k ( V ) ) {\textstyle \mathbf {P} {\bigl (}{\textstyle \bigwedge }^{\!k}(V){\bigr )}} . This is called the Plücker embedding, and the image of the embedding can be characterized by the Plücker relations. === Differential geometry === The exterior algebra has notable applications in differential geometry, where it is used to define differential forms. Differential forms are mathematical objects that evaluate the length of vectors, areas of parallelograms, and volumes of higher-dimensional bodies, so they can be integrated over curves, surfaces and higher dimensional manifolds in a way that generalizes the line integrals and surface integrals from calculus. A differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the point. Equivalently, a differential form of degree k is a linear functional on the kth exterior power of the tangent space. As a consequence, the exterior product of multilinear forms defines a natural exterior product for differential forms. Differential forms play a major role in diverse areas of differential geometry. An alternate approach defines differential forms in terms of germs of functions. In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a differential graded algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior derivative, is a cochain complex whose cohomology is called the de Rham cohomology of the underlying manifold and plays a vital role in the algebraic topology of differentiable manifolds. === Representation theory === In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible representations of the general linear group (see Fundamental representation). === Superspace === The exterior algebra over the complex numbers is the archetypal example of a superalgebra, which plays a fundamental role in physical theories pertaining to fermions and supersymmetry. A single element of the exterior algebra is called a supernumber or Grassmann number. The exterior algebra itself is then a one-dimensional superspace: it is the set of all the points in the exterior algebra. The topology on this space is essentially the weak topology, the open sets being the cylinder sets. An n-dimensional superspace is just the n {\displaystyle n} -fold product of exterior algebras. === Lie algebra homology === Let L {\displaystyle L} be a Lie algebra over a field K {\displaystyle K} ; then it is possible to define the structure of a chain complex on the exterior algebra of L {\displaystyle L} . This is a K {\displaystyle K} -linear mapping ∂ : ⋀ p + 1 ( L ) → ⋀ p ( L ) {\displaystyle \partial :{\textstyle \bigwedge }^{\!p+1}(L)\to {\textstyle \bigwedge }^{\!p}(L)} defined on decomposable elements by ∂ ( x 1 ∧ ⋯ ∧ x p + 1 ) = 1 p + 1 ∑ j < ℓ ( − 1 ) j + ℓ + 1 [ x j , x ℓ ] ∧ x 1 ∧ ⋯ ∧ x ^ j ∧ ⋯ ∧ x ^ ℓ ∧ ⋯ ∧ x p + 1 . 
{\displaystyle \partial (x_{1}\wedge \cdots \wedge x_{p+1})={\frac {1}{p+1}}\sum _{j<\ell }(-1)^{j+\ell +1}[x_{j},x_{\ell }]\wedge x_{1}\wedge \cdots \wedge {\hat {x}}_{j}\wedge \cdots \wedge {\hat {x}}_{\ell }\wedge \cdots \wedge x_{p+1}.} The Jacobi identity holds if and only if ∂ ∘ ∂ = 0 {\displaystyle \partial \circ \partial =0} , and so this is a necessary and sufficient condition for an anticommutative nonassociative algebra L {\displaystyle L} to be a Lie algebra. Moreover, in that case ⋀ ( L ) {\textstyle {\textstyle \bigwedge }(L)} is a chain complex with boundary operator ∂ {\displaystyle \partial } . The homology associated to this complex is the Lie algebra homology. === Homological algebra === The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in homological algebra. == History == The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of Ausdehnungslehre, or Theory of Extension. This referred more generally to an algebraic (or axiomatic) theory of extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also published similar ideas of exterior calculus for which he claimed priority over Grassmann. The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's theory of multivectors. It was thus a calculus, much like the propositional calculus, except focused exclusively on the task of formal reasoning in geometrical terms. In particular, this new development allowed for an axiomatic characterization of dimension, a property that had previously only been examined from the coordinate point of view. The import of this new theory of vectors and multivectors was lost to mid-19th-century mathematicians, until being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms. A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his universal algebra. This then paved the way for the 20th-century developments of abstract algebra by placing the axiomatic notion of an algebraic system on a firm logical footing. == See also == Alternating algebra Exterior calculus identities Clifford algebra, a generalization of exterior algebra to a nonzero quadratic form Geometric algebra Koszul complex Multilinear algebra Symmetric algebra, the symmetric analog Tensor algebra Weyl algebra, a quantum deformation of the symmetric algebra by a symplectic form == Notes == == References == === Mathematical references === === Historical references === === Other references and further reading ===
Wikipedia/Exterior_algebra
In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols. Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, indefinite integration, etc. Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as in public key cryptography, or for some non-linear problems. == Terminology == Some authors distinguish computer algebra from symbolic computation, using the latter name to refer to kinds of symbolic computation other than the computation with mathematical formulas. Some authors use symbolic computation for the computer-science aspect of the subject and computer algebra for the mathematical aspect. In some languages, the name of the field is not a direct translation of its English name. Typically, it is called calcul formel in French, which means "formal computation". This name reflects the ties this field has with formal methods. Symbolic computation has also been referred to, in the past, as symbolic manipulation, algebraic manipulation, symbolic processing, symbolic mathematics, or symbolic algebra, but these terms, which also refer to non-computational manipulation, are no longer used in reference to computer algebra. == Scientific community == There is no learned society that is specific to computer algebra, but this function is assumed by the special interest group of the Association for Computing Machinery named SIGSAM (Special Interest Group on Symbolic and Algebraic Manipulation). There are several annual conferences on computer algebra, the premier being ISSAC (International Symposium on Symbolic and Algebraic Computation), which is regularly sponsored by SIGSAM. There are several journals specializing in computer algebra, the top one being Journal of Symbolic Computation founded in 1985 by Bruno Buchberger. There are also several other journals that regularly publish articles in computer algebra. == Computer science aspects == === Data representation === As numerical software is highly efficient for approximate numerical computation, it is common, in computer algebra, to emphasize exact computation with exactly represented data. 
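The contrast between the two styles of computation can be seen in a few lines of Python; this is only a sketch, since real systems rely on dedicated libraries such as the GMP library discussed below:

```python
from fractions import Fraction

# Floating point: approximate, and the error is silent.
print(0.1 + 0.2 == 0.3)                                        # False

# Exact rational arithmetic, as in computer algebra systems:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))    # True

# Python integers are already unbounded, like the integers of the
# mathematicians mentioned below.
print(len(str(2 ** 1000)))                                     # 302 digits, exact
```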
Such an exact representation implies that, even when the size of the output is small, the intermediate data generated during a computation may grow in an unpredictable way. This behavior is called expression swell. To alleviate this problem, various methods are used in the representation of the data, as well as in the algorithms that manipulate them. ==== Numbers ==== The usual number systems used in numerical computation are floating point numbers and integers of a fixed, bounded size. Neither of these is convenient for computer algebra, due to expression swell. Therefore, the basic numbers used in computer algebra are the integers of the mathematicians, commonly represented by an unbounded signed sequence of digits in some base of numeration, usually the largest base allowed by the machine word. These integers allow one to define the rational numbers, which are irreducible fractions of two integers. Programming an efficient implementation of the arithmetic operations is a hard task. Therefore, most free computer algebra systems, and some commercial ones such as Mathematica and Maple, use the GMP library, which is thus a de facto standard. ==== Expressions ==== Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer-algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance, may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, and a matrix may be represented as an expression with "matrix" as an operator and its rows as operands. Even programs may be considered and represented as expressions with operator "procedure" and, at least, two operands, the list of parameters and the body, which is itself an expression with "body" as an operator and a sequence of instructions as operands. Conversely, any mathematical expression may be viewed as a program. For example, the expression a + b may be viewed as a program for the addition, with a and b as parameters. Executing this program consists of evaluating the expression for given values of a and b; if they are not given any values, then the result of the evaluation is simply the expression itself. This process of delayed evaluation is fundamental in computer algebra. For example, the operator "=" of equations is also, in most computer algebra systems, the name of the program for the equality test: normally, the evaluation of an equation results in an equation, but, when an equality test is needed (either requested explicitly by the user through an "evaluation to a Boolean" command, or started automatically by the system in the case of a test inside a program), the evaluation to a Boolean result is executed. As the size of the operands of an expression is unpredictable and may change during a working session, the sequence of the operands is usually represented as a sequence of either pointers (like in Macsyma) or entries in a hash table (like in Maple). === Simplification === The raw application of the basic rules of differentiation with respect to x on the expression a^x gives the result x ⋅ a x − 1 ⋅ 0 + a x ⋅ ( 1 ⋅ log ⁡ a + x ⋅ 0 a ) . {\displaystyle x\cdot a^{x-1}\cdot 0+a^{x}\cdot \left(1\cdot \log a+x\cdot {\frac {0}{a}}\right).} A simpler expression than this is generally desired, and simplification is needed when working with general expressions.
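To make the operator-plus-operands representation concrete, here is a minimal, hypothetical Python sketch in which expressions are nested tuples; applying raw differentiation rules to a^x reproduces the unsimplified result shown above:

```python
# An expression is a number, a variable name, or a tuple ('op', operand, ...).

def diff(e, x):
    """Raw differentiation rules, applied with no simplification."""
    if isinstance(e, (int, float)):       # constant
        return 0
    if isinstance(e, str):                # variable
        return 1 if e == x else 0
    op = e[0]
    if op == '+':
        return ('+', diff(e[1], x), diff(e[2], x))
    if op == '*':                         # product rule
        return ('+', ('*', diff(e[1], x), e[2]), ('*', e[1], diff(e[2], x)))
    if op == '^':                         # general power rule:
        u, v = e[1], e[2]                 # d(u^v) = v*u^(v-1)*u' + u^v*(v'*log u + v*u'/u)
        return ('+',
                ('*', ('*', v, ('^', u, ('-', v, 1))), diff(u, x)),
                ('*', ('^', u, v),
                      ('+', ('*', diff(v, x), ('log', u)),
                            ('*', v, ('/', diff(u, x), u)))))
    raise ValueError(op)

# d(a^x)/dx, left unsimplified: the tree for x*a^(x-1)*0 + a^x*(1*log a + x*0/a)
print(diff(('^', 'a', 'x'), 'x'))
```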
This simplification is normally done through rewriting rules. There are several classes of rewriting rules to be considered. The simplest are rules that always reduce the size of the expression, like E − E → 0 or sin(0) → 0. They are systematically applied in computer algebra systems. A difficulty occurs with associative operations like addition and multiplication. The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands; that is, that a + b + c is represented as "+"(a, b, c). Thus a + (b + c) and (a + b) + c are both simplified to "+"(a, b, c), which is displayed as a + b + c. In the case of expressions such as a − b + c, the simplest way is to systematically rewrite −E, E − F, E/F as, respectively, (−1)⋅E, E + (−1)⋅F, E⋅F−1. In other words, in the internal representation of expressions, there is neither subtraction nor division nor unary minus, outside the representation of the numbers. Another difficulty occurs with the commutativity of addition and multiplication. The problem is to quickly recognize the like terms in order to combine or cancel them. Testing every pair of terms is costly with very long sums and products. To address this, Macsyma sorts the operands of sums and products into an order that places like terms in consecutive places, allowing easy detection. In Maple, a hash function is designed to generate collisions when like terms are entered, allowing them to be combined as soon as they are introduced. This allows subexpressions that appear several times in a computation to be immediately recognized and stored only once. This saves memory and speeds up computation by avoiding repetition of the same operations on identical expressions. Some rewriting rules sometimes increase and sometimes decrease the size of the expressions to which they are applied. This is the case for the distributive law or trigonometric identities. For example, the distributive law allows rewriting ( x + 1 ) 4 → x 4 + 4 x 3 + 6 x 2 + 4 x + 1 {\displaystyle (x+1)^{4}\rightarrow x^{4}+4x^{3}+6x^{2}+4x+1} and ( x − 1 ) ( x 4 + x 3 + x 2 + x + 1 ) → x 5 − 1. {\displaystyle (x-1)(x^{4}+x^{3}+x^{2}+x+1)\rightarrow x^{5}-1.} As there is no general way to decide whether applying such a rewriting rule is worthwhile, such rewriting is done only when explicitly invoked by the user. For the distributive law, the computer function that applies this rewriting rule is typically called "expand". The reverse rewriting rule, called "factor", requires a non-trivial algorithm, which is thus a key function in computer algebra systems (see Polynomial factorization). == Mathematical aspects == Some fundamental mathematical questions arise when one wants to manipulate mathematical expressions in a computer. We consider mainly the case of the multivariate rational fractions. This is not a real restriction, because, as soon as the irrational functions appearing in an expression are simplified, they are usually considered as new indeterminates. For example, ( sin ⁡ ( x + y ) 2 + log ⁡ ( z 2 − 5 ) ) 3 {\displaystyle (\sin(x+y)^{2}+\log(z^{2}-5))^{3}} is viewed as a polynomial in sin ⁡ ( x + y ) {\displaystyle \sin(x+y)} and log ⁡ ( z 2 − 5 ) {\displaystyle \log(z^{2}-5)} . === Equality === There are two notions of equality for mathematical expressions. Syntactic equality is the equality of their representation in a computer. This is easy to test in a program.
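Both notions announced above can be illustrated in a few lines: syntactic equality is structural comparison of the hypothetical tuple trees sketched earlier, while the semantic test (defined next) is delegated here to the real SymPy library, which expands the difference to a normal form:

```python
# Syntactic equality: structural comparison of expression trees.
# (x + y)^2 and x^2 + 2xy + y^2 are different trees:
lhs = ('^', ('+', 'x', 'y'), 2)
rhs = ('+', ('^', 'x', 2), ('+', ('*', 2, ('*', 'x', 'y')), ('^', 'y', 2)))
print(lhs == rhs)    # False: syntactically different

# Semantic equality, tested via a normal form with SymPy:
import sympy as sp
x, y = sp.symbols('x y')
print(sp.expand((x + y)**2 - (x**2 + 2*x*y + y**2)) == 0)   # True
```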
Semantic equality is when two expressions represent the same mathematical object, as in ( x + y ) 2 = x 2 + 2 x y + y 2 . {\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2}.} It is known from Richardson's theorem that there cannot exist an algorithm that decides whether two expressions representing numbers are semantically equal, if exponentials and logarithms are allowed in the expressions. Accordingly, (semantic) equality may be tested only on some classes of expressions such as the polynomials and rational fractions. To test the equality of two expressions, instead of designing specific algorithms, it is usual to put expressions in some canonical form or to put their difference in a normal form, and to test the syntactic equality of the result. In computer algebra, "canonical form" and "normal form" are not synonymous. A canonical form is such that two expressions in canonical form are semantically equal if and only if they are syntactically equal, while a normal form is such that an expression in normal form is semantically zero only if it is syntactically zero. In other words, zero has a unique representation as an expression in normal form. Normal forms are usually preferred in computer algebra for several reasons. Firstly, canonical forms may be more costly to compute than normal forms. For example, to put a polynomial in canonical form, one has to expand every product through the distributive law, while it is not necessary with a normal form (see below). Secondly, it may be the case, as for expressions involving radicals, that a canonical form, if it exists, depends on some arbitrary choices and that these choices may be different for two expressions that have been computed independently. This may make the use of a canonical form impractical. == History == === Human-driven computer algebra === Early machine computation, such as on the ENIAC at the University of Pennsylvania, relied on human computers or programmers to reprogram the machine between calculations, manipulate its many physical modules (or panels), and feed its IBM card reader. Female mathematicians handled the majority of this ENIAC programming and human-guided computation: Jean Jennings, Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty led these efforts. === Foundations and early applications === In 1960, John McCarthy explored an extension of primitive recursive functions for computing symbolic expressions through the Lisp programming language while at the Massachusetts Institute of Technology. Though his series on "Recursive functions of symbolic expressions and their computation by machine" remained incomplete, McCarthy and his contributions to artificial intelligence programming and computer algebra via Lisp helped establish Project MAC at the Massachusetts Institute of Technology and the organization that later became the Stanford AI Laboratory (SAIL) at Stanford University, whose competition facilitated significant development in computer algebra throughout the late 20th century. Early efforts at symbolic computation, in the 1960s and 1970s, faced challenges surrounding the inefficiency of long-known algorithms when ported to computer algebra systems. Predecessors to Project MAC, such as ALTRAN, sought to overcome algorithmic limitations through advancements in hardware and interpreters, while later efforts turned towards software optimization.
=== Historic problems === A large part of the work of researchers in the field consisted of revisiting classical algebra to increase its effectiveness while developing efficient algorithms for use in computer algebra. An example of this type of work is the computation of polynomial greatest common divisors, a task required to simplify fractions and an essential component of computer algebra. Classical algorithms for this computation, such as Euclid's algorithm, proved inefficient over infinite fields; algorithms from linear algebra faced similar struggles. Thus, researchers turned to discovering methods of reducing polynomials (such as those over a ring of integers or a unique factorization domain) to a variant efficiently computable via a Euclidean algorithm. == Algorithms used in computer algebra == == See also == Automated theorem prover Computer-assisted proof Computational algebraic geometry Computer algebra system Differential analyser Proof checker Model checker Symbolic-numeric computation Symbolic simulation Symbolic artificial intelligence == References == == Further reading == For a detailed definition of the subject: Buchberger, Bruno (1985). "Symbolic Computation (An Editorial)" (PDF). Journal of Symbolic Computation. 1 (1): 1–6. doi:10.1016/S0747-7171(85)80025-0. For textbooks devoted to the subject: Davenport, James H.; Siret, Yvon; Tournier, Èvelyne (1988). Computer Algebra: Systems and Algorithms for Algebraic Computation. Translated from the French by A. Davenport and J. H. Davenport. Academic Press. ISBN 978-0-12-204230-0. von zur Gathen, Joachim; Gerhard, Jürgen (2003). Modern computer algebra (2nd ed.). Cambridge University Press. ISBN 0-521-82646-2. Geddes, K. O.; Czapor, S. R.; Labahn, G. (1992). Algorithms for Computer Algebra. Bibcode:1992afca.book.....G. doi:10.1007/b102438. ISBN 978-0-7923-9259-0. Buchberger, Bruno; Collins, George Edwin; Loos, Rüdiger; Albrecht, Rudolf, eds. (1983). Computer Algebra: Symbolic and Algebraic Computation. Computing Supplementa. Vol. 4. doi:10.1007/978-3-7091-7551-4. ISBN 978-3-211-81776-6. S2CID 5221892.
Wikipedia/Computer_algebra
In mathematical logic, model theory is the study of the relationship between formal theories (a collection of sentences in a formal language expressing statements about a mathematical structure), and their models (those structures in which the statements of the theory hold). The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other. As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954. Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory. Compared to other areas of mathematical logic such as proof theory, model theory is often less concerned with formal rigour and closer in spirit to classical mathematics. This has prompted the comment that "if proof theory is about the sacred, then model theory is about the profane". The applications of model theory to algebraic and Diophantine geometry reflect this proximity to classical mathematics, as they often involve an integration of algebraic and model-theoretic results and techniques. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature. The most prominent scholarly organization in the field of model theory is the Association for Symbolic Logic. == Overview == This page focuses on finitary first order model theory of infinite structures. The relative emphasis placed on the class of models of a theory as opposed to the class of definable sets within a model fluctuated in the history of the subject, and the two directions are summarised by the pithy characterisations from 1973 and 1997 respectively: model theory = universal algebra + logic where universal algebra stands for mathematical structures and logic for logical theories; and model theory = algebraic geometry − fields. where logical formulas are to definable sets what equations are to varieties over a field. Nonetheless, the interplay of classes of models and the sets definable in them has been crucial to the development of model theory throughout its history. For instance, while stability was originally introduced to classify theories by their numbers of models in a given cardinality, stability theory proved crucial to understanding the geometry of definable sets. == Fundamental notions of first-order model theory == === First-order logic === A first-order formula is built out of atomic formulas such as R ( f ( x , y ) , z ) {\displaystyle R(f(x,y),z)} or y = x + 1 {\displaystyle y=x+1} by means of the Boolean connectives ¬ , ∧ , ∨ , → {\displaystyle \neg ,\land ,\lor ,\rightarrow } and prefixing of quantifiers ∀ v {\displaystyle \forall v} or ∃ v {\displaystyle \exists v} . A sentence is a formula in which each occurrence of a variable is in the scope of a corresponding quantifier. Examples for formulas are φ {\displaystyle \varphi } (or φ ( x ) {\displaystyle \varphi (x)} to indicate x {\displaystyle x} is the unbound variable in φ {\displaystyle \varphi } ) and ψ {\displaystyle \psi } (or ψ ( x ) {\displaystyle \psi (x)} ), defined as follows: φ = ∀ u ∀ v ( ∃ w ( x × w = u × v ) → ( ∃ w ( x × w = u ) ∨ ∃ w ( x × w = v ) ) ) ∧ x ≠ 0 ∧ x ≠ 1 , ψ = ∀ u ∀ v ( ( u × v = x ) → ( u = x ) ∨ ( v = x ) ) ∧ x ≠ 0 ∧ x ≠ 1. 
{\displaystyle {\begin{array}{lcl}\varphi &=&\forall u\forall v(\exists w(x\times w=u\times v)\rightarrow (\exists w(x\times w=u)\lor \exists w(x\times w=v)))\land x\neq 0\land x\neq 1,\\\psi &=&\forall u\forall v((u\times v=x)\rightarrow (u=x)\lor (v=x))\land x\neq 0\land x\neq 1.\end{array}}} (Note that the equality symbol has a double meaning here.) It is intuitively clear how to translate such formulas into mathematical meaning. In the semiring of natural numbers N {\displaystyle {\mathcal {N}}} , viewed as a structure with binary functions for addition and multiplication and constants for 0 and 1 of the natural numbers, for example, an element n {\displaystyle n} satisfies the formula φ {\displaystyle \varphi } if and only if n {\displaystyle n} is a prime number. The formula ψ {\displaystyle \psi } similarly defines irreducibility. Tarski gave a rigorous definition, sometimes called "Tarski's definition of truth", for the satisfaction relation ⊨ {\displaystyle \models } , so that one easily proves: N ⊨ φ ( n ) ⟺ n {\displaystyle {\mathcal {N}}\models \varphi (n)\iff n} is a prime number. N ⊨ ψ ( n ) ⟺ n {\displaystyle {\mathcal {N}}\models \psi (n)\iff n} is irreducible. A set T {\displaystyle T} of sentences is called a (first-order) theory, which takes the sentences in the set as its axioms. A theory is satisfiable if it has a model M ⊨ T {\displaystyle {\mathcal {M}}\models T} , i.e. a structure (of the appropriate signature) which satisfies all the sentences in the set T {\displaystyle T} . A complete theory is a theory that contains every sentence or its negation. The complete theory of all sentences satisfied by a structure is also called the theory of that structure. It is a consequence of Gödel's completeness theorem (not to be confused with his incompleteness theorems) that a theory has a model if and only if it is consistent, i.e. no contradiction is proved by the theory. Therefore, model theorists often use "consistent" as a synonym for "satisfiable". === Basic model-theoretic concepts === A signature or language is a set of non-logical symbols such that each symbol is either a constant symbol, or a function or relation symbol with a specified arity. Note that in some literature, constant symbols are considered as function symbols with zero arity, and hence are omitted. A structure is a set M {\displaystyle M} together with interpretations of each of the symbols of the signature as relations and functions on M {\displaystyle M} (not to be confused with the formal notion of an "interpretation" of one structure in another). Example: A common signature for ordered rings is σ o r = ( 0 , 1 , + , × , − , < ) {\displaystyle \sigma _{or}=(0,1,+,\times ,-,<)} , where 0 {\displaystyle 0} and 1 {\displaystyle 1} are 0-ary function symbols (also known as constant symbols), + {\displaystyle +} and × {\displaystyle \times } are binary (= 2-ary) function symbols, − {\displaystyle -} is a unary (= 1-ary) function symbol, and < {\displaystyle <} is a binary relation symbol. Then, when these symbols are interpreted to correspond with their usual meaning on Q {\displaystyle \mathbb {Q} } (so that e.g. + {\displaystyle +} is a function from Q 2 {\displaystyle \mathbb {Q} ^{2}} to Q {\displaystyle \mathbb {Q} } and < {\displaystyle <} is a subset of Q 2 {\displaystyle \mathbb {Q} ^{2}} ), one obtains a structure ( Q , σ o r ) {\displaystyle (\mathbb {Q} ,\sigma _{or})} .
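Tarski's satisfaction relation can be made computationally concrete on finite structures. The following Python sketch (an illustration only: the tuple encoding of formulas and the restriction of the semiring to a finite initial segment are assumptions of the sketch, not part of the theory) evaluates the formula ψ from above by brute force; bounding the quantifiers to the segment is harmless for ψ, since any u and v with u × v = x satisfy u, v ≤ x.

 # Minimal sketch of Tarski-style satisfaction in a finite structure.
 # Formulas are nested tuples: integers are constant symbols interpreted as
 # themselves, strings are variables (an encoding chosen for this sketch).
 def term_value(t, structure, env):
     if isinstance(t, int):
         return t                     # constant symbol
     if isinstance(t, str):
         return env[t]                # variable, looked up in the assignment
     f = structure['funcs'][t[0]]
     return f(*(term_value(s, structure, env) for s in t[1:]))
 
 def holds(phi, structure, env):
     """Does structure |= phi under the variable assignment env?"""
     op = phi[0]
     if op == 'eq':
         return term_value(phi[1], structure, env) == term_value(phi[2], structure, env)
     if op == 'not':
         return not holds(phi[1], structure, env)
     if op == 'and':
         return holds(phi[1], structure, env) and holds(phi[2], structure, env)
     if op == 'or':
         return holds(phi[1], structure, env) or holds(phi[2], structure, env)
     if op == 'implies':
         return (not holds(phi[1], structure, env)) or holds(phi[2], structure, env)
     if op in ('forall', 'exists'):
         results = (holds(phi[2], structure, {**env, phi[1]: a})
                    for a in structure['domain'])
         return all(results) if op == 'forall' else any(results)
     raise ValueError(op)
 
 # The initial segment {0, ..., 49} of the natural numbers with multiplication.
 segment = {'domain': range(50), 'funcs': {'*': lambda a, b: a * b}}
 
 # psi(x): forall u forall v ((u*v = x) -> (u = x or v = x)) and x != 0, x != 1
 psi = ('and',
        ('forall', 'u', ('forall', 'v',
            ('implies', ('eq', ('*', 'u', 'v'), 'x'),
                        ('or', ('eq', 'u', 'x'), ('eq', 'v', 'x'))))),
        ('and', ('not', ('eq', 'x', 0)), ('not', ('eq', 'x', 1))))
 
 print([n for n in segment['domain'] if holds(psi, segment, {'x': n})])
 # -> [2, 3, 5, 7, 11, ..., 47]: the irreducible (here, prime) elements below 50

On the full semiring the quantifiers range over an infinite domain, so satisfaction can no longer be decided by such a finite search; this is precisely why a rigorous, non-algorithmic definition of the satisfaction relation is needed.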
A structure N {\displaystyle {\mathcal {N}}} is said to model a set of first-order sentences T {\displaystyle T} in the given language if each sentence in T {\displaystyle T} is true in N {\displaystyle {\mathcal {N}}} with respect to the interpretation of the signature previously specified for N {\displaystyle {\mathcal {N}}} . (Again, not to be confused with the formal notion of an "interpretation" of one structure in another.) A model of T {\displaystyle T} is a structure that models T {\displaystyle T} . A substructure A {\displaystyle {\mathcal {A}}} of a σ-structure B {\displaystyle {\mathcal {B}}} is a subset of its domain, closed under all functions in its signature σ, which is regarded as a σ-structure by restricting all functions and relations in σ to the subset. This generalises the analogous concepts from algebra; for instance, a subgroup is a substructure in the signature with multiplication and inverse. A substructure is said to be elementary if for any first-order formula φ {\displaystyle \varphi } and any elements a1, ..., an of A {\displaystyle {\mathcal {A}}} , A ⊨ φ ( a 1 , . . . , a n ) {\displaystyle {\mathcal {A}}\models \varphi (a_{1},...,a_{n})} if and only if B ⊨ φ ( a 1 , . . . , a n ) {\displaystyle {\mathcal {B}}\models \varphi (a_{1},...,a_{n})} . In particular, if φ {\displaystyle \varphi } is a sentence and A {\displaystyle {\mathcal {A}}} an elementary substructure of B {\displaystyle {\mathcal {B}}} , then A ⊨ φ {\displaystyle {\mathcal {A}}\models \varphi } if and only if B ⊨ φ {\displaystyle {\mathcal {B}}\models \varphi } . Thus, an elementary substructure is a model of a theory exactly when the superstructure is a model. Example: While the field of algebraic numbers Q ¯ {\displaystyle {\overline {\mathbb {Q} }}} is an elementary substructure of the field of complex numbers C {\displaystyle \mathbb {C} } , the rational field Q {\displaystyle \mathbb {Q} } is not, as we can express "There is a square root of 2" as a first-order sentence satisfied by C {\displaystyle \mathbb {C} } but not by Q {\displaystyle \mathbb {Q} } . An embedding of a σ-structure A {\displaystyle {\mathcal {A}}} into another σ-structure B {\displaystyle {\mathcal {B}}} is a map f: A → B between the domains which can be written as an isomorphism of A {\displaystyle {\mathcal {A}}} with a substructure of B {\displaystyle {\mathcal {B}}} . If it can be written as an isomorphism with an elementary substructure, it is called an elementary embedding. Every embedding is an injective homomorphism, but the converse holds only if the signature contains no relation symbols, such as in groups or fields. A field or a vector space can be regarded as a (commutative) group by simply ignoring some of its structure. The corresponding notion in model theory is that of a reduct of a structure to a subset of the original signature. The opposite relation is called an expansion; e.g., the (additive) group of the rational numbers, regarded as a structure in the signature {+,0}, can be expanded to a field with the signature {×,+,1,0} or to an ordered group with the signature {+,0,<}. Similarly, if σ' is a signature that extends another signature σ, then a complete σ'-theory can be restricted to σ by intersecting the set of its sentences with the set of σ-formulas. Conversely, a complete σ-theory can be regarded as a σ'-theory, and one can extend it (in more than one way) to a complete σ'-theory. The terms reduct and expansion are sometimes applied to this relation as well.
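How strongly the substructures of a given domain depend on the choice of signature can be seen in a small computation. In the following sketch (the generator set, the bounded closure search and the helper names are illustrative assumptions), a subset of the integers is closed under the interpreted function symbols: in the signature {+,0} the substructure generated by {2} contains only non-negative even numbers, while adding the inverse symbol − yields a finite window of the subgroup 2Z.

 # Sketch: the substructure generated by a set is its closure under the
 # interpreted function symbols; a few rounds of iteration show the pattern.
 from itertools import product
 
 def generated(generators, functions, rounds=6):
     closure = set(generators)
     for _ in range(rounds):
         for f, arity in functions:
             closure |= {f(*args) for args in product(closure, repeat=arity)}
     return closure
 
 plus = (lambda a, b: a + b, 2)
 zero = (lambda: 0, 0)
 neg = (lambda a: -a, 1)
 
 # Signature {+, 0}: no inverses, so only non-negative even numbers appear.
 without_inverse = generated({2}, [plus, zero])
 # Signature {+, -, 0}: the closure is (a window of) the subgroup 2Z.
 with_inverse = generated({2}, [plus, zero, neg])
 
 print(sorted(n for n in without_inverse if abs(n) <= 10))  # [0, 2, 4, 6, 8, 10]
 print(sorted(n for n in with_inverse if abs(n) <= 10))     # [-10, -8, ..., 8, 10]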
=== Compactness and the Löwenheim–Skolem theorem === The compactness theorem states that a set of sentences S is satisfiable if every finite subset of S is satisfiable. The analogous statement with consistent instead of satisfiable is trivial, since every proof uses only a finite number of antecedents. The completeness theorem allows us to transfer this to satisfiability. However, there are also several direct (semantic) proofs of the compactness theorem. As a corollary (i.e., its contrapositive), the compactness theorem says that every unsatisfiable first-order theory has a finite unsatisfiable subset. This theorem is of central importance in model theory, where the words "by compactness" are commonplace. Another cornerstone of first-order model theory is the Löwenheim–Skolem theorem. According to the theorem, every infinite structure in a countable signature has a countable elementary substructure. Conversely, for any infinite cardinal κ, every infinite structure in a countable signature that is of cardinality less than κ can be elementarily embedded in another structure of cardinality κ. (There is a straightforward generalisation to uncountable signatures.) In particular, the Löwenheim–Skolem theorem implies that any theory in a countable signature with infinite models has a countable model as well as arbitrarily large models. In a certain sense made precise by Lindström's theorem, first-order logic is the most expressive logic for which both the Löwenheim–Skolem theorem and the compactness theorem hold. == Definability == === Definable sets === In model theory, definable sets are important objects of study. For instance, in N {\displaystyle \mathbb {N} } the formula ∀ u ∀ v ( ∃ w ( x × w = u × v ) → ( ∃ w ( x × w = u ) ∨ ∃ w ( x × w = v ) ) ) ∧ x ≠ 0 ∧ x ≠ 1 {\displaystyle \forall u\forall v(\exists w(x\times w=u\times v)\rightarrow (\exists w(x\times w=u)\lor \exists w(x\times w=v)))\land x\neq 0\land x\neq 1} defines the subset of prime numbers, while the formula ∃ y ( 2 × y = x ) {\displaystyle \exists y(2\times y=x)} defines the subset of even numbers. In a similar way, formulas with n free variables define subsets of M n {\displaystyle {\mathcal {M}}^{n}} . For example, in a field, the formula y = x × x {\displaystyle y=x\times x} defines the curve of all ( x , y ) {\displaystyle (x,y)} such that y = x 2 {\displaystyle y=x^{2}} . Both of the definitions mentioned here are parameter-free, that is, the defining formulas do not mention any fixed domain elements. However, one can also consider definitions with parameters from the model. For instance, in R {\displaystyle \mathbb {R} } , the formula y = x × x + π {\displaystyle y=x\times x+\pi } uses the parameter π {\displaystyle \pi } from R {\displaystyle \mathbb {R} } to define a curve. === Eliminating quantifiers === In general, definable sets without quantifiers are easy to describe, while definable sets involving possibly nested quantifiers can be much more complicated. This makes quantifier elimination a crucial tool for analysing definable sets: A theory T has quantifier elimination if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to a first-order formula ψ(x1, ..., xn) without quantifiers, i.e. ∀ x 1 … ∀ x n ( ϕ ( x 1 , … , x n ) ↔ ψ ( x 1 , … , x n ) ) {\displaystyle \forall x_{1}\dots \forall x_{n}(\phi (x_{1},\dots ,x_{n})\leftrightarrow \psi (x_{1},\dots ,x_{n}))} holds in all models of T.
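A single concrete instance of such an equivalence can be checked numerically (an illustrative sketch; the sampling bounds and tolerances are assumptions of the sketch): modulo the theory of real closed fields, the formula ∃x (x·x + b·x + c = 0) is equivalent to the quantifier-free condition b·b − 4·c ≥ 0.

 # Numerical corroboration (not a proof) of a quantifier elimination instance
 # over the real field: exists x (x^2 + b*x + c = 0) iff b^2 - 4*c >= 0.
 import random
 import numpy as np
 
 def has_real_root(b, c):
     # 'exists x' is checked with a numerical root finder
     roots = np.roots([1.0, b, c])
     return bool(np.any(np.abs(roots.imag) < 1e-9))
 
 random.seed(0)
 checked = 0
 for _ in range(10000):
     b, c = random.uniform(-10, 10), random.uniform(-10, 10)
     d = b * b - 4 * c
     if abs(d) < 1e-6:                # skip floating-point borderline cases
         continue
     assert has_real_root(b, c) == (d > 0), (b, c)
     checked += 1
 print(f"discriminant condition agreed on {checked} samples")

Tarski's theorem guarantees that every first-order formula over the real ordered field reduces to such quantifier-free conditions, although the general procedure is far more involved than this single instance.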
If the theory of a structure has quantifier elimination, every set definable in a structure is definable by a quantifier-free formula over the same parameters as the original definition. For example, the theory of algebraically closed fields in the signature σring = (×,+,−,0,1) has quantifier elimination. This means that in an algebraically closed field, every formula is equivalent to a Boolean combination of equations between polynomials. If a theory does not have quantifier elimination, one can add additional symbols to its signature so that it does. Axiomatisability and quantifier elimination results for specific theories, especially in algebra, were among the early landmark results of model theory. But often instead of quantifier elimination a weaker property suffices: A theory T is called model-complete if every substructure of a model of T which is itself a model of T is an elementary substructure. There is a useful criterion for testing whether a substructure is an elementary substructure, called the Tarski–Vaught test. It follows from this criterion that a theory T is model-complete if and only if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to an existential first-order formula, i.e. a formula of the following form: ∃ v 1 … ∃ v m ψ ( x 1 , … , x n , v 1 , … , v m ) {\displaystyle \exists v_{1}\dots \exists v_{m}\psi (x_{1},\dots ,x_{n},v_{1},\dots ,v_{m})} , where ψ is quantifier-free. A theory that is not model-complete may have a model completion, which is a related model-complete theory that is not, in general, an extension of the original theory. A more general notion is that of a model companion. === Minimality === In every structure, every finite subset { a 1 , … , a n } {\displaystyle \{a_{1},\dots ,a_{n}\}} is definable with parameters: Simply use the formula x = a 1 ∨ ⋯ ∨ x = a n {\displaystyle x=a_{1}\vee \dots \vee x=a_{n}} . Since we can negate this formula, every cofinite subset (which includes all but finitely many elements of the domain) is also always definable. This leads to the concept of a minimal structure. A structure M {\displaystyle {\mathcal {M}}} is called minimal if every subset A ⊆ M {\displaystyle A\subseteq {\mathcal {M}}} definable with parameters from M {\displaystyle {\mathcal {M}}} is either finite or cofinite. The corresponding concept at the level of theories is called strong minimality: A theory T is called strongly minimal if every model of T is minimal. A structure is called strongly minimal if the theory of that structure is strongly minimal. Equivalently, a structure is strongly minimal if every elementary extension is minimal. Since the theory of algebraically closed fields has quantifier elimination, every subset of an algebraically closed field that is definable in one variable is definable by a quantifier-free formula in one variable. Quantifier-free formulas in one variable express Boolean combinations of polynomial equations in one variable, and since a nontrivial polynomial equation in one variable has only a finite number of solutions, the theory of algebraically closed fields is strongly minimal. On the other hand, the field R {\displaystyle \mathbb {R} } of real numbers is not minimal: Consider, for instance, the definable set φ ( x ) = ∃ y ( y × y = x ) {\displaystyle \varphi (x)\;=\;\exists y(y\times y=x)} . This defines the subset of non-negative real numbers, which is neither finite nor cofinite. One can in fact use φ {\displaystyle \varphi } to define arbitrary intervals on the real number line.
It turns out that these suffice to represent every definable subset of R {\displaystyle \mathbb {R} } . This generalisation of minimality has been very useful in the model theory of ordered structures. A densely totally ordered structure M {\displaystyle {\mathcal {M}}} in a signature including a symbol for the order relation is called o-minimal if every subset A ⊆ M {\displaystyle A\subseteq {\mathcal {M}}} definable with parameters from M {\displaystyle {\mathcal {M}}} is a finite union of points and intervals. === Definable and interpretable structures === Particularly important are those definable sets that are also substructures, i.e. contain all constants and are closed under function application. For instance, one can study the definable subgroups of a certain group. However, there is no need to limit oneself to substructures in the same signature. Since formulas with n free variables define subsets of M n {\displaystyle {\mathcal {M}}^{n}} , n-ary relations can also be definable. Functions are definable if the function graph is a definable relation, and constants a ∈ M {\displaystyle a\in {\mathcal {M}}} are definable if there is a formula φ ( x ) {\displaystyle \varphi (x)} such that a is the only element of M {\displaystyle {\mathcal {M}}} such that φ ( a ) {\displaystyle \varphi (a)} is true. In this way, one can, for instance, study groups and fields definable in general structures, which has been important in geometric stability theory. One can even go one step further, and move beyond immediate substructures. Given a mathematical structure, there are very often associated structures which can be constructed as a quotient of part of the original structure via an equivalence relation. An important example is a quotient group of a group. One might say that to understand the full structure one must understand these quotients. When the equivalence relation is definable, we can give the previous sentence a precise meaning. We say that these structures are interpretable. A key fact is that one can translate sentences from the language of the interpreted structures to the language of the original structure. Thus one can show that if a structure M {\displaystyle {\mathcal {M}}} interprets another whose theory is undecidable, then M {\displaystyle {\mathcal {M}}} itself is undecidable. == Types == === Basic notions === For a sequence of elements a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} of a structure M {\displaystyle {\mathcal {M}}} and a subset A of M {\displaystyle {\mathcal {M}}} , one can consider the set of all first-order formulas φ ( x 1 , … , x n ) {\displaystyle \varphi (x_{1},\dots ,x_{n})} with parameters in A that are satisfied by a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} . This is called the complete (n-)type realised by a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} over A. If there is an automorphism of M {\displaystyle {\mathcal {M}}} that is constant on A and sends a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} to b 1 , … , b n {\displaystyle b_{1},\dots ,b_{n}} respectively, then a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} and b 1 , … , b n {\displaystyle b_{1},\dots ,b_{n}} realise the same complete type over A. The real number line R {\displaystyle \mathbb {R} } , viewed as a structure with only the order relation {<}, will serve as a running example in this section. Every element a ∈ R {\displaystyle a\in \mathbb {R} } satisfies the same 1-type over the empty set.
This is clear since any two real numbers a and b are connected by the order automorphism that shifts all numbers by b − a. The complete 2-type over the empty set realised by a pair of numbers a 1 , a 2 {\displaystyle a_{1},a_{2}} depends on their order: either a 1 < a 2 {\displaystyle a_{1}<a_{2}} , a 1 = a 2 {\displaystyle a_{1}=a_{2}} or a 2 < a 1 {\displaystyle a_{2}<a_{1}} . Over the subset Z ⊆ R {\displaystyle \mathbb {Z} \subseteq \mathbb {R} } of integers, the 1-type of a non-integer real number a depends on its value rounded down to the nearest integer. More generally, whenever M {\displaystyle {\mathcal {M}}} is a structure and A a subset of M {\displaystyle {\mathcal {M}}} , a (partial) n-type over A is a set of formulas p with at most n free variables that are realised in an elementary extension N {\displaystyle {\mathcal {N}}} of M {\displaystyle {\mathcal {M}}} . If p contains every such formula or its negation, then p is complete. The set of complete n-types over A is often written as S n M ( A ) {\displaystyle S_{n}^{\mathcal {M}}(A)} . If A is the empty set, then the type space only depends on the theory T {\displaystyle T} of M {\displaystyle {\mathcal {M}}} . The notation S n ( T ) {\displaystyle S_{n}(T)} is commonly used for the set of types over the empty set consistent with T {\displaystyle T} . If there is a single formula φ {\displaystyle \varphi } such that the theory of M {\displaystyle {\mathcal {M}}} implies φ → ψ {\displaystyle \varphi \rightarrow \psi } for every formula ψ {\displaystyle \psi } in p, then p is called isolated. Since the real numbers R {\displaystyle \mathbb {R} } are Archimedean, there is no real number larger than every integer. However, a compactness argument shows that there is an elementary extension of the real number line in which there is an element larger than any integer. Therefore, the set of formulas { n < x | n ∈ Z } {\displaystyle \{n<x|n\in \mathbb {Z} \}} is a 1-type over Z ⊆ R {\displaystyle \mathbb {Z} \subseteq \mathbb {R} } that is not realised in the real number line R {\displaystyle \mathbb {R} } . A subset of M n {\displaystyle {\mathcal {M}}^{n}} that can be expressed as exactly those elements of M n {\displaystyle {\mathcal {M}}^{n}} realising a certain type over A is called type-definable over A. For an algebraic example, suppose M {\displaystyle M} is an algebraically closed field. The theory has quantifier elimination. This allows us to show that a type is determined exactly by the polynomial equations it contains. Thus the set of complete n {\displaystyle n} -types over a subfield A {\displaystyle A} corresponds to the set of prime ideals of the polynomial ring A [ x 1 , … , x n ] {\displaystyle A[x_{1},\ldots ,x_{n}]} , and the type-definable sets are exactly the affine varieties.
However, any proper elementary extension of M {\displaystyle {\mathcal {M}}} contains an element that is not in M {\displaystyle {\mathcal {M}}} . Therefore, a weaker notion has been introduced that captures the idea of a structure realising all types it could be expected to realise. A structure is called saturated if it realises every type over a parameter set A ⊂ M {\displaystyle A\subset {\mathcal {M}}} that is of smaller cardinality than M {\displaystyle {\mathcal {M}}} itself. While an automorphism that is constant on A will always preserve types over A, it is generally not true that any two sequences a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} and b 1 , … , b n {\displaystyle b_{1},\dots ,b_{n}} that satisfy the same type over A can be mapped to each other by such an automorphism. A structure M {\displaystyle {\mathcal {M}}} in which this converse does hold for all A of smaller cardinality than M {\displaystyle {\mathcal {M}}} is called homogeneous. The real number line is atomic in the language that contains only the order < {\displaystyle <} , since all n-types over the empty set realised by a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} in R {\displaystyle \mathbb {R} } are isolated by the order relations between the a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} . It is not saturated, however, since it does not realise any 1-type over the countable set Z {\displaystyle \mathbb {Z} } that implies x to be larger than any integer. The rational number line Q {\displaystyle \mathbb {Q} } is saturated, in contrast, since Q {\displaystyle \mathbb {Q} } is itself countable and therefore only has to realise types over finite subsets to be saturated. === Stone spaces === The set of definable subsets of M n {\displaystyle {\mathcal {M}}^{n}} over some parameters A {\displaystyle A} is a Boolean algebra. By Stone's representation theorem for Boolean algebras there is a natural dual topological space, which consists exactly of the complete n {\displaystyle n} -types over A {\displaystyle A} . The topology is generated by sets of the form { p | φ ∈ p } {\displaystyle \{p|\varphi \in p\}} for single formulas φ {\displaystyle \varphi } . This is called the Stone space of n-types over A. This topology explains some of the terminology used in model theory: The compactness theorem says that the Stone space is a compact topological space, and a type p is isolated if and only if p is an isolated point in the Stone topology. While types in algebraically closed fields correspond to the spectrum of the polynomial ring, the topology on the type space is the constructible topology: a set of types is basic open if and only if it is of the form { p : f ( x ) = 0 ∈ p } {\displaystyle \{p:f(x)=0\in p\}} or of the form { p : f ( x ) ≠ 0 ∈ p } {\displaystyle \{p:f(x)\neq 0\in p\}} . This is finer than the Zariski topology.
This implies that if a theory in a countable signature has only countably many types over the empty set, then this theory has an atomic model. On the other hand, there is always an elementary extension in which any set of types over a fixed parameter set is realised: Let M {\displaystyle {\mathcal {M}}} be a structure and let Φ {\displaystyle \Phi } be a set of complete types over a given parameter set A ⊂ M . {\displaystyle A\subset {\mathcal {M}}.} Then there is an elementary extension N {\displaystyle {\mathcal {N}}} of M {\displaystyle {\mathcal {M}}} which realises every type in Φ {\displaystyle \Phi } . However, since the parameter set is fixed and there is no mention here of the cardinality of N {\displaystyle {\mathcal {N}}} , this does not imply that every theory has a saturated model. In fact, whether every theory has a saturated model is independent of the axioms of Zermelo–Fraenkel set theory, and is true if the generalised continuum hypothesis holds. === Ultraproducts === Ultraproducts are used as a general technique for constructing models that realise certain types. An ultraproduct is obtained from the direct product of a set of structures over an index set I by identifying those tuples that agree on almost all entries, where almost all is made precise by an ultrafilter U on I. An ultraproduct of copies of the same structure is known as an ultrapower. The key to using ultraproducts in model theory is Łoś's theorem: Let M i {\displaystyle {\mathcal {M}}_{i}} be a set of σ-structures indexed by an index set I and U an ultrafilter on I. Then any σ-formula φ ( [ ( a i ) i ∈ I ] ) {\displaystyle \varphi ([(a_{i})_{i\in I}])} is true in the ultraproduct of the M i {\displaystyle {\mathcal {M}}_{i}} by U {\displaystyle U} if and only if the set of all i ∈ I {\displaystyle i\in I} for which M i ⊨ φ ( a i ) {\displaystyle {\mathcal {M}}_{i}\models \varphi (a_{i})} lies in U. In particular, any ultraproduct of models of a theory is itself a model of that theory, and thus if two models have isomorphic ultrapowers, they are elementarily equivalent. The Keisler–Shelah theorem provides a converse: If M and N are elementarily equivalent, then there is a set I and an ultrafilter U on I such that the ultrapowers by U of M and N are isomorphic. Therefore, ultraproducts provide a way to talk about elementary equivalence that avoids mentioning first-order theories at all. Basic theorems of model theory such as the compactness theorem have alternative proofs using ultraproducts, and ultraproducts can be used to construct saturated elementary extensions if such extensions exist. == Categoricity == A theory was originally called categorical if it determines a structure up to isomorphism. It turns out that this definition is not useful, due to serious restrictions in the expressivity of first-order logic. The Löwenheim–Skolem theorem implies that if a theory T has an infinite model, then it has a model of size κ for any sufficiently large cardinal number κ. Since two models of different sizes cannot possibly be isomorphic, only finite structures can be described by a categorical theory. However, the weaker notion of κ-categoricity for a cardinal κ has become a key concept in model theory. A theory T is called κ-categorical if any two models of T that are of cardinality κ are isomorphic. It turns out that the question of κ-categoricity depends critically on whether κ is bigger than the cardinality of the language (i.e.
ℵ 0 + | σ | {\displaystyle \aleph _{0}+|\sigma |} , where |σ| is the cardinality of the signature). For finite or countable signatures this means that there is a fundamental difference between ω-categoricity and κ-categoricity for uncountable κ. === ω-categoricity === ω-categorical theories can be characterised by properties of their type space: For a complete first-order theory T in a finite or countable signature the following conditions are equivalent: (1) T is ω-categorical; (2) every type in Sn(T) is isolated; (3) for every natural number n, Sn(T) is finite; (4) for every natural number n, the number of formulas φ(x1, ..., xn) in n free variables, up to equivalence modulo T, is finite. The theory of ( Q , < ) {\displaystyle (\mathbb {Q} ,<)} , which is also the theory of ( R , < ) {\displaystyle (\mathbb {R} ,<)} , is ω-categorical, as every n-type p ( x 1 , … , x n ) {\displaystyle p(x_{1},\dots ,x_{n})} over the empty set is isolated by the pairwise order relation between the x i {\displaystyle x_{i}} . This means that every countable dense linear order is order-isomorphic to the rational number line. On the other hand, the theories of ℚ, ℝ and ℂ as fields are not ω {\displaystyle \omega } -categorical. This follows from the fact that in all those fields, any of the infinitely many natural numbers can be defined by a formula of the form x = 1 + ⋯ + 1 {\displaystyle x=1+\dots +1} . ℵ 0 {\displaystyle \aleph _{0}} -categorical theories and their countable models also have strong ties with oligomorphic groups: A complete first-order theory T in a finite or countable signature is ω-categorical if and only if the automorphism group of its countable model is oligomorphic. The equivalent characterisations of this subsection, due independently to Engeler, Ryll-Nardzewski and Svenonius, are sometimes referred to as the Ryll-Nardzewski theorem. In combinatorial signatures, a common source of ω-categorical theories is Fraïssé limits, which are obtained as the limit of amalgamating all possible configurations of a class of finite relational structures. === Uncountable categoricity === Michael Morley showed in 1963 that there is only one notion of uncountable categoricity for theories in countable languages. Morley's categoricity theorem: If a first-order theory T in a finite or countable signature is κ-categorical for some uncountable cardinal κ, then T is κ-categorical for all uncountable cardinals κ. Morley's proof revealed deep connections between uncountable categoricity and the internal structure of the models, which became the starting point of classification theory and stability theory. Uncountably categorical theories are from many points of view the most well-behaved theories. In particular, complete strongly minimal theories are uncountably categorical. This shows that the theory of algebraically closed fields of a given characteristic is uncountably categorical, with the transcendence degree of the field determining its isomorphism type. A theory that is both ω-categorical and uncountably categorical is called totally categorical. == Stability theory == A key factor in the structure of the class of models of a first-order theory is its place in the stability hierarchy. A complete theory T is called λ {\displaystyle \lambda } -stable for a cardinal λ {\displaystyle \lambda } if for any model M {\displaystyle {\mathcal {M}}} of T and any parameter set A ⊂ M {\displaystyle A\subset {\mathcal {M}}} of cardinality not exceeding λ {\displaystyle \lambda } , there are at most λ {\displaystyle \lambda } complete T-types over A.
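The definition can be illustrated on the running example (a toy computation; the parameter set below is an arbitrary non-empty choice): in the dense linear order (Q, <), a complete 1-type over a finite parameter set A is determined by the position of x relative to the points of A, giving exactly 2|A| + 1 such types.

 # Toy computation: complete 1-types over a finite parameter set A in (Q, <).
 # Density and the absence of endpoints guarantee that each description below
 # really is a complete type, and all of them are realised in Q itself.
 from fractions import Fraction
 
 def one_types(A):
     A = sorted(A)                      # assumes A is non-empty
     types = [f"x < {A[0]}"]
     for left, right in zip(A, A[1:]):
         types += [f"x = {left}", f"{left} < x < {right}"]
     types += [f"x = {A[-1]}", f"x > {A[-1]}"]
     return types
 
 A = [Fraction(0), Fraction(1, 2), Fraction(3)]
 print(one_types(A))                          # 7 descriptions
 print(len(one_types(A)) == 2 * len(A) + 1)   # True: 2|A| + 1 complete 1-types

Over an infinite parameter set such as Q itself, the complete 1-types instead correspond to points and cuts, of which there are uncountably many; this blow-up means precisely that the theory of dense linear orders is not ℵ0-stable.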
A theory is called stable if it is λ {\displaystyle \lambda } -stable for some infinite cardinal λ {\displaystyle \lambda } . Traditionally, theories that are ℵ 0 {\displaystyle \aleph _{0}} -stable are called ω {\displaystyle \omega } -stable. === The stability hierarchy === A fundamental result in stability theory is the stability spectrum theorem, which implies that every complete theory T in a countable signature falls in one of the following classes: (1) there are no cardinals λ {\displaystyle \lambda } such that T is λ {\displaystyle \lambda } -stable; (2) T is λ {\displaystyle \lambda } -stable if and only if λ ℵ 0 = λ {\displaystyle \lambda ^{\aleph _{0}}=\lambda } (see Cardinal exponentiation for an explanation of λ ℵ 0 {\displaystyle \lambda ^{\aleph _{0}}} ); (3) T is λ {\displaystyle \lambda } -stable for any λ ≥ 2 ℵ 0 {\displaystyle \lambda \geq 2^{\aleph _{0}}} (where 2 ℵ 0 {\displaystyle 2^{\aleph _{0}}} is the cardinality of the continuum). A theory of the first type is called unstable, a theory of the second type is called strictly stable and a theory of the third type is called superstable. Furthermore, if a theory is ω {\displaystyle \omega } -stable, it is stable in every infinite cardinal, so ω {\displaystyle \omega } -stability is stronger than superstability. Many constructions in model theory are easier when restricted to stable theories; for instance, every model of a stable theory has a saturated elementary extension, regardless of whether the generalised continuum hypothesis is true. Shelah's original motivation for studying stable theories was to decide how many models a countable theory has of any uncountable cardinality. If a theory is uncountably categorical, then it is ω {\displaystyle \omega } -stable. More generally, the Main gap theorem implies that if there is an uncountable cardinal λ {\displaystyle \lambda } such that a theory T has fewer than 2 λ {\displaystyle 2^{\lambda }} models of cardinality λ {\displaystyle \lambda } , then T is superstable. === Geometric stability theory === The stability hierarchy is also crucial for analysing the geometry of definable sets within a model of a theory. In ω {\displaystyle \omega } -stable theories, Morley rank is an important dimension notion for definable sets S within a model. It is defined by transfinite induction: (i) the Morley rank is at least 0 if S is non-empty; (ii) for α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M, the set S has infinitely many disjoint definable subsets, each of rank at least α − 1; and (iii) for α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α. A theory T in which every definable set has well-defined Morley rank is called totally transcendental; if T is countable, then T is totally transcendental if and only if T is ω {\displaystyle \omega } -stable. Morley rank can be extended to types by setting the Morley rank of a type to be the minimum of the Morley ranks of the formulas in the type. Thus, one can also speak of the Morley rank of an element a over a parameter set A, defined as the Morley rank of the type of a over A. There are also analogues of Morley rank which are well-defined if and only if a theory is superstable (U-rank) or merely stable (Shelah's ∞ {\displaystyle \infty } -rank). Those dimension notions can be used to define notions of independence and of generic extensions. More recently, stability has been decomposed into simplicity and "not the independence property" (NIP).
Simple theories are those theories in which a well-behaved notion of independence can be defined, while NIP theories generalise o-minimal structures. They are related to stability since a theory is stable if and only if it is NIP and simple, and various aspects of stability theory have been generalised to theories in one of these classes. == Non-elementary model theory == Model-theoretic results have been generalised beyond elementary classes, that is, classes axiomatisable by a first-order theory. Model theory in higher-order logics or infinitary logics is hampered by the fact that completeness and compactness do not in general hold for these logics. This is made concrete by Lindström's theorem, stating roughly that first-order logic is essentially the strongest logic in which both the Löwenheim–Skolem theorems and compactness hold. However, model-theoretic techniques have been developed extensively for these logics too. It turns out, however, that much of the model theory of more expressive logical languages is independent of Zermelo–Fraenkel set theory. More recently, alongside the shift in focus to complete stable and categorical theories, there has been work on classes of models defined semantically rather than axiomatised by a logical theory. One example is homogeneous model theory, which studies the class of substructures of arbitrarily large homogeneous models. Fundamental results of stability theory and geometric stability theory generalise to this setting. As a generalisation of strongly minimal theories, quasiminimally excellent classes are those in which every definable set is either countable or co-countable. They are key to the model theory of the complex exponential function. The most general semantic framework in which stability is studied is that of abstract elementary classes, which are defined by a strong substructure relation generalising that of an elementary substructure. Even though its definition is purely semantic, every abstract elementary class can be presented as the models of a first-order theory which omit certain types. Generalising stability-theoretic notions to abstract elementary classes is an ongoing research program. == Selected applications == Among the early successes of model theory are Tarski's proofs of quantifier elimination for various algebraically interesting classes, such as the real closed fields, Boolean algebras and algebraically closed fields of a given characteristic. Quantifier elimination allowed Tarski to show that the first-order theories of real-closed and algebraically closed fields as well as the first-order theory of Boolean algebras are decidable, classify the Boolean algebras up to elementary equivalence and show that the theories of real-closed fields and algebraically closed fields of a given characteristic are complete. Furthermore, quantifier elimination provided a precise description of definable relations on algebraically closed fields as algebraic varieties and of the definable relations on real-closed fields as semialgebraic sets. In the 1960s, the introduction of the ultraproduct construction led to new applications in algebra. These include Ax's work on pseudofinite fields, proving that the theory of finite fields is decidable, and Ax and Kochen's proof of a special case of Artin's conjecture on Diophantine equations, the Ax–Kochen theorem. The ultraproduct construction also led to Abraham Robinson's development of nonstandard analysis, which aims to provide a rigorous calculus of infinitesimals.
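Returning to quantifier elimination: the flavour of such procedures can be conveyed by a much smaller cousin of Tarski's algorithm. The sketch below works under simplifying assumptions (only conjunctions of linear constraints a·x + b·y ≤ c over the rationals, with an example system chosen for illustration) and applies Fourier–Motzkin elimination to remove an existential quantifier.

 # Sketch of Fourier-Motzkin elimination: remove "exists y" from a conjunction
 # of linear constraints a*x + b*y <= c, producing an equivalent quantifier-free
 # condition on x alone; a tiny cousin of Tarski's procedure.
 from fractions import Fraction as F
 
 def eliminate_y(constraints):
     """constraints: list of triples (a, b, c) encoding a*x + b*y <= c."""
     lower, upper, result = [], [], []
     for a, b, c in constraints:
         if b > 0:
             upper.append((F(a, b), F(c, b)))   # y <= c/b - (a/b)*x
         elif b < 0:
             lower.append((F(a, b), F(c, b)))   # y >= c/b - (a/b)*x (sign flips)
         else:
             result.append((a, 0, c))           # y does not occur: keep as-is
     # A witness y exists iff every lower bound is <= every upper bound.
     for a_l, c_l in lower:
         for a_u, c_u in upper:
             # c_l - a_l*x <= c_u - a_u*x  <=>  (a_u - a_l)*x <= c_u - c_l
             result.append((a_u - a_l, 0, c_u - c_l))
     return result
 
 # Example system (illustrative): x <= y, y <= 10, x + y <= 14.
 system = [(F(1), F(-1), F(0)), (F(0), F(1), F(10)), (F(1), F(1), F(14))]
 print(eliminate_y(system))
 # -> constraints equivalent to x <= 10 and 2*x <= 14, i.e. x <= 7

Iterating this step eliminates all quantifiers from conjunctive linear formulas; Tarski's procedure extends the same idea to arbitrary polynomial constraints and arbitrary first-order formulas, which is what yields the decidability results above.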
More recently, the connection between stability and the geometry of definable sets led to several applications in algebraic and Diophantine geometry, including Ehud Hrushovski's 1996 proof of the geometric Mordell–Lang conjecture in all characteristics. In 2001, similar methods were used to prove a generalisation of the Manin–Mumford conjecture. In 2011, Jonathan Pila applied techniques around o-minimality to prove the André–Oort conjecture for products of modular curves. In a separate strand of inquiries that also grew around stable theories, Laskowski showed in 1992 that NIP theories describe exactly those definable classes that are PAC-learnable in machine learning theory. This has led to several interactions between these separate areas. In 2018, the correspondence was extended as Hunter Chase and James Freitag showed that stable theories correspond to online learnable classes. == History == Model theory as a subject has existed since approximately the middle of the 20th century, and the name was coined by Alfred Tarski, a member of the Lwów–Warsaw school, in 1954. However, some earlier research, especially in mathematical logic, is often regarded as being of a model-theoretical nature in retrospect. The first significant result in what is now model theory was a special case of the downward Löwenheim–Skolem theorem, published by Leopold Löwenheim in 1915. The compactness theorem was implicit in work by Thoralf Skolem, but it was first published in 1930, as a lemma in Kurt Gödel's proof of his completeness theorem. The Löwenheim–Skolem theorem and the compactness theorem received their respective general forms in 1936 and 1941 from Anatoly Maltsev. The development of model theory as an independent discipline was brought on by Alfred Tarski during the interbellum. Tarski's work included logical consequence, deductive systems, the algebra of logic, the theory of definability, and the semantic definition of truth, among other topics. His semantic methods culminated in the model theory he and a number of his Berkeley students developed in the 1950s and '60s. In the further history of the discipline, different strands began to emerge, and the focus of the subject shifted. In the 1960s, techniques around ultraproducts became a popular tool in model theory. At the same time, researchers such as James Ax were investigating the first-order model theory of various algebraic classes, and others such as H. Jerome Keisler were extending the concepts and results of first-order model theory to other logical systems. Then, inspired by Morley's problem, Shelah developed stability theory. His work around stability changed the complexion of model theory, giving rise to a whole new class of concepts. This is known as the paradigm shift. Over the next decades, it became clear that the resulting stability hierarchy is closely connected to the geometry of sets that are definable in those models; this gave rise to the subdiscipline now known as geometric stability theory. An example of an influential proof from geometric model theory is Hrushovski's proof of the Mordell–Lang conjecture for function fields. == Connections to related branches of mathematical logic == === Finite model theory === Finite model theory, which concentrates on finite structures, diverges significantly from the study of infinite structures in both the problems studied and the techniques used. In particular, many central results of classical model theory fail when restricted to finite structures.
These include the compactness theorem, Gödel's completeness theorem, and the method of ultraproducts for first-order logic. At the interface of finite and infinite model theory are algorithmic or computable model theory and the study of 0-1 laws, where the infinite models of a generic theory of a class of structures provide information on the distribution of finite models. Prominent application areas of finite model theory are descriptive complexity theory, database theory and formal language theory. === Set theory === Any set theory (which is expressed in a countable language), if it is consistent, has a countable model; this is known as Skolem's paradox, since there are sentences in set theory which postulate the existence of uncountable sets and yet these sentences are true in our countable model. In particular, the proof of the independence of the continuum hypothesis requires considering sets in models which appear to be uncountable when viewed from within the model, but are countable to someone outside the model. The model-theoretic viewpoint has been useful in set theory; for example in Kurt Gödel's work on the constructible universe, which, along with the method of forcing developed by Paul Cohen, can be used to prove the (again philosophically interesting) independence of the axiom of choice and the continuum hypothesis from the other axioms of set theory. In the other direction, model theory is itself formalised within Zermelo–Fraenkel set theory. For instance, the development of the fundamentals of model theory (such as the compactness theorem) relies on the axiom of choice; indeed, the compactness theorem is equivalent, over Zermelo–Fraenkel set theory without choice, to the Boolean prime ideal theorem. Other results in model theory depend on set-theoretic axioms beyond the standard ZFC framework. For example, if the Continuum Hypothesis holds then every countable model has an ultrapower which is saturated (in its own cardinality). Similarly, if the Generalized Continuum Hypothesis holds then every model has a saturated elementary extension. Neither of these results is provable in ZFC alone. Finally, some questions arising from model theory (such as compactness for infinitary logics) have been shown to be equivalent to large cardinal axioms. == See also == == Notes == == References == === Canonical textbooks === Chang, Chen Chung; Keisler, H. Jerome (1990) [1973]. Model Theory. Studies in Logic and the Foundations of Mathematics (3rd ed.). Elsevier. ISBN 978-0-444-88054-3. Chang, Chen Chung; Keisler, H. Jerome (2012) [1990]. Model Theory. Dover Books on Mathematics (3rd ed.). Dover Publications. p. 672. ISBN 978-0-486-48821-9. Hodges, Wilfrid (1997). A shorter model theory. Cambridge: Cambridge University Press. ISBN 978-0-521-58713-6. Kopperman, R. (1972). Model Theory and Its Applications. Boston: Allyn and Bacon. Marker, David (2002). Model Theory: An Introduction. Graduate Texts in Mathematics 217. Springer. ISBN 0-387-98760-6. === Other textbooks === Bell, John L.; Slomson, Alan B. (2006) [1969]. Models and Ultraproducts: An Introduction (reprint of 1974 ed.). Dover Publications. ISBN 0-486-44979-3. Ebbinghaus, Heinz-Dieter; Flum, Jörg; Thomas, Wolfgang (1994). Mathematical Logic. Springer. ISBN 0-387-94258-0. Hinman, Peter G. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0. Hodges, Wilfrid (1993). Model theory. Cambridge University Press. ISBN 0-521-30442-3. Manzano, María (1989). Teoría de modelos (in Spanish). Alianza Editorial. ISBN 9788420681269. Manzano, María. Model Theory. Oxford Logic Guides.
Vol. 37. Translated by De Queiroz, Ruy. Oxford University Press. 1999 [1989]. ISBN 978-0198538516. Poizat, Bruno (2000). A Course in Model Theory. Springer. ISBN 0-387-98655-3. Rautenberg, Wolfgang (2010). A Concise Introduction to Mathematical Logic (3rd ed.). New York: Springer Science+Business Media. doi:10.1007/978-1-4419-1221-3. ISBN 978-1-4419-1220-6. Rothmaler, Philipp (2000). Introduction to Model Theory (new ed.). Taylor & Francis. ISBN 90-5699-313-5. Tent, Katrin; Ziegler, Martin (2012). A Course in Model Theory. Cambridge University Press. ISBN 9780521763240. Kirby, Jonathan (2019). An Invitation to Model Theory. Cambridge University Press. ISBN 978-1-107-16388-1. === Free online texts === Chatzidakis, Zoé (2001). Introduction to Model Theory (PDF). 26 pages. Pillay, Anand (2002). Lecture Notes – Model Theory (PDF). 61 pages. "Model theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Hodges, Wilfrid, Model theory. The Stanford Encyclopedia of Philosophy, E. Zalta (ed.). Hodges, Wilfrid, First-order Model theory. The Stanford Encyclopedia of Philosophy, E. Zalta (ed.). Simmons, Harold (2004), An introduction to Good old fashioned model theory. Notes of an introductory course for postgraduates (with exercises). Barwise, J.; Feferman, S., eds. (1985). "Model-Theoretic Logics". Perspectives in Logic. 8. ISBN 3540909362.
Wikipedia/Model_theory