Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and the spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics).

Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or—in modern mathematics—purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a proof consisting of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and—in the case of abstraction from nature—some basic properties that are considered true starting points of the theory under consideration.

Mathematics is essential in the natural sciences, engineering, medicine, finance, computer science, and the social sciences. Although mathematics is extensively used for modeling phenomena, the fundamental truths of mathematics are independent of any scientific experimentation. Some areas of mathematics, such as statistics and game theory, are developed in close correlation with their applications and are often grouped under applied mathematics. Other areas are developed independently from any application (and are therefore called pure mathematics) but often later find practical applications.

Historically, the concept of a proof and its associated mathematical rigour first appeared in Greek mathematics, most notably in Euclid's Elements. From its beginning, mathematics was primarily divided into geometry and arithmetic (the manipulation of natural numbers and fractions), until the 16th and 17th centuries, when algebra and infinitesimal calculus were introduced as new fields. Since then, the interaction between mathematical innovations and scientific discoveries has led to a correlated increase in the development of both. At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method, which heralded a dramatic increase in the number of mathematical areas and their fields of application. The contemporary Mathematics Subject Classification lists more than sixty first-level areas of mathematics.

== Areas of mathematics ==

Before the Renaissance, mathematics was divided into two main areas: arithmetic, regarding the manipulation of numbers, and geometry, regarding the study of shapes. Some types of pseudoscience, such as numerology and astrology, were not then clearly distinguished from mathematics. During the Renaissance, two more areas appeared. Mathematical notation led to algebra which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, consisting of the two subfields differential calculus and integral calculus, is the study of continuous functions, which model the typically nonlinear relationships between varying quantities, as represented by variables. This division into four main areas—arithmetic, geometry, algebra, and calculus—endured until the end of the 19th century. Areas such as celestial mechanics and solid mechanics were then studied by mathematicians, but now are considered as belonging to physics.
The subject of combinatorics has been studied for much of recorded history, yet did not become a separate branch of mathematics until the seventeenth century. At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion of new areas of mathematics. The 2020 Mathematics Subject Classification contains no fewer than sixty-three first-level areas. Some of these areas correspond to the older division, as is true regarding number theory (the modern name for higher arithmetic) and geometry. Several other first-level areas have "geometry" in their names or are otherwise commonly considered part of geometry. Algebra and calculus do not appear as first-level areas but are respectively split into several first-level areas. Other first-level areas emerged during the 20th century or had not previously been considered as mathematics, such as mathematical logic and foundations.

=== Number theory ===

Number theory began with the manipulation of numbers, that is, natural numbers (ℕ), and later expanded to integers (ℤ) and rational numbers (ℚ). Number theory was once called arithmetic, but nowadays this term is mostly used for numerical calculations. Number theory dates back to ancient Babylon and probably China. Two prominent early number theorists were Euclid of ancient Greece and Diophantus of Alexandria. The modern study of number theory in its abstract form is largely attributed to Pierre de Fermat and Leonhard Euler. The field came to full fruition with the contributions of Adrien-Marie Legendre and Carl Friedrich Gauss.

Many easily stated number problems have solutions that require sophisticated methods, often from across mathematics. A prominent example is Fermat's Last Theorem. This conjecture was stated in 1637 by Pierre de Fermat, but it was proved only in 1994 by Andrew Wiles, who used tools including scheme theory from algebraic geometry, category theory, and homological algebra. Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort.

Number theory includes several subareas, including analytic number theory, algebraic number theory, geometry of numbers (method oriented), Diophantine equations, and transcendence theory (problem oriented).
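Goldbach's conjecture is easy to test empirically even though it resists proof. The following minimal Python sketch (ours; the function names are illustrative, not from any source) verifies the conjecture for small even numbers by brute force:

```python
def is_prime(n):
    """Trial division, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Check every even integer greater than 2 up to 100.
for n in range(4, 101, 2):
    assert goldbach_pair(n) is not None
print(goldbach_pair(100))  # (3, 97)
```

Such finite checks illustrate experimentation in mathematics: they support the conjecture, but unlike a proof they can never establish it for all even integers.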
=== Geometry ===

Geometry is one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the needs of surveying and architecture, but has since blossomed out into many other subfields.

A fundamental innovation was the ancient Greeks' introduction of the concept of proofs, which require that every assertion must be proved. For example, it is not sufficient to verify by measurement that, say, two lengths are equal; their equality must be proven via reasoning from previously accepted results (theorems) and a few basic statements. The basic statements are not subject to proof because they are self-evident (postulates), or are part of the definition of the subject of study (axioms). This principle, foundational for all mathematics, was first elaborated for geometry, and was systematized by Euclid around 300 BC in his book Elements. The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and the three-dimensional Euclidean space.

Euclidean geometry was developed without change of methods or scope until the 17th century, when René Descartes introduced what is now called Cartesian coordinates. This constituted a major change of paradigm: instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points using their coordinates, which are numbers. Algebra (and later, calculus) can thus be used to solve geometrical problems (see the sketch below). Geometry was split into two new subfields: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically.

Analytic geometry allows the study of curves unrelated to circles and lines. Such curves can be defined as the graph of functions, the study of which led to differential geometry. They can also be defined as implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider Euclidean spaces of higher than three dimensions.

In the 19th century, mathematicians discovered non-Euclidean geometries, which do not follow the parallel postulate. By questioning that postulate's truth, this discovery has been viewed as joining Russell's paradox in revealing the foundational crisis of mathematics. This aspect of the crisis was solved by systematizing the axiomatic method, and adopting that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that do not change under specific transformations of the space.
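As an illustration of the coordinate method described above (a minimal sketch of ours, not from the article), the following Python snippet compares two segment lengths exactly, by computing squared distances with rational arithmetic rather than by physical measurement:

```python
from fractions import Fraction

def squared_distance(p, q):
    """Squared Euclidean distance between points with rational coordinates."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

# Two segments in the Cartesian plane, with exact rational endpoints.
a = (Fraction(0), Fraction(0))
b = (Fraction(3), Fraction(4))   # segment ab has length 5
c = (Fraction(1), Fraction(1))
d = (Fraction(6), Fraction(1))   # segment cd has length 5

# Comparing squared lengths avoids square roots entirely,
# so the equality test is exact, not approximate.
print(squared_distance(a, b) == squared_distance(c, d))  # True
```

The exact arithmetic is the point: the comparison is a small deduction from the coordinates, not a measurement with error bars.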
Today's subareas of geometry include:
- Projective geometry, introduced in the 16th century by Girard Desargues, extends Euclidean geometry by adding points at infinity at which parallel lines intersect. This simplifies many aspects of classical geometry by unifying the treatments for intersecting and parallel lines.
- Affine geometry, the study of properties relative to parallelism and independent from the concept of length.
- Differential geometry, the study of curves, surfaces, and their generalizations, which are defined using differentiable functions.
- Manifold theory, the study of shapes that are not necessarily embedded in a larger space.
- Riemannian geometry, the study of distance properties in curved spaces.
- Algebraic geometry, the study of curves, surfaces, and their generalizations, which are defined using polynomials.
- Topology, the study of properties that are kept under continuous deformations.
- Algebraic topology, the use in topology of algebraic methods, mainly homological algebra.
- Discrete geometry, the study of finite configurations in geometry.
- Convex geometry, the study of convex sets, which takes its importance from its applications in optimization.
- Complex geometry, the geometry obtained by replacing real numbers with complex numbers.

=== Algebra ===

Algebra is the art of manipulating equations and formulas. Diophantus (3rd century) and al-Khwarizmi (9th century) were the two main precursors of algebra. Diophantus solved some equations involving unknown natural numbers by deducing new relations until he obtained the solution. Al-Khwarizmi introduced systematic methods for transforming equations, such as moving a term from one side of an equation to the other side. The term algebra is derived from the Arabic word al-jabr, meaning 'the reunion of broken parts', which he used for naming one of these methods in the title of his main treatise.

Algebra became an area in its own right only with François Viète (1540–1603), who introduced the use of variables for representing unknown or unspecified numbers. Variables allow mathematicians to describe the operations that have to be done on the numbers represented using mathematical formulas.

Until the 19th century, algebra consisted mainly of the study of linear equations (presently linear algebra), and polynomial equations in a single unknown, which were called algebraic equations (a term still in use, although it may be ambiguous). During the 19th century, mathematicians began to use variables to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which generalizations of arithmetic operations are often valid. The concept of algebraic structure addresses this, consisting of a set whose elements are unspecified, of operations acting on the elements of the set, and rules that these operations must follow. The scope of algebra thus grew to include the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, as established by the influence and works of Emmy Noether, and popularized by Van der Waerden's book Moderne Algebra.

Some types of algebraic structures have useful and often fundamental properties, in many areas of mathematics. Their study became autonomous parts of algebra, and include:
- group theory
- field theory
- vector spaces, whose study is essentially the same as linear algebra
- ring theory
- commutative algebra, which is the study of commutative rings, includes the study of polynomials, and is a foundational part of algebraic geometry
- homological algebra
- Lie algebra and Lie group theory
- Boolean algebra, which is widely used for the study of the logical structure of computers

The study of types of algebraic structures as mathematical objects is the purpose of universal algebra and category theory. The latter applies to every mathematical structure (not only algebraic ones). At its origin, it was introduced, together with homological algebra, for allowing the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.
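To make the notion of an algebraic structure described above concrete, the sketch below (our illustration; the function names are not from any standard library) checks the group axioms for the integers modulo n under addition, one of the "modular integers" mentioned earlier:

```python
from itertools import product

def is_group(elements, op):
    """Check closure, associativity, identity, and inverses for a finite set."""
    elements = list(elements)
    # Closure: a·b stays in the set.
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a·b)·c == a·(b·c).
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity element e with e·a == a·e == a.
    identities = [e for e in elements
                  if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not identities:
        return False
    e = identities[0]
    # Every element has an inverse.
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

n = 6
print(is_group(range(n), lambda a, b: (a + b) % n))      # True: (Z/6Z, +)
print(is_group(range(1, n), lambda a, b: (a * b) % n))   # False: 6 is not prime
```

Swapping in a different set and operation lets the same axioms describe structures as different as geometric transformations and modular integers, which is exactly the generality the concept was invented for.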
=== Calculus and analysis ===

Calculus, formerly called infinitesimal calculus, was introduced independently and simultaneously by 17th-century mathematicians Newton and Leibniz. It is fundamentally the study of the relationship of variables that depend on each other. Calculus was expanded in the 18th century by Euler with the introduction of the concept of a function and many other results. Presently, "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts.

Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. Analysis includes many subareas shared by other areas of mathematics, which include:
- Multivariable calculus
- Functional analysis, where variables represent varying functions
- Integration, measure theory and potential theory, all strongly related with probability theory on a continuum
- Ordinary differential equations
- Partial differential equations
- Numerical analysis, mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications

=== Discrete mathematics ===

Discrete mathematics, broadly speaking, is the study of individual, countable mathematical objects. An example is the set of all integers. Because the objects of study here are discrete, the methods of calculus and mathematical analysis do not directly apply. Algorithms—especially their implementation and computational complexity—play a major role in discrete mathematics.

The four color theorem and optimal sphere packing were two major problems of discrete mathematics solved in the second half of the 20th century. The P versus NP problem, which remains open to this day, is also important for discrete mathematics, since its solution would potentially impact a large number of computationally difficult problems.

Discrete mathematics includes:
- Combinatorics, the art of enumerating mathematical objects that satisfy some given constraints. Originally, these objects were elements or subsets of a given set; this has been extended to various objects, which establishes a strong link between combinatorics and other parts of discrete mathematics. For example, discrete geometry includes counting configurations of geometric shapes (a minimal enumeration sketch follows this list).
- Graph theory and hypergraphs
- Coding theory, including error correcting codes and a part of cryptography
- Matroid theory
- Discrete geometry
- Discrete probability distributions
- Game theory (although continuous games are also studied, most common games, such as chess and poker, are discrete)
- Discrete optimization, including combinatorial optimization, integer programming, constraint programming
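As a minimal illustration of combinatorial enumeration (our example, not from the article), the following Python snippet counts the subsets of a given set that satisfy a constraint, and checks the unconstrained count against the binomial coefficient:

```python
from itertools import combinations
from math import comb

# Enumerate all 3-element subsets of {1, ..., 6}.
elements = range(1, 7)
subsets = list(combinations(elements, 3))
print(len(subsets), comb(6, 3))  # 20 20

# Constrained enumeration: subsets whose elements sum to an even number.
even_sum = [s for s in subsets if sum(s) % 2 == 0]
print(len(even_sum))  # 10
```

Brute-force enumeration like this is only feasible for small sets; much of combinatorics consists of finding closed-form or asymptotic counts that avoid it.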
=== Mathematical logic and set theory ===

The two subjects of mathematical logic and set theory have belonged to mathematics since the end of the 19th century. Before this period, sets were not considered to be mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians.

Before Cantor's study of infinite sets, mathematicians were reluctant to consider actually infinite collections, and considered infinity to be the result of endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets but by showing that this implies different sizes of infinity, per Cantor's diagonal argument. This led to the controversy over Cantor's set theory.

In the same period, various areas of mathematics concluded the former intuitive definitions of the basic mathematical objects were insufficient for ensuring mathematical rigour. This became the foundational crisis of mathematics. It was eventually solved in mainstream mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have. For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning. This mathematical abstraction from reality is embodied in the modern philosophy of formalism, as founded by David Hilbert around 1910.

The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature, and use their opinion—sometimes called "intuition"—to guide their study and proofs. The approach allows considering "logics" (that is, sets of allowed deducing rules), theorems, proofs, etc. as mathematical objects, and to prove theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking, that in every consistent formal system that contains the natural numbers, there are theorems that are true (that is, provable in a stronger system) but not provable inside the system.

This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by Brouwer, who promoted intuitionistic logic, which explicitly lacks the law of excluded middle. These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside other theories), proof theory, type theory, computability theory and computational complexity theory. Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, formal verification, program analysis, proof assistants and other aspects of computer science contributed in turn to the expansion of these logical theories.
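Cantor's diagonal argument, mentioned earlier in this section, has a direct computational reading. In the minimal Python sketch below (ours, for illustration), each infinite binary sequence is represented as a function from an index to a bit; given any finite list of such sequences, the construction produces a sequence that differs from the n-th listed one at position n, so it cannot appear anywhere in the list:

```python
# Each "sequence" is a function from an index n to a bit, so it can
# stand in for an infinite binary sequence.
listed = [
    lambda n: 0,      # 000000...
    lambda n: 1,      # 111111...
    lambda n: n % 2,  # 010101...
]

def diagonal(sequences):
    """Return a sequence differing from sequences[n] at position n."""
    return lambda n: 1 - sequences[n](n) if n < len(sequences) else 0

d = diagonal(listed)
# d differs from every listed sequence at the diagonal position.
for n, s in enumerate(listed):
    print(d(n), s(n), d(n) != s(n))  # the bits always differ
```

Any finite list can be escaped this way; Cantor's theorem extends the same construction to any purported enumeration of all infinite binary sequences, showing that they are uncountable.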
=== Statistics and other decision sciences ===

The field of statistics is a mathematical application that is employed for the collection and processing of data samples, using procedures based on mathematical methods, especially probability theory. Statisticians generate data with random sampling or randomized experiments. Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints. For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence. Because of its use of optimization, the mathematical theory of statistics overlaps with other decision sciences, such as operations research, control theory, and mathematical economics.

=== Computational mathematics ===

Computational mathematics is the study of mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretization, with special focus on rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.
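Rounding errors, the central concern of numerical analysis named above, are easy to exhibit. A minimal Python sketch (ours): summing 0.1 ten times in binary floating point does not give exactly 1.0, while a compensated summation such as math.fsum removes the accumulated error:

```python
import math

# 0.1 has no exact binary floating-point representation,
# so naive repeated addition accumulates rounding error.
naive = 0.0
for _ in range(10):
    naive += 0.1

print(naive == 1.0)           # False
print(naive)                  # 0.9999999999999999
print(math.fsum([0.1] * 10))  # 1.0 (correctly rounded sum)
```

Controlling how such tiny errors propagate through millions of operations is what distinguishes a usable numerical method from an unstable one.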
== History ==

=== Etymology ===

The word mathematics comes from the Ancient Greek word máthēma (μάθημα), meaning 'something learned, knowledge, mathematics', and the derived expression mathēmatikḗ tékhnē (μαθηματικὴ τέχνη), meaning 'mathematical science'. It entered the English language during the Late Middle English period through French and Latin.

Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί)—which at the time meant "learners" rather than "mathematicians" in the modern sense. The Pythagoreans were likely the first to constrain the use of the word to just the study of arithmetic and geometry. By the time of Aristotle (384–322 BC) this meaning was fully established.

In Latin and English, until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This change has resulted in several mistranslations: for example, Saint Augustine's warning that Christians should beware of mathematici, meaning "astrologers", is sometimes mistranslated as a condemnation of mathematicians.

The apparent plural form in English goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural ta mathēmatiká (τὰ μαθηματικά) and means roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, inherited from Greek. In English, the noun mathematics takes a singular verb. It is often shortened to maths or, in North America, math.

=== Ancient ===

In addition to recognizing how to count physical objects, prehistoric peoples may have also known how to count abstract quantities, like time—days, seasons, or years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy. The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appears in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time.

In the 6th century BC, Greek mathematics began to emerge as a distinct discipline, and some Ancient Greeks such as the Pythagoreans appeared to have considered it a subject in its own right. Around 300 BC, Euclid organized mathematical knowledge by way of postulates and first principles, which evolved into the axiomatic method that is used in mathematics today, consisting of definition, axiom, theorem, and proof. His book, Elements, is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes (c. 287 – c. 212 BC) of Syracuse. He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC), trigonometry (Hipparchus of Nicaea, 2nd century BC), and the beginnings of algebra (Diophantus, 3rd century AD).

The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series.

=== Medieval and later ===

During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. The Greek and Arabic mathematical texts were in turn translated to Latin during the Middle Ages and made available in Europe.
During the early modern period, mathematics began to develop at an accelerating pace in Western Europe, with innovations that revolutionized mathematics, such as the introduction of variables and symbolic notation by François Viète (1540–1603), the introduction of logarithms by John Napier in 1614, which greatly simplified numerical calculations, especially for astronomy and marine navigation, the introduction of coordinates by René Descartes (1596–1650) for reducing geometry to algebra, and the development of calculus by Isaac Newton (1643–1727) and Gottfried Leibniz (1646–1716). Leonhard Euler (1707–1783), the most notable mathematician of the 18th century, unified these innovations into a single corpus with a standardized terminology, and completed them with the discovery and the proof of numerous theorems.

Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system—if powerful enough to describe arithmetic—will contain true propositions that cannot be proved.

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews (MR) database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."

== Symbolic notation and terminology ==

Mathematical notation is widely used in science and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way. This notation consists of symbols used for representing operations, unspecified numbers, relations and any other mathematical objects, and then assembling them into expressions and formulas. More precisely, numbers and other mathematical objects are represented by symbols called variables, which are generally Latin or Greek letters, and often include subscripts. Operations and relations are generally represented by specific symbols or glyphs, such as + (plus), × (multiplication), ∫ (integral), = (equal), and < (less than). All these symbols are generally grouped according to specific rules to form expressions and formulas. Normally, expressions and formulas do not appear alone, but are included in sentences of the current language, where expressions play the role of noun phrases and formulas play the role of clauses.
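To make the noun-phrase/clause distinction concrete, here is a minimal sketch of ours in LaTeX notation:

```latex
% An expression denotes an object, the way a noun phrase does:
%   "the number x^2 - 1"
\[ x^2 - 1 \]

% A formula makes an assertion about objects, the way a clause does:
%   "x^2 - 1 equals (x - 1)(x + 1) for every x"
\[ x^2 - 1 = (x - 1)(x + 1) \]
```

The first fragment names a value; only the second can be true or false, which is why formulas, not expressions, are what proofs establish.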
Mathematics has developed a rich terminology covering a broad range of fields that study the properties of various abstract, idealized objects and how they interact. It is based on rigorous definitions that provide a standard foundation for communication. An axiom or postulate is a mathematical statement that is taken to be true without need of proof. If a mathematical statement has yet to be proven (or disproven), it is termed a conjecture. Through a series of rigorous arguments employing deductive reasoning, a statement that is proven to be true becomes a theorem. A specialized theorem that is mainly used to prove another theorem is called a lemma. A proven instance that forms part of a more general finding is termed a corollary.

Numerous technical terms used in mathematics are neologisms, such as polynomial and homeomorphism. Other technical terms are words of the common language that are used in an accurate meaning that may differ slightly from their common meaning. For example, in mathematics, "or" means "one, the other or both", while, in common language, it is either ambiguous or means "one or the other but not both" (in mathematics, the latter is called "exclusive or"). Finally, many mathematical terms are common words that are used with a completely different meaning. This may lead to sentences that are correct and true mathematical assertions, but appear to be nonsense to people who do not have the required background. For example, "every free module is flat" and "a field is always a ring".

== Relationship with sciences ==

Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model.

There is still a philosophical debate whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that, if a result or a theory is wrong, this can be proved by providing a counterexample. Similarly as in science, theories and results (theorems) are often obtained from experimentation. In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.
=== Pure and applied mathematics ===

Until the 19th century, the development of mathematics in the West was mainly motivated by the needs of technology and science, and there was no clear distinction between pure and applied mathematics. For example, the natural numbers and arithmetic were introduced for the need of counting, and geometry was motivated by surveying, architecture and astronomy. Later, Isaac Newton introduced infinitesimal calculus for explaining the movement of the planets with his law of gravitation. Moreover, most mathematicians were also scientists, and many scientists were also mathematicians. However, a notable exception occurred with the tradition of pure mathematics in Ancient Greece. The problem of integer factorization, for example, which goes back to Euclid in 300 BC, had no practical application before its use in the RSA cryptosystem, now widely used for the security of computer networks (a toy illustration appears at the end of this section).

In the 19th century, mathematicians such as Karl Weierstrass and Richard Dedekind increasingly focused their research on internal problems, that is, pure mathematics. This led to a split of mathematics into pure mathematics and applied mathematics, the latter often being considered as having a lower value among mathematical purists. However, the lines between the two are frequently blurred.

The aftermath of World War II led to a surge in the development of applied mathematics in the US and elsewhere. Many of the theories developed for applications were found interesting from the point of view of pure mathematics, and many results of pure mathematics were shown to have applications outside mathematics; in turn, the study of these applications may give new insights on the "pure theory".

An example of the first case is the theory of distributions, introduced by Laurent Schwartz for validating computations done in quantum mechanics, which immediately became an important tool of (pure) mathematical analysis. An example of the second case is the decidability of the first-order theory of the real numbers, a problem of pure mathematics that was proved true by Alfred Tarski, with an algorithm that is impossible to implement because of a computational complexity that is much too high. For getting an algorithm that can be implemented and can solve systems of polynomial equations and inequalities, George Collins introduced the cylindrical algebraic decomposition that became a fundamental tool in real algebraic geometry.

In the present day, the distinction between pure and applied mathematics is more a question of personal research aim of mathematicians than a division of mathematics into broad areas. The Mathematics Subject Classification has a section for "general applied mathematics" but does not mention "pure mathematics". However, these terms are still used in names of some university departments, such as at the Faculty of Mathematics at the University of Cambridge.
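As a toy illustration of the integer-factorization point above (ours; this is textbook RSA with tiny primes, utterly insecure and for exposition only):

```python
# Textbook RSA with toy numbers: security rests on the difficulty of
# factoring n back into p and q. Real keys use primes of 1024+ bits.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120, Euler's totient of n
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
decrypted = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(ciphertext, decrypted)       # 2790 65
```

Publishing n and e lets anyone encrypt, but recovering d requires factoring n, which is believed to be infeasible for large n; Euclid's ancient problem thus became a security assumption.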
=== Unreasonable effectiveness ===

The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner. It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced.

Examples of unexpected applications of mathematical theories can be found in many areas of mathematics. A notable example is the prime factorization of natural numbers, which was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem. A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). It was almost 2,000 years later that Johannes Kepler discovered that the trajectories of the planets are ellipses.

In the 19th century, the internal development of geometry (pure mathematics) led to the definition and study of non-Euclidean geometries, spaces of dimension higher than three, and manifolds. At that time, these concepts seemed totally disconnected from physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity that uses these concepts fundamentally. In particular, the spacetime of special relativity is a non-Euclidean space of dimension four, and the spacetime of general relativity is a (curved) manifold of dimension four.

A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and the baryon Ω⁻. In both cases, the equations of the theories had unexplained solutions, which led to the conjecture of the existence of an unknown particle, and the search for these particles. In both cases, these particles were discovered a few years later by specific experiments.

=== Specific sciences ===

==== Physics ====

Mathematics and physics have influenced each other over their modern history. Modern physics uses mathematics abundantly, and is also considered to be the motivation of major mathematical developments.

==== Computing ====

Computing is closely related to mathematics in several ways. Theoretical computer science is considered to be mathematical in nature. Communication technologies apply branches of mathematics that may be very old (e.g., arithmetic), especially with respect to transmission security, in cryptography and coding theory. Discrete mathematics is useful in many areas of computer science, such as complexity theory, information theory, and graph theory. In 1998, the Kepler conjecture on sphere packing seemed to also be partially proven by computer.

==== Biology and chemistry ====

Biology uses probability extensively in fields such as ecology or neurobiology. Most discussion of probability centers on the concept of evolutionary fitness. Ecology heavily uses modeling to simulate population dynamics, study ecosystems such as the predator–prey model, measure pollution diffusion, or to assess climate change. The dynamics of a population can be modeled by coupled differential equations, such as the Lotka–Volterra equations.
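A minimal numerical sketch (ours) of the Lotka–Volterra predator–prey system, dx/dt = αx − βxy and dy/dt = δxy − γy, integrated with a simple explicit Euler step; the parameter values are arbitrary illustrations, not taken from any study:

```python
# Lotka-Volterra predator-prey model, integrated with explicit Euler.
# x: prey population, y: predator population; parameters are illustrative.
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4
x, y = 10.0, 10.0
dt, steps = 0.001, 100_000  # integrate over 100 time units

for _ in range(steps):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy

print(round(x, 2), round(y, 2))  # populations oscillate rather than settle
```

Explicit Euler slowly distorts the closed orbits of this system, which is itself an instance of the discretization and rounding errors studied by numerical analysis.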
Statistical hypothesis testing is run on data from clinical trials to determine whether a new treatment works. Since the start of the 20th century, chemistry has used computing to model molecules in three dimensions.

==== Earth sciences ====

Structural geology and climatology use probabilistic models to predict the risk of natural catastrophes. Similarly, meteorology, oceanography, and planetology also use mathematics due to their heavy use of models.

==== Social sciences ====

Areas of mathematics used in the social sciences include probability/statistics and differential equations. These are used in linguistics, economics, sociology, and psychology.

Often the fundamental postulate of mathematical economics is that of the rational individual actor – Homo economicus (lit. 'economic man'). In this model, the individual seeks to maximize their self-interest, and always makes optimal choices using perfect information. This atomistic view of economics allows it to relatively easily mathematize its thinking, because individual calculations are transposed into mathematical calculations. Such mathematical modeling allows one to probe economic mechanisms. Some reject or criticise the concept of Homo economicus. Economists note that real people have limited information, make poor choices, and care about fairness and altruism, not just personal gain. Without mathematical modeling, it is hard to go beyond statistical observations or untestable speculation. Mathematical modeling allows economists to create structured frameworks to test hypotheses and analyze complex interactions. Models provide clarity and precision, enabling the translation of theoretical concepts into quantifiable predictions that can be tested against real-world data.

At the start of the 20th century, there was a development to express historical movements in formulas. In 1922, Nikolai Kondratiev discerned the ~50-year-long Kondratiev cycle, which explains phases of economic growth or crisis. Towards the end of the 19th century, mathematicians extended their analysis into geopolitics. Peter Turchin developed cliodynamics in the 1990s.

Mathematization of the social sciences is not without risk. In the controversial book Fashionable Nonsense (1997), Sokal and Bricmont denounced the unfounded or abusive use of scientific terminology, particularly from mathematics or physics, in the social sciences. The study of complex systems (evolution of unemployment, business capital, demographic evolution of a population, etc.) uses mathematical knowledge. However, the choice of counting criteria, particularly for unemployment, or of models, can be subject to controversy.

== Philosophy ==

=== Reality ===

The connection between mathematics and material reality has led to philosophical debates since at least the time of Pythagoras. The ancient philosopher Plato argued that abstractions that reflect material reality have themselves a reality that exists outside space and time. As a result, the philosophical view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism. Independently of their possible philosophical opinions, modern mathematicians may be generally considered as Platonists, since they think of and talk of their objects of study as real objects.
Armand Borel summarized this view of mathematical reality as follows, and provided quotations of G. H. Hardy, Charles Hermite, Henri Poincaré and Albert Einstein that support his views.

Something becomes objective (as opposed to "subjective") as soon as we are convinced that it exists in the minds of others in the same form as it does in ours and that we can think about it and discuss it together. Because the language of mathematics is so precise, it is ideally suited to defining concepts for which such a consensus exists. In my opinion, that is sufficient to provide us with a feeling of an objective existence, of a reality of mathematics ...

Nevertheless, Platonism and the concurrent views on abstraction do not explain the unreasonable effectiveness of mathematics (as Platonism assumes mathematics exists independently, but does not explain why it matches reality).

=== Proposed definitions ===

There is no general consensus about the definition of mathematics or its epistemological status—that is, its place within knowledge. A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable. There is not even consensus on whether mathematics is an art or a science. Some just say, "mathematics is what mathematicians do".

A common approach is to define mathematics by its object of study. Aristotle defined mathematics as "the science of quantity" and this definition prevailed until the 18th century. However, Aristotle also noted a focus on quantity alone may not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property "separable in thought" from real instances set mathematics apart. In the 19th century, when mathematicians began to address topics—such as infinite sets—which have no clear-cut relation to physical reality, a variety of new definitions were given. With the large number of new areas of mathematics that have appeared since the beginning of the 20th century, defining mathematics by its object of study has become increasingly difficult. For example, in lieu of a definition, Saunders Mac Lane in Mathematics, form and function summarizes the basics of several areas of mathematics, emphasizing their inter-connectedness, and observes:

the development of Mathematics provides a tightly connected network of formal rules, concepts, and systems. Nodes of this network are closely bound to procedures useful in human activities and to questions arising in science. The transition from activities to the formal Mathematical systems is guided by a variety of general insights and ideas.

Another approach for defining mathematics is to use its methods. For example, an area of study is often qualified as mathematics as soon as one can prove theorems—assertions whose validity relies on a proof, that is, a purely logical deduction.
=== Rigor ===

Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of inference rules, without any use of empirical evidence and intuition. Rigorous reasoning is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere. Despite mathematics' concision, rigorous proofs can require hundreds of pages to express, such as the 255-page Feit–Thompson theorem. The emergence of computer-assisted proofs has allowed proof lengths to further expand. The result of this trend is a philosophy of the quasi-empiricist proof that cannot be considered infallible, but has a probability attached to it.

The concept of rigor in mathematics dates back to ancient Greece, where Greek society encouraged logical, deductive reasoning. However, this rigorous approach tended to discourage the exploration of new approaches, such as irrational numbers and concepts of infinity. The method of demonstrating rigorous proof was enhanced in the sixteenth century through the use of symbolic notation. In the 18th century, social transition led to mathematicians earning their keep through teaching, which led to more careful thinking about the underlying concepts of mathematics. This produced more rigorous approaches, while transitioning from geometric methods to algebraic and then arithmetic proofs.

At the end of the 19th century, it appeared that the definitions of the basic concepts of mathematics were not accurate enough for avoiding paradoxes (non-Euclidean geometries and the Weierstrass function) and contradictions (Russell's paradox). This was solved by the inclusion of axioms with the apodictic inference rules of mathematical theories; the re-introduction of the axiomatic method pioneered by the ancient Greeks. As a result, "rigor" is no longer a relevant concept in mathematics, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor comes into play is in the socialized aspects of a proof, wherein it may be demonstrably refuted by other mathematicians. After a proof has been accepted for many years or even decades, it can then be considered reliable. Nevertheless, the concept of "rigor" may remain useful for teaching beginners what a mathematical proof is.

== Training and practice ==

=== Education ===

Mathematics has a remarkable ability to cross cultural boundaries and time periods. As a human activity, the practice of mathematics has a social side, which includes education, careers, recognition, popularization, and so on. In education, mathematics is a core part of the curriculum and forms an important element of the STEM academic disciplines. Prominent careers for professional mathematicians include mathematics teacher or professor, statistician, actuary, financial analyst, economist, accountant, commodity trader, or computer consultant.

Archaeological evidence shows that instruction in mathematics occurred as early as the second millennium BCE in ancient Babylonia. Comparable evidence has been unearthed for scribal mathematics training in the ancient Near East and then for the Greco-Roman world starting around 300 BCE. The oldest known mathematics textbook is the Rhind papyrus, dated from c. 1650 BCE in Egypt.
Due to a scarcity of books, mathematical teachings in ancient India were communicated using memorized oral tradition since the Vedic period (c. 1500 – c. 500 BCE). In Imperial China during the Tang dynasty (618–907 CE), a mathematics curriculum was adopted for the civil service exam to join the state bureaucracy. Following the Dark Ages, mathematics education in Europe was provided by religious schools as part of the Quadrivium. Formal instruction in pedagogy began with Jesuit schools in the 16th and 17th century. Most mathematical curricula remained at a basic and practical level until the nineteenth century, when mathematics education began to flourish in France and Germany. The oldest journal addressing instruction in mathematics was L'Enseignement Mathématique, which began publication in 1899. The Western advancements in science and technology led to the establishment of centralized education systems in many nation-states, with mathematics as a core component—initially for its military applications. While the content of courses varies, in the present day nearly all countries teach mathematics to students for significant amounts of time.

During school, mathematical capabilities and positive expectations have a strong association with career interest in the field. Extrinsic factors such as feedback motivation by teachers, parents, and peer groups can influence the level of interest in mathematics. Some students studying mathematics may develop an apprehension or fear about their performance in the subject. This is known as mathematical anxiety and is considered the most prominent of the disorders impacting academic performance. Mathematical anxiety can develop due to various factors such as parental and teacher attitudes, social stereotypes, and personal traits. Help to counteract the anxiety can come from changes in instructional approaches, from interactions with parents and teachers, and from tailored treatments for the individual.

=== Psychology (aesthetic, creativity and intuition) ===

The validity of a mathematical theorem relies only on the rigor of its proof, which could theoretically be done automatically by a computer program. This does not mean that there is no place for creativity in a mathematical work. On the contrary, many important mathematical results (theorems) are solutions of problems that other mathematicians failed to solve, and inventing a way to solve them may be a fundamental part of the solving process. An extreme example is Apéry's theorem: Roger Apéry provided only the ideas for a proof, and the formal proof was given only several months later by three other mathematicians.

Creativity and rigor are not the only psychological aspects of the activity of mathematicians. Some mathematicians can see their activity as a game, more specifically as solving puzzles. This aspect of mathematical activity is emphasized in recreational mathematics.
Mathematicians can find an aesthetic value in mathematics. Like beauty, it is hard to define, but it is commonly related to elegance, which involves qualities like simplicity, symmetry, completeness, and generality. G. H. Hardy in A Mathematician's Apology expressed the belief that the aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He also identified other criteria such as significance, unexpectedness, and inevitability, which contribute to mathematical aesthetics. Paul Erdős expressed this sentiment more ironically by speaking of "The Book", a supposed divine collection of the most beautiful proofs. The 1998 book Proofs from THE BOOK, inspired by Erdős, is a collection of particularly succinct and revelatory mathematical arguments. Some examples of particularly elegant results included are Euclid's proof that there are infinitely many prime numbers and the fast Fourier transform for harmonic analysis.

Some feel that to consider mathematics a science is to downplay its artistry and history in the seven traditional liberal arts. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematical results are created (as in art) or discovered (as in science). The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.

== Cultural impact ==

=== Artistic expression ===

Notes that sound well together to a Western ear are sounds whose fundamental frequencies of vibration are in simple ratios. For example, an octave doubles the frequency and a perfect fifth multiplies it by 3/2.

Humans, as well as some other animals, find symmetric patterns to be more beautiful. Mathematically, the symmetries of an object form a group known as the symmetry group. For example, the group underlying mirror symmetry is the cyclic group of two elements, ℤ/2ℤ. A Rorschach test is a figure invariant by this symmetry, as are butterfly and animal bodies more generally (at least on the surface). Waves on the sea surface possess translation symmetry: moving one's viewpoint by the distance between wave crests does not change one's view of the sea. Fractals possess self-similarity.
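A minimal sketch (ours) of the simple frequency ratios mentioned at the start of this section, taking A4 = 440 Hz as the reference pitch:

```python
# Consonant intervals correspond to simple frequency ratios.
base = 440.0  # A4, in Hz
intervals = {
    "unison":         1,      # 1:1
    "perfect fourth": 4 / 3,  # 4:3
    "perfect fifth":  3 / 2,  # 3:2
    "octave":         2,      # 2:1
}
for name, ratio in intervals.items():
    print(f"{name:>14}: {base * ratio:7.2f} Hz")
```

The smaller the integers in the ratio, the more consonant the interval tends to sound, which is the regularity the Pythagoreans first observed on vibrating strings.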
the AMS Leroy P. Steele Prize, awarded since 1970; and the Wolf Prize in Mathematics, also for lifetime achievement, instituted in 1978. A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list has achieved great celebrity among mathematicians, and at least thirteen of the problems (depending on how some are interpreted) have been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. Only one of them, the Riemann hypothesis, duplicates one of Hilbert's problems. A solution to any of these problems carries a 1 million dollar reward. To date, only one of these problems, the Poincaré conjecture, has been solved, by the Russian mathematician Grigori Perelman. == See also == == Notes == == References == === Citations === === Other sources === == Further reading ==
Algebra is a branch of mathematics that deals with abstract systems, known as algebraic structures, and the manipulation of expressions within those systems. It is a generalization of arithmetic that introduces variables and algebraic operations other than the standard arithmetic operations, such as addition and multiplication. Elementary algebra is the main form of algebra taught in schools. It examines mathematical statements using variables for unspecified values and seeks to determine for which values the statements are true. To do so, it uses different methods of transforming equations to isolate variables. Linear algebra is a closely related field that investigates linear equations and combinations of them called systems of linear equations. It provides methods to find the values that solve all equations in the system at the same time, and to study the set of these solutions. Abstract algebra studies algebraic structures, which consist of a set of mathematical objects together with one or several operations defined on that set. It is a generalization of elementary and linear algebra since it allows mathematical objects other than numbers and non-arithmetic operations. It distinguishes between different types of algebraic structures, such as groups, rings, and fields, based on the number of operations they use and the laws they follow, called axioms. Universal algebra and category theory provide general frameworks to investigate abstract patterns that characterize different classes of algebraic structures. Algebraic methods were first studied in the ancient period to solve specific problems in fields like geometry. Subsequent mathematicians examined general techniques to solve equations independent of their specific applications. They described equations and their solutions using words and abbreviations until the 16th and 17th centuries when a rigorous symbolic formalism was developed. In the mid-19th century, the scope of algebra broadened beyond a theory of equations to cover diverse types of algebraic operations and structures. Algebra is relevant to many branches of mathematics, such as geometry, topology, number theory, and calculus, and other fields of inquiry, like logic and the empirical sciences. == Definition and etymology == Algebra is the branch of mathematics that studies algebraic structures and the operations they use. An algebraic structure is a non-empty set of mathematical objects, such as the integers, together with algebraic operations defined on that set, like addition and multiplication. Algebra explores the laws, general characteristics, and types of algebraic structures. Within certain algebraic structures, it examines the use of variables in equations and how to manipulate these equations. Algebra is often understood as a generalization of arithmetic. Arithmetic studies operations like addition, subtraction, multiplication, and division, in a particular domain of numbers, such as the real numbers. Elementary algebra constitutes the first level of abstraction. Like arithmetic, it restricts itself to specific types of numbers and operations. It generalizes these operations by allowing indefinite quantities in the form of variables in addition to numbers. A higher level of abstraction is found in abstract algebra, which is not limited to a particular domain and examines algebraic structures such as groups and rings. It extends beyond typical arithmetic operations by also covering other types of operations. Universal
algebra is still more abstract in that it is not interested in specific algebraic structures but investigates the characteristics of algebraic structures in general. The term "algebra" is sometimes used in a narrower sense to refer only to elementary algebra or only to abstract algebra. When used as a countable noun, an algebra is a specific type of algebraic structure that involves a vector space equipped with a certain type of binary operation. Depending on the context, "algebra" can also refer to other algebraic structures, like a Lie algebra or an associative algebra. The word algebra comes from the Arabic term الجبر (al-jabr), which originally referred to the surgical treatment of bonesetting. In the 9th century, the term received a mathematical meaning when the Persian mathematician Muhammad ibn Musa al-Khwarizmi employed it to describe a method of solving equations and used it in the title of a treatise on algebra, al-Kitāb al-Mukhtaṣar fī Ḥisāb al-Jabr wal-Muqābalah [The Compendious Book on Calculation by Completion and Balancing], which was translated into Latin as Liber Algebrae et Almucabola. The word entered the English language in the 16th century from Italian, Spanish, and medieval Latin. Initially, its meaning was restricted to the theory of equations, that is, to the art of manipulating polynomial equations with the aim of solving them. This changed in the 19th century when the scope of algebra broadened to cover the study of diverse types of algebraic operations and structures together with their underlying axioms, the laws they follow. == Major branches == === Elementary algebra === Elementary algebra, also called school algebra, college algebra, and classical algebra, is the oldest and most basic form of algebra. It is a generalization of arithmetic that relies on variables and examines how mathematical statements may be transformed. Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithm. For example, the operation of addition combines two numbers, called the addends, into a third number, called the sum, as in 2 + 5 = 7. Elementary algebra relies on the same operations while allowing variables in addition to regular numbers. Variables are symbols for unspecified or unknown quantities. They make it possible to state relationships for which one does not know the exact values and to express general laws that are true, independent of which numbers are used. For example, the equation 2 × 3 = 3 × 2 belongs to arithmetic and expresses an equality only for these specific numbers. By replacing the numbers with variables, it is possible to express a general law that applies to any possible combination of numbers, like the commutative property of
multiplication, which is expressed in the equation a × b = b × a. Algebraic expressions are formed by using arithmetic operations to combine variables and numbers. By convention, the lowercase letters x, y, and z represent variables. In some cases, subscripts are added to distinguish variables, as in x_1, x_2, and x_3. The lowercase letters a, b, and c are usually used for constants and coefficients. The expression 5x + 3 is an algebraic expression created by multiplying the number 5 with the variable x and adding the number 3 to the result. Other examples of algebraic expressions are 32xyz and 64x_1^2 + 7x_2 − c. Some algebraic expressions take the form of statements that relate two expressions to one another. An equation is a statement formed by comparing two expressions, saying that they are equal. This can be expressed using the equals sign (=), as in 5x^2 + 6x = 3y + 4. Inequations involve a different type of comparison, saying that the two sides are different. This can be expressed using symbols such as the less-than sign (<), the greater-than sign (>), and the inequality sign (≠). Unlike other expressions, statements can be true or false, and their truth value usually depends on the values of the variables. For example, the statement x^2 = 4 is true if x is either 2 or −2 and false otherwise. Equations with variables can be divided into identity equations and conditional equations. Identity equations are true for all values that can be assigned to the variables, such as the equation 2x + 5x = 7x. Conditional equations are only true for some values. For example, the equation x + 4 = 9 is only true if x is 5.
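To see the distinction in practice, here is a minimal sketch using the open-source SymPy library; the choice of SymPy is this illustration's assumption, not something the article prescribes:

    from sympy import symbols, Eq, simplify, solve

    x = symbols('x')

    # Identity: 2x + 5x - 7x simplifies to 0, so the equation holds for every x.
    print(simplify(2*x + 5*x - 7*x) == 0)   # True
    # Conditional equations hold only for particular values.
    print(solve(Eq(x + 4, 9), x))           # [5]
    print(solve(Eq(x**2, 4), x))            # [-2, 2]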
The main goal of elementary algebra is to determine the values for which a statement is true. This can be achieved by transforming and manipulating statements according to certain rules. A key principle guiding this process is that whatever operation is applied to one side of an equation also needs to be done to the other side. For example, if one subtracts 5 from the left side of an equation one also needs to subtract 5 from the right side to balance both sides. The goal of these steps is usually to isolate the variable one is interested in on one side, a process known as solving the equation for that variable. For example, the equation x − 7 = 4 can be solved for x by adding 7 to both sides, which isolates x on the left side and results in the equation x = 11. There are many other techniques used to solve equations. Simplification is employed to replace a complicated expression with an equivalent simpler one. For example, the expression 7x − 3x can be replaced with the expression 4x since 7x − 3x = (7 − 3)x = 4x by the distributive property. For statements with several variables, substitution is a common technique to replace one variable with an equivalent expression that does not use this variable. For example, if one knows that y = 3x then one can simplify the expression 7xy to arrive at 21x^2. In a similar way, if one knows the value of one variable one may be able to use it to determine the value of other variables. Algebraic equations can be interpreted geometrically to describe spatial figures in the form of a graph. To do so, the different variables in the equation are understood as coordinates and the values that solve the equation are interpreted as points of a graph. For example, if x is set to zero in the equation y = 0.5x − 1, then y must be −1 for the equation to be true. This means that the (x, y)-pair (0, −1) is part of the graph of the equation. The (x, y)-pair (0, 7), by contrast, does not solve the equation and is therefore not part of the graph. The graph encompasses the totality of (x, y)-pairs that solve the equation.
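The solving, simplification, and substitution techniques just described can be reproduced with a few lines of SymPy (again an assumed tool, used only for illustration):

    from sympy import symbols, Eq, solve

    x, y = symbols('x y')

    # Solving x - 7 = 4: SymPy applies the same balancing rules internally.
    print(solve(Eq(x - 7, 4), x))    # [11]
    # Simplification: 7x - 3x collapses to 4x by the distributive property.
    print(7*x - 3*x)                 # 4*x
    # Substitution: knowing y = 3x turns the expression 7xy into 21x**2.
    print((7*x*y).subs(y, 3*x))      # 21*x**2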
==== Polynomials ==== A polynomial is an expression consisting of one or more terms that are added or subtracted from each other, like x^4 + 3xy^2 + 5x^3 − 1. Each term is either a constant, a variable, or a product of a constant and variables. Each variable can be raised to a positive integer power. A monomial is a polynomial with one term while two- and three-term polynomials are called binomials and trinomials. The degree of a polynomial is the maximal value (among its terms) of the sum of the exponents of the variables (4 in the above example). Polynomials of degree one are called linear polynomials. Linear algebra studies systems of linear polynomials. A polynomial is said to be univariate or multivariate, depending on whether it uses one or more variables. Factorization is a method used to simplify polynomials, making it easier to analyze them and determine the values for which they evaluate to zero. Factorization consists of rewriting a polynomial as a product of several factors. For example, the polynomial x^2 − 3x − 10 can be factorized as (x + 2)(x − 5). The polynomial as a whole is zero if and only if one of its factors is zero, i.e., if x is either −2 or 5. Before the 19th century, much of algebra was devoted to polynomial equations, that is, equations obtained by equating a polynomial to zero. The first attempts at solving polynomial equations aimed to express the solutions in terms of nth roots. The solution of a second-degree polynomial equation of the form ax^2 + bx + c = 0 is given by the quadratic formula {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}}.} Solutions for the degrees 3 and 4 are given by the cubic and quartic formulas. There are no general solutions for higher degrees, as proven in the 19th century by the Abel–Ruffini theorem. Even when general solutions do not exist, approximate solutions can be found by numerical tools like the Newton–Raphson method. The fundamental theorem of algebra asserts that every univariate polynomial equation of positive degree with real or complex coefficients has at least one complex solution. Consequently, every polynomial of a positive degree can be factorized into linear polynomials. This theorem was proved at the beginning of the 19th century, but this does not close the problem since the theorem does not provide any way for computing the solutions.
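A short SymPy sketch (assumed tool) covering the techniques of this subsection: factorization, the quadratic formula, and a numerical fallback for degree five, where no general radical solution exists:

    from sympy import symbols, factor, nsolve, sqrt

    x = symbols('x')

    # Factorization: x**2 - 3*x - 10 = (x + 2)*(x - 5), so the zeros are -2 and 5.
    print(factor(x**2 - 3*x - 10))   # (x - 5)*(x + 2)

    # The quadratic formula applied to x**2 - 3*x - 10 = 0 (a=1, b=-3, c=-10).
    a, b, c = 1, -3, -10
    print([(-b + s*sqrt(b**2 - 4*a*c)) / (2*a) for s in (1, -1)])  # [5, -2]

    # Degree >= 5: no general solution in radicals (Abel-Ruffini), but a
    # Newton-style numerical method still finds an approximate root.
    print(nsolve(x**5 - x + 1, x, -1))   # approximately -1.1673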
=== Linear algebra === Linear algebra starts with the study of systems of linear equations. An equation is linear if it can be expressed in the form a_1x_1 + a_2x_2 + ... + a_nx_n = b, where a_1, a_2, ..., a_n and b are constants. Examples are x_1 − 7x_2 + 3x_3 = 0 and (1/4)x − y = 4. A system of linear equations is a set of linear equations for which one is interested in common solutions. Matrices are rectangular arrays of values that were originally introduced to provide a compact and synthetic notation for systems of linear equations. For example, the system of equations {\displaystyle {\begin{aligned}9x_{1}+3x_{2}-13x_{3}&=0\\2.3x_{1}+7x_{3}&=9\\-5x_{1}-17x_{2}&=-3\end{aligned}}} can be written as AX = B, where A, X and B are the matrices {\displaystyle A={\begin{bmatrix}9&3&-13\\2.3&0&7\\-5&-17&0\end{bmatrix}},\quad X={\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}},\quad B={\begin{bmatrix}0\\9\\-3\end{bmatrix}}.} Under some conditions on the number of rows and columns, matrices can be added, multiplied, and sometimes inverted. All methods for solving linear systems may be expressed as matrix manipulations using these operations. For example, solving the above system consists of computing an inverted matrix A^-1 such that A^-1 A = I, where I is the identity matrix. Then, multiplying both sides of the above matrix equation on the left by A^-1, one gets the solution of the system of linear equations as X = A^-1 B. Methods of solving systems of linear equations range from the introductory, like substitution and elimination, to more advanced techniques using matrices, such as Cramer's rule, Gaussian elimination, and LU decomposition. Some systems of equations are inconsistent, meaning that no solutions exist because the equations contradict each other. Consistent systems have either one unique solution or an infinite number of solutions.
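As a concrete check, the 3 × 3 system above can be solved in a few lines of NumPy (an assumed library choice; numpy.linalg.solve factorizes A rather than forming the inverse explicitly, which is numerically preferable):

    import numpy as np

    A = np.array([[9.0, 3.0, -13.0],
                  [2.3, 0.0, 7.0],
                  [-5.0, -17.0, 0.0]])
    B = np.array([0.0, 9.0, -3.0])

    X = np.linalg.solve(A, B)        # solves A @ X = B directly
    print(X)
    print(np.linalg.inv(A) @ B)      # same result via X = A**-1 B, as in the text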
The study of vector spaces and linear maps forms a large part of linear algebra. A vector space is an algebraic structure formed by a set with an addition that makes it an abelian group and a scalar multiplication that is compatible with addition (see vector space for details). A linear map is a function between vector spaces that is compatible with addition and scalar multiplication. In the case of finite-dimensional vector spaces, vectors and linear maps can be represented by matrices. It follows that the theories of matrices and finite-dimensional vector spaces are essentially the same. In particular, vector spaces provide a third way for expressing and manipulating systems of linear equations. From this perspective, a matrix is a representation of a linear map: if one chooses a particular basis to describe the vectors being transformed, then the entries in the matrix give the results of applying the linear map to the basis vectors. Systems of equations can be interpreted as geometric figures. For systems with two variables, each equation represents a line in two-dimensional space. The point where the two lines intersect is the solution of the full system because this is the only point that solves both the first and the second equation. For inconsistent systems, the two lines run parallel, meaning that there is no solution since they never intersect. If two equations are not independent then they describe the same line, meaning that every solution of one equation is also a solution of the other equation. These relations make it possible to seek solutions graphically by plotting the equations and determining where they intersect. The same principles also apply to systems of equations with more variables, with the difference being that the equations do not describe lines but higher dimensional figures. For instance, equations with three variables correspond to planes in three-dimensional space, and the points where all planes intersect solve the system of equations.
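The three geometric cases for two-variable systems (intersecting, parallel, or identical lines) can be told apart by the determinant of the coefficient matrix. The helper below is an illustrative sketch, not code from the article:

    def classify(a1, b1, c1, a2, b2, c2):
        """Classify the system a1*x + b1*y = c1, a2*x + b2*y = c2."""
        det = a1 * b2 - a2 * b1
        if det != 0:
            return "one solution (the lines intersect)"
        # Zero determinant: the lines are parallel or identical.
        if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
            return "infinitely many solutions (the same line)"
        return "no solution (parallel lines)"

    print(classify(1, -1, -1, 1, 1, 5))   # y = x + 1 and y = -x + 5 intersect
    print(classify(1, -1, 0, 2, -2, 3))   # parallel lines, inconsistent
    print(classify(1, -1, 0, 2, -2, 0))   # the same line written twice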
=== Abstract algebra === Abstract algebra, also called modern algebra, is the study of algebraic structures. An algebraic structure is a framework for understanding operations on mathematical objects, like the addition of numbers. While elementary algebra and linear algebra work within the confines of particular algebraic structures, abstract algebra takes a more general approach that compares how algebraic structures differ from each other and what types of algebraic structures there are, such as groups, rings, and fields. The key difference between these types of algebraic structures lies in the number of operations they use and the laws they obey. In mathematics education, abstract algebra refers to an advanced undergraduate course that mathematics majors take after completing courses in linear algebra. On a formal level, an algebraic structure is a set of mathematical objects, called the underlying set, together with one or several operations. Abstract algebra is primarily interested in binary operations, which take any two objects from the underlying set as inputs and map them to another object from this set as output. For example, the algebraic structure ⟨N, +⟩ has the natural numbers (N) as the underlying set and addition (+) as its binary operation. The underlying set can contain mathematical objects other than numbers, and the operations are not restricted to regular arithmetic operations. For instance, the underlying set of the symmetry group of a geometric object is made up of geometric transformations, such as rotations, under which the object remains unchanged. Its binary operation is function composition, which takes two transformations as input and has the transformation resulting from applying the first transformation followed by the second as its output. ==== Group theory ==== Abstract algebra classifies algebraic structures based on the laws or axioms that their operations obey and the number of operations they use. One of the most basic types is a group, which has one operation and requires that this operation is associative and has an identity element and inverse elements. An operation is associative if the order of several applications does not matter, i.e., if (a ∘ b) ∘ c is the same as a ∘ (b ∘ c) for all elements. An operation has an identity element or a neutral element if one element e exists that does not change the value of any other element, i.e., if a ∘ e = e ∘ a = a. An operation has inverse elements if for any element a there exists a reciprocal element a^-1 that undoes a. If an element operates on its inverse then the result is the neutral element e, expressed formally as a ∘ a^-1 = a^-1 ∘ a = e. Every algebraic structure that fulfills these requirements is a group. For example, ⟨Z, +⟩ is a group formed by the set of integers together with the operation of addition. The neutral element is 0 and the inverse element of any number a is −a. The natural numbers with addition, by contrast, do not form a group since they contain only positive integers and therefore lack inverse elements. Group theory examines the nature of groups, with basic theorems such as the fundamental theorem of finite abelian groups and the Feit–Thompson theorem. The latter was a key early step in one of the most important mathematical achievements of the 20th century: the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
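For a finite underlying set, the group axioms can be checked by brute force. The function below is an illustrative sketch (not from the article) that tests associativity, the existence of an identity, and inverses:

    def is_group(elements, op):
        elements = list(elements)
        # Associativity: (a o b) o c == a o (b o c) for all triples.
        assoc = all(op(op(a, b), c) == op(a, op(b, c))
                    for a in elements for b in elements for c in elements)
        # Identity: some e with a o e == e o a == a for every a.
        e = next((e for e in elements
                  if all(op(a, e) == op(e, a) == a for a in elements)), None)
        # Inverses: every a has some b with a o b == b o a == e.
        inverses = e is not None and all(
            any(op(a, b) == op(b, a) == e for b in elements) for a in elements)
        return assoc and inverses

    n = 6
    print(is_group(range(n), lambda a, b: (a + b) % n))    # True: integers mod 6 under addition
    print(is_group(range(1, n), lambda a, b: a * b % n))   # False: 2 has no inverse mod 6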
==== Ring theory and field theory ==== A ring is an algebraic structure with two operations that work similarly to the addition and multiplication of numbers and are named and generally denoted similarly. A ring is a commutative group under addition: the addition of the ring is associative, commutative, and has an identity element and inverse elements. The multiplication is distributive with respect to addition; that is, a(b + c) = ab + ac and (b + c)a = ba + ca. Moreover, multiplication is associative and has an identity element generally denoted as 1. Multiplication need not be commutative; if it is commutative, one has a commutative ring. The ring of integers (Z) is one of the simplest commutative rings. A field is a commutative ring such that 1 ≠ 0 and each nonzero element has a multiplicative inverse. The ring of integers does not form a field because it lacks multiplicative inverses. For example, the multiplicative inverse of 7 is 1/7, which is not an integer. The rational numbers, the real numbers, and the complex numbers each form a field with the operations of addition and multiplication. Ring theory is the study of rings, exploring concepts such as subrings, quotient rings, polynomial rings, and ideals as well as theorems such as Hilbert's basis theorem. Field theory is concerned with fields, examining field extensions, algebraic closures, and finite fields. Galois theory explores the relation between field theory and group theory, relying on the fundamental theorem of Galois theory.
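Multiplicative inverses are exactly what separates a field from a ring like Z. The integers modulo a prime p form a finite field, and Python's built-in pow(a, -1, p) (available since Python 3.8) computes the inverse, as this small sketch shows:

    p = 7
    for a in range(1, p):
        inv = pow(a, -1, p)          # modular multiplicative inverse
        assert a * inv % p == 1
        print(f"{a}^-1 = {inv} (mod {p})")
    # By contrast, 7 has no inverse inside the integers themselves: 1/7 is not an integer.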
==== Theories of interrelations among structures ==== Besides groups, rings, and fields, there are many other algebraic structures studied by algebra. They include magmas, semigroups, monoids, abelian groups, commutative rings, modules, lattices, vector spaces, algebras over a field, and associative and non-associative algebras. They differ from each other regarding the types of objects they describe and the requirements that their operations fulfill. Many are related to each other in that a basic structure can be turned into a more specialized structure by adding constraints. For example, a magma becomes a semigroup if its operation is associative. Homomorphisms are tools to examine structural features by comparing two algebraic structures. A homomorphism is a function from the underlying set of one algebraic structure to the underlying set of another algebraic structure that preserves certain structural characteristics. If the two algebraic structures use binary operations and have the form ⟨A, ∘⟩ and ⟨B, ⋆⟩ then the function h : A → B is a homomorphism if it fulfills the following requirement: h(x ∘ y) = h(x) ⋆ h(y). The existence of a homomorphism reveals that the operation ⋆ in the second algebraic structure plays the same role as the operation ∘ does in the first algebraic structure. Isomorphisms are a special type of homomorphism that indicates a high degree of similarity between two algebraic structures. An isomorphism is a bijective homomorphism, meaning that it establishes a one-to-one relationship between the elements of the two algebraic structures. This implies that every element of the first algebraic structure is mapped to one unique element in the second structure without any unmapped elements in the second structure. Another tool of comparison is the relation between an algebraic structure and its subalgebra. The algebraic structure and its subalgebra use the same operations, which follow the same axioms. The only difference is that the underlying set of the subalgebra is a subset of the underlying set of the algebraic structure. All operations in the subalgebra are required to be closed in its underlying set, meaning that they only produce elements that belong to this set. For example, the set of even integers together with addition is a subalgebra of the full set of integers together with addition. This is the case because the sum of two even numbers is again an even number. But the set of odd integers together with addition is not a subalgebra because it is not closed: adding two odd numbers produces an even number, which is not part of the chosen subset.
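The homomorphism condition is easy to test numerically. A classic example, chosen here for illustration, is the map h(x) = x mod n from the integers under addition to the integers modulo n:

    n = 5

    def h(x):
        return x % n

    sample = range(-20, 21)
    # h(x + y) == h(x) (+ mod n) h(y) for all sampled pairs.
    print(all(h(x + y) == (h(x) + h(y)) % n for x in sample for y in sample))  # True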
Universal algebra is the study of algebraic structures in general. As part of its general perspective, it is not concerned with the specific elements that make up the underlying sets and considers operations with more than two inputs, such as ternary operations. It provides a framework for investigating what structural features different algebraic structures have in common. One of those structural features concerns the identities that are true in different algebraic structures. In this context, an identity is a universal equation or an equation that is true for all elements of the underlying set. For example, commutativity is a universal equation that states that a ∘ b is identical to b ∘ a for all elements. A variety is a class of all algebraic structures that satisfy certain identities. For example, if two algebraic structures satisfy commutativity then they are both part of the corresponding variety. Category theory examines how mathematical objects are related to each other using the concept of categories. A category is a collection of objects together with a collection of morphisms or "arrows" between those objects. These two collections must satisfy certain conditions. For example, morphisms can be joined, or composed: if there exists a morphism from object a to object b, and another morphism from object b to object c, then there must also exist one from object a to object c. Composition of morphisms is required to be associative, and there must be an "identity morphism" for every object. Categories are widely used in contemporary mathematics since they provide a unifying framework to describe and analyze many fundamental mathematical concepts. For example, sets can be described with the category of sets, and any group can be regarded as the morphisms of a category with just one object. == History == The origin of algebra lies in attempts to solve mathematical problems involving arithmetic calculations and unknown quantities. These developments happened in the ancient period in Babylonia, Egypt, Greece, China, and India. One of the earliest documents on algebraic problems is the Rhind Mathematical Papyrus from ancient Egypt, which was written around 1650 BCE. It discusses solutions to linear equations, as expressed in problems like "A quantity; its fourth is added to it. It becomes fifteen. What is the quantity?" Babylonian clay tablets from around the same time explain methods to solve linear and quadratic polynomial equations, such as the method of completing the square. Many of these insights found their way to the ancient Greeks. Starting in the 6th century BCE, their main interest was geometry rather than algebra, but they employed algebraic methods to solve geometric problems. For example, they studied geometric figures while taking their lengths and areas as unknown quantities to be determined, as exemplified in Pythagoras' formulation of the difference of two squares method and later in Euclid's Elements. In the 3rd century CE, Diophantus provided a detailed treatment of how to solve algebraic equations in a series of books called Arithmetica. He was the first to experiment with symbolic notation to express polynomials. Diophantus's work influenced Arab development of algebra with many of his methods reflected in the concepts and techniques used in medieval Arabic algebra. In ancient China, The Nine Chapters on the Mathematical Art, a book composed over the period spanning from the 10th century BCE to the 2nd century CE, explored various techniques for solving algebraic equations, including the use of matrix-like constructs. There is no unanimity of opinion as to whether these early developments are part of algebra or only precursors. They offered solutions to algebraic problems but did not conceive them in an abstract and general manner, focusing instead on specific cases and applications. This changed with the Persian mathematician al-Khwarizmi, who published his The Compendious Book on Calculation by Completion and Balancing in 825 CE. It presents the first detailed treatment of general methods that can be used to manipulate linear and quadratic equations by "reducing" and "balancing" both sides. Other influential contributions to algebra came from the Arab mathematician Thābit ibn Qurra also in the 9th century and the Persian mathematician Omar Khayyam in the 11th and 12th centuries.
In India, Brahmagupta investigated how to solve quadratic equations and systems of equations with several variables in the 7th century CE. Among his innovations were the use of zero and negative numbers in algebraic equations. The Indian mathematicians Mahāvīra in the 9th century and Bhāskara II in the 12th century further refined Brahmagupta's methods and concepts. In 1247, the Chinese mathematician Qin Jiushao wrote the Mathematical Treatise in Nine Sections, which includes an algorithm for the numerical evaluation of polynomials, including polynomials of higher degrees. The Italian mathematician Fibonacci brought al-Khwarizmi's ideas and techniques to Europe in books including his Liber Abaci. In 1545, the Italian polymath Gerolamo Cardano published his book Ars Magna, which covered many topics in algebra, discussed imaginary numbers, and was the first to present general methods for solving cubic and quartic equations. In the 16th and 17th centuries, the French mathematicians François Viète and René Descartes introduced letters and symbols to denote variables and operations, making it possible to express equations in a concise and abstract manner. Their predecessors had relied on verbal descriptions of problems and solutions. Some historians see this development as a key turning point in the history of algebra and consider what came before it as the prehistory of algebra because it lacked the abstract nature based on symbolic manipulation. In the 17th and 18th centuries, many attempts were made to find general solutions to polynomials of degree five and higher. All of them failed. At the end of the 18th century, the German mathematician Carl Friedrich Gauss proved the fundamental theorem of algebra, which describes the existence of zeros of polynomials of any degree without providing a general solution. At the beginning of the 19th century, the Italian mathematician Paolo Ruffini and the Norwegian mathematician Niels Henrik Abel were able to show that no general solution exists for polynomials of degree five and higher. In response to and shortly after their findings, the French mathematician Évariste Galois developed what later came to be known as Galois theory, which offered a more in-depth analysis of the solutions of polynomials while also laying the foundation of group theory. Mathematicians soon realized the relevance of group theory to other fields and applied it to disciplines like geometry and number theory. Starting in the mid-19th century, interest in algebra shifted from the study of polynomials associated with elementary algebra towards a more general inquiry into algebraic structures, marking the emergence of abstract algebra. This approach explored the axiomatic basis of arbitrary algebraic operations. This development was accompanied by the invention of new algebraic systems based on different operations and elements, such as Boolean algebra, vector algebra, and matrix algebra.
Influential early developments in abstract algebra were made by the German mathematicians David Hilbert, Ernst Steinitz, and Emmy Noether as well as the Austrian mathematician Emil Artin. They researched different forms of algebraic structures and categorized them based on their underlying axioms into types, like groups, rings, and fields. The idea of the even more general approach associated with universal algebra was conceived by the English mathematician Alfred North Whitehead in his 1898 book A Treatise on Universal Algebra. Starting in the 1930s, the American mathematician Garrett Birkhoff expanded these ideas and developed many of the foundational concepts of this field. The invention of universal algebra led to the emergence of various new areas focused on the algebraization of mathematics—that is, the application of algebraic methods to other branches of mathematics. Topological algebra arose in the early 20th century, studying algebraic structures such as topological groups and Lie groups. In the 1940s and 1950s, homological algebra emerged, employing algebraic techniques to study homology. Around the same time, category theory was developed and has since played a key role in the foundations of mathematics. Other developments were the formulation of model theory and the study of free algebras. == Applications == The influence of algebra is wide-reaching, both within mathematics and in its applications to other fields. The algebraization of mathematics is the process of applying algebraic methods and principles to other branches of mathematics, such as geometry, topology, number theory, and calculus. It happens by employing symbols in the form of variables to express mathematical insights on a more general level, allowing mathematicians to develop formal models describing how objects interact and relate to each other. One application, found in geometry, is the use of algebraic statements to describe geometric figures. For example, the equation y = 3x − 7 describes a line in two-dimensional space while the equation x^2 + y^2 + z^2 = 1 corresponds to a sphere in three-dimensional space. Of special interest to algebraic geometry are algebraic varieties, which are solutions to systems of polynomial equations that can be used to describe more complex geometric figures. Algebraic reasoning can also solve geometric problems. For example, one can determine whether and where the line described by y = x + 1 intersects with the circle described by x^2 + y^2 = 25 by solving the system of equations made up of these two equations.
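That line-circle problem is a two-line computation in SymPy (assumed tool):

    from sympy import symbols, Eq, solve

    x, y = symbols('x y')
    # Where does the line y = x + 1 meet the circle x**2 + y**2 = 25?
    print(solve([Eq(y, x + 1), Eq(x**2 + y**2, 25)], [x, y]))  # [(-4, -3), (3, 4)]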
Topology studies the properties of geometric figures or topological spaces that are preserved under operations of continuous deformation. Algebraic topology relies on algebraic theories such as group theory to classify topological spaces. For example, homotopy groups classify topological spaces based on the existence of loops or holes in them. Number theory is concerned with the properties of and relations between integers. Algebraic number theory applies algebraic methods and principles to this field of inquiry. Examples are the use of algebraic expressions to describe general laws, like Fermat's Last Theorem, and of algebraic structures to analyze the behavior of numbers, such as the ring of integers. The related field of combinatorics uses algebraic techniques to solve problems related to counting, arrangement, and combination of discrete objects. An example in algebraic combinatorics is the application of group theory to analyze graphs and symmetries. The insights of algebra are also relevant to calculus, which uses mathematical expressions to examine rates of change and accumulation. It relies on algebra, for instance, to understand how these expressions can be transformed and what role variables play in them. Algebraic logic employs the methods of algebra to describe and analyze the structures and patterns that underlie logical reasoning, exploring both the relevant mathematical structures themselves and their application to concrete problems of logic. It includes the study of Boolean algebra to describe propositional logic as well as the formulation and analysis of algebraic structures corresponding to more complex systems of logic. Algebraic methods are also commonly employed in other areas, like the natural sciences. For example, they are used to express scientific laws and solve equations in physics, chemistry, and biology. Similar applications are found in fields like economics, geography, engineering (including electronics and robotics), and computer science to express relationships, solve problems, and model systems. Linear algebra plays a central role in artificial intelligence and machine learning, for instance, by enabling the efficient processing and analysis of large datasets. Various fields rely on algebraic structures investigated by abstract algebra. For example, physical sciences like crystallography and quantum mechanics make extensive use of group theory, which is also employed to study puzzles such as Sudoku and Rubik's cubes, and origami. Both coding theory and cryptology rely on abstract algebra to solve problems associated with data transmission, like avoiding the effects of noise and ensuring data security. == Education == Algebra education mostly focuses on elementary algebra, which is one of the reasons why elementary algebra is also called school algebra. It is usually not introduced until secondary education since it requires mastery of the fundamentals of arithmetic while posing new cognitive challenges associated with abstract reasoning and generalization. It aims to familiarize students with the formal side of mathematics by helping them understand mathematical symbolism, for example, how variables can be used to represent unknown quantities. An additional difficulty for students lies in the fact that, unlike arithmetic calculations, algebraic expressions are often difficult to solve directly. Instead, students need to learn how to transform them according to certain laws, often to determine an unknown quantity.
Some tools to introduce students to the abstract side of algebra rely on concrete models and visualizations of equations, including geometric analogies, manipulatives such as sticks or cups, and "function machines" representing equations as flow diagrams. One method uses balance scales as a pictorial approach to help students grasp basic problems of algebra. The mass of some objects on the scale is unknown and represents variables. Solving an equation corresponds to adding and removing objects on both sides in such a way that the sides stay in balance until the only object remaining on one side is the object of unknown mass. Word problems are another tool to show how algebra is applied to real-life situations. For example, students may be presented with a situation in which Naomi's brother has twice as many apples as Naomi. Given that both together have twelve apples, students are then asked to find an algebraic equation that describes this situation (2x + x = 12) and to determine how many apples Naomi has (x = 4). At the university level, mathematics students encounter advanced algebra topics from linear and abstract algebra. Initial undergraduate courses in linear algebra focus on matrices, vector spaces, and linear maps. Upon completing them, students are usually introduced to abstract algebra, where they learn about algebraic structures like groups, rings, and fields, as well as the relations between them. The curriculum typically also covers specific instances of algebraic structures, such as the systems of rational numbers, the real numbers, and the polynomials. == See also == == References == === Notes === === Citations === === Sources === == External links ==
In mathematics, an algebraic expression is an expression built up from constants (usually, algebraic numbers), variables, and the basic algebraic operations: addition (+), subtraction (−), multiplication (×), division (÷), whole number powers, and roots (fractional powers). For example, 3x^2 − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: {\displaystyle {\sqrt {\frac {1-x^{2}}{1+x^{2}}}}} An algebraic equation is an equation involving polynomials, for which algebraic expressions may be solutions. If the set of constants is restricted to numbers, any algebraic expression can be called an arithmetic expression. However, algebraic expressions can also be used on more abstract objects, such as in abstract algebra. If the constants are restricted to integers, the numbers that can be described by an algebraic expression are called algebraic numbers. By contrast, transcendental numbers like π and e are not algebraic, since they are not derived from integer constants and algebraic operations. Usually, π is constructed as a geometric relationship, and the definition of e requires an infinite number of algebraic operations. More generally, expressions which are algebraically independent from their constants and/or variables are called transcendental. == Terminology == Algebra has its own terminology to describe parts of an expression. == Conventions == === Variables === By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables. They are usually written in italics. === Exponents === By convention, terms with the highest power (exponent) are written on the left, for example, x^2 is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x^2 is written x^2). Likewise, when the exponent (power) is one it is usually omitted (e.g. 3x^1 is written 3x), and, when the exponent is zero, the result is always 1 (e.g. 3x^0 is written 3, since x^0 is always 1). == In roots of polynomials == The roots of a polynomial expression of degree n, or equivalently the solutions of a polynomial equation, can always be written as algebraic expressions if n < 5 (see quadratic formula, cubic function, and quartic equation). Such a solution of an equation is called an algebraic solution. But the Abel–Ruffini theorem states that algebraic solutions do not exist for all such equations (just for some of them) if n ≥ 5.
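SymPy (an assumed tool here) can test both senses of "algebraic" mentioned above: whether an expression is built only from algebraic operations in given variables, and whether a particular constant is an algebraic number. To the best of my knowledge the is_algebraic_expr and is_algebraic predicates are part of SymPy's public API; treat this sketch accordingly:

    from sympy import symbols, sqrt, sin, pi

    x = symbols('x')
    print(sqrt((1 - x**2) / (1 + x**2)).is_algebraic_expr(x))  # True
    print(sin(x).is_algebraic_expr(x))                          # False: sin is not algebraic in x
    print(sqrt(2).is_algebraic, pi.is_algebraic)                # True False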
== Rational expressions == Given two polynomials P(x) and Q(x), their quotient is called a rational expression or simply rational fraction. A rational expression P(x)/Q(x) is called proper if deg P(x) < deg Q(x), and improper otherwise. For example, the fraction 2x/(x^2 − 1) is proper, and the fractions (x^3 + x^2 + 1)/(x^2 − 5x + 6) and (x^2 − x + 1)/(5x^2 + 3) are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has {\displaystyle {\frac {x^{3}+x^{2}+1}{x^{2}-5x+6}}=(x+6)+{\frac {24x-35}{x^{2}-5x+6}},} where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example, {\displaystyle {\frac {2x}{x^{2}-1}}={\frac {1}{x-1}}+{\frac {1}{x+1}}.} Here, the two terms on the right are called partial fractions.
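Both manipulations can be reproduced with SymPy (assumed tool): apart() resolves a fraction into partial fractions, and div() splits an improper fraction into its polynomial part and remainder:

    from sympy import symbols, apart, div

    x = symbols('x')

    # Partial fractions: 2x/(x**2 - 1) = 1/(x - 1) + 1/(x + 1).
    print(apart(2*x / (x**2 - 1)))

    # Polynomial division of the improper fraction above:
    q, r = div(x**3 + x**2 + 1, x**2 - 5*x + 6, x)
    print(q, r)   # x + 6 and 24*x - 35, matching the decomposition in the text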
=== Irrational fraction === An irrational fraction is one that contains the variable under a fractional exponent. An example of an irrational fraction is {\displaystyle {\frac {x^{1/2}-{\tfrac {1}{3}}a}{x^{1/3}-x^{1/2}}}.} The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple is 6, hence we can substitute x = z^6 to obtain {\displaystyle {\frac {z^{3}-{\tfrac {1}{3}}a}{z^{2}-z^{3}}}.} == Algebraic and other mathematical expressions == Algebraic expressions can be compared with several other types of mathematical expressions by the type of elements they may contain, according to common but not universal conventions. A rational algebraic expression (or rational expression) is an algebraic expression that can be written as a quotient of polynomials, such as x^2 + 4x + 4. An irrational algebraic expression is one that is not rational, such as √x + 4. == See also == Algebraic function Analytical expression Closed-form expression Expression (mathematics) Precalculus Term (logic) == Notes == == References == == External links == Weisstein, Eric W. "Algebraic Expression". MathWorld.
A solution in radicals or algebraic solution is an expression of a solution of a polynomial equation that is algebraic, that is, relies only on addition, subtraction, multiplication, division, raising to integer powers, and extraction of nth roots (square roots, cube roots, etc.). A well-known example is the quadratic formula {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}},} which expresses the solutions of the quadratic equation ax^2 + bx + c = 0. There exist algebraic solutions for cubic equations and quartic equations, which are more complicated than the quadratic formula. The Abel–Ruffini theorem and, more generally, Galois theory state that some quintic equations, such as x^5 − x + 1 = 0, do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation x^10 = 2 can be solved as {\displaystyle x=\pm {\sqrt[{10}]{2}}.} The eight other solutions are nonreal complex numbers, which are also algebraic and have the form {\displaystyle x=\pm r{\sqrt[{10}]{2}},} where r is one of the four primitive fifth roots of unity, each of which can be expressed with two nested square roots. See also Quintic function § Other solvable quintics for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result. == See also == Radical symbol Solvable quintics Solvable sextics Solvable septics == References ==
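As a quick numerical check of the x**10 = 2 example from the article above, SymPy (assumed tool) confirms that exactly two of the ten solutions are real:

    from sympy import symbols, solve

    x = symbols('x')
    roots = solve(x**10 - 2, x)
    real = [r for r in roots if abs(complex(r).imag) < 1e-12]
    print(len(roots), len(real))   # 10 2
    print(real)                    # the two real solutions, -2**(1/10) and 2**(1/10)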
Multilinear algebra is the study of functions with multiple vector-valued arguments, with the functions being linear maps with respect to each argument. It involves concepts such as matrices, tensors, multivectors, systems of linear equations, higher-dimensional spaces, determinants, inner and outer products, and dual spaces. It is a mathematical tool used in engineering, machine learning, physics, and mathematics. == Origin == While many theoretical concepts and applications involve single vectors, mathematicians such as Hermann Grassmann considered structures involving pairs, triplets, and multivectors that generalize vectors. With multiple combinational possibilities, the space of multivectors expands to 2^n dimensions, where n is the dimension of the relevant vector space. The determinant can be formulated abstractly using the structures of multilinear algebra. Multilinear algebra appears in the study of the mechanical response of materials to stress and strain, involving various moduli of elasticity. The term "tensor" describes elements within the multilinear space due to its added structure. Despite Grassmann's early work in 1844 with his Ausdehnungslehre, which was also republished in 1862, the subject was initially not widely understood, as even ordinary linear algebra posed many challenges at the time. The concepts of multilinear algebra find applications in certain studies of multivariate calculus and manifolds, particularly concerning the Jacobian matrix. Infinitesimal differentials encountered in single-variable calculus are transformed into differential forms in multivariate calculus, and their manipulation is carried out using exterior algebra. Following Grassmann, developments in multilinear algebra were made by Victor Schlegel in 1872 with the publication of the first part of his System der Raumlehre and by Elwin Bruno Christoffel. Notably, significant advancements came through the work of Gregorio Ricci-Curbastro and Tullio Levi-Civita, particularly in the form of absolute differential calculus within multilinear algebra. Marcel Grossmann and Michele Besso introduced this form to Albert Einstein, and in 1915, Einstein's publication on general relativity, explaining the precession of Mercury's perihelion, established multilinear algebra and tensors as important mathematical tools in physics. In 1958, Nicolas Bourbaki included a chapter on multilinear algebra titled "Algèbre Multilinéaire" in his series Éléments de mathématique, specifically within the algebra book. The chapter covers topics such as bilinear functions, the tensor product of two modules, and the properties of tensor products. == Applications == Multilinear algebra concepts find applications in various areas. == See also == == References ==
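The 2^n count mentioned in the article above has a simple combinatorial reading: each basis multivector corresponds to a subset of the n basis vectors. The following enumeration is an illustrative sketch, not code from any source:

    from itertools import combinations

    n = 3
    basis = ['e1', 'e2', 'e3']   # basis of the underlying 3-dimensional space
    blades = [s for k in range(n + 1) for s in combinations(basis, k)]
    print(len(blades))           # 8, i.e. 2**3
    print(blades)                # (), ('e1',), ..., ('e1', 'e2', 'e3')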
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system; type theory is also the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are the typed λ-calculus of Alonzo Church and the intuitionistic type theory of Per Martin-Löf. Most computerized proof-writing systems use a type theory for their foundation; a common one is Thierry Coquand's Calculus of Inductive Constructions.

== History ==
Type theory was created to avoid paradoxes in naive set theory and formal logic, such as Russell's paradox, which demonstrates that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem. By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility, both of which appeared in Whitehead and Russell's Principia Mathematica, published in 1910, 1912, and 1913. This system avoided the contradictions suggested by Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type, thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo–Fraenkel set theory.
Type theory is particularly popular in conjunction with Alonzo Church's lambda calculus. One notable early example of type theory is Church's simply typed lambda calculus. Church's theory of types helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics, and it was referred to as a higher-order logic.
In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Rocq (previously known as Coq), Lean, and other computer proof assistants. Type theory is an active area of research, one direction being the development of homotopy type theory.

== Applications ==
=== Mathematical foundations ===
The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf specifically developed intuitionistic type theory to encode all mathematics to serve as a new foundation for mathematics. There is ongoing research into mathematical foundations using homotopy type theory. Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS). Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy).
=== Proof assistants ===
Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers. Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages:
LF is used by Twelf, often to define other type theories;
many type theories which fall under higher-order logic are used by the HOL family of provers and PVS;
computational type theory is used by NuPRL;
the calculus of constructions and its derivatives are used by Rocq (previously known as Coq), Matita, and Lean;
UTT (Luo's Unified Theory of dependent Types) is used by Agda, which is both a programming language and a proof assistant;
many type theories are supported by LEGO and Isabelle.
Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory.

=== Programming languages ===
Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory. A prime example is Agda, a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system. The programming language ML was developed for manipulating type theories (see LCF), and its own type system was heavily influenced by them.

=== Linguistics ===
Type theory is also widely used in formal theories of the semantics of natural languages, especially Montague grammar and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types (noun, verb, etc.) of words. The most common construction takes the basic types $e$ and $t$ for individuals and truth-values, respectively, and defines the set of types recursively as follows: if $a$ and $b$ are types, then so is $\langle a,b\rangle$; nothing except the basic types, and what can be constructed from them by means of the previous clause, are types. A complex type $\langle a,b\rangle$ is the type of functions from entities of type $a$ to entities of type $b$. Thus one has types like $\langle e,t\rangle$, which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type $\langle\langle e,t\rangle ,t\rangle$ is a function from sets of entities to truth-values, i.e. an (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981).
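The recursive definition above is easy to mimic directly; the following Python sketch (an illustrative encoding of our own, not a standard library) builds the types $e$, $t$, and $\langle a,b\rangle$, and models an $\langle e,t\rangle$ denotation as an indicator function:

```python
# Montague-style semantic types: basic types e and t, and complex types <a, b>
# read as functions from type a to type b.
from dataclasses import dataclass

@dataclass(frozen=True)
class Basic:
    name: str          # "e" (entities) or "t" (truth values)

@dataclass(frozen=True)
class Arrow:
    dom: object        # type a
    cod: object        # type b; Arrow(a, b) stands for <a, b>

e, t = Basic("e"), Basic("t")
et = Arrow(e, t)       # <e,t>: indicator function of a set of entities
quant = Arrow(et, t)   # <<e,t>,t>: the type of quantifiers like "everybody"

# An <e,t> denotation as an indicator function over a tiny domain:
sleeps = lambda entity: entity in {"ann", "bob"}
everybody = lambda p: all(p(x) for x in {"ann", "bob", "eve"})  # <<e,t>,t>
print(everybody(sleeps))  # False: eve does not sleep
```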
Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems.

=== Social sciences ===
Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types.

== Logic ==
A type theory is a mathematical logic, which is to say it is a collection of rules of inference that result in judgments. Most logics have judgments asserting "the proposition $\varphi$ is true" or "the formula $\varphi$ is a well-formed formula". A type theory has judgments that define types and assign them to a collection of formal objects, known as terms. A term and its type are often written together as $\mathrm{term} : \mathsf{type}$.

=== Terms ===
A term in logic is recursively defined as a constant symbol, a variable, or a function application, where a term is applied to another term. Constant symbols could include the natural number $0$, the Boolean value $\mathrm{true}$, and functions such as the successor function $\mathrm{S}$ and the conditional operator $\mathrm{if}$. Thus some terms could be $0$, $(\mathrm{S}\,0)$, $(\mathrm{S}\,(\mathrm{S}\,0))$, and $(\mathrm{if}\,\mathrm{true}\,0\,(\mathrm{S}\,0))$.

=== Judgments ===
Most type theories have four judgments:
"$T$ is a type"
"$t$ is a term of type $T$"
"type $T_1$ is equal to type $T_2$"
"terms $t_1$ and $t_2$, both of type $T$, are equal"
Judgments may follow from assumptions. For example, one might say "assuming $x$ is a term of type $\mathsf{bool}$ and $y$ is a term of type $\mathsf{nat}$, it follows that $(\mathrm{if}\,x\,y\,y)$ is a term of type $\mathsf{nat}$". Such judgments are formally written with the turnstile symbol $\vdash$:
$x:\mathsf{bool},\, y:\mathsf{nat} \vdash (\mathrm{if}\,x\,y\,y) : \mathsf{nat}$
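As a concrete illustration of such judgments, here is a hedged Python sketch (our own ad-hoc encoding, not a real proof assistant) that checks the judgment $x:\mathsf{bool},\, y:\mathsf{nat} \vdash (\mathrm{if}\,x\,y\,y) : \mathsf{nat}$ by looking up assumptions in a context:

```python
# A context is a dict from variable names to type names; terms are either
# variables (strings) or applications written as tuples.
def type_of(term, ctx):
    """Return the type of a term under context ctx; raise if ill-typed."""
    if isinstance(term, str):                 # a variable: look up its assumption
        return ctx[term]
    op, *args = term
    if op == "if":                            # if : bool -> T -> T -> T
        cond, then_t, else_t = (type_of(a, ctx) for a in args)
        if cond != "bool" or then_t != else_t:
            raise TypeError("ill-typed conditional")
        return then_t
    raise ValueError(f"unknown term {op!r}")

print(type_of(("if", "x", "y", "y"), {"x": "bool", "y": "nat"}))  # nat
```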
If there are no assumptions, there will be nothing to the left of the turnstile:
$\vdash \mathrm{S} : \mathsf{nat} \to \mathsf{nat}$
The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as $\Gamma$ and $\Delta$, are common choices to represent some or all of the assumptions. The four different judgments are thus usually written as follows:
$\Gamma \vdash T\ \mathsf{type}$ ($T$ is a type)
$\Gamma \vdash t : T$ ($t$ is a term of type $T$)
$\Gamma \vdash T_1 = T_2$ (type $T_1$ is equal to type $T_2$)
$\Gamma \vdash t_1 = t_2 : T$ (terms $t_1$ and $t_2$, both of type $T$, are equal)
Some textbooks use a triple equal sign $\equiv$ to stress that this is judgmental equality and thus an extrinsic notion of equality. The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term.

=== Rules of Inference ===
A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen-style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line. For example, the following inference rule states a substitution rule for judgmental equality:
$\dfrac{\Gamma \vdash t : T_1 \qquad \Delta \vdash T_1 = T_2}{\Gamma, \Delta \vdash t : T_2}$
The rules are syntactic and work by rewriting. The metavariables $\Gamma$, $\Delta$, $t$, $T_1$, and $T_2$ may actually consist of complex terms and types that contain many function applications, not just single symbols.
To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on. The applied rules form a proof tree, where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term $0$ of type $\mathsf{nat}$, one would write the following:
$\dfrac{}{\vdash 0 : \mathsf{nat}}$

==== Type inhabitation ====
Generally, the desired conclusion of a proof in type theory is one of type inhabitation. The decision problem of type inhabitation (abbreviated $\exists t.\, \Gamma \vdash t : \tau\,?$) is: given a context $\Gamma$ and a type $\tau$, decide whether there exists a term $t$ that can be assigned the type $\tau$ in the type environment $\Gamma$.
Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with the Curry–Howard correspondence. To be sound, such a system must have uninhabited types.
A type theory usually has several rules, including ones to:
create a judgment (known as a context in this case)
add an assumption to the context (context weakening)
rearrange the assumptions
use an assumption to create a variable
define reflexivity, symmetry and transitivity for judgmental equality
define substitution for application of lambda terms
list all the interactions of equality, such as substitution
define a hierarchy of type universes
assert the existence of new types
Also, for each "by rule" type, there are four different kinds of rules:
"type formation" rules say how to create the type;
"term introduction" rules define the canonical terms and constructor functions, like "pair" and "S";
"term elimination" rules define the other functions, like "first", "second", and "R";
"computation" rules specify how computation is performed with the type-specific functions.
For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book, or read Martin-Löf's Intuitionistic Type Theory.

== Connections to foundations ==
The logical framework of a type theory bears a resemblance to intuitionistic, or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic. Additionally, connections can be made to category theory and computer programs.

=== Intuitionistic logic ===
When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic, which is to say it does not have the law of excluded middle nor double negation elimination. Under this intuitionistic interpretation, there are common types that act as the logical operators: the empty type $\bot$ plays the role of falsity, the unit type $\top$ of truth, the product type $\times$ of conjunction, the sum type $+$ of disjunction, and the function type $\to$ of implication. Because the law of excluded middle does not hold, there is no term of type $\Pi A.\, A + (A \to \bot)$. Likewise, double negation does not hold, so there is no term of type $\Pi A.\, ((A \to \bot) \to \bot) \to A$. It is possible to include the law of excluded middle and double negation into a type theory, by rule or assumption. However, terms may not compute down to canonical terms, and it will interfere with the ability to determine whether two terms are judgmentally equal to each other.

==== Constructive mathematics ====
Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics.
Constructive mathematics requires that, when proving "there exists an $x$ with property $P(x)$", one must construct a particular $x$ and a proof that it has property $P$. In type theory, existence is accomplished using the dependent sum type, and its proof requires a term of that type.
An example of a non-constructive proof is proof by contradiction. The first step is assuming that $x$ does not exist and deriving a contradiction. The conclusion from that step is "it is not the case that $x$ does not exist". The last step is, by double negation, concluding that $x$ exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that $x$ exists.
Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants. It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations, such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity.

=== Curry–Howard correspondence ===
The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "$A \to B$", resembles a function from type $A$ to type $B$. For a variety of logics, the rules are similar to expressions in a programming language's types. The similarity goes further, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs". The opposition of terms and types can also be viewed as one of implementation and specification. By program synthesis, (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information.

==== Type inference ====
Many programs that work with type theory (e.g., interactive theorem provers) also do type inference. It lets them select the rules that the user intends, with fewer actions by the user.

=== Research areas ===
==== Category theory ====
Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory."
In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts); that is, "roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way:
cartesian closed categories correspond to the typed λ-calculus (Lambek, 1970);
C-monoids (categories with products and exponentials and one non-terminal object) correspond to the untyped λ-calculus (observed independently by Lambek and Dana Scott around 1980);
locally cartesian closed categories correspond to Martin-Löf type theories (Seely, 1984).
The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance.

==== Homotopy type theory ====
Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization.

== Definitions ==
=== Terms and types ===
==== Atomic terms ====
The most basic types are called atoms, and a term whose type is an atom is known as an atomic term. Common atomic terms included in type theories are natural numbers, often notated with the type $\mathsf{nat}$; Boolean logic values ($\mathrm{true}$/$\mathrm{false}$), notated with the type $\mathsf{bool}$; and formal variables, whose type may vary. For example, the following may be atomic terms:
$42 : \mathsf{nat}$
$\mathrm{true} : \mathsf{bool}$
$x : \mathsf{nat}$
$y : \mathsf{bool}$

==== Function terms ====
In addition to atomic terms, most modern type theories also allow for functions. Function types introduce an arrow symbol and are defined inductively: if $\sigma$ and $\tau$ are types, then the notation $\sigma \to \tau$ is the type of a function which takes a parameter of type $\sigma$ and returns a term of type $\tau$. Types of this form are known as simple types. Some terms may be declared directly as having a simple type, such as the following term, $\mathrm{add}$, which takes in two natural numbers in sequence and returns one natural number:
$\mathrm{add} : \mathsf{nat} \to (\mathsf{nat} \to \mathsf{nat})$
Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that $\mathrm{add}$ is a function which takes in a natural number and returns a function of the form $\mathsf{nat} \to \mathsf{nat}$. The parentheses clarify that $\mathrm{add}$ does not have the type $(\mathsf{nat} \to \mathsf{nat}) \to \mathsf{nat}$, which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative, so the parentheses may be dropped from $\mathrm{add}$'s type.

==== Lambda terms ====
New function terms may be constructed using lambda expressions and are called lambda terms. These terms are also defined inductively: a lambda term has the form $(\lambda v.t)$, where $v$ is a formal variable and $t$ is a term, and its type is notated $\sigma \to \tau$, where $\sigma$ is the type of $v$ and $\tau$ is the type of $t$. The following lambda term represents a function which doubles an input natural number:
$(\lambda x.\mathrm{add}\,x\,x) : \mathsf{nat} \to \mathsf{nat}$
The variable is $x$ and (implicit from the lambda term's type) it must have type $\mathsf{nat}$. The term $\mathrm{add}\,x\,x$ has type $\mathsf{nat}$, which is seen by applying the function application inference rule twice. Thus, the lambda term has type $\mathsf{nat} \to \mathsf{nat}$, which means it is a function taking a natural number as an argument and returning a natural number. A lambda term is often referred to as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages.

=== Inference Rules ===
==== Function application ====
The power of type theories is in specifying how terms may be combined by way of inference rules. Type theories which have functions also have the inference rule of function application: if $t$ is a term of type $\sigma \to \tau$ and $s$ is a term of type $\sigma$, then the application of $t$ to $s$, often written $(t\,s)$, has type $\tau$.
For example, if one knows the type notations $0 : \mathsf{nat}$, $1 : \mathsf{nat}$, and $2 : \mathsf{nat}$, then the following type notations can be deduced from function application:
$(\mathrm{add}\,1) : \mathsf{nat} \to \mathsf{nat}$
$((\mathrm{add}\,2)\,0) : \mathsf{nat}$
$((\mathrm{add}\,1)\,((\mathrm{add}\,2)\,0)) : \mathsf{nat}$
Parentheses indicate the order of operations; however, by convention, function application is left associative, so parentheses can be dropped where appropriate. In the case of the three examples above, all parentheses could be omitted from the first two, and the third may be simplified to $\mathrm{add}\,1\,(\mathrm{add}\,2\,0) : \mathsf{nat}$.

==== Reductions ====
Type theories that allow for lambda terms also include inference rules known as $\beta$-reduction and $\eta$-reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written
$(\lambda v.t)\,s \rightarrow t[v := s]$ ($\beta$-reduction);
$(\lambda v.t\,v) \rightarrow t$, if $v$ is not a free variable in $t$ ($\eta$-reduction).
The first reduction describes how to evaluate a lambda term: if a lambda expression $(\lambda v.t)$ is applied to a term $s$, one replaces every occurrence of $v$ in $t$ with $s$. The second reduction makes explicit the relationship between lambda expressions and function types: if $(\lambda v.t\,v)$ is a lambda term, then it must be that $t$ is a function term because it is being applied to $v$. Therefore, the lambda expression is equivalent to just $t$, as both take in one argument and apply $t$ to it. For example, the following term may be $\beta$-reduced:
$(\lambda x.\mathrm{add}\,x\,x)\,2 \rightarrow \mathrm{add}\,2\,2$
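The substitution reading of $\beta$-reduction can be sketched directly; the following Python fragment (an illustrative term encoding of our own, ignoring variable capture) reduces $(\lambda x.\mathrm{add}\,x\,x)\,2$ to $\mathrm{add}\,2\,2$:

```python
# Terms: variables are strings, lambdas are ("lam", v, body), and other
# tuples are application nodes.
def substitute(term, v, s):
    """Replace every free occurrence of variable v in term by s."""
    if term == v:
        return s
    if isinstance(term, tuple) and term[0] == "lam":
        _, w, body = term
        return term if w == v else ("lam", w, substitute(body, v, s))
    if isinstance(term, tuple):               # application node
        return tuple(substitute(t, v, s) for t in term)
    return term

def beta(app):
    """One beta step: ((lam v. t) s) -> t[v := s]."""
    (_, v, body), s = app
    return substitute(body, v, s)

# (lambda x. add x x) 2  ->  add 2 2
print(beta((("lam", "x", ("add", "x", "x")), 2)))  # ('add', 2, 2)
```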
In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of $\beta$-equality and $\eta$-equality.

=== Common terms and types ===
==== Empty type ====
The empty type has no terms. The type is usually written $\bot$ or $\mathbb{0}$. One use for the empty type is proofs of type inhabitation. If, for a type $a$, it is consistent to derive a function of type $a \to \bot$, then $a$ is uninhabited, which is to say it has no terms.

==== Unit type ====
The unit type has exactly one canonical term. The type is written $\top$ or $\mathbb{1}$, and the single canonical term is written $\ast$. The unit type is also used in proofs of type inhabitation. If, for a type $a$, it is consistent to derive a function of type $\top \to a$, then $a$ is inhabited, which is to say it must have one or more terms.

==== Boolean type ====
The Boolean type has exactly two canonical terms. The type is usually written $\mathsf{bool}$, $\mathbb{B}$, or $\mathbb{2}$. The canonical terms are usually $\mathrm{true}$ and $\mathrm{false}$.

==== Natural numbers ====
Natural numbers are usually implemented in the style of Peano arithmetic. There is a canonical term $0 : \mathsf{nat}$ for zero. Canonical values larger than zero use iterated applications of a successor function $\mathrm{S} : \mathsf{nat} \to \mathsf{nat}$.
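A minimal sketch of this Peano-style encoding in Python (the representation is ours; a real type theory would enforce the typing rules rather than rely on conventions):

```python
# Peano naturals: zero is a canonical constant, and every other canonical
# value is the successor function S applied repeatedly.
ZERO = ("0",)

def S(n):
    """Successor function S : nat -> nat."""
    return ("S", n)

def to_int(n):
    """Interpret a canonical term as a Python int, for display only."""
    return 0 if n == ZERO else 1 + to_int(n[1])

two = S(S(ZERO))   # the canonical term (S (S 0))
print(to_int(two))  # 2
```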
=== Type constructors ===
Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of their arguments; these are called type constructors. For example, a type theory could have the type $\mathsf{list}\,a$, which should correspond to lists of terms, where each term must have type $a$. In this case, $\mathsf{list}$ has the kind $U \to U$, where $U$ denotes the universe of all types in the theory.

==== Product type ====
The product type, $\times$, depends on two types, and its terms are commonly written as ordered pairs $(s,t)$. The pair $(s,t)$ has the product type $\sigma \times \tau$, where $\sigma$ is the type of $s$ and $\tau$ is the type of $t$. Each product type is then usually defined with eliminator functions $\mathrm{first} : \sigma \times \tau \to \sigma$ and $\mathrm{second} : \sigma \times \tau \to \tau$, where $\mathrm{first}\,(s,t)$ returns $s$ and $\mathrm{second}\,(s,t)$ returns $t$. Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection.

==== Sum type ====
The sum type is written as either $+$ or $\sqcup$. In programming languages, sum types may be referred to as tagged unions. Each type $\sigma \sqcup \tau$ is usually defined with constructors $\mathrm{left} : \sigma \to (\sigma \sqcup \tau)$ and $\mathrm{right} : \tau \to (\sigma \sqcup \tau)$, which are injective, and an eliminator function $\mathrm{match} : (\sigma \to \rho) \to (\tau \to \rho) \to (\sigma \sqcup \tau) \to \rho$ such that $\mathrm{match}\,f\,g\,(\mathrm{left}\,x)$ returns $f\,x$ and $\mathrm{match}\,f\,g\,(\mathrm{right}\,y)$ returns $g\,y$. The sum type is used for the concepts of logical disjunction and union.
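The constructors and eliminators described above can be mimicked in Python as a hedged sketch (tuples for pairs, tagged tuples for the sum type; all names are ours):

```python
# Product type eliminators:
def first(p):  return p[0]         # first  : sigma x tau -> sigma
def second(p): return p[1]         # second : sigma x tau -> tau

# Sum type constructors and eliminator:
def left(x):  return ("left", x)   # left  : sigma -> sigma + tau
def right(y): return ("right", y)  # right : tau   -> sigma + tau

def match(f, g, s):
    """match f g (left x) = f x;  match f g (right y) = g y."""
    tag, value = s
    return f(value) if tag == "left" else g(value)

print(first((3, True)), second((3, True)))   # 3 True
print(match(str, bool, left(42)))            # '42'  (f applied to 42)
print(match(str, bool, right(0)))            # False (g applied to 0)
```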
=== Polymorphic types ===
Some theories also allow terms to have their definitions depend on types. For instance, an identity function of any type could be written as $\lambda x.x : \forall \alpha.\, \alpha \to \alpha$. The function is said to be polymorphic in $\alpha$, or generic in $x$. As another example, consider a function $\mathrm{append}$, which takes in a $\mathsf{list}\,a$ and a term of type $a$, and returns the list with the element at the end. The type annotation of such a function would be $\mathrm{append} : \forall a.\, \mathsf{list}\,a \to a \to \mathsf{list}\,a$, which can be read as "for any type $a$, pass in a $\mathsf{list}\,a$ and an $a$, and return a $\mathsf{list}\,a$". Here $\mathrm{append}$ is polymorphic in $a$.

==== Products and sums ====
With polymorphism, the eliminator functions can be defined generically for all product types as $\mathrm{first} : \forall \sigma\,\tau.\, \sigma \times \tau \to \sigma$ and $\mathrm{second} : \forall \sigma\,\tau.\, \sigma \times \tau \to \tau$, where $\mathrm{first}\,(s,t)$ returns $s$ and $\mathrm{second}\,(s,t)$ returns $t$. Likewise, the sum type constructors can be defined for all valid types of sum members as $\mathrm{left} : \forall \sigma\,\tau.\, \sigma \to (\sigma \sqcup \tau)$ and $\mathrm{right} : \forall \sigma\,\tau.\, \tau \to (\sigma \sqcup \tau)$, which are injective, and the eliminator function can be given as $\mathrm{match} : \forall \sigma\,\tau\,\rho.\, (\sigma \to \rho) \to (\tau \to \rho) \to (\sigma \sqcup \tau) \to \rho$ such that $\mathrm{match}\,f\,g\,(\mathrm{left}\,x)$ returns $f\,x$ and $\mathrm{match}\,f\,g\,(\mathrm{right}\,y)$ returns $g\,y$.

=== Dependent typing ===
Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type $\mathsf{vector}\,n$, where $n$ is a term of type $\mathsf{nat}$ encoding the length of the vector.
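Python has no dependent types, so the following sketch only simulates $\mathsf{vector}\,n$ with runtime checks; it is meant to illustrate the idea developed next, not to implement dependent typing:

```python
# Simulated "vector n": a pair of an asserted length and the elements.
def vector(n, xs):
    assert len(xs) == n, "term does not inhabit vector n"
    return (n, tuple(xs))

def dot(u, v):
    """dot : vector n -> vector n -> nat, with the length match checked at runtime."""
    n, us = u
    m, vs = v
    assert n == m, "dot requires two vectors of the same length n"
    return sum(a * b for a, b in zip(us, vs))

print(dot(vector(3, [1, 2, 3]), vector(3, [4, 5, 6])))  # 32
```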
This allows for greater specificity and type safety: functions with vector-length restrictions or length-matching requirements, such as the dot product, can encode those requirements as part of the type. There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's paradox. The logician Henk Barendregt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing.

==== Dependent products and sums ====
Two common type dependencies, dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification; this is formalized by the Curry–Howard correspondence. As they also connect to products and sums in set theory, they are often written with the symbols $\Pi$ and $\Sigma$, respectively.
Sum types are seen in dependent pairs, where the second type depends on the value of the first term. This arises naturally in computer science, where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function $\mathrm{if}$, which takes three arguments and behaves as follows: $\mathrm{if}\,\mathrm{true}\,x\,y$ returns $x$, and $\mathrm{if}\,\mathrm{false}\,x\,y$ returns $y$. Ordinary definitions of $\mathrm{if}$ require $x$ and $y$ to have the same type. If the type theory allows for dependent types, then it is possible to define a dependent type $x : \mathsf{bool} \,\vdash\, \mathrm{TF}\,x : U \to U \to U$ such that $\mathrm{TF}\,\mathrm{true}\,\sigma\,\tau$ returns $\sigma$ and $\mathrm{TF}\,\mathrm{false}\,\sigma\,\tau$ returns $\tau$. The type of $\mathrm{if}$ may then be written as $\forall \sigma\,\tau.\, \Pi_{x:\mathsf{bool}}.\, \sigma \to \tau \to \mathrm{TF}\,x\,\sigma\,\tau$.

==== Identity type ====
Following the notion of the Curry–Howard correspondence, the identity type is a type introduced to mirror propositional equivalence, as opposed to the judgmental (syntactic) equivalence that type theory already provides.
An identity type requires two terms of the same type and is written with the symbol $=$. For example, if $x+1$ and $1+x$ are terms, then $x+1 = 1+x$ is a possible type. Canonical terms are created with a reflexivity function, $\mathrm{refl}$. For a term $t$, the call $\mathrm{refl}\,t$ returns the canonical term inhabiting the type $t = t$. The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory.

==== Inductive types ====
Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction. A method that only uses lambda terms is Scott encoding. Some proof assistants, such as Rocq (previously known as Coq) and Lean, are based on the calculus of inductive constructions, which is a calculus of constructions with inductive types.

== Differences from set theory ==
The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice, abbreviated ZFC. Type theories having sufficient expressibility may also act as a foundation of mathematics. There are a number of differences between these two approaches.
Set theory has both rules and axioms, while type theories only have rules. Type theories, in general, do not have axioms and are defined by their rules of inference.
Classical set theory and logic have the law of excluded middle. When a type theory encodes the concepts of "and" and "or" as types, it leads to intuitionistic logic, which does not necessarily have the law of excluded middle.
In set theory, an element is not restricted to one set. The element can appear in subsets and unions with other sets. In type theory, terms (generally) belong to only one type. Where a subset would be used, type theory can use a predicate function, or use a dependently-typed product type, where each element $x$ is paired with a proof that the subset's property holds for $x$. Where a union would be used, type theory uses the sum type, which contains new canonical terms.
Type theory has a built-in notion of computation. Thus, "1+1" and "2" are different terms in type theory, but they compute to the same value. Moreover, functions are defined computationally as lambda terms.
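To illustrate the computational point, here is a small sketch (reusing a Peano-style encoding; all names are ours) in which the terms "1+1" and "2" are distinct expressions that compute to the same canonical value:

```python
# Peano naturals and recursive addition, as an illustrative encoding.
ZERO = ("0",)
def S(n): return ("S", n)

def add(m, n):
    """add 0 n = n;  add (S m) n = S (add m n)."""
    return n if m == ZERO else S(add(m[1], n))

one_plus_one = add(S(ZERO), S(ZERO))  # the term "1 + 1", computed eagerly here
two = S(S(ZERO))                      # the canonical term "2"
print(one_plus_one == two)            # True: both reach the same canonical form
```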
In set theory, "1+1=2" means that "1+1" is just another way to refer to the value "2". Type theory's computation does require a complicated concept of equality.
Set theory encodes numbers as sets. Type theory can encode numbers as functions using Church encoding, or more naturally as inductive types, and the construction closely resembles Peano's axioms.
In type theory, proofs are types, whereas in set theory, proofs are part of the underlying first-order logic.
Proponents of type theory will also point out its connection to constructive mathematics through the BHK interpretation, its connection to logic by the Curry–Howard isomorphism, and its connections to category theory.

=== Properties of type theories ===
Terms usually belong to a single type. However, there are type theories that define "subtyping".
Computation takes place by repeated application of rules. Many type theories are strongly normalizing, which means that any order of applying the rules will always end in the same result. However, some are not. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying the rules "reduces" the term. If a rule is not one-directional, it is called a "conversion rule".
Some combinations of types are equivalent to other combinations of types. When functions are considered "exponentiation", the combinations of types can be written similarly to algebraic identities. Thus, $\mathbb{0} + A \cong A$, $\mathbb{1} \times A \cong A$, $\mathbb{1} + \mathbb{1} \cong \mathbb{2}$, $A^{B+C} \cong A^B \times A^C$, and $A^{B\times C} \cong (A^B)^C$.

=== Axioms ===
Most type theories do not have axioms. This is because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with set theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic) and axioms about sets.
Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference. Axioms are often added to ensure properties that cannot be added cleanly through the rules. Axioms can cause problems if they introduce terms without a way to compute on those terms; that is, axioms can interfere with the normalizing property of the type theory. Some commonly encountered axioms are:
"Axiom K" ensures "uniqueness of identity proofs", that is, that every term of an identity type is equal to reflexivity.
The "Univalence Axiom" holds that equivalence of types is equality of types. The research into this property led to cubical type theory, where the property holds without needing an axiom.
The "Law of Excluded Middle" is often added to satisfy users who want classical logic, instead of intuitionistic logic.
The Axiom of Choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is because of the constructive nature of type theory, where proving that a value exists requires a method to compute the value. The Axiom of Choice is less powerful in type theory than most set theories, because type theory's functions must be computable and, being syntax-driven, the number of terms in a type must be countable. (See Axiom of choice § In constructive mathematics.)

== List of type theories ==
=== Major ===
Simply typed lambda calculus, which is a higher-order logic
Intuitionistic type theory
System F
LF, often used to define other type theories
Calculus of constructions and its derivatives
=== Minor ===
Automath
ST type theory
UTT (Luo's Unified Theory of dependent Types)
some forms of combinatory logic
others defined in the lambda cube (also known as pure type systems)
others under the name typed lambda calculus
=== Active research ===
Homotopy type theory explores equality of types
Cubical type theory is an implementation of homotopy type theory

== See also ==
Class (set theory)
Type–token distinction

== External links ==
=== Introductory material ===
Type Theory at nLab, which has articles on many topics
Intuitionistic Type Theory article at the Stanford Encyclopedia of Philosophy
Lambda Calculi with Types, book by Henk Barendregt
Calculus of Constructions / Typed Lambda Calculus, textbook-style paper by Helmut Brandl
Intuitionistic Type Theory, notes by Per Martin-Löf
Programming in Martin-Löf's Type Theory, book
Homotopy Type Theory book, which proposed homotopy type theory as a mathematical foundation
=== Advanced material ===
Robert L. Constable (ed.). "Computational type theory". Scholarpedia.
The TYPES Forum — moderated e-mail forum focusing on type theory in computer science, operating since 1987
The Nuprl Book: "Introduction to Type Theory"
Types Project lecture notes of summer schools 2005–2008; the 2005 summer school has introductory lectures
Oregon Programming Languages Summer School, many lectures and some notes, including the Summer 2013 lectures (with Robert Harper's talks on YouTube) and Summer 2015 (Types, Logic, Semantics, and Verification)
Andrej Bauer's blog
In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate $x$ is $x^2 - 4x + 7$. An example with three indeterminates is $x^3 + 2xyz^2 - yz + 1$.
Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry.

== Etymology ==
The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. That is, it means a sum of many terms (many monomials). The word polynomial was first used in the 17th century.

== Notation and terminology ==
The $x$ occurring in a polynomial is commonly called a variable or an indeterminate. When the polynomial is considered as an expression, $x$ is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the function defined by the polynomial, then $x$ represents the argument of the function and is therefore called a "variable". Many authors use these two words interchangeably.
A polynomial $P$ in the indeterminate $x$ is commonly denoted either as $P$ or as $P(x)$. Formally, the name of the polynomial is $P$, not $P(x)$, but the use of the functional notation $P(x)$ dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "let $P(x)$ be a polynomial" is a shorthand for "let $P$ be a polynomial in the indeterminate $x$". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial.
The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials. If $a$ denotes a number, a variable, another polynomial, or, more generally, any expression, then $P(a)$ denotes, by convention, the result of substituting $a$ for $x$ in $P$.
Thus, the polynomial $P$ defines the function $a \mapsto P(a)$, which is the polynomial function associated to $P$. Frequently, when using this notation, one supposes that $a$ is a number. However, one may use it over any domain where addition and multiplication are defined (that is, any ring). In particular, if $a$ is a polynomial, then $P(a)$ is also a polynomial. More specifically, when $a$ is the indeterminate $x$, then the image of $x$ by this function is the polynomial $P$ itself (substituting $x$ for $x$ does not change anything). In other words, $P(x) = P$, which justifies formally the existence of two notations for the same polynomial.

== Definition ==
A polynomial expression is an expression that can be built from constants and symbols called variables or indeterminates by means of addition, multiplication and exponentiation to a non-negative integer power. The constants are generally numbers, but may be any expression that does not involve the indeterminates and represents mathematical objects that can be added and multiplied. Two polynomial expressions are considered as defining the same polynomial if they may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication. For example, $(x-1)(x-2)$ and $x^2 - 3x + 2$ are two polynomial expressions that represent the same polynomial; so, one has the equality $(x-1)(x-2) = x^2 - 3x + 2$.
A polynomial in a single indeterminate $x$ can always be written (or rewritten) in the form
$a_n x^n + a_{n-1} x^{n-1} + \dotsb + a_2 x^2 + a_1 x + a_0,$
where $a_0, \ldots, a_n$ are constants that are called the coefficients of the polynomial, and $x$ is the indeterminate. The word "indeterminate" means that $x$ represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function, called a polynomial function. This can be expressed more concisely by using summation notation:
$\sum_{k=0}^{n} a_k x^k$
That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates, raised to non-negative integer powers.
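A brief sketch of the substitution $a \mapsto P(a)$ in Python, using a coefficient-list representation and Horner's method (an illustrative encoding of our own, not a library API):

```python
# A univariate polynomial as a list of coefficients [a0, a1, ..., an].
def poly_eval(coeffs, a):
    """Evaluate a0 + a1*x + ... + an*x**n at x = a, by Horner's method."""
    result = 0
    for c in reversed(coeffs):
        result = result * a + c
    return result

P = [7, -4, 1]            # x**2 - 4x + 7, written in ascending powers
print(poly_eval(P, 3))    # P(3) = 9 - 12 + 7 = 4
```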
== Classification ==
The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient. Because $x = x^1$, the degree of an indeterminate without a written exponent is one.
A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below).
For example, $-5x^2y$ is a term. The coefficient is $-5$, the indeterminates are $x$ and $y$, the degree of $x$ is two, and the degree of $y$ is one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is $2 + 1 = 3$.
Forming a sum of several terms produces a polynomial. For example, the following is a polynomial:
$\underbrace{3x^2}_{\text{term 1}} \;\underbrace{-\,5x}_{\text{term 2}} \;\underbrace{+\,4}_{\text{term 3}}.$
It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero.
Polynomials of small degree have been given specific names. A polynomial of degree zero is a constant polynomial, or simply a constant. Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials. For higher degrees, the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, the term $2x$ in $x^2 + 2x + 1$ is a linear term in a quadratic polynomial.
The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined or defined as negative (either −1 or −∞). The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of roots. The graph of the zero polynomial, $f(x) = 0$, is the x-axis.
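The degree rules above translate directly into code. In this hedged sketch (our own dict-of-exponents encoding), the degree of a term is the sum of its exponents, and the zero polynomial's degree is left undefined:

```python
# A multivariate polynomial as a dict mapping exponent tuples to coefficients;
# e.g. {(2, 1): -5} stands for -5 * x**2 * y.
def degree(poly):
    """Largest term degree over nonzero terms; None for the zero polynomial."""
    degs = [sum(exps) for exps, c in poly.items() if c != 0]
    return max(degs) if degs else None

p = {(2, 1): -5}                          # -5 x**2 y
q = {(2, 0): 3, (1, 0): -5, (0, 0): 4}    # 3x**2 - 5x + 4
print(degree(p), degree(q), degree({}))   # 3 2 None
```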
In the case of polynomials in more than one indeterminate, a polynomial is called homogeneous of degree $n$ if all of its non-zero terms have degree $n$. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined. For example, $x^3y^2 + 7x^2y^3 - 3x^5$ is homogeneous of degree 5. For more details, see Homogeneous polynomial.
The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of $x$", with the term of largest degree first, or in "ascending powers of $x$". The polynomial $3x^2 - 5x + 4$ is written in descending powers of $x$. The first term has coefficient 3, indeterminate $x$, and exponent 2. In the second term, the coefficient is $-5$. The third term is a constant. Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two.
Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0.
Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and a three-term polynomial is called a trinomial.
A real polynomial is a polynomial with real coefficients. When it is used to define a function, the domain is not so restricted. However, a real polynomial function is a function from the reals to the reals that is defined by a real polynomial. Similarly, an integer polynomial is a polynomial with integer coefficients, and a complex polynomial is a polynomial with complex coefficients.
A polynomial in one indeterminate is called a univariate polynomial; a polynomial in more than one indeterminate is called a multivariate polynomial. A polynomial with two indeterminates is called a bivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials in $x$, $y$, and $z$", listing the indeterminates allowed.
and z", listing the indeterminates allowed. == Operations == === Addition and subtraction === Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the commutative law) and combining of like terms. For example, if P = 3 x 2 − 2 x + 5 x y − 2 {\displaystyle P=3x^{2}-2x+5xy-2} and Q = − 3 x 2 + 3 x + 4 y 2 + 8 {\displaystyle Q=-3x^{2}+3x+4y^{2}+8} then the sum P + Q = 3 x 2 − 2 x + 5 x y − 2 − 3 x 2 + 3 x + 4 y 2 + 8 {\displaystyle P+Q=3x^{2}-2x+5xy-2-3x^{2}+3x+4y^{2}+8} can be reordered and regrouped as P + Q = ( 3 x 2 − 3 x 2 ) + ( − 2 x + 3 x ) + 5 x y + 4 y 2 + ( 8 − 2 ) {\displaystyle P+Q=(3x^{2}-3x^{2})+(-2x+3x)+5xy+4y^{2}+(8-2)} and then simplified to P + Q = x + 5 x y + 4 y 2 + 6. {\displaystyle P+Q=x+5xy+4y^{2}+6.} When polynomials are added together, the result is another polynomial. Subtraction of polynomials is similar. === Multiplication === Polynomials can also be multiplied. To expand the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other. For example, if P = 2 x + 3 y + 5 Q = 2 x + 5 y + x y + 1 {\displaystyle {\begin{aligned}\color {Red}P&\color {Red}{=2x+3y+5}\\\color {Blue}Q&\color {Blue}{=2x+5y+xy+1}\end{aligned}}} then P Q = ( 2 x ⋅ 2 x ) + ( 2 x ⋅ 5 y ) + ( 2 x ⋅ x y ) + ( 2 x ⋅ 1 ) + ( 3 y ⋅ 2 x ) + ( 3 y ⋅ 5 y ) + ( 3 y ⋅ x y ) + ( 3 y ⋅ 1 ) + ( 5 ⋅ 2 x ) + ( 5 ⋅ 5 y ) + ( 5 ⋅ x y ) + ( 5 ⋅ 1 ) {\displaystyle {\begin{array}{rccrcrcrcr}{\color {Red}{P}}{\color {Blue}{Q}}&{=}&&({\color {Red}{2x}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{3y}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{5}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{5}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{5}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{5}}\cdot {\color {Blue}{1}})\end{array}}} Carrying out the multiplication in each term produces P Q = 4 x 2 + 10 x y + 2 x 2 y + 2 x + 6 x y + 15 y 2 + 3 x y 2 + 3 y + 10 x + 25 y + 5 x y + 5. {\displaystyle {\begin{array}{rccrcrcrcr}PQ&=&&4x^{2}&+&10xy&+&2x^{2}y&+&2x\\&&+&6xy&+&15y^{2}&+&3xy^{2}&+&3y\\&&+&10x&+&25y&+&5xy&+&5.\end{array}}} Combining similar terms yields P Q = 4 x 2 + ( 10 x y + 6 x y + 5 x y ) + 2 x 2 y + ( 2 x + 10 x ) + 15 y 2 + 3 x y 2 + ( 3 y + 25 y ) + |
=== Composition === Given a polynomial $f$ of a single variable and another polynomial g of any number of variables, the composition $f \circ g$ is obtained by substituting each copy of the variable of the first polynomial by the second polynomial. For example, if $f(x) = x^{2} + 2x$ and $g(x) = 3x + 2$, then $(f \circ g)(x) = f(g(x)) = (3x + 2)^{2} + 2(3x + 2).$ A composition may be expanded to a sum of terms using the rules for multiplication and division of polynomials. The composition of two polynomials is another polynomial. === Division === The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, called rational fractions, rational expressions, or rational functions, depending on context. This is analogous to the fact that the ratio of two integers is a rational number, not necessarily an integer. For example, the fraction $1/(x^{2} + 1)$ is not a polynomial, and it cannot be written as a finite sum of powers of the variable x. For polynomials in one variable, there is a notion of Euclidean division of polynomials, generalizing the Euclidean division of integers. This notion of the division a(x)/b(x) results in two polynomials, a quotient q(x) and a remainder r(x), such that a = b q + r and degree(r) < degree(b). The quotient and remainder may be computed by any of several algorithms, including polynomial long division and synthetic division. When the denominator b(x) is monic and linear, that is, b(x) = x − c for some constant c, then the polynomial remainder theorem asserts that the remainder of the division of a(x) by b(x) is the evaluation a(c). In this case, the quotient may be computed by Ruffini's rule, a special case of synthetic division.
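Both the Euclidean division and the remainder-theorem shortcut for a monic linear divisor can be checked with sympy's div. A sketch; the dividend a(x) below is an arbitrary illustration, not taken from the text:

```python
from sympy import symbols, div, expand

x = symbols("x")
a = x**3 - 2*x**2 + 5        # arbitrary dividend
b = x - 3                    # monic linear divisor, c = 3

# Euclidean division: a = b*q + r with degree(r) < degree(b).
q, r = div(a, b, x)
print(q, r)                  # x**2 + x + 3, 14
assert expand(b*q + r - a) == 0

# Polynomial remainder theorem: the remainder equals the evaluation a(3).
assert r == a.subs(x, 3)     # both are 14
```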
=== Factoring === All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of complex numbers, the irreducible factors are linear. Over the real numbers, they have degree either one or two. Over the integers and the rational numbers the irreducible factors may have any degree. For example, the factored form of $5x^{3} - 5$ is $5(x - 1)\left(x^{2} + x + 1\right)$ over the integers and the reals, and $5(x - 1)\left(x + \frac{1 + i\sqrt{3}}{2}\right)\left(x + \frac{1 - i\sqrt{3}}{2}\right)$ over the complex numbers. The computation of the factored form, called factorization, is, in general, too difficult to be done by hand-written computation. However, efficient polynomial factorization algorithms are available in most computer algebra systems. === Calculus === Calculating derivatives and integrals of polynomials is particularly simple, compared to other kinds of functions. The derivative of the polynomial $P = a_{n}x^{n} + a_{n-1}x^{n-1} + \dots + a_{2}x^{2} + a_{1}x + a_{0} = \sum_{i=0}^{n} a_{i}x^{i}$ with respect to x is the polynomial $na_{n}x^{n-1} + (n-1)a_{n-1}x^{n-2} + \dots + 2a_{2}x + a_{1} = \sum_{i=1}^{n} ia_{i}x^{i-1}.$ Similarly, the general antiderivative (or indefinite integral) of $P$ is $\frac{a_{n}x^{n+1}}{n+1} + \frac{a_{n-1}x^{n}}{n} + \dots + \frac{a_{2}x^{3}}{3} + \frac{a_{1}x^{2}}{2} + a_{0}x + c = c + \sum_{i=0}^{n} \frac{a_{i}x^{i+1}}{i+1},$ where c is an arbitrary constant. For example, antiderivatives of $x^{2} + 1$ have the form $\frac{1}{3}x^{3} + x + c.$ For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers modulo some prime number p, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient $ka_{k}$ understood to mean the sum of k copies of $a_{k}$. For example, over the integers modulo p, the derivative of the polynomial $x^{p} + x$ is the polynomial 1.
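As a small illustration of both points, sympy (an assumed third-party library) factors over the rationals by default, splits further over a larger field when asked, and differentiates and integrates term by term:

```python
from sympy import symbols, factor, diff, integrate

x = symbols("x")

# Factoring: x**2 + x + 1 stays irreducible over the rationals ...
print(factor(5*x**3 - 5))                # 5*(x - 1)*(x**2 + x + 1)
# ... while over the Gaussian rationals x**2 + 1 splits into linear factors.
print(factor(x**2 + 1, gaussian=True))   # (x - I)*(x + I)

# Calculus: differentiation and antidifferentiation act term by term.
print(diff(x**2 + 1, x))                 # 2*x
print(integrate(x**2 + 1, x))            # x**3/3 + x  (sympy omits the constant c)
```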
== Polynomial functions == A polynomial function is a function that can be defined by evaluating a polynomial. More precisely, a function f of one argument from a given domain is a polynomial function if there exists a polynomial $a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots + a_{2}x^{2} + a_{1}x + a_{0}$ that evaluates to $f(x)$ for all x in the domain of f (here, n is a non-negative integer and $a_{0}, a_{1}, a_{2}, \ldots, a_{n}$ are constant coefficients). Generally, unless otherwise specified, polynomial functions have complex coefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the domain of this function is also restricted to the reals, the resulting function is a real function that maps reals to reals. For example, the function f, defined by $f(x) = x^{3} - x,$ is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in $f(x, y) = 2x^{3} + 4x^{2}y + xy^{5} + y^{2} - 7.$ According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression $\left(\sqrt{1 - x^{2}}\right)^{2},$ which takes the same values as the polynomial $1 - x^{2}$ on the interval $[-1, 1]$, and thus both expressions define the same polynomial function on this interval. Every polynomial function is continuous, smooth, and entire. The evaluation of a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value for each indeterminate and carrying out the indicated multiplications and additions. For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using Horner's method, which consists of rewriting the polynomial as $(((((a_{n}x + a_{n-1})x + a_{n-2})x + \dotsb + a_{3})x + a_{2})x + a_{1})x + a_{0}.$
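A direct implementation of Horner's method follows: the nested form needs only n multiplications and n additions for a degree-n polynomial, instead of computing each power of x separately. A minimal sketch (the function name is ours):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x from its coefficients, highest degree first.

    For coeffs = [a_n, ..., a_1, a_0] this computes
    (...((a_n * x + a_{n-1}) * x + ...) * x + a_0.
    """
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# x**2 + 2*x + 1 evaluated at x = 3 gives 16.
print(horner([1, 2, 1], 3))
```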
=== Graphs === A polynomial function in one real variable can be represented by a graph. The graph of the zero polynomial is the x-axis. The graph of a degree 0 polynomial is a horizontal line with y-intercept $a_{0}$. The graph of a degree 1 polynomial (or linear function) is an oblique line with y-intercept $a_{0}$ and slope $a_{1}$. The graph of a degree 2 polynomial is a parabola. The graph of a degree 3 polynomial is a cubic curve. The graph of any polynomial with degree 2 or greater is a continuous non-linear curve. A non-constant polynomial function tends to infinity when the variable increases indefinitely (in absolute value). If the degree is higher than one, the graph does not have any asymptote. It has two parabolic branches with vertical direction (one branch for positive x and one for negative x). Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior. == Equations == A polynomial equation, also called an algebraic equation, is an equation of the form $a_{n}x^{n} + a_{n-1}x^{n-1} + \dotsb + a_{2}x^{2} + a_{1}x + a_{0} = 0.$ For example, $3x^{2} + 4x - 5 = 0$ is a polynomial equation. When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like $(x + y)(x - y) = x^{2} - y^{2}$, where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality. In elementary algebra, methods such as the quadratic formula are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the cubic and quartic equations. For higher degrees, the Abel–Ruffini theorem asserts that there cannot exist a general formula in radicals. However, root-finding algorithms may be used to find numerical approximations of the roots of a polynomial expression of any degree. The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when the complex solutions are counted with their multiplicity. This fact is called the fundamental theorem of algebra. === Solving equations === A root of a nonzero univariate polynomial P is a value a of x such that P(a) = 0. In other words, a root of P is a solution of the polynomial equation P(x) = 0 or a zero of the polynomial function defined by P. In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered.
A number a is a root of a polynomial P if and only if the linear polynomial x − a divides P, that is, if there is another polynomial Q such that P = (x − a)Q. It may happen that a power (greater than 1) of x − a divides P; in this case, a is a multiple root of P, and otherwise a is a simple root of P. If P is a nonzero polynomial, there is a highest power m such that $(x - a)^{m}$ divides P, which is called the multiplicity of a as a root of P. The number of roots of a nonzero polynomial P, counted with their respective multiplicities, cannot exceed the degree of P, and equals this degree if all complex roots are considered (this is a consequence of the fundamental theorem of algebra). The coefficients of a polynomial and its roots are related by Vieta's formulas. Some polynomials, such as $x^{2} + 1$, do not have any roots among the real numbers. If, however, the set of accepted solutions is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial. There may be several meanings of "solving an equation". One may want to express the solutions as explicit numbers; for example, the unique solution of 2x − 1 = 0 is 1/2. This is, in general, impossible for equations of degree greater than one, and, since ancient times, mathematicians have sought to express the solutions as algebraic expressions; for example, the golden ratio $(1 + \sqrt{5})/2$ is the unique positive solution of $x^{2} - x - 1 = 0.$ In ancient times, they succeeded only for degrees one and two. For quadratic equations, the quadratic formula provides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, have been known for equations of degree three and four (see cubic equation and quartic equation). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824, Niels Henrik Abel proved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula involving only arithmetic operations and radicals (see Abel–Ruffini theorem).
In 1830, Évariste Galois proved that most equations of degree higher than four cannot be solved by radicals, and showed that, for each equation, one may decide whether it is solvable by radicals and, if it is, solve it. This result marked the start of Galois theory and group theory, two important branches of modern algebra. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation). When there is no algebraic expression for the roots, or when such an algebraic expression exists but is too complicated to be useful, the only way of solving the equation is to compute numerical approximations of the solutions. There are many methods for that; some are restricted to polynomials and others may apply to any continuous function. The most efficient algorithms allow solving easily (on a computer) polynomial equations of degree higher than 1,000 (see Root-finding algorithm).
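As a small illustration of such numerical root finding, NumPy's numpy.roots approximates all complex roots from the coefficient list via a companion-matrix eigenvalue computation. A sketch; the polynomials chosen here are just examples:

```python
import numpy as np

# Roots of x**3 - x = x*(x - 1)*(x + 1), coefficients in descending degree.
print(np.roots([1, 0, -1, 0]))   # approximately [-1., 0., 1.] (order may vary)

# A polynomial with no real roots still has complex ones: x**2 + 1.
print(np.roots([1, 0, 1]))       # approximately [1j, -1j]
```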
For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called zeros instead of "roots". The study of the sets of zeros of polynomials is the object of algebraic geometry. For a set of polynomial equations with several unknowns, there are algorithms to decide whether they have a finite number of complex solutions and, if this number is finite, to compute the solutions. See System of polynomial equations. The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of solution methods exists, including the classical Gaussian elimination. A polynomial equation for which one is interested only in the solutions which are integers is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general algorithm for solving them, or even for deciding whether the set of solutions is empty (see Hilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem. == Polynomial expressions == Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name. === Trigonometric polynomials === A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using the multiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, using product-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why such linear combinations are called polynomials. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are also used in the discrete Fourier transform. === Matrix polynomials === A matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial $P(x) = \sum_{i=0}^{n} a_{i}x^{i} = a_{0} + a_{1}x + a_{2}x^{2} + \cdots + a_{n}x^{n},$ this polynomial evaluated at a matrix A is $P(A) = \sum_{i=0}^{n} a_{i}A^{i} = a_{0}I + a_{1}A + a_{2}A^{2} + \cdots + a_{n}A^{n},$ where I is the identity matrix. A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring $M_{n}(R)$.
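Evaluating a scalar polynomial at a square matrix follows the definition directly, with the constant term carried by the identity matrix. A minimal NumPy sketch for the arbitrary example P(x) = x² − 3x + 2 at a 2×2 matrix:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])

# P(A) = A**2 - 3*A + 2*I: the constant term multiplies the identity.
I = np.eye(2)
P_A = A @ A - 3 * A + 2 * I
print(P_A)   # [[0. 2.]
             #  [0. 2.]]
```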
=== Exponential polynomials === A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for example $P(x, e^{x})$, may be called an exponential polynomial. == Related concepts == === Rational functions === A rational fraction is the quotient (algebraic fraction) of two polynomials. Any algebraic expression that can be rewritten as a rational fraction is a rational function. While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero. The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate. === Laurent polynomials === Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur. === Power series === Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials, they cannot in general be explicitly and fully written down (just as irrational numbers cannot), but the rules for manipulating their terms are the same as for polynomials. Non-formal power series also generalize polynomials, but the multiplication of two power series may not converge. == Polynomial ring == A polynomial f over a commutative ring R is a polynomial all of whose coefficients belong to R. It is straightforward to verify that the polynomials in a given set of indeterminates over R form a commutative ring, called the polynomial ring in these indeterminates, denoted $R[x]$ in the univariate case and $R[x_{1}, \ldots, x_{n}]$ in the multivariate case. One has $R[x_{1}, \ldots, x_{n}] = \left(R[x_{1}, \ldots, x_{n-1}]\right)[x_{n}],$ so most of the theory of the multivariate case can be reduced to an iterated univariate case. The map from R to R[x] sending r to itself considered as a constant polynomial is an injective ring homomorphism, by which R is viewed as a subring of R[x]. In particular, R[x] is an algebra over R. One can think of the ring R[x] as arising from R by adding one new element x to R, and extending in a minimal way to a ring in which x satisfies no other relations than the obligatory ones, plus commutation with all elements of R (that is, xr = rx). To do this, one must add all powers of x and their linear combinations as well. Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring R[x] over the real numbers by factoring out the ideal of multiples of the polynomial $x^{2} + 1$. Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic). If R is commutative, then one can associate with every polynomial P in R[x] a polynomial function f with domain and range equal to R. (More generally, one can take domain and range to be any same unital associative algebra over R.) One obtains the value f(r) by substitution of the value r for the symbol x in P. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p). This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis. An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for x. === Divisibility === If R is an integral domain and f and g are polynomials in R[x], it is said that f divides g, or that f is a divisor of g, if there exists a polynomial q in R[x] such that f q = g.
If $a \in R$, then a is a root of f if and only if $x - a$ divides f. In this case, the quotient can be computed using polynomial long division. If F is a field and f and g are polynomials in F[x] with g ≠ 0, then there exist unique polynomials q and r in F[x] with $f = q\,g + r$ and such that the degree of r is smaller than the degree of g (using the convention that the polynomial 0 has a negative degree). The polynomials q and r are uniquely determined by f and g. This is called Euclidean division, division with remainder or polynomial long division, and shows that the ring F[x] is a Euclidean domain. Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials. In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant and a product of irreducible polynomials. If the coefficients belong to a field or a unique factorization domain, this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit). When the coefficients belong to the integers, the rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials). These algorithms are not practicable for hand-written computation, but are available in any computer algebra system. Eisenstein's criterion can also be used in some cases to determine irreducibility. == Applications == === Positional notation === In modern positional number systems, such as the decimal system, the digits and their positions in the representation of an integer, for example 45, are a shorthand notation for a polynomial in the radix or base, in this case $4 \times 10^{1} + 5 \times 10^{0}$. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number $1 \times 5^{2} + 3 \times 5^{1} + 2 \times 5^{0} = 42$. This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form $a = r_{m}b^{m} + r_{m-1}b^{m-1} + \dotsb + r_{1}b + r_{0},$ where m is a nonnegative integer and the r's are integers such that $0 < r_{m} < b$ and $0 \le r_{i} < b$ for i = 0, 1, ..., m − 1.
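In code, reading a digit string in base b is exactly a Horner-style evaluation of this polynomial at x = b. A short sketch (the function name is ours):

```python
def from_digits(digits, base):
    """Interpret a digit list (most significant first) as a polynomial evaluated at base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# The radix-5 string "132" denotes 1*25 + 3*5 + 2 = 42.
print(from_digits([1, 3, 2], 5))   # 42
```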
=== Interpolation and approximation === The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. Important examples in calculus are Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include polynomial interpolation and the use of splines.
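As a quick illustration of polynomial interpolation, NumPy's polyfit recovers the unique degree-2 polynomial through three points. A sketch; the sample function is arbitrary:

```python
import numpy as np

# Three points lying on y = x**2 determine the quadratic exactly.
xs = np.array([0.0, 1.0, 2.0])
ys = xs**2
coeffs = np.polyfit(xs, ys, deg=2)   # coefficients, highest degree first
print(np.round(coeffs, 12))          # approximately [1. 0. 0.]
print(np.polyval(coeffs, 3.0))       # approximately 9.0
```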
=== Other applications === Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph. The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input. == History == Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, c. 200 BCE, begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29. === History of the notation === The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544. René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable. Descartes also introduced the use of superscripts to denote exponents. == See also == List of polynomial topics == Notes == == References == == External links == Markushevich, A.I. (2001) [1994], "Polynomial", Encyclopedia of Mathematics, EMS Press. "Euler's Investigations on the Roots of Equations". Archived from the original on September 24, 2012.
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear". The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras, where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication, since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers, since the vector cross product is nonassociative, satisfying the Jacobi identity instead. An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space. Many authors use the term algebra to mean associative algebra, or unital associative algebra, or, in some subjects such as algebraic geometry, unital associative commutative algebra. Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients. == Definition and motivation == === Definition === Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K: Right distributivity: (x + y) · z = x · z + y · z. Left distributivity: z · (x + y) = z · x + z · y. Compatibility with scalars: (ax) · (by) = (ab)(x · y). These three axioms are another way of saying that the binary operation is bilinear.
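These axioms can be spot-checked numerically for a concrete product. A sketch using NumPy's cross product, the bilinear multiplication of the three-dimensional example above, on arbitrary sample vectors and scalars:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
z = np.array([7.0, 8.0, 9.0])
a, b = 2.0, -3.0

cross = np.cross  # the bilinear product of the R**3 example

# Right distributivity, left distributivity, and compatibility with scalars.
assert np.allclose(cross(x + y, z), cross(x, z) + cross(y, z))
assert np.allclose(cross(z, x + y), cross(z, x) + cross(z, y))
assert np.allclose(cross(a * x, b * y), a * b * cross(x, y))
```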
An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra. When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations, left distributivity and right distributivity are not equivalent, and require separate proofs. == Basic concepts == === Algebra homomorphisms === Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying $f(1_{A}) = 1_{B}$ is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as $\mathbf{Hom}_{K\text{-alg}}(A, B).$ A K-algebra isomorphism is a bijective K-algebra homomorphism. === Subalgebras and ideals === A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L. For example, in the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra. A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements: (1) x + y is in L (L is closed under addition); (2) cx is in L (L is closed under scalar multiplication); (3) z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course, when the algebra is commutative, all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra. This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course, if the algebra is unital, then condition (3) implies condition (2). === Extension of scalars === If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product $V_{F} := V \otimes_{K} F$. So if A is an algebra over K, then $A_{F}$ is an algebra over F. == Kinds of algebras and examples == Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different. === Unital algebra === An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra. === Zero algebra === An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative. A unital zero algebra is the direct sum $K \oplus V$ of a field $K$ and a $K$-vector space $V$, equipped with the unique multiplication that is zero on the vector space (or module), making it a unital algebra. More precisely, every element of the algebra may be uniquely written as $k + v$ with $k \in K$ and $v \in V$, and the product is the only bilinear operation such that $vw = 0$ for every $v$ and $w$ in $V$.
So, if $k_{1}, k_{2} \in K$ and $v_{1}, v_{2} \in V$, one has $(k_{1} + v_{1})(k_{2} + v_{2}) = k_{1}k_{2} + (k_{1}v_{2} + k_{2}v_{1}).$ A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space. This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module". Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module $V$ correspond exactly to the ideals of $K \oplus V$ that are contained in $V$. For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module allows extending this theory as a Gröbner basis theory for submodules of a free module. This extension allows one to compute a Gröbner basis of a submodule using, without any modification, any algorithm and any software for computing Gröbner bases of ideals. Similarly, unital zero algebras allow one to deduce straightforwardly the Lasker–Noether theorem for modules (over a commutative ring) from the original Lasker–Noether theorem for ideals.
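The dual numbers mentioned above give a compact concrete instance of this product rule: elements k + vε with ε² = 0. A minimal sketch of the unital zero-algebra multiplication (the class is ours, not a library type):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number k + v*eps with eps**2 = 0 (a unital zero algebra over R)."""
    k: float  # field component
    v: float  # vector-space component

    def __mul__(self, other):
        # (k1 + v1*eps)(k2 + v2*eps) = k1*k2 + (k1*v2 + k2*v1)*eps
        return Dual(self.k * other.k, self.k * other.v + other.k * self.v)

# Products of two "purely vector" elements vanish: (0 + 1*eps)**2 = 0.
print(Dual(0, 1) * Dual(0, 1))   # Dual(k=0, v=0)
print(Dual(2, 3) * Dual(4, 5))   # Dual(k=8, v=22)
```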
=== Associative algebra === Examples of associative algebras include: the algebra of all n-by-n matrices over a field (or commutative ring) K, where the multiplication is ordinary matrix multiplication; group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication; the commutative algebra K[x] of all polynomials over K (see polynomial ring); algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane, which are also commutative; incidence algebras, built on certain partially ordered sets; and algebras of linear operators, for example on a Hilbert space, where the algebra multiplication is given by the composition of operators. These last algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis. === Non-associative algebra === A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map $A \times A \rightarrow A$. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited; that is, it means "not necessarily associative". Examples detailed in the main article include: Euclidean space R³ with multiplication given by the vector cross product; octonions; Lie algebras; Jordan algebras; alternative algebras; flexible algebras; and power-associative algebras. == Algebras and rings == The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism $\eta \colon K \to Z(A),$ where Z(A) is the center of A. Since η is a ring homomorphism, one must have either that A is the zero ring or that η is injective. This definition is equivalent to that above, with scalar multiplication $K \times A \to A$ given by $(k, a) \mapsto \eta(k)a.$ Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as $f(ka) = kf(a)$ for all $k \in K$ and $a \in A$. In other words, the following diagram commutes: $\begin{matrix} & & K & & \\ & \eta_{A}\swarrow & & \eta_{B}\searrow & \\ A & & \xrightarrow{\;f\;} & & B \end{matrix}$ == Structure coefficients == For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A. Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws. Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n³ structure coefficients $c_{i,j,k}$, which are scalars. These structure coefficients determine the multiplication in A via the following rule: $\mathbf{e}_{i}\mathbf{e}_{j} = \sum_{k=1}^{n} c_{i,j,k}\,\mathbf{e}_{k},$ where $e_{1}, \ldots, e_{n}$ form a basis of A. Note, however, that several different sets of structure coefficients can give rise to isomorphic algebras.
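For example, the multiplication of the cross-product algebra on R³ is encoded by the Levi-Civita symbol as its structure coefficients. A sketch reconstructing the product from the coefficients alone:

```python
import numpy as np

# Structure coefficients c[i, j, k] of the cross product on R**3:
# e_i x e_j = sum_k c[i, j, k] * e_k (the Levi-Civita symbol).
c = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    c[i, j, k] = 1.0
    c[j, i, k] = -1.0

def multiply(x, y):
    # (x y)_k = sum over i, j of c[i, j, k] * x_i * y_j
    return np.einsum("ijk,i,j->k", c, x, y)

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(multiply(e1, e2))   # [0. 0. 1.], matching np.cross(e1, e2)
```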
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written $c_{i,j}^{\;k}$, and their defining rule is written using the Einstein notation as $\mathbf{e}_{i}\mathbf{e}_{j} = c_{i,j}^{\;k}\,\mathbf{e}_{k}.$ Applied to vectors written in index notation, this becomes $(xy)^{k} = c_{i,j}^{\;k}\,x^{i}y^{j}.$ If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism. == Classification of low-dimensional unital associative algebras over the complex numbers == Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study. There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element, $1 \cdot 1 = 1, \quad 1 \cdot a = a, \quad a \cdot 1 = a.$ It remains to specify $aa = 1$ for the first algebra and $aa = 0$ for the second algebra. There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify $aa = a, \; bb = b, \; ab = ba = 0$ for the first algebra; $aa = a, \; bb = 0, \; ab = ba = 0$ for the second algebra; $aa = b, \; bb = 0, \; ab = ba = 0$ for the third algebra; $aa = 1, \; bb = 0, \; ab = -ba = b$ for the fourth algebra; and $aa = 0, \; bb = 0, \; ab = ba = 0$ for the fifth algebra. The fourth of these algebras is non-commutative, and the others are commutative.
== Generalization: algebra over a ring == In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space). === Associative algebras over rings === A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to $\mathbb{H} \times \mathbb{H}$, the direct product of two quaternion algebras. The center of that ring is $\mathbb{R} \times \mathbb{R}$, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional $\mathbb{R}$-algebra. In commutative algebra, if A is a commutative ring, then any unital ring homomorphism $R \to A$ defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural $\mathbb{Z}$-module structure, since one can take the unique homomorphism $\mathbb{Z} \to A$. On the other hand, not all rings can be given the structure of an algebra over a field (for example, the integers). See Field with one element for a description of an attempt to give every ring a structure that behaves like an algebra over a field. == See also == Algebra over an operad, Alternative algebra, Clifford algebra, Composition algebra, Differential algebra, Free algebra, Geometric algebra, Max-plus algebra, Mutation (algebra), Operator algebra, Zariski's lemma == Notes == == References == Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
In mathematics, a quartic equation is one which can be expressed as a quartic function equaling zero. The general form of a quartic equation is $ax^{4} + bx^{3} + cx^{2} + dx + e = 0,$ where a ≠ 0. The quartic is the highest order polynomial equation that can be solved by radicals in the general case. == History == The discovery of the solution of the quartic in 1540 is attributed to Lodovico Ferrari, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna (1545). The proof that this was the highest order general polynomial for which such solutions could be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials by radicals would be futile. The notes left by Évariste Galois before his death in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result. == Special case solutions == Consider a quartic equation expressed in the form $a_{0}x^{4} + a_{1}x^{3} + a_{2}x^{2} + a_{3}x + a_{4} = 0.$ There exists a general formula for finding the roots to quartic equations, provided the coefficient of the leading term is non-zero. However, since the general method is quite complex and susceptible to errors in execution, it is better to apply one of the special cases listed below if possible. === Degenerate case === If the constant term a4 = 0, then one of the roots is x = 0, and the other roots can be found by dividing by x and solving the resulting cubic equation, $a_{0}x^{3} + a_{1}x^{2} + a_{2}x + a_{3} = 0.$ === Evident roots: 1 and −1 and −k === Call our quartic polynomial Q(x). Since 1 raised to any power is 1, $Q(1) = a_{0} + a_{1} + a_{2} + a_{3} + a_{4}.$ Thus if $a_{0} + a_{1} + a_{2} + a_{3} + a_{4} = 0,$ then Q(1) = 0 and so x = 1 is a root of Q(x). It can similarly be shown that if $a_{0} + a_{2} + a_{4} = a_{1} + a_{3},$ then x = −1 is a root.
In either case the full quartic can then be divided by the factor (x − 1) or (x + 1) respectively, yielding a new cubic polynomial, which can be solved to find the quartic's other roots. If $a_{1} = a_{0}k$, $a_{2} = 0$ and $a_{4} = a_{3}k$, then $x = -k$ is a root of the equation. The full quartic can then be factorized this way: $a_{0}x^{4} + a_{0}kx^{3} + a_{3}x + a_{3}k = a_{0}x^{3}(x + k) + a_{3}(x + k) = (a_{0}x^{3} + a_{3})(x + k).$ Alternatively, if $a_{1} = a_{0}k$, $a_{3} = a_{2}k$, and $a_{4} = 0$, then x = 0 and x = −k become two known roots, and Q(x) divided by x(x + k) is a quadratic polynomial. === Biquadratic equations === A quartic equation where a3 and a1 are equal to 0 takes the form $a_{0}x^{4} + a_{2}x^{2} + a_{4} = 0$ and thus is a biquadratic equation, which is easy to solve: let $z = x^{2}$, so our equation becomes $a_{0}z^{2} + a_{2}z + a_{4} = 0,$ which is a simple quadratic equation whose solutions are easily found using the quadratic formula: $z = \frac{-a_{2} \pm \sqrt{a_{2}^{2} - 4a_{0}a_{4}}}{2a_{0}}.$ Once the two values $z_{+}$ and $z_{-}$ are found, the four solutions x are extracted as $x_{1} = +\sqrt{z_{+}}, \quad x_{2} = -\sqrt{z_{+}}, \quad x_{3} = +\sqrt{z_{-}}, \quad x_{4} = -\sqrt{z_{-}}.$ If either of the z solutions is a negative or complex number, then some of the x solutions are complex numbers.
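A direct transcription of this substitution, using complex arithmetic throughout so that negative and complex z values are handled uniformly (the function is a sketch, not a library routine):

```python
import cmath

def solve_biquadratic(a0, a2, a4):
    """Solve a0*x**4 + a2*x**2 + a4 = 0 via the substitution z = x**2."""
    disc = cmath.sqrt(a2 * a2 - 4 * a0 * a4)
    roots = []
    for z in ((-a2 + disc) / (2 * a0), (-a2 - disc) / (2 * a0)):
        s = cmath.sqrt(z)
        roots.extend([s, -s])
    return roots

# x**4 - 5*x**2 + 4 = (x**2 - 1)*(x**2 - 4): roots are +/-1 and +/-2.
print(solve_biquadratic(1, -5, 4))   # [(2+0j), (-2-0j), (1+0j), (-1-0j)]
```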
=== Quasi-symmetric equations === $a_{0}x^{4} + a_{1}x^{3} + a_{2}x^{2} + a_{1}mx + a_{0}m^{2} = 0.$ Steps: divide by $x^{2}$, then use the variable change z = x + m/x, so that $z^{2} = x^{2} + m^{2}/x^{2} + 2m$. This leads to $a_{0}(x^{2} + m^{2}/x^{2}) + a_{1}(x + m/x) + a_{2} = 0,$ then $a_{0}(z^{2} - 2m) + a_{1}z + a_{2} = 0,$ and finally $z^{2} + (a_{1}/a_{0})z + (a_{2}/a_{0} - 2m) = 0$ (a quadratic in z = x + m/x). === Multiple roots === If the quartic has a double root, it can be found by taking the polynomial greatest common divisor with its derivative. Then the repeated factors can be divided out and the resulting quadratic equation solved. In general, there exist only four possible cases of quartic equations with multiple roots, which are listed below: Multiplicity-4 (M4): when the general quartic equation can be expressed as $a(x - l)^{4} = 0$, for some real number $l$. This case can always be reduced to a biquadratic equation. Multiplicity-3 (M3): when the general quartic equation can be expressed as $a(x - l)^{3}(x - m) = 0$, where $l$ and $m$ are two different real numbers. This is the only case that can never be reduced to a biquadratic equation. Double Multiplicity-2 (DM2): when the general quartic equation can be expressed as $a(x - l)^{2}(x - m)^{2} = 0$, where $l$ and $m$ are two different real numbers or a pair of non-real complex conjugate numbers. This case can also always be reduced to a biquadratic equation. Single Multiplicity-2 (SM2): when the general quartic equation can be expressed as $a(x - l)^{2}(x - m)(x - n) = 0$, where $l$, $m$, and $n$ are three different real numbers, or $l$ is a real number and $m$ and $n$ are a pair of non-real complex conjugate numbers. This case is divided into two subcases: those that can be reduced to a biquadratic equation and those that cannot. Consider the case in which the three non-monic coefficients of the depressed quartic equation $x^{4} + px^{2} + qx + r = 0$ can be expressed in terms of the five coefficients of the general quartic equation as follows: $p = \frac{8ac - 3b^{2}}{8a^{2}}, \quad q = \frac{b^{3} - 4abc + 8a^{2}d}{8a^{3}}, \quad r = \frac{16ab^{2}c - 64a^{2}bd - 3b^{4} + 256a^{3}e}{256a^{4}}.$
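These expressions for p, q and r can be verified symbolically: substituting x = u − b/(4a) into the general quartic and dividing by a must reproduce the depressed form u⁴ + pu² + qu + r. A sympy sketch (sympy is an assumed third-party library):

```python
from sympy import symbols, expand, simplify

a, b, c, d, e, u = symbols("a b c d e u")

p = (8*a*c - 3*b**2) / (8*a**2)
q = (b**3 - 4*a*b*c + 8*a**2*d) / (8*a**3)
r = (16*a*b**2*c - 64*a**2*b*d - 3*b**4 + 256*a**3*e) / (256*a**4)

x = u - b / (4*a)
quartic = a*x**4 + b*x**3 + c*x**2 + d*x + e
depressed = u**4 + p*u**2 + q*u + r

# The difference expands to zero, confirming the coefficient formulas.
print(simplify(expand(quartic / a - depressed)))   # 0
```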
In general, there exist only four possible cases of quartic equations with multiple roots, which are listed below:

Multiplicity-4 (M4): when the general quartic equation can be expressed as $a(x - l)^4 = 0$ for some real number $l$. This case can always be reduced to a biquadratic equation.

Multiplicity-3 (M3): when the general quartic equation can be expressed as $a(x - l)^3 (x - m) = 0$, where $l$ and $m$ are two different real numbers. This is the only case that can never be reduced to a biquadratic equation.

Double Multiplicity-2 (DM2): when the general quartic equation can be expressed as $a(x - l)^2 (x - m)^2 = 0$, where $l$ and $m$ are two different real numbers or a pair of non-real complex conjugate numbers. This case can also always be reduced to a biquadratic equation.

Single Multiplicity-2 (SM2): when the general quartic equation can be expressed as $a(x - l)^2 (x - m)(x - n) = 0$, where $l$, $m$ and $n$ are three different real numbers, or $l$ is a real number and $m$ and $n$ are a pair of non-real complex conjugate numbers. This case is divided into two subcases: those that can be reduced to a biquadratic equation and those that cannot.

Consider the case in which the three non-monic coefficients of the depressed quartic equation, $x^4 + p x^2 + q x + r = 0$, are expressed in terms of the five coefficients of the general quartic equation as follows:

$p = \frac{8ac - 3b^2}{8a^2}, \qquad q = \frac{b^3 - 4abc + 8a^2 d}{8a^3}, \qquad r = \frac{16ab^2 c - 64a^2 bd - 3b^4 + 256a^3 e}{256a^4}.$

Then the criteria to identify a priori each case of quartic equations with multiple roots, and their respective solutions, are shown below.

M4. The general quartic equation corresponds to this case whenever $p = q = r = 0$, so the four roots of this equation are given as follows:

$x_1 = x_2 = x_3 = x_4 = -\frac{b}{4a}.$

M3. The general quartic equation corresponds to this case whenever $p^2 = -12r > 0$ and $27q^2 = -8p^3 > 0$, so the four roots of this equation are given as follows if $q > 0$:

$x_1 = x_2 = x_3 = \sqrt{-\frac{p}{6}} - \frac{b}{4a}, \qquad x_4 = -\sqrt{-\frac{3p}{2}} - \frac{b}{4a};$

otherwise, if $q \leq 0$:

$x_1 = x_2 = x_3 = -\sqrt{-\frac{p}{6}} - \frac{b}{4a}, \qquad x_4 = \sqrt{-\frac{3p}{2}} - \frac{b}{4a}.$

DM2. The general quartic equation corresponds to this case whenever $p^2 = 4r > 0 = q$, so the four roots of this equation are given as follows:

$x_1 = x_3 = \sqrt{-\frac{p}{2}} - \frac{b}{4a}, \qquad x_2 = x_4 = -\sqrt{-\frac{p}{2}} - \frac{b}{4a}.$

Biquadratic SM2. The general quartic equation corresponds to this subcase of the SM2 equations whenever $p \neq q = r = 0$, so the four roots of this equation are given as follows:

$x_1 = x_2 = -\frac{b}{4a}, \qquad x_3 = \sqrt{-p} - \frac{b}{4a}, \qquad x_4 = -\sqrt{-p} - \frac{b}{4a}.$

Non-Biquadratic SM2. The general quartic equation corresponds to this subcase of the SM2 equations whenever $(p^2 + 12r)^3 = \left[p(p^2 - 36r) + \tfrac{27}{2} q^2\right]^2 > 0 \neq q$, so the four roots of this equation are given by the following formula:
$x = \frac{1}{2}\left[\xi \sqrt{s_1} \pm \sqrt{2\left(s_2 - \frac{\xi q}{\sqrt{s_1}}\right)}\right] - \frac{b}{4a},$

where:

$s_1 = \frac{9q^2 - 32pr}{p^2 + 12r} > 0, \qquad s_2 = -\frac{2p(p^2 - 4r) + 9q^2}{2(p^2 + 12r)} \neq 0, \qquad \xi = \pm 1.$

== The general case ==

To begin, the quartic must first be converted to a depressed quartic.

=== Converting to a depressed quartic ===

Let

$A x^4 + B x^3 + C x^2 + D x + E = 0$

be the general quartic equation which it is desired to solve. Divide both sides by $A$:

$x^4 + \frac{B}{A} x^3 + \frac{C}{A} x^2 + \frac{D}{A} x + \frac{E}{A} = 0.$

The first step, if $B$ is not already zero, should be to eliminate the $x^3$ term. To do this, change variables from $x$ to $u$, such that

$x = u - \frac{B}{4A}.$

Then

$\left(u - \frac{B}{4A}\right)^4 + \frac{B}{A}\left(u - \frac{B}{4A}\right)^3 + \frac{C}{A}\left(u - \frac{B}{4A}\right)^2 + \frac{D}{A}\left(u - \frac{B}{4A}\right) + \frac{E}{A} = 0.$

Expanding the powers of the binomials produces

$\left(u^4 - \frac{B}{A} u^3 + \frac{6u^2 B^2}{16A^2} - \frac{4u B^3}{64A^3} + \frac{B^4}{256A^4}\right) + \frac{B}{A}\left(u^3 - \frac{3u^2 B}{4A} + \frac{3u B^2}{16A^2} - \frac{B^3}{64A^3}\right) + \frac{C}{A}\left(u^2 - \frac{uB}{2A} + \frac{B^2}{16A^2}\right) + \frac{D}{A}\left(u - \frac{B}{4A}\right) + \frac{E}{A} = 0.$

Collecting the same powers of $u$ yields

$u^4 + \left(\frac{-3B^2}{8A^2} + \frac{C}{A}\right) u^2 + \left(\frac{B^3}{8A^3} - \frac{BC}{2A^2} + \frac{D}{A}\right) u + \left(\frac{-3B^4}{256A^4} + \frac{CB^2}{16A^3} - \frac{BD}{4A^2} + \frac{E}{A}\right) = 0.$
Now rename the coefficients of $u$. Let

$a = \frac{-3B^2}{8A^2} + \frac{C}{A}, \qquad b = \frac{B^3}{8A^3} - \frac{BC}{2A^2} + \frac{D}{A}, \qquad c = \frac{-3B^4}{256A^4} + \frac{CB^2}{16A^3} - \frac{BD}{4A^2} + \frac{E}{A}.$

The resulting equation is

$u^4 + a u^2 + b u + c = 0, \qquad (1)$

which is a depressed quartic equation. If $b = 0$, then we have the special case of a biquadratic equation, which is easily solved, as explained above. Note that the general solution, given below, will not work for the special case $b = 0$; the equation must then be solved as a biquadratic. In either case, once the depressed quartic is solved for $u$, substituting those values into $x = u - \frac{B}{4A}$ produces the values for $x$ that solve the original quartic.
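These coefficient formulas translate directly into code. A small sketch (the function name `depress` and the test polynomial are ours):

```python
def depress(A, B, C, D, E):
    """Coefficients (a, b, c) of the depressed quartic u^4 + a u^2 + b u + c
    obtained from A x^4 + B x^3 + C x^2 + D x + E via x = u - B/(4A)."""
    a = -3 * B**2 / (8 * A**2) + C / A
    b = B**3 / (8 * A**3) - B * C / (2 * A**2) + D / A
    c = (-3 * B**4 / (256 * A**4) + C * B**2 / (16 * A**3)
         - B * D / (4 * A**2) + E / A)
    return a, b, c

# x^4 - 4x^3 + 6x^2 - 4x + 1 = (x - 1)^4 depresses to u^4 = 0.
print(depress(1, -4, 6, -4, 1))   # (0.0, 0.0, 0.0)
```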
=== Solving a depressed quartic when b ≠ 0 ===

After converting to a depressed quartic equation

$u^4 + a u^2 + b u + c = 0,$

and excluding the special case $b = 0$, which is solved as a biquadratic, we assume from here on that $b \neq 0$. We separate the terms left and right as

$u^4 = -a u^2 - b u - c$

and add terms to both sides which make them both into perfect squares. Let $y$ be any solution of this cubic equation:

$2y^3 - a y^2 - 2 c y + \left(a c - \tfrac{1}{4} b^2\right) = (2y - a)(y^2 - c) - \tfrac{1}{4} b^2 = 0.$

Then (since $b \neq 0$) $2y - a \neq 0$, so we may divide by it, giving

$y^2 - c = \frac{b^2}{4(2y - a)}.$

Then

$(u^2 + y)^2 = u^4 + 2y u^2 + y^2 = (2y - a) u^2 - b u + (y^2 - c) = (2y - a) u^2 - b u + \frac{b^2}{4(2y - a)} = \left(\sqrt{2y - a}\; u - \frac{b}{2\sqrt{2y - a}}\right)^2.$

Subtracting, we get the difference of two squares, which is the product of the sum and difference of their roots:

$(u^2 + y)^2 - \left(\sqrt{2y - a}\; u - \frac{b}{2\sqrt{2y - a}}\right)^2 = \left(u^2 + y + \sqrt{2y - a}\; u - \frac{b}{2\sqrt{2y - a}}\right)\left(u^2 + y - \sqrt{2y - a}\; u + \frac{b}{2\sqrt{2y - a}}\right) = 0,$

which can be solved by applying the quadratic formula to each of the two factors. So the possible values of $u$ are:

$u = \tfrac{1}{2}\left(-\sqrt{2y - a} + \sqrt{-2y - a + \frac{2b}{\sqrt{2y - a}}}\right),$

$u = \tfrac{1}{2}\left(-\sqrt{2y - a} - \sqrt{-2y - a + \frac{2b}{\sqrt{2y - a}}}\right),$

$u = \tfrac{1}{2}\left(\sqrt{2y - a} + \sqrt{-2y - a - \frac{2b}{\sqrt{2y - a}}}\right),$ or

$u = \tfrac{1}{2}\left(\sqrt{2y - a} - \sqrt{-2y - a - \frac{2b}{\sqrt{2y - a}}}\right).$

Using another $y$ from among the three roots of the cubic simply causes these same four values of $u$ to appear in a different order. The solutions of the cubic are

$y = \frac{a}{6} + w - \frac{p}{3w}, \qquad w = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}},$

using any one of the three possible cube roots. A wise strategy is to choose the sign of the square root that makes the absolute value of $w$ as large as possible, where

$p = -\frac{a^2}{12} - c, \qquad q = -\frac{a^3}{108} + \frac{ac}{3} - \frac{b^2}{8}.$
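Putting the steps above together (resolvent cubic via Cardano, then the four $u$ values) gives the following sketch; the names are ours, `cmath` keeps every square and cube root well defined, and the degenerate case $p = q = 0$ is ignored here:

```python
import cmath

def solve_depressed_quartic(a, b, c):
    """Roots of u^4 + a u^2 + b u + c = 0, assuming b != 0."""
    # Resolvent cubic 2y^3 - a y^2 - 2c y + (a c - b^2/4) = 0, via Cardano.
    p = -a * a / 12 - c
    q = -a**3 / 108 + a * c / 3 - b * b / 8
    s = cmath.sqrt(q * q / 4 + p**3 / 27)
    # Choose the sign that maximizes |w|, as suggested in the text.
    w = max(-q / 2 + s, -q / 2 - s, key=abs) ** (1 / 3)
    y = a / 6 + w - p / (3 * w)

    r = cmath.sqrt(2 * y - a)   # nonzero precisely because b != 0
    roots = []
    for eps in (-1, +1):        # sign of the sqrt(2y - a) term
        inner = cmath.sqrt(-2 * y - a - eps * 2 * b / r)
        roots.append((eps * r + inner) / 2)
        roots.append((eps * r - inner) / 2)
    return roots

# u^4 - 7u^2 + 6u = u (u - 1)(u - 2)(u + 3): roots 0, 1, 2, -3.
print(sorted(solve_depressed_quartic(-7, 6, 0), key=lambda z: z.real))
```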
=== Ferrari's solution ===

Otherwise, the depressed quartic can be solved by means of a method discovered by Lodovico Ferrari. Once the depressed quartic has been obtained, the next step is to add the valid identity

$\left(u^2 + a\right)^2 - u^4 - 2 a u^2 = a^2$

to equation (1), yielding

$\left(u^2 + a\right)^2 + b u + c = a u^2 + a^2. \qquad (2)$

The effect has been to fold up the $u^4$ term into a perfect square: $(u^2 + a)^2$. The second term, $a u^2$, did not disappear, but its sign has changed and it has been moved to the right side.

The next step is to insert a variable $y$ into the perfect square on the left side of equation (2), and a corresponding $2y$ into the coefficient of $u^2$ on the right side. To accomplish these insertions, the following valid formulas will be added to equation (2):

$(u^2 + a + y)^2 - (u^2 + a)^2 = 2y(u^2 + a) + y^2 = 2y u^2 + 2ya + y^2,$

and

$0 = (a + 2y) u^2 - 2y u^2 - a u^2.$

These two formulas, added together, produce

$(u^2 + a + y)^2 - (u^2 + a)^2 = (a + 2y) u^2 - a u^2 + 2ya + y^2 \qquad (y\text{-insertion}),$

which added to equation (2) produces

$(u^2 + a + y)^2 + b u + c = (a + 2y) u^2 + (2ya + y^2 + a^2).$

This is equivalent to

$(u^2 + a + y)^2 = (a + 2y) u^2 - b u + (y^2 + 2ya + a^2 - c). \qquad (3)$

The objective now is to choose a value for $y$ such that the right side of equation (3) becomes a perfect square. This can be done by letting the discriminant of the quadratic function become zero. To explain this, first expand a perfect square so that it equals a quadratic function:

$(s u + t)^2 = (s^2) u^2 + (2st) u + (t^2).$

The quadratic function on the right side has three coefficients. It can be verified that squaring the second coefficient and then subtracting four times the product of the first and third coefficients yields zero:

$(2st)^2 - 4(s^2)(t^2) = 0.$

Therefore, to make the right side of equation (3) into a perfect square, the following equation must be solved:

$(-b)^2 - 4(2y + a)(y^2 + 2ya + a^2 - c) = 0.$
Multiply the binomial by the polynomial:

$b^2 - 4\left(2y^3 + 5a y^2 + (4a^2 - 2c) y + (a^3 - ac)\right) = 0.$

Divide both sides by −4, and move the $-\frac{b^2}{4}$ to the right:

$2y^3 + 5a y^2 + (4a^2 - 2c) y + \left(a^3 - ac - \frac{b^2}{4}\right) = 0.$

Divide both sides by 2:

$y^3 + \frac{5a}{2} y^2 + (2a^2 - c) y + \left(\frac{a^3 - ac}{2} - \frac{b^2}{8}\right) = 0. \qquad (4)$

This is a cubic equation in $y$. Solve for $y$ using any method for solving such equations (e.g. conversion to a reduced cubic and application of Cardano's formula). Any of the three possible roots will do.

==== Folding the second perfect square ====

With the value for $y$ so selected, it is now known that the right side of equation (3) is a perfect square of the form

$(s^2) u^2 + (2st) u + (t^2) = \left(\sqrt{s^2}\; u + \frac{2st}{2\sqrt{s^2}}\right)^2$

(This is correct for both signs of square root, as long as the same sign is taken for both square roots. A ± is redundant, as it would be absorbed by another ± a few equations further down.) so that it can be folded:

$(a + 2y) u^2 + (-b) u + \left(y^2 + 2ya + a^2 - c\right) = \left(\sqrt{a + 2y}\; u + \frac{-b}{2\sqrt{a + 2y}}\right)^2.$

Note: If $b \neq 0$, then $a + 2y \neq 0$. If $b = 0$, then this would be a biquadratic equation, which we solved earlier. Therefore equation (3) becomes

$(u^2 + a + y)^2 = \left(\sqrt{a + 2y}\; u - \frac{b}{2\sqrt{a + 2y}}\right)^2. \qquad (5)$

Equation (5) has a pair of folded perfect squares, one on each side of the equation. The two perfect squares balance each other. If two squares are equal, then the sides of the two squares are also equal, as shown by

$u^2 + a + y = \pm_s \left(\sqrt{a + 2y}\; u - \frac{b}{2\sqrt{a + 2y}}\right). \qquad (5')$

Collecting like powers of $u$ produces

$u^2 + \left(\mp_s \sqrt{a + 2y}\right) u + \left(a + y \pm_s \frac{b}{2\sqrt{a + 2y}}\right) = 0. \qquad (6)$

Note: The subscript $s$ of $\pm_s$ and $\mp_s$ is to note that they are dependent.

Equation (6) is a quadratic equation for $u$. Its solution is

$u = \frac{\pm_s \sqrt{a + 2y} \pm_t \sqrt{(a + 2y) - 4\left(a + y \pm_s \frac{b}{2\sqrt{a + 2y}}\right)}}{2}.$
Simplifying, one gets

$u = \frac{\pm_s \sqrt{a + 2y} \pm_t \sqrt{-\left(3a + 2y \pm_s \frac{2b}{\sqrt{a + 2y}}\right)}}{2}.$

This is the solution of the depressed quartic, therefore the solutions of the original quartic equation are

$x = -\frac{B}{4A} + \frac{\pm_s \sqrt{a + 2y} \pm_t \sqrt{-\left(3a + 2y \pm_s \frac{2b}{\sqrt{a + 2y}}\right)}}{2}.$

Remember: the two $\pm_s$ come from the same place in equation (5') and should both have the same sign, while the sign of $\pm_t$ is independent.

==== Summary of Ferrari's method ====

Given the quartic equation

$A x^4 + B x^3 + C x^2 + D x + E = 0,$

its solution can be found by means of the following calculations:

$a = -\frac{3B^2}{8A^2} + \frac{C}{A},$

$b = \frac{B^3}{8A^3} - \frac{BC}{2A^2} + \frac{D}{A},$

$c = -\frac{3B^4}{256A^4} + \frac{CB^2}{16A^3} - \frac{BD}{4A^2} + \frac{E}{A}.$

If $b = 0$, then

$x = -\frac{B}{4A} \pm_s \sqrt{\frac{-a \pm_t \sqrt{a^2 - 4c}}{2}} \qquad \text{(for } b = 0 \text{ only)}.$

Otherwise, continue with

$P = -\frac{a^2}{12} - c,$

$Q = -\frac{a^3}{108} + \frac{ac}{3} - \frac{b^2}{8},$

$R = -\frac{Q}{2} \pm \sqrt{\frac{Q^2}{4} + \frac{P^3}{27}}$ (either sign of the square root will do),

$U = \sqrt[3]{R}$ (there are 3 complex roots, any one of them will do),

$y = -\frac{5}{6} a + \begin{cases} -\sqrt[3]{Q} & \text{if } U = 0, \\ U - \dfrac{P}{3U} & \text{if } U \neq 0, \end{cases}$

$W = \sqrt{a + 2y},$

$x = -\frac{B}{4A} + \frac{\pm_s W \pm_t \sqrt{-\left(3a + 2y \pm_s \frac{2b}{W}\right)}}{2}.$

The two $\pm_s$ must have the same sign; the $\pm_t$ is independent. To get all roots, compute $x$ for $(\pm_s, \pm_t) = (+,+),\ (+,-),\ (-,+),\ (-,-)$. This formula handles repeated roots without problem.
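The summary translates almost line by line into code. A hedged sketch (the function name is ours), demonstrated on the historical equation $x^4 + 6x^2 - 60x + 36 = 0$ discussed just below:

```python
import cmath

def ferrari(A, B, C, D, E):
    """Roots of A x^4 + B x^3 + C x^2 + D x + E = 0 via the summary above."""
    a = -3 * B**2 / (8 * A**2) + C / A
    b = B**3 / (8 * A**3) - B * C / (2 * A**2) + D / A
    c = (-3 * B**4 / (256 * A**4) + C * B**2 / (16 * A**3)
         - B * D / (4 * A**2) + E / A)
    shift = -B / (4 * A)

    if b == 0:  # biquadratic special case
        return [shift + s * cmath.sqrt((-a + t * cmath.sqrt(a * a - 4 * c)) / 2)
                for s in (1, -1) for t in (1, -1)]

    P = -a * a / 12 - c
    Q = -a**3 / 108 + a * c / 3 - b * b / 8
    R = -Q / 2 + cmath.sqrt(Q * Q / 4 + P**3 / 27)  # either sign works
    U = R ** (1 / 3)                                # any cube root works
    y = -5 * a / 6 + (-(complex(Q) ** (1 / 3)) if U == 0 else U - P / (3 * U))
    W = cmath.sqrt(a + 2 * y)

    return [shift + (s * W + t * cmath.sqrt(-(3 * a + 2 * y + s * 2 * b / W))) / 2
            for s in (1, -1) for t in (1, -1)]

for root in ferrari(1, 0, 6, -60, 36):
    print(root)   # two real roots (about 0.644 and 3.100) and a conjugate pair
```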
Ferrari was the first to discover one of these labyrinthine solutions. The equation which he solved was

$x^4 + 6x^2 - 60x + 36 = 0,$

which was already in depressed form. It has a pair of real solutions, which can be found with the set of formulas shown above.

=== Ferrari's solution in the special case of real coefficients ===

If the coefficients of the quartic equation are real, then the nested depressed cubic equation, with $P$ and $Q$ as given in the summary above, also has real coefficients, and thus it has at least one real root. Furthermore the cubic function

$C(v) = v^3 + P v + Q$

has the properties that

$C\left(\frac{a}{3}\right) = \frac{-b^2}{8} < 0 \qquad \text{and} \qquad \lim_{v \to \infty} C(v) = \infty,$

where $a$ and $b$ are given by (1). This means that the cubic has a real root greater than $\frac{a}{3}$, and therefore that the resolvent cubic (4) has a real root greater than $-\frac{a}{2}$ (since $y = v - \frac{5}{6}a$ only enters after the shift used in the summary). Using this root, the term $\sqrt{a + 2y}$ in (6) is always real, which ensures that the two quadratic equations (6) have real coefficients.

=== Obtaining alternative solutions the hard way ===

It could happen that one obtained only one solution through the formulas above, because not all four sign patterns were tried for four solutions, and the solution obtained is complex. It may also be the case that one is only looking for a real solution. Let $x_1$ denote the complex solution. If all the original coefficients $A$, $B$, $C$, $D$ and $E$ are real (which should be the case when one desires only real solutions), then there is another complex solution $x_2$, which is the complex conjugate of $x_1$. If the other two roots are denoted as $x_3$ and $x_4$, then the quartic equation can be expressed as

$(x - x_1)(x - x_2)(x - x_3)(x - x_4) = 0,$

but this quartic equation is equivalent to the product of two quadratic equations:

$(x - x_1)(x - x_2) = 0 \qquad (9)$

and

$(x - x_3)(x - x_4) = 0. \qquad (10)$

Since $x_2 = x_1^\star$, then

$(x - x_1)(x - x_2) = x^2 - (x_1 + x_1^\star) x + x_1 x_1^\star = x^2 - 2\operatorname{Re}(x_1)\, x + [\operatorname{Re}(x_1)]^2 + [\operatorname{Im}(x_1)]^2.$
Let

$a = -2\operatorname{Re}(x_1), \qquad b = [\operatorname{Re}(x_1)]^2 + [\operatorname{Im}(x_1)]^2,$

so that equation (9) becomes

$x^2 + a x + b = 0. \qquad (11)$

Also let there be (unknown) variables $w$ and $v$ such that equation (10) becomes

$x^2 + w x + v = 0. \qquad (12)$

Multiplying equations (11) and (12) produces

$x^4 + (a + w) x^3 + (b + wa + v) x^2 + (wb + va) x + vb = 0. \qquad (13)$

Comparing equation (13) to the original quartic equation, it can be seen that

$a + w = \frac{B}{A}, \qquad b + wa + v = \frac{C}{A}, \qquad wb + va = \frac{D}{A}, \qquad vb = \frac{E}{A}.$

Therefore

$w = \frac{B}{A} - a = \frac{B}{A} + 2\operatorname{Re}(x_1), \qquad v = \frac{E}{Ab} = \frac{E}{A\left([\operatorname{Re}(x_1)]^2 + [\operatorname{Im}(x_1)]^2\right)}.$

Equation (12) can be solved for $x$, yielding

$x_3 = \frac{-w + \sqrt{w^2 - 4v}}{2}, \qquad x_4 = \frac{-w - \sqrt{w^2 - 4v}}{2}.$

One of these two solutions should be the desired real solution.

== Alternative methods ==

=== Quick and memorable solution from first principles ===

Most textbook solutions of the quartic equation require a substitution that is hard to memorize. Here is an approach that makes it easy to understand. The job is done if we can factor the quartic equation into a product of two quadratics. Let

$0 = x^4 + b x^3 + c x^2 + d x + e = \left(x^2 + p x + q\right)\left(x^2 + r x + s\right) = x^4 + (p + r) x^3 + (q + s + pr) x^2 + (ps + qr) x + qs.$

By equating coefficients, this results in the following set of simultaneous equations:

$b = p + r, \qquad c = q + s + pr, \qquad d = ps + qr, \qquad e = qs.$

This is harder to solve than it looks, but if we start again with a depressed quartic where $b = 0$, which can be obtained by substituting $(x - b/4)$ for $x$, then $r = -p$, and:
$c + p^2 = s + q, \qquad d/p = s - q, \qquad e = sq.$

It is now easy to eliminate both $s$ and $q$ by doing the following:

$\left(c + p^2\right)^2 - (d/p)^2 = (s + q)^2 - (s - q)^2 = 4sq = 4e.$

If we set $P = p^2$, then this equation turns into the cubic equation

$P^3 + 2c P^2 + \left(c^2 - 4e\right) P - d^2 = 0,$

which is solved elsewhere. Once you have $p$, then:

$r = -p, \qquad 2s = c + p^2 + d/p, \qquad 2q = c + p^2 - d/p.$

The symmetries in this solution are easy to see. There are three roots of the cubic, corresponding to the three ways that a quartic can be factored into two quadratics, and choosing positive or negative values of $p$ for the square root of $P$ merely exchanges the two quadratics with one another.
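A sketch of this factorization, assuming NumPy for the cubic (`numpy.roots`); all names are ours:

```python
import numpy as np

def factor_depressed_quartic(c, d, e):
    """Split x^4 + c x^2 + d x + e into (x^2 + p x + q)(x^2 - p x + s)."""
    # Resolvent cubic P^3 + 2c P^2 + (c^2 - 4e) P - d^2 = 0, with P = p^2.
    P_roots = np.roots([1, 2 * c, c * c - 4 * e, -d * d])
    # Any nonzero root works; the largest keeps the division by p stable.
    P = max(P_roots, key=lambda z: abs(z))
    p = np.sqrt(P)
    s = (c + p * p + d / p) / 2
    q = (c + p * p - d / p) / 2
    return p, q, s

# x^4 - 7x^2 + 6x has roots 0, 1, 2, -3.
p, q, s = factor_depressed_quartic(-7.0, 6.0, 0.0)
print(p, q, s)                                 # p = 3, q = 0, s = 2
print(np.roots([1, p, q]), np.roots([1, -p, s]))  # the four quartic roots
```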
=== Möbius transformation method ===

A suitably chosen Möbius transformation can transform a quartic equation into a quadratic equation in the square of the new variable. This is a known method: finding such a Möbius transformation involves solving a cubic equation, so the quartic is reduced to a cubic together with quadratics. For example, start with the depressed quartic equation with unity leading coefficient and with neither $a_1$ nor $a_0$ equal to zero,

$x^4 + a_2 x^2 + a_1 x + a_0 = 0,$

and apply the Möbius transformation

$x = \frac{A + B y}{1 + y}.$

Set the first- and third-order coefficients of the resulting quartic equation in $y$ to zero. After some algebra, one finds that $A + B$ is to be obtained from the cubic equation

$a_1 (A + B)^3 + \left(4a_0 - 2a_1 a_2 - a_2^2\right)(A + B)^2 - 2a_1 a_2 (A + B) - a_1^2 = 0,$

and, regarding $A + B$ as known, $A$ is to be obtained from the quadratic equation

$2(A + B) A^2 - 2(A + B)^2 A - a_2 (A + B) - a_1 = 0.$

Solving the resulting quadratic equation for $y^2$ gives two values for $y^2$, and each square root of $y^2$ has two values, giving a total of four solutions, as expected. The cubic equation in $A + B$ given earlier is the same as $P^2 - Q(A + B)^2 = 0$, where

$P \equiv \frac{b_1 - b_3}{2(A - B)} = 2AB(A + B) + a_2 (A + B) + a_1, \qquad Q \equiv \frac{B b_1 - A b_3}{A - B} = 4A^2 B^2 - a_1 (A + B) - 4a_0 = 0.$

Here the $b_i$ are the coefficients of the quartic polynomial in $y$. This shows how this equation was obtained.

=== Galois theory and factorization ===

The symmetric group $S_4$ on four elements has the Klein four-group as a normal subgroup. This suggests using a resolvent whose roots may be variously described as a discrete Fourier transform or a Hadamard matrix transform of the roots. Suppose $r_i$ for $i$ from 0 to 3 are roots of

$x^4 + b x^3 + c x^2 + d x + e = 0. \qquad (1)$

If we now set

$s_0 = \tfrac{1}{2}(r_0 + r_1 + r_2 + r_3),$

$s_1 = \tfrac{1}{2}(r_0 - r_1 + r_2 - r_3),$

$s_2 = \tfrac{1}{2}(r_0 + r_1 - r_2 - r_3),$

$s_3 = \tfrac{1}{2}(r_0 - r_1 - r_2 + r_3),$

then, since the transformation is an involution, we may express the roots in terms of the four $s_i$ in exactly the same way. Since we know the value $s_0 = -b/2$, we really only need the values for $s_1$, $s_2$ and $s_3$. These we may find by expanding the polynomial

$\left(z^2 - s_1^2\right)\left(z^2 - s_2^2\right)\left(z^2 - s_3^2\right), \qquad (2)$

which, if we make the simplifying assumption that $b = 0$, is equal to

$z^6 + 2c z^4 + \left(c^2 - 4e\right) z^2 - d^2. \qquad (3)$

This polynomial is of degree six, but only of degree three in $z^2$, and so the corresponding equation is solvable. By trial we can determine which three roots are the correct ones, and hence find the solutions of the quartic.

We can remove any requirement for trial by using a root of the same resolvent polynomial for factoring; if $w$ is any root of (3), and if

$F_1 = x^2 + w x + \frac{1}{2} w^2 + \frac{1}{2} c - \frac{1}{2} \cdot \frac{c^2 w}{d} - \frac{1}{2} \cdot \frac{w^5}{d} - \frac{c w^3}{d} + 2\frac{e w}{d},$

$F_2 = x^2 - w x + \frac{1}{2} w^2 + \frac{1}{2} c + \frac{1}{2} \cdot \frac{w^5}{d} + \frac{c w^3}{d} - 2\frac{e w}{d} + \frac{1}{2} \cdot \frac{c^2 w}{d},$

then

$F_1 F_2 = x^4 + c x^2 + d x + e. \qquad (4)$

We therefore can solve the quartic by solving for $w$ and then solving for the roots of the two factors using the quadratic formula.

=== Approximate methods ===

The methods described above are, in principle, exact root-finding methods. It is also possible to use successive approximation methods which iteratively converge towards the roots, such as the Durand–Kerner method. Iterative methods are the only ones available for quintic and higher-order equations, beyond trivial or special cases.
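As an illustration of such an iterative scheme, here is a minimal Durand–Kerner sketch; the function name and the conventional starting points $(0.4 + 0.9i)^k$ are our choices, not a library API:

```python
def durand_kerner(coeffs, iterations=100):
    """Simultaneously approximate all roots of a monic polynomial.

    coeffs: the non-leading coefficients, e.g. [c3, c2, c1, c0]
    for x^4 + c3 x^3 + c2 x^2 + c1 x + c0.
    """
    n = len(coeffs)

    def f(x):
        # Horner evaluation of the monic polynomial.
        result = 1
        for c in coeffs:
            result = result * x + c
        return result

    # Distinct non-real starting points spread around the origin.
    guesses = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        new = []
        for i, g in enumerate(guesses):
            denom = 1
            for j, h in enumerate(guesses):
                if i != j:
                    denom *= g - h
            new.append(g - f(g) / denom)
        guesses = new
    return guesses

# x^4 - 10x^3 + 35x^2 - 50x + 24 = (x-1)(x-2)(x-3)(x-4).
print(durand_kerner([-10, 35, -50, 24]))
```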
== See also ==

Linear equation
Quadratic equation
Cubic equation
Quintic equation
Polynomial
Newton's method
Principal equation form
Commutative algebra, first known as ideal theory, is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers $\mathbb{Z}$; and $p$-adic integers.

Commutative algebra is the main technical tool of algebraic geometry, and many results and concepts of commutative algebra are strongly related to geometrical concepts. The study of rings that are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras.

== Overview ==

Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry. Several concepts of commutative algebra have been developed in relation with algebraic number theory, such as Dedekind rings (the main class of commutative rings occurring in algebraic number theory), integral extensions, and valuation rings.

Polynomial rings in several indeterminates over a field are examples of commutative rings. Since algebraic geometry is fundamentally the study of the common zeros of polynomials in these rings, many results and concepts of algebraic geometry have counterparts in commutative algebra, and their names often recall their geometric origin; for example "Krull dimension", "localization of a ring", "local ring", "regular ring".

An affine algebraic variety corresponds to a prime ideal in a polynomial ring, and the points of such an affine variety correspond to the maximal ideals that contain this prime ideal. The Zariski topology, originally defined on an algebraic variety, has been extended to the sets of the prime ideals of any commutative ring; for this topology, the closed sets are the sets of prime ideals that contain a given ideal. The spectrum of a ring is a ringed space formed by the prime ideals equipped with the Zariski topology, and the localizations of the ring at the open sets of a basis of this topology. This is the starting point of scheme theory, a generalization of algebraic geometry introduced by Grothendieck, which is strongly based on commutative algebra, and has induced, in turn, many developments of commutative algebra.

== History ==

The subject, first known as ideal theory, began with Richard Dedekind's work on ideals, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a more abstract approach to replace the more concrete and computationally oriented methods grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly influenced Emmy Noether, who recast many earlier results in terms of an ascending chain condition, now known as the Noetherian condition. Another important milestone was the work of Hilbert's student Emanuel Lasker, who introduced primary ideals and proved the first version of the Lasker–Noether theorem.

The main figure responsible for the birth of commutative algebra as a mature subject was Wolfgang Krull, who introduced the fundamental notions of localization and completion of a ring, as well as that
of regular local rings. He established the concept of the Krull dimension of a ring, first for Noetherian rings before moving on to expand his theory to cover general valuation rings and Krull rings. To this day, Krull's principal ideal theorem is widely considered the single most important foundational theorem in commutative algebra. These results paved the way for the introduction of commutative algebra into algebraic geometry, an idea which would revolutionize the latter subject.

Much of the modern development of commutative algebra emphasizes modules. Both ideals of a ring R and R-algebras are special cases of R-modules, so module theory encompasses both ideal theory and the theory of ring extensions. Though it was already incipient in Kronecker's work, the modern approach to commutative algebra using module theory is usually credited to Krull and Noether.

== Main tools and results ==

=== Noetherian rings ===

A Noetherian ring, named after Emmy Noether, is a ring in which every ideal is finitely generated; that is, all elements of any ideal can be written as linear combinations of a finite set of elements, with coefficients in the ring. Many commonly considered commutative rings are Noetherian, in particular every field, the ring of integers, and every polynomial ring in one or several indeterminates over them. The fact that polynomial rings over a field are Noetherian is called Hilbert's basis theorem.

Moreover, many ring constructions preserve the Noetherian property. In particular, if a commutative ring R is Noetherian, the same is true for every polynomial ring over it, and for every quotient ring, localization, or completion of the ring.

The importance of the Noetherian property lies in its ubiquity, and also in the fact that many important theorems of commutative algebra require that the involved rings are Noetherian; this is the case, in particular, of the Lasker–Noether theorem, the Krull intersection theorem, and Nakayama's lemma. Furthermore, if a ring is Noetherian, then it satisfies the descending chain condition on prime ideals, which implies that every Noetherian local ring has a finite Krull dimension.

=== Primary decomposition ===

An ideal Q of a ring is said to be primary if Q is proper and whenever $xy \in Q$, either $x \in Q$ or $y^n \in Q$ for some positive integer $n$. In $\mathbb{Z}$, the primary ideals are precisely the ideals of the form $(p^e)$, where $p$ is prime and $e$ is a positive integer. Thus, a primary decomposition of $(n)$ corresponds to representing $(n)$ as the intersection of finitely many primary ideals.

The Lasker–Noether theorem may be seen as a certain generalization of the fundamental theorem of arithmetic: for any primary decomposition of an ideal $I$, the set of all radicals, that is, the set $\{\operatorname{Rad}(Q_1), \ldots, \operatorname{Rad}(Q_t)\}$, remains the same. In fact, it turns out that (for a Noetherian ring) this set is precisely the assassinator of the module $R/I$; that is, the set of all annihilators of $R/I$ (viewed as a module over $R$) that are prime.
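A concrete sketch of this correspondence in $\mathbb{Z}$, assuming SymPy's `factorint` (the function name is ours): the primary components of $(n)$ are the prime-power ideals $(p^e)$ read off from the factorization of $n$.

```python
from sympy import factorint

def primary_decomposition_in_Z(n):
    """Generators p**e of the primary components of the ideal (n) in Z."""
    return [p**e for p, e in factorint(n).items()]

# (360) = (8) ∩ (9) ∩ (5), since 360 = 2^3 * 3^2 * 5.
print(primary_decomposition_in_Z(360))   # [8, 9, 5]
```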
=== Localization ===

Localization is a formal way to introduce "denominators" into a given ring or module. That is, it produces a new ring or module out of an existing one, consisting of fractions $\frac{m}{s}$, where the denominators $s$ range over a given subset $S$ of $R$. The archetypal example is the construction of the ring $\mathbb{Q}$ of rational numbers from the ring $\mathbb{Z}$ of integers.

=== Completion ===

A completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them.

=== Zariski topology on prime ideals ===

The Zariski topology defines a topology on the spectrum of a ring (the set of prime ideals). In this formulation, the Zariski-closed sets are taken to be the sets

$V(I) = \{ P \in \operatorname{Spec}(A) \mid I \subseteq P \},$

where $A$ is a fixed commutative ring and $I$ is an ideal. This is defined in analogy with the classical Zariski topology, where the closed sets in affine space are those defined by polynomial equations. To see the connection with the classical picture, note that for any set $S$ of polynomials (over an algebraically closed field), it follows from Hilbert's Nullstellensatz that the points of $V(S)$ (in the old sense) are exactly the tuples $(a_1, \ldots, a_n)$ such that the ideal $(x_1 - a_1, \ldots, x_n - a_n)$ contains $S$; moreover, these are maximal ideals, and by the "weak" Nullstellensatz, an ideal of an affine coordinate ring is maximal if and only if it is of this form. Thus, $V(S)$ is "the same as" the maximal ideals containing $S$. Grothendieck's innovation in defining Spec was to replace maximal ideals with all prime ideals; in this formulation it is natural to simply generalize this observation to the definition of a closed set in the spectrum of a ring.
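As a toy illustration in $\operatorname{Spec}\mathbb{Z}$ (assuming SymPy's `primerange`; the function name and the search bound are ours): the nonzero prime ideals $(p)$ containing $(n)$ correspond exactly to the prime divisors of $n$, so $V((n))$ can be enumerated by a divisibility test.

```python
from sympy import primerange

def V(n, bound=100):
    """Primes p < bound with (n) contained in (p), i.e. p divides n:
    the nonzero points of the closed set V((n)) in Spec Z."""
    return [p for p in primerange(2, bound) if n % p == 0]

print(V(60))   # [2, 3, 5]: the prime ideals (2), (3), (5) contain (60)
```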
== Connections with algebraic geometry ==

Commutative algebra (in the form of polynomial rings and their quotients, used in the definition of algebraic varieties) has always been a part of algebraic geometry. However, in the late 1950s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces, which form a category that is antiequivalent (dual) to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field $k$ and the category of finitely generated reduced $k$-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Zariski topology in the sense of Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology and the two flat Grothendieck topologies, fppf and fpqc. Nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can furthermore be generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions, leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks.

== See also ==

List of commutative algebra topics
Glossary of commutative algebra
Combinatorial commutative algebra
Gröbner basis
Homological algebra
An independent equation is an equation in a system of simultaneous equations which cannot be derived algebraically from the other equations. The concept typically arises in the context of linear equations. If it is possible to duplicate one of the equations in a system by multiplying each of the other equations by some number (potentially a different number for each equation) and summing the resulting equations, then that equation is dependent on the others; if this is not possible, then that equation is independent of the others.

If an equation is independent of the other equations in its system, then it provides information beyond that which is provided by the other equations. In contrast, if an equation is dependent on the others, then it provides no information not contained in the others collectively, and the equation can be dropped from the system without any loss of information.

The number of independent equations in a system equals the rank of the augmented matrix of the system, that is, the system's coefficient matrix with one additional column appended, that column being the column vector of constants. The number of independent equations in a system of consistent equations (a system that has at least one solution) can never be greater than the number of unknowns; equivalently, if a system has more independent equations than unknowns, it is inconsistent and has no solutions.

The concepts of dependence and independence of systems are partially generalized in numerical linear algebra by the condition number, which (roughly) measures how close a system of equations is to being dependent: a dependent system has an infinite condition number, while a system of orthogonal equations is maximally independent and has a condition number close to 1.

== See also ==

Linear algebra
Indeterminate system
Independent variable
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, linguistics, and computer science, for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.

There are four types of coding:

Data compression (or source coding)
Error control (or channel coding)
Cryptographic coding
Line coding

Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller, for purposes such as reducing Internet traffic. Data compression and error correction may be studied in combination.

Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust; in this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high-frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes.

== History of coding theory ==

In 1948, Claude Shannon published "A Mathematical Theory of Communication". The paper focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory.

The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word and detecting a fourth.

Richard Hamming won the Turing Award in 1968 for his work at Bell Labs in numerical methods, automatic coding systems, and error-detecting and error-correcting codes. He invented the concepts known as Hamming codes, Hamming windows, Hamming numbers, and Hamming distance.

In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973. The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3.

== Source coding ==

The aim of source coding is to take the source data and make it smaller.

=== Definition ===

Data can be seen as a random variable $X : \Omega \to \mathcal{X}$, where $x \in \mathcal{X}$ appears with probability $\mathbb{P}[X = x]$. Data are encoded by strings (words) over an alphabet $\Sigma$. A code is a function $C : \mathcal{X} \to \Sigma^*$ (or $\Sigma^+$ if the empty string is not part of the alphabet). $C(x)$ is the code word associated with $x$, and the length of the code word is written as $l(C(x))$. The expected length of a code is

$l(C) = \sum_{x \in \mathcal{X}} l(C(x))\, \mathbb{P}[X = x].$

The concatenation of code words is $C(x_1, \ldots, x_k) = C(x_1) C(x_2) \cdots C(x_k)$. The code word of the empty string is the empty string itself: $C(\epsilon) = \epsilon$.
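A minimal sketch of these definitions (the source symbols, their probabilities, and the code words are our own toy choices):

```python
# A toy binary code over the alphabet {0, 1}.
probs = {'a': 0.5, 'b': 0.25, 'c': 0.25}
code = {'a': '0', 'b': '10', 'c': '11'}

# Expected length l(C) = sum over x of l(C(x)) * P[X = x].
expected_length = sum(len(code[x]) * p for x, p in probs.items())
print(expected_length)   # 1.5 bits per symbol

# The code is instantaneous (prefix-free): no code word is a proper
# prefix of another, so decoding can proceed symbol by symbol.
words = list(code.values())
prefix_free = not any(u != v and v.startswith(u) for u in words for v in words)
print(prefix_free)       # True
```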
=== Properties ===

$C : \mathcal{X} \to \Sigma^*$ is non-singular if it is injective.
$C : \mathcal{X}^* \to \Sigma^*$ is uniquely decodable if it is injective.
$C : \mathcal{X} \to \Sigma^*$ is instantaneous if $C(x_1)$ is not a proper prefix of $C(x_2)$ (and vice versa).

=== Principle ===

Entropy of a source is the measure of information. Basically, source codes try to reduce the redundancy present in the source, and represent the source with fewer bits that carry more information. Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is called entropy encoding. Various techniques used by source coding schemes try to achieve the limit of entropy of the source: $C(x) \geq H(x)$, where $H(x)$ is the entropy of the source (bitrate) and $C(x)$ is the bitrate after compression. In particular, no source coding scheme can be better than the entropy of the source.

=== Example ===

Facsimile transmission uses a simple run-length code. Source coding removes all data superfluous to the needs of the transmitter, decreasing the bandwidth required for transmission.

== Channel coding ==

The purpose of channel coding theory is to find codes which transmit quickly, contain many valid code words and can correct or at least detect many errors. While not mutually exclusive, performance in these areas is a trade-off, so different codes are optimal for different applications. The needed properties of a code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches. CDs use cross-interleaved Reed–Solomon coding to spread the data out over the disc.
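As a minimal channel-coding illustration (a 3-fold repetition code, one of the simplest error-correcting codes; all names are ours), redundancy is added so that a single flipped bit per block is corrected by majority vote:

```python
def encode(bits):
    """Repeat each bit three times: 1 data bit -> 3 channel bits."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each block of three corrects any single flip."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                    # the channel flips one bit
print(decode(sent) == message)  # True: the error was corrected
```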