https://en.wikipedia.org/wiki/Torus
In geometry, a torus (plural: tori or toruses) is a surface of revolution generated by revolving a circle in three-dimensional space one full revolution about an axis that is coplanar with the circle. The main types of toruses include ring toruses, horn toruses, and spindle toruses. A ring torus is sometimes colloquially referred to as a donut or doughnut. If the axis of revolution does not touch the circle, the surface has a ring shape and is called a torus of revolution, also known as a ring torus. If the axis of revolution is tangent to the circle, the surface is a horn torus. If the axis of revolution passes twice through the circle, the surface is a spindle torus (or self-crossing torus or self-intersecting torus). If the axis of revolution passes through the center of the circle, the surface is a degenerate torus, a double-covered sphere. If the revolved curve is not a circle, the surface is called a toroid, as in a square toroid. Real-world objects that approximate a torus of revolution include swim rings, inner tubes and ringette rings. A torus should not be confused with a solid torus, which is formed by rotating a disk, rather than a circle, around an axis. A solid torus is a torus plus the volume inside the torus. Real-world objects that approximate a solid torus include O-rings, non-inflatable lifebuoys, ring doughnuts, and bagels. In topology, a ring torus is homeomorphic to the Cartesian product of two circles, S¹ × S¹, and the latter is taken to be the definition in that context. It is a compact 2-manifold of genus 1. The ring torus is one way to embed this space into Euclidean space, but another way to do this is the Cartesian product of the embedding of S¹ in the plane with itself. This produces a geometric object called the Clifford torus, a surface in 4-space. In the field of topology, a torus is any topological space that is homeomorphic to a torus. The surface of a coffee cup and a doughnut are both topological tori with genus one. 
An example of a torus can be constructed by taking a rectangular strip of flexible material such as rubber, and joining the top edge to the bottom edge, and the left edge to the right edge, without any half-twists (compare Möbius strip). Etymology: mid 16th century (in torus (sense 2)): from Latin, literally 'swelling, bolster, round molding'. The other senses date from the 19th century. Geometry A torus can be parametrized as x(θ, φ) = (R + r cos θ) cos φ, y(θ, φ) = (R + r cos θ) sin φ, z(θ, φ) = r sin θ, using angular coordinates θ and φ representing rotation around the tube and rotation around the torus' axis of revolution, respectively, where the major radius R is the distance from the center of the tube to the center of the torus and the minor radius r is the radius of the tube. The ratio R/r is called the aspect ratio of the torus. The typical doughnut confectionery has an aspect ratio of about 3 to 2. An implicit equation in Cartesian coordinates for a torus radially symmetric about the z-axis is (√(x² + y²) − R)² + z² = r². Algebraically eliminating the square root gives a quartic equation, (x² + y² + z² + R² − r²)² = 4R²(x² + y²). The three classes o
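As a quick numerical check, the parametrization and the implicit equation should agree: every point produced by the angular formulas must satisfy the Cartesian equation. A short Python sketch of this (function and variable names are mine, not from the article):

```python
import math

def torus_point(R, r, u, v):
    """Point on a torus: R is the major radius, r the minor radius.

    u is the angle around the tube, v the angle around the axis of
    revolution (the z-axis). Standard parametrization.
    """
    x = (R + r * math.cos(u)) * math.cos(v)
    y = (R + r * math.cos(u)) * math.sin(v)
    z = r * math.sin(u)
    return x, y, z

def implicit(R, r, x, y, z):
    """Left-hand side of (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2 = 0."""
    return (math.hypot(x, y) - R) ** 2 + z ** 2 - r ** 2

R, r = 3.0, 2.0  # aspect ratio R/r = 3/2, like a typical doughnut
for u in (0.0, 1.0, 2.5):
    for v in (0.0, 0.7, 3.1):
        x, y, z = torus_point(R, r, u, v)
        assert abs(implicit(R, r, x, y, z)) < 1e-9  # point lies on the torus
```

Since R > r here, this is a ring torus; setting R = r would give a horn torus, and R < r a spindle torus.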
https://en.wikipedia.org/wiki/Poker%20probability
In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. History Probability and gambling have been linked since long before the invention of poker. The development of probability theory in the late 1400s was attributed to gambling; when playing a game with high stakes, players wanted to know what the chance of winning would be. In 1494, Fra Luca Paccioli released his work, which was the first written text on probability. Motivated by Paccioli's work, Girolamo Cardano (1501-1576) made further developments in probability theory. His work from 1550, titled Liber de Ludo Aleae, discussed the concepts of probability and how they were directly related to gambling. However, his work did not receive any immediate recognition since it was not published until after his death. Blaise Pascal (1623-1662) also contributed to probability theory. His friend, Chevalier de Méré, was an avid gambler with the goal of becoming wealthy from it. De Méré tried a new mathematical approach to a gambling game but did not get the desired results. Determined to know why his strategy was unsuccessful, he consulted with Pascal. Pascal's work on this problem began an important correspondence between him and fellow mathematician Pierre de Fermat (1601-1665). Communicating through letters, the two continued to exchange their ideas and thoughts. These interactions led to the conception of basic probability theory. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling. Frequencies 5-card poker hands In straight poker and five-card draw, where there are no hole cards, players are simply dealt five cards from a deck of 52. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. Wild cards are not considered. 
In this chart: Distinct hands is the number of different ways to draw the hand, not counting different suits. Frequency is the number of ways to draw the hand, including the same card values in different suits. The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; C(52, 5) = 2,598,960). For example, there are 4 different ways to draw a royal flush (one for each suit), so the probability is 4/2,598,960, or one in 649,740. One would then expect to draw this hand about once in every 649,740 draws, or nearly 0.000154% of the time. Cumulative probability refers to the probability of drawing a hand as good as or better than the specified one. For example, the probability of drawing three of a kind is approximately 2.11%, while the probability of drawing a hand at least as good as three of a kind is about 2.87%. The cumulative probability is determined by adding one hand's probability with the probabilities of all hands
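These frequencies follow from direct counting arguments, which can be sketched with Python's math.comb; the formulas below are standard derivations for a few hand types, not a reproduction of the chart (which is not included here):

```python
from math import comb

total_hands = comb(52, 5)  # the sample space: 2,598,960 five-card hands

royal_flush = 4                       # one per suit
straight_flush = 10 * 4 - 4           # 10 possible high cards per suit, minus royals
four_of_a_kind = 13 * 48              # rank of the quad times the remaining kicker
# three of a kind: rank of the triple, 3 of its 4 suits, 2 distinct kicker
# ranks, and 4 suit choices for each kicker
three_of_a_kind = 13 * comb(4, 3) * comb(12, 2) * 4 * 4

print(royal_flush / total_hands)      # ≈ 1 in 649,740, i.e. about 0.000154%
print(three_of_a_kind / total_hands)  # ≈ 0.0211, i.e. about 2.11%
```

The printed values match the probabilities quoted in the text above.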
https://en.wikipedia.org/wiki/Sum
Sum most commonly means the total of two or more numbers added together; see addition. Sum can also refer to: Mathematics Sum (category theory), the generic concept of summation in mathematics Sum, the result of summation, the addition of a sequence of numbers 3SUM, a term from computational complexity theory Band sum, a way of connecting mathematical knots Connected sum, a way of gluing manifolds Digit sum, in number theory Direct sum, a combination of algebraic objects Direct sum of groups Direct sum of modules Direct sum of permutations Direct sum of topological groups Einstein summation, a way of contracting tensor indices Empty sum, a sum with no terms Indefinite sum, the inverse of a finite difference Kronecker sum, an operation considered a kind of addition for matrices Matrix addition, in linear algebra Minkowski addition, a sum of two subsets of a vector space Power sum symmetric polynomial, in commutative algebra Prefix sum, in computing Pushout (category theory) (also called an amalgamated sum or a cocartesian square, fibered coproduct, or fibered sum), the colimit of a diagram consisting of two morphisms f : Z → X and g : Z → Y with a common domain, leading to a fibered sum in category theory QCD sum rules, in quantum field theory Riemann sum, in calculus Rule of sum, in combinatorics Subset sum problem, in cryptography Sum rule in differentiation, in calculus Sum rule in integration, in calculus Sum rule in quantum mechanics Wedge sum, a one-point union of topological spaces Whitney sum, of fiber bundles Zero-sum problem in combinatorics Computing and technology Sum (Unix), a program for generating checksums StartUp-Manager, a program to configure GRUB, GRUB2, Usplash and Splashy Sum type, a computer science term Art and entertainment Sum, the first beat (pronounced like "some") in any rhythmic cycle of Hindustani classical music "Sum", a song by Pink Floyd from The Endless River Sum: Forty Tales from the Afterlives, a 2009 collection of 
short stories by David Eagleman Sum 41, a Canadian punk band SUM, the computer in the novelette "Goat Song" by Poul Anderson, published in The Magazine of Fantasy and Science Fiction (1972) Organizations Senter for utvikling og miljø (Centre for Development and the Environment), a research institute which is part of the University of Oslo Soccer United Marketing, the for-profit marketing arm of Major League Soccer and the exclusive marketing partner of the United States Soccer Federation Society for the Establishment of Useful Manufactures, a now-defunct private state-sponsored corporation founded in 1791 to promote industrial development along the Passaic River in New Jersey in the United States The State University of Management, a Russian university Save Uganda Movement, a Ugandan militant opposition group Places Sum (administrative division), an administrative division in Mongolia, China, and some areas of Russia Sum (Mongolia), an administrativ
https://en.wikipedia.org/wiki/Line
Line most often refers to: Line (geometry), object that has zero thickness and curvature and stretches to infinity Telephone line, a single-user circuit on a telephone communication system Line, lines, The Line, or LINE may also refer to: Arts, entertainment, and media Films Lines (film), a 2016 Greek film The Line (2023 film), an American drama The Line (2022 film), a Swiss, French, and Belgian drama The Line (2017 film), a Slovak-Ukrainian crime film The Line (2009 film), an American action-crime film The Line, a 2009 independent film by Nancy Schwartzman Podcasts The Line (podcast), 2021 by Dan Taberski Literature Line (comics), a term to describe a subset of comic book series by a publisher Line (play), by Israel Horovitz, 1967 Line (poetry), the fundamental unit of poetic composition "Lines" (poem), an 1837 poem by Emily Brontë The Line (memoir), by Arch and Martin Flanagan The Line (play), by Timberlake Wertenbaker, 2009 Music Albums Lines (The Walker Brothers album), 1976 Lines (Pandelis Karayorgis album), 1995 Lines (Unthanks album), 2018 Songs "Line" (song), 2017 single by Triana Park "Lines" (song), 1976 single by The Walker Brothers "The Line" (Foo Fighters song), 2017 "The Line" (Lisa Stansfield song), 1997 "The Line" (Raye song), 2017 "Line", 2018 song by The Story So Far from Proper Dose "LINE", 2015 single by Sukima Switch "The Line", 2010 single by Battles "The Line", 2000 song by Sadist from Lego "The Line", 1995 song by Bruce Springsteen from The Ghost of Tom Joad Other uses in music Line (melody), a linear succession of musical tones that the listener perceives as a single entity Line (music) or part, a strand or melody of music played by an individual instrument or voice Television The Line (game show), a 2014 American game show The Line (TV series), a Canadian television series "The Line" (Heroes), a 2007 episode of the American television series Heroes The Line, a 2021 documentary series directed by Jeff Zimbalist and Doug Shultz Business 
Below the line, an advertising term Bottom line, or profit line, net profit Line function, primary business activity that negatively affects income if interrupted Line item, an accounting term Line of business, a product or set of related products Poverty line, an economics term Product lining, offering several related products for sale individually Computing and telecommunications Line (electrical engineering), any circuit or loop of an electrical system Line level, a common standard for audio signals Transmission line, a specialized structure designed to carry alternating current of radio frequency Line (text file), a row of characters as a unit of organization within text files Line (video), a measure of video display resolution or image resolution Line Corporation, a Tokyo-based technology company Line (software), a social messaging application operated by Line Corporation Military Land warfare Line (formation), standard t
https://en.wikipedia.org/wiki/Multistage%20sampling
In statistics, multistage sampling is the taking of samples in stages using smaller and smaller sampling units at each stage. Multistage sampling can be a complex form of cluster sampling because it is a type of sampling which involves dividing the population into groups (or clusters). Then, one or more clusters are chosen at random and everyone within the chosen cluster is sampled. Using all the sample elements in all the selected clusters may be prohibitively expensive or unnecessary. Under these circumstances, multistage cluster sampling becomes useful. Instead of using all the elements contained in the selected clusters, the researcher randomly selects elements from each cluster. Constructing the clusters is the first stage. Deciding what elements within the cluster to use is the second stage. The technique is used frequently when a complete list of all members of the population does not exist or would be impractical to construct. In some cases, several levels of cluster selection may be applied before the final sample elements are reached. For example, household surveys conducted by the Australian Bureau of Statistics begin by dividing metropolitan regions into 'collection districts' and selecting some of these collection districts (first stage). The selected collection districts are then divided into blocks, and blocks are chosen from within each selected collection district (second stage). Next, dwellings are listed within each selected block, and some of these dwellings are selected (third stage). This method makes it unnecessary to create a list of every dwelling in the region; such a list is needed only for the selected blocks. In remote areas, an additional stage of clustering is used, in order to reduce travel requirements. Although cluster sampling and stratified sampling bear some superficial similarities, they are substantially different. 
In stratified sampling, a random sample is drawn from all the strata, whereas in cluster sampling only the selected clusters are studied, either in single- or multi-stage. Advantages: the cost and speed with which the survey can be done; the convenience of finding the survey sample; normally more accurate than cluster sampling for a sample of the same size. Disadvantages: not as accurate as a simple random sample of the same size; further analysis and testing are more difficult.
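The two-stage procedure described above (choose clusters at random, then choose elements within each chosen cluster) can be sketched in a few lines of Python; the function name and toy data here are illustrative assumptions, not taken from any survey methodology library:

```python
import random

def two_stage_sample(clusters, n_clusters, n_per_cluster, seed=0):
    """Two-stage cluster sample.

    `clusters` maps a cluster id to its list of elements. Stage one picks
    `n_clusters` clusters at random; stage two picks up to `n_per_cluster`
    elements within each chosen cluster.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)        # first stage
    return {c: rng.sample(clusters[c], min(n_per_cluster, len(clusters[c])))
            for c in chosen}                                 # second stage

# Example: 6 "collection districts" of 5 households each; sample
# 2 districts, then 3 households within each selected district.
districts = {d: [f"{d}-h{i}" for i in range(5)] for d in "ABCDEF"}
sample = two_stage_sample(districts, n_clusters=2, n_per_cluster=3)
```

Note that only the two selected districts ever need their households listed, which is exactly the practical advantage described in the text.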
https://en.wikipedia.org/wiki/Julius%20Petersen
Julius Peter Christian Petersen (16 June 1839, Sorø, West Zealand – 5 August 1910, Copenhagen) was a Danish mathematician. His contributions to the field of mathematics led to the birth of graph theory. Biography Petersen's interests in mathematics were manifold, including: geometry, complex analysis, number theory, mathematical physics, mathematical economics, cryptography and graph theory. His famous paper Die Theorie der regulären graphs was a fundamental contribution to modern graph theory as we know it today. In 1898, he presented a counterexample to Tait's claimed theorem about 1-factorability of 3-regular graphs, which is nowadays known as the "Petersen graph". In cryptography and mathematical economics he made contributions which today are seen as pioneering. He published a systematic treatment of geometrical constructions (with straightedge and compass) in 1880. A French translation was reprinted in 1990. A special issue of Discrete Mathematics has been dedicated to the 150th birthday of Petersen. Petersen, as he claimed, had a very independent way of thinking. In order to preserve this independence he made a habit of reading as little as possible of other people's mathematics, pushing this to extremes. The consequences of his lack of knowledge of the literature of the time were severe: he spent a significant part of his time rediscovering already known results; in other cases, already existing results had to be removed from a submitted paper; and in more serious cases a paper did not get published at all. He started from very modest beginnings, and by hard work, some luck and some good connections, moved steadily upward to a station of considerable importance. In 1891 his work received royal recognition through the award of the Order of the Dannebrog. Among mathematicians he enjoyed an international reputation. 
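The Petersen graph itself is small enough to build and verify directly. One standard construction (a well-known fact about the graph, not stated in this article) realizes it as the Kneser graph K(5, 2), whose vertices are the 2-element subsets of a 5-element set, with edges between disjoint subsets:

```python
from itertools import combinations

# Vertices: the 10 two-element subsets of {0,...,4}.
vertices = [frozenset(p) for p in combinations(range(5), 2)]

# Edges: two vertices are adjacent iff their subsets are disjoint.
adj = {v: {w for w in vertices if not v & w} for v in vertices}

assert len(vertices) == 10
assert all(len(adj[v]) == 3 for v in vertices)        # 3-regular (cubic)
assert sum(len(adj[v]) for v in vertices) // 2 == 15  # 15 edges in total
```

Being 3-regular is what makes the graph relevant to Tait's claim about 1-factorability; the deeper fact that it admits no 1-factorization is not checked here.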
At his death –which was front-page news in Copenhagen– the socialist newspaper Social-Demokraten correctly sensed the popular appeal of his story: here was a kind of Hans Christian Andersen of science, a child of the people who had made good in the intellectual world. Early life and education Peter Christian Julius Petersen was born on the 16th of June 1839 in Sorø on Zealand. His parents were Jens Petersen (1803–1873), a dyer by profession, and Anna Cathrine Petersen (1813–1896), born Wiuff. He had two younger brothers, Hans Christian Rudolf Petersen (1844–1868) and Carl Sophus Valdemar Petersen (1846–1935), and two sisters, Nielsine Cathrine Marie Petersen (1837–?) and Sophie Caroline Petersen (1842–?). After preparation in a private school, he was admitted in 1849 into second grade at the Sorø Academy, a prestigious boarding school. He was taken out of school after his confirmation in 1854, because his parents could not afford to keep him there, and he worked as an apprentice for almost a year in an uncle's grocery in Kolding, Jutland. The uncle died, however, and left Petersen a sum of money that enabled him to return t
https://en.wikipedia.org/wiki/Perpendicular
In elementary geometry, two geometric objects are perpendicular if their intersection forms right angles (angles that are 90 degrees or π/2 radians wide) at the point of intersection called a foot. The condition of perpendicularity may be represented graphically using the perpendicular symbol, ⟂. Perpendicular intersections can happen between two lines (or two line segments), between a line and a plane, and between two planes. Perpendicularity is one particular instance of the more general mathematical concept of orthogonality; perpendicularity is the orthogonality of classical geometric objects. Thus, in advanced mathematics, the word "perpendicular" is sometimes used to describe much more complicated geometric orthogonality conditions, such as that between a surface and its normal vector. A line is said to be perpendicular to another line if the two lines intersect at a right angle. Explicitly, a first line is perpendicular to a second line if (1) the two lines meet; and (2) at the point of intersection the straight angle on one side of the first line is cut by the second line into two congruent angles. Perpendicularity can be shown to be symmetric, meaning if a first line is perpendicular to a second line, then the second line is also perpendicular to the first. For this reason, we may speak of two lines as being perpendicular (to each other) without specifying an order. A familiar example of perpendicularity can be seen in a compass: the cardinal points North, East, South, and West (NESW) are arranged so that the line N-S is perpendicular to the line W-E, and the angles N-E, E-S, S-W and W-N are all 90°. Perpendicularity easily extends to segments and rays. For example, a line segment is perpendicular to a line segment if, when each is extended in both directions to form an infinite line, these two resulting lines are perpendicular in the sense above. In symbols, AB ⟂ CD means line segment AB is perpendicular to line segment CD. 
A line is said to be perpendicular to a plane if it is perpendicular to every line in the plane that it intersects. This definition depends on the definition of perpendicularity between lines. Two planes in space are said to be perpendicular if the dihedral angle at which they meet is a right angle. Foot of a perpendicular The word foot is frequently used in connection with perpendiculars. This usage is exemplified in the top diagram, above, and its caption. The diagram can be in any orientation. The foot is not necessarily at the bottom. More precisely, let P be a point and l a line. If Q is the point of intersection of l and the unique line through P that is perpendicular to l, then Q is called the foot of this perpendicular through P. Construction of the perpendicular To make the perpendicular to the line AB through the point P using compass-and-straightedge construction, proceed as follows (see figure left): Step 1 (red): construct a circle with center at P to create points A' and B' on the line AB, which are equidistant
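Analytically, the foot of a perpendicular can be computed by orthogonal projection onto the line. The sketch below is a coordinate analogue of the notion, not the compass-and-straightedge construction itself, and the function name is mine:

```python
def foot_of_perpendicular(a, b, p):
    """Foot of the perpendicular from point p to the line through a and b.

    Points are 2-D (x, y) tuples. Projects the vector a->p onto the
    direction a->b and returns the resulting point on the line.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

f = foot_of_perpendicular((0, 0), (4, 0), (1, 3))
assert f == (1.0, 0.0)  # the point (1, 3) drops straight onto the x-axis
# The segment from p to its foot is perpendicular to the line:
# the dot product of the two direction vectors is zero.
assert (f[0] - 1) * 4 + (f[1] - 3) * 0 == 0
```

The projection parameter t measures how far along the line the foot lies, which is often useful on its own (e.g. to tell whether the foot falls inside the segment AB).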
https://en.wikipedia.org/wiki/Parallel
Parallel may refer to: Mathematics Parallel (geometry), two lines in the Euclidean plane which never intersect Parallel postulate, an axiom from Euclid's Elements establishing flat Euclidean geometry Parallel transport, in differential geometry, a way of transporting geometrical data along smooth curves Parallel (operator), mathematical operation named after the composition of electrical resistance in parallel circuits Science and engineering Parallel (latitude), an imaginary east–west line circling a globe Parallel of declination, used in astronomy Parallel evolution, the similar development of a trait in unrelated biological species Parallel circuit, with electrical components connected side-by-side (in a series circuit, components are connected end-to-end) Parallel manipulator, a mechanical system with two platforms connected by linear actuators Parallel cousin, in anthropology, a cousin from a parent's same-sex sibling (a cross-cousin is from a parent's opposite-sex sibling) Parallel, a geometric term of location meaning "in the same direction" Computing Parallel algorithm Parallel computing Parallel communication Parallel metaheuristic Parallel ATA Parallel port Parallel (software), a UNIX utility for running programs in parallel Parallel Computers, Inc., an American computer manufacturer of the 1980s Parallel Sysplex, a cluster of IBM mainframes Music theory Parallel chord Parallel key, the minor (or major) key of a major (or minor) key with the same tonic Language Parallelism (grammar), a balance of two or more similar words, phrases, or clauses Parallelism (rhetoric) Entertainment Parallel (manga) Parallel (2018 film), a Canadian science fiction thriller film Parallel (2023 film), an upcoming American science fiction thriller film Parallel (video), a compilation of music videos by R.E.M. Parallel (The Black Dog album), 1995 Parallel (Four Tet album) Parallel (EP), a 2017 EP by GFriend "The Parallel", an episode of the TV series The Twilight Zone Other uses 
Parallel, a type of trench; see Siege#Age of gunpowder Avinguda del Paral·lel Parallel (filling stations operator), an operator in Ukraine's oil wholesale and retail markets See also Parallels (disambiguation) Parallel lines (disambiguation) Parallel universe (disambiguation) Parallel World (disambiguation)
https://en.wikipedia.org/wiki/Right%20angle
In geometry and trigonometry, a right angle is an angle of exactly 90 degrees or π/2 radians, corresponding to a quarter turn. If a ray is placed so that its endpoint is on a line and the adjacent angles are equal, then they are right angles. The term is a calque of Latin angulus rectus; here rectus means "upright", referring to the vertical perpendicular to a horizontal base line. Closely related and important geometrical concepts are perpendicular lines, meaning lines that form right angles at their point of intersection, and orthogonality, which is the property of forming right angles, usually applied to vectors. The presence of a right angle in a triangle is the defining factor for right triangles, making the right angle basic to trigonometry. Etymology The meaning of right in right angle possibly refers to the Latin adjective rectus 'erect, straight, upright, perpendicular'. A Greek equivalent is orthos 'straight; perpendicular' (see orthogonality). In elementary geometry A rectangle is a quadrilateral with four right angles. A square has four right angles, in addition to equal-length sides. The Pythagorean theorem states how to determine when a triangle is a right triangle. Symbols In Unicode, the symbol for a right angle is U+221F ∟. It should not be confused with the similarly shaped symbol U+231E ⌞. Related symbols are U+22BE ⊾, U+299C ⦜, and U+299D ⦝. In diagrams, the fact that an angle is a right angle is usually expressed by adding a small right angle that forms a square with the angle in the diagram, as seen in the diagram of a right triangle (in British English, a right-angled triangle) to the right. The symbol for a measured angle, an arc, with a dot, is used in some European countries, including German-speaking countries and Poland, as an alternative symbol for a right angle. In some American schools this notation has been nicknamed "The bullard box"; this may be due to the alliteration of the name. Euclid Right angles are fundamental in Euclid's Elements. 
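The Pythagorean test mentioned above is easy to state as code: a triangle is right exactly when the squares of the two shorter sides sum to the square of the longest side. A small numeric sketch (the function name is mine, not from the article):

```python
import math

def is_right_triangle(a, b, c, tol=1e-9):
    """Check the converse of the Pythagorean theorem on three side lengths."""
    x, y, z = sorted((a, b, c))            # z is the longest side
    return math.isclose(x * x + y * y, z * z, rel_tol=tol)

assert is_right_triangle(3, 4, 5)
assert not is_right_triangle(3, 4, 6)
# A right angle is a quarter turn: 90 degrees equals pi/2 radians.
assert math.isclose(math.radians(90), math.pi / 2)
```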
They are defined in Book 1, definition 10, which also defines perpendicular lines. Definition 10 does not use numerical degree measurements but rather touches at the very heart of what a right angle is, namely two straight lines intersecting to form two equal and adjacent angles. The straight lines which form right angles are called perpendicular. Euclid uses right angles in definitions 11 and 12 to define acute angles (those smaller than a right angle) and obtuse angles (those greater than a right angle). Two angles are called complementary if their sum is a right angle. Book 1 Postulate 4 states that all right angles are equal, which allows Euclid to use a right angle as a unit to measure other angles with. Euclid's commentator Proclus gave a proof of this postulate using the previous postulates, but it may be argued that this proof makes use of some hidden assumptions. Saccheri gave a proof as well but using a more explicit assumption. In Hilbert's axiomatization of geometry this statement is given as
https://en.wikipedia.org/wiki/Gaspard%20Monge
Gaspard Monge, Comte de Péluse (9 May 1746 – 28 July 1818) was a French mathematician, commonly presented as the inventor of descriptive geometry, (the mathematical basis of) technical drawing, and the father of differential geometry. During the French Revolution he served as the Minister of the Marine, and was involved in the reform of the French educational system, helping to found the École Polytechnique. Biography Early life Monge was born at Beaune, Côte-d'Or, the son of a merchant. He was educated at the college of the Oratorians at Beaune. In 1762 he went to the Collège de la Trinité at Lyon, where, one year after he had begun studying, he was made a teacher of physics at the age of just seventeen. After finishing his education in 1764 he returned to Beaune, where he made a large-scale plan of the town, inventing the methods of observation and constructing the necessary instruments; the plan was presented to the town, and is still preserved in their library. An officer of engineers who saw it wrote to the commandant of the École Royale du Génie at Mézières, recommending Monge to him, and he was given a job as a draftsman. L. T. C. Rolt, an engineer and historian of technology, credited Monge with the birth of engineering drawing. When in the Royal School, he became a member of Freemasonry, initiated into the "L'Union parfaite" lodge. Career Those studying at the officer school were exclusively drawn from the aristocracy, so he was not allowed admission to the institution itself. His manual skill was highly regarded, but his mathematical skills were not made use of. Nevertheless, he worked on the development of his ideas in his spare time. At this time he came into contact with Charles Bossut, the professor of mathematics at the École Royale du Génie. "I was a thousand times tempted," he said long afterwards, "to tear up my drawings in disgust at the esteem in which they were held, as if I had been good for nothing better." 
After a year at the École Royale, Monge was asked to produce a plan for a fortification in such a way as to optimise its defensive arrangement. There was an established method for doing this which involved lengthy calculations, but Monge devised a way of solving the problems by using drawings. At first his solution was not accepted, since it had not taken the time judged to be necessary, but upon examination the value of the work was recognised, as were Monge's exceptional abilities. After Bossut left the École Royale du Génie, Monge took his place in January 1769, and in 1770 he was also appointed instructor in experimental physics. In 1777, Monge married Cathérine Huart, who owned a forge. This led Monge to develop an interest in metallurgy. In 1780 he became a member of the French Academy of Sciences; his friendship with chemist C. L. Berthollet began at this time. In 1783, after leaving Mézières, he was, on the death of É. Bézout, appointed examiner of naval candidates. Although pressed by the minister t
https://en.wikipedia.org/wiki/Maple%20%28software%29
Maple is a symbolic and numeric computing environment as well as a multi-paradigm programming language. It covers several areas of technical computing, such as symbolic mathematics, numerical analysis, data processing, visualization, and others. A toolbox, MapleSim, adds functionality for multidomain physical modeling and code generation. Maple's capabilities for symbolic computing include those of a general-purpose computer algebra system. For instance, it can manipulate mathematical expressions and find symbolic solutions to certain problems, such as those arising from ordinary and partial differential equations. Maple is developed commercially by the Canadian software company Maplesoft. The name 'Maple' is a reference to the software's Canadian heritage. Overview Core functionality Users can enter mathematics in traditional mathematical notation. Custom user interfaces can also be created. There is support for numeric computations, to arbitrary precision, as well as symbolic computation and visualization. Examples of symbolic computations are given below. Maple incorporates a dynamically typed imperative-style programming language (resembling Pascal), which permits lexically scoped variables. There are also interfaces to other languages (C, C#, Fortran, Java, MATLAB, and Visual Basic), as well as to Microsoft Excel. Maple supports MathML 2.0, which is a W3C format for representing and interpreting mathematical expressions, including their display in web pages. There is also functionality for converting expressions from traditional mathematical notation to markup suitable for the typesetting system LaTeX. Architecture Maple is based on a small kernel, written in C, which provides the Maple language. Most functionality is provided by libraries, which come from a variety of sources. Most of the libraries are written in the Maple language; these have viewable source code. 
Many numerical computations are performed by the NAG Numerical Libraries, ATLAS libraries, or GMP libraries. Different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java. History The first concept of Maple arose from a meeting in late 1980 at the University of Waterloo. Researchers at the university wished to purchase a computer powerful enough to run the Lisp-based computer algebra system Macsyma. Instead, they opted to develop their own computer algebra system, named Maple, that would run on lower cost computers. Aiming for portability, they began writing Maple in programming languages from the BCPL family (initially using a subset of B and C, and later on only C). A first limited version appeared after three weeks, and fuller versions entered mainstream use beginning in 1982. By the end of 1983, over 50 universities had copies of Maple installed on their machines. In 1984, the research group arranged with Watcom Pr
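The idea of storing symbolic expressions as directed acyclic graphs is that structurally identical subexpressions are built once and then shared, rather than duplicated as they would be in a plain tree. The toy Python sketch below illustrates this technique via interning (hash-consing); it is an illustration of the general idea only, not Maple's actual implementation:

```python
_pool = {}

def node(op, *args):
    """Intern an expression node so identical subexpressions are shared.

    A node is just a tuple (op, args); setdefault returns the first
    object ever stored for a given structure, so equal structures
    become the *same* object in memory.
    """
    key = (op, args)
    return _pool.setdefault(key, key)

x = node("x")
a = node("+", x, node("const", 1))  # builds the subexpression x + 1
b = node("+", x, node("const", 1))  # structurally identical: reused
assert a is b                       # one shared subgraph, not two copies
```

Sharing like this makes structural equality a constant-time pointer comparison and keeps large expressions compact, which is why DAG representations are common in computer algebra systems.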
https://en.wikipedia.org/wiki/Louis%20de%20Branges%20de%20Bourcia
Louis de Branges de Bourcia (born August 21, 1932) is a French-American mathematician. He is the Edward C. Elliott Distinguished Professor of Mathematics at Purdue University in West Lafayette, Indiana. He is best known for proving the long-standing Bieberbach conjecture in 1984, now called de Branges's theorem. He claims to have proved several important conjectures in mathematics, including the generalized Riemann hypothesis. Born to American parents who lived in Paris, de Branges moved to the US in 1941 with his mother and sisters. His native language is French. He did his undergraduate studies at the Massachusetts Institute of Technology (1949–53), and received a PhD in mathematics from Cornell University (1953–57). His advisors were Wolfgang Fuchs and then-future Purdue colleague Harry Pollard. He spent two years (1959–60) at the Institute for Advanced Study and another two (1961–62) at the Courant Institute of Mathematical Sciences. He was appointed to Purdue in 1962. An analyst, de Branges has made incursions into real, functional, complex, harmonic (Fourier) and Diophantine analyses. As far as particular techniques and approaches are concerned, he is an expert in spectral and operator theories. Works De Branges' proof of the Bieberbach conjecture was not initially accepted by the mathematical community. Rumors of his proof began to circulate in March 1984, but many mathematicians were skeptical because de Branges had earlier announced some false results, including a claimed proof of the invariant subspace conjecture in 1964 (incidentally, in December 2008 he published a new claimed proof for this conjecture on his website). It took verification by a team of mathematicians at Steklov Institute of Mathematics in Leningrad to validate de Branges' proof, a process that took several months and led later to significant simplification of the main argument. 
The original proof uses hypergeometric functions and innovative tools from the theory of Hilbert spaces of entire functions, largely developed by de Branges. In fact, the correctness of the Bieberbach conjecture was not the only important consequence of de Branges' proof, which covers a more general problem, the Milin conjecture. In June 2004, de Branges announced he had a proof of the Riemann hypothesis, often called the greatest unsolved problem in mathematics, and published the 124-page proof on his website. That original preprint underwent a number of revisions until it was replaced in December 2007 by a much more ambitious claim, which he had been developing for one year in the form of a parallel manuscript. Since that time, he has released evolving versions of two purported generalizations, following independent but complementary approaches, of his original argument. In the shortest of them (43 pages as of 2009), which he titles "Apology for the Proof of the Riemann Hypothesis" (using the word "apology" in the rarely used sense of apologia), he claims to use his tools on the theory
https://en.wikipedia.org/wiki/Feigenbaum%20constants
In mathematics, specifically bifurcation theory, the Feigenbaum constants are two mathematical constants which both express ratios in a bifurcation diagram for a non-linear map. They are named after the physicist Mitchell J. Feigenbaum. History Feigenbaum originally related the first constant to the period-doubling bifurcations in the logistic map, but also showed it to hold for all one-dimensional maps with a single quadratic maximum. As a consequence of this generality, every chaotic system that corresponds to this description will bifurcate at the same rate. Feigenbaum made this discovery in 1975, and he officially published it in 1978. The first constant The first Feigenbaum constant is the limiting ratio of each bifurcation interval to the next between every period doubling of a one-parameter map, where f is a function parameterized by the bifurcation parameter a. It is given by the limit δ = lim_{n→∞} (a_{n−1} − a_{n−2}) / (a_n − a_{n−1}), where a_n are the discrete values of a at the nth period doubling. Names: Feigenbaum constant, Feigenbaum bifurcation velocity, delta. Value (30 decimal places): δ = 4.669201609102990671853203820466. A simple rational approximation is , which is correct to 5 significant figures (when rounding). For more precision use , which is correct to 7 significant figures. It is approximately equal to , with an error of 0.0047%. Illustration Non-linear maps To see how this number arises, consider the real one-parameter map f(x) = a − x². Here a is the bifurcation parameter, x is the variable. The values of a for which the period doubles (e.g. the largest value of a with no period-2 orbit, or the largest a with no period-4 orbit) are a1, a2, etc. These are tabulated below:
{| class="wikitable"
|-
! n !! Period !! Bifurcation parameter (a_n) !! Ratio
|-
| 1 || 2 || 0.75 || —
|-
| 2 || 4 || 1.25 || —
|-
| 3 || 8 || || 4.2337
|-
| 4 || 16 || || 4.5515
|-
| 5 || 32 || || 4.6458
|-
| 6 || 64 || || 4.6639
|-
| 7 || 128 || || 4.6682
|-
| 8 || 256 || || 4.6689
|}
The ratio in the last column converges to the first Feigenbaum constant.
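The convergence shown in the ratio column can be reproduced numerically. The sketch below (an illustrative script, not from the article; the function name and tolerances are invented) locates the superstable parameter values of the logistic map x → r·x·(1 − x), i.e. the parameters at which the critical point x = 1/2 lies on the cycle. Their successive spacings shrink by the same factor δ ≈ 4.6692, and they are easy to find by root-finding:

```python
def logistic_superstable(k_max=7):
    """Locate parameters r_k at which x = 1/2 lies on a superstable
    2**k-cycle of the logistic map, then form the spacing ratios that
    approach the first Feigenbaum constant."""
    def g(r, period):
        # iterate the map `period` times starting from the critical point
        x = 0.5
        for _ in range(period):
            x = r * x * (1.0 - x)
        return x - 0.5

    rs = [2.0, 1.0 + 5 ** 0.5]        # exact values for periods 1 and 2
    for k in range(2, k_max + 1):
        # Feigenbaum extrapolation gives an excellent starting guess
        r = rs[-1] + (rs[-1] - rs[-2]) / 4.669
        for _ in range(60):           # Newton's method, numeric derivative
            val = g(r, 2 ** k)
            slope = (g(r + 1e-8, 2 ** k) - val) / 1e-8
            step = val / slope
            r -= step
            if abs(step) < 1e-13:
                break
        rs.append(r)
    deltas = [(rs[i] - rs[i - 1]) / (rs[i + 1] - rs[i])
              for i in range(1, len(rs) - 1)]
    return rs, deltas

rs, deltas = logistic_superstable()
print(deltas[-1])   # close to 4.669
```

The superstable parameters interleave the bifurcation points tabulated above, and their spacing ratios converge to δ at the same geometric rate.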
The same number arises for the logistic map x_{n+1} = r x_n (1 − x_n) with real parameter r and variable x. Tabulating the bifurcation values again:
{| class="wikitable"
|-
! n !! Period !! Bifurcation parameter (r_n) !! Ratio
|-
| 1 || 2 || 3 || —
|-
| 2 || 4 || || —
|-
| 3 || 8 || || 4.7514
|-
| 4 || 16 || || 4.6562
|-
| 5 || 32 || || 4.6683
|-
| 6 || 64 || || 4.6686
|-
| 7 || 128 || || 4.6680
|-
| 8 || 256 || || 4.6768
|}
Fractals In the case of the Mandelbrot set for the complex quadratic polynomial z_{n+1} = z_n² + c, the Feigenbaum constant is the limiting ratio between the diameters of successive circles on the real axis in the complex plane.
{| class="wikitable"
|-
! n !! Period = 2^n !! Bifurcation parameter (c_n) !! Ratio
|-
| 1 || 2 || || —
|-
| 2 || 4 || || —
|-
| 3 || 8 || || 4.2337
|-
| 4 || 16 || || 4.5515
|-
| 5 || 32 || || 4.6459
|-
| 6 || 64 || || 4.6639
|-
| 7 || 128 || || 4.6668
|-
| 8 || 256 || || 4.6740
|-
| 9 || 512 || || 4.6596
|-
| 10 || 1024 || || 4.6750
|-
| || || ... ||
|}
Bifurcation par
https://en.wikipedia.org/wiki/Constant%20term
In mathematics, a constant term (sometimes referred to as a free term) is a term in an algebraic expression that does not contain any variables and therefore is constant. For example, in the quadratic polynomial x² + 2x + 3, the 3 is a constant term. After like terms are combined, an algebraic expression will have at most one constant term. Thus, it is common to speak of the quadratic polynomial ax² + bx + c, where x is the variable, as having a constant term of c. If the constant term is 0, then it will conventionally be omitted when the quadratic is written out. Any polynomial written in standard form has a unique constant term, which can be considered the coefficient of x⁰. In particular, the constant term will always be the lowest degree term of the polynomial. This also applies to multivariate polynomials. For example, the polynomial has a constant term of −4, which can be considered to be the coefficient of x⁰y⁰, where the variables are eliminated by being exponentiated to 0 (any non-zero number exponentiated to 0 becomes 1). For any polynomial, the constant term can be obtained by substituting in 0 instead of each variable, thus eliminating each variable. The concept of exponentiation to 0 can be applied to power series and other types of series; for example, in the power series a₀ + a₁x + a₂x² + ⋯, the constant term is a₀. Constant of integration The derivative of a constant term is 0, so when a function containing a constant term is differentiated, the constant term vanishes, regardless of its value. Therefore the antiderivative is only determined up to an unknown constant term, which is called "the constant of integration" and added in symbolic form. See also: Constant (mathematics)
https://en.wikipedia.org/wiki/Division%20by%20two
In mathematics, division by two or halving has also been called mediation or dimidiation. The treatment of this as a different operation from multiplication and division by other numbers goes back to the ancient Egyptians, whose multiplication algorithm used division by two as one of its fundamental steps. Some mathematicians as late as the sixteenth century continued to view halving as a separate operation, and it often continues to be treated separately in modern computer programming. Performing this operation is simple in decimal arithmetic, in the binary numeral system used in computer programming, and in other even-numbered bases. Binary In binary arithmetic, division by two can be performed by a bit shift operation that shifts the number one place to the right. This is a form of strength reduction optimization. For example, 1101001 in binary (the decimal number 105), shifted one place to the right, is 110100 (the decimal number 52): the lowest order bit, a 1, is removed. Similarly, division by any power of two 2^k may be performed by right-shifting k positions. Because bit shifts are often much faster operations than division, replacing a division by a shift in this way can be a helpful step in program optimization. However, for the sake of software portability and readability, it is often best to write programs using the division operation and trust in the compiler to perform this replacement. An example from Common Lisp:

(setq number #b1101001) ; #b1101001 — 105
(ash number -1)         ; #b0110100 — 105 >> 1 ⇒ 52
(ash number -4)         ; #b0000110 — 105 >> 4 ≡ 105 / 2⁴ ⇒ 6

The above statements, however, are not always true when dealing with dividing signed binary numbers. Shifting right by 1 bit will divide by two, always rounding down. However, in some languages, division of signed binary numbers rounds towards 0 (which, if the result is negative, means it rounds up). For example, Java is one such language: in Java, -3 / 2 evaluates to -1, whereas -3 >> 1 evaluates to -2.
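The two rounding conventions can be compared directly. A short Python sketch (Python's // operator already floors, so Java-style truncating division is imitated here with int(n / 2)):

```python
# Floor vs truncation when halving signed integers: an arithmetic
# right shift always rounds toward negative infinity, while Java-style
# integer division rounds toward zero.
for n in (7, -3, -7):
    print(n, n >> 1, int(n / 2))

assert (-3 >> 1) == -2        # shift: floor(-1.5) = -2
assert int(-3 / 2) == -1      # truncation, as in Java's -3 / 2
```

For non-negative dividends the two conventions agree, which is why the compiler replacement is safe only when the dividend is known to be non-negative.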
So in this case, the compiler cannot optimize division by two by replacing it by a bit shift, when the dividend could possibly be negative. Binary floating point In binary floating-point arithmetic, division by two can be performed by decreasing the exponent by one (as long as the result is not a subnormal number). Many programming languages provide functions that can be used to divide a floating point number by a power of two. For example, the Java programming language provides the method java.lang.Math.scalb for scaling by a power of two, and the C programming language provides the function ldexp for the same purpose. Decimal The following algorithm is for decimal. However, it can be used as a model to construct an algorithm for taking half of any number N in any even base.
1. Write out N, putting a zero to its left.
2. Go through the digits of N in overlapping pairs, writing down digits of the result from the following table.
Example: 1738/2=? Write 01
https://en.wikipedia.org/wiki/Analytic%20continuation
In complex analysis, a branch of mathematics, analytic continuation is a technique to extend the domain of definition of a given analytic function. Analytic continuation often succeeds in defining further values of a function, for example in a new region where the infinite series representation which initially defined the function becomes divergent. The step-wise continuation technique may, however, come up against difficulties. These may have an essentially topological nature, leading to inconsistencies (defining more than one value). They may alternatively have to do with the presence of singularities. The case of several complex variables is rather different, since singularities then need not be isolated points, and its investigation was a major reason for the development of sheaf cohomology. Initial discussion Suppose f is an analytic function defined on a non-empty open subset U of the complex plane. If V is a larger open subset of the complex plane containing U, and F is an analytic function defined on V such that F(z) = f(z) for all z in U, then F is called an analytic continuation of f. In other words, the restriction of F to U is the function f we started with. Analytic continuations are unique in the following sense: if V is the connected domain of two analytic functions F1 and F2 such that U is contained in V and F1(z) = F2(z) for all z in U, then F1 = F2 on all of V. This is because F1 − F2 is an analytic function which vanishes on the open, connected domain U of f and hence must vanish on its entire domain. This follows directly from the identity theorem for holomorphic functions. Applications A common way to define functions in complex analysis proceeds by first specifying the function on a small domain only, and then extending it by analytic continuation. In practice, this continuation is often done by first establishing some functional equation on the small domain and then using this equation to extend the domain. Examples are the Riemann zeta function and the gamma function.
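Continuation by recentering a power series can be illustrated numerically. The sketch below uses the standard geometric-series example f(z) = Σ z^k, which equals 1/(1 − z) on the unit disk; the expansion point a = −1/2 is an arbitrary choice for this illustration:

```python
# Continuing f(z) = sum_k z**k (= 1/(1-z) on |z| < 1) past the unit
# circle by recentering at a = -1/2.  The derivatives of 1/(1-z) give
# recentered coefficients 1/(1-a)**(k+1), so the new series converges
# on |z - a| < 3/2, which reaches points outside the original disk.
a = -0.5
z = -1.2                       # outside |z| < 1, but inside |z - a| < 1.5
total = 0.0
for k in range(80):
    total += (z - a) ** k / (1.0 - a) ** (k + 1)
print(total, 1.0 / (1.0 - z))  # the two values agree
```

The partial sums converge to 1/(1 − z) at a point where the original series diverges, which is exactly the extension the text describes.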
The concept of a universal cover was first developed to define a natural domain for the analytic continuation of an analytic function. The idea of finding the maximal analytic continuation of a function in turn led to the development of the idea of Riemann surfaces. Analytic continuation is also used in general relativity to extend solutions of Einstein's equations beyond their original coordinate domains; an example is the analytic continuation of Schwarzschild coordinates into Kruskal–Szekeres coordinates. Worked example Begin with a particular analytic function f. In this case, it is given by a power series centered at z = 0: f(z) = Σ_{k≥0} z^k. By the Cauchy–Hadamard theorem, its radius of convergence is 1. That is, f is defined and analytic on the open disk U = {|z| < 1}, which has boundary {|z| = 1}. Indeed, the series diverges at z = 1. Pretend we don't know that f(z) = 1/(1 − z), and focus on recentering the power series at a different point a in U: We'll calculate the new coefficients a_k and determine whether this new power series converges in an open set which is not contained in U. If so, we will have analytically continued f to the region which
https://en.wikipedia.org/wiki/Zeros%20and%20poles
In complex analysis (a branch of mathematics), a pole is a certain type of singularity of a complex-valued function of a complex variable. It is the simplest type of non-removable singularity of such a function (see essential singularity). Technically, a point z₀ is a pole of a function f if it is a zero of the function 1/f and 1/f is holomorphic (i.e. complex differentiable) in some neighbourhood of z₀. A function f is meromorphic in an open set U if for every point z of U there is a neighbourhood of z in which either f or 1/f is holomorphic. If f is meromorphic in U, then a zero of f is a pole of 1/f, and a pole of f is a zero of 1/f. This induces a duality between zeros and poles that is fundamental for the study of meromorphic functions. For example, if a function is meromorphic on the whole complex plane plus the point at infinity, then the sum of the multiplicities of its poles equals the sum of the multiplicities of its zeros. Definitions A function of a complex variable z is holomorphic in an open domain U if it is differentiable with respect to z at every point of U. Equivalently, it is holomorphic if it is analytic, that is, if its Taylor series exists at every point of U, and converges to the function in some neighbourhood of the point. A function is meromorphic in U if every point of U has a neighbourhood such that either f or 1/f is holomorphic in it. A zero of a meromorphic function f is a complex number z such that f(z) = 0. A pole of f is a zero of 1/f. If f is a function that is meromorphic in a neighbourhood of a point z₀ of the complex plane, then there exists an integer n such that (z − z₀)ⁿ f(z) is holomorphic and nonzero in a neighbourhood of z₀ (this is a consequence of the analytic property). If n > 0, then z₀ is a pole of order (or multiplicity) n of f. If n < 0, then z₀ is a zero of order |n| of f. Simple zero and simple pole are terms used for zeroes and poles of order 1. Degree is sometimes used synonymously to order.
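The characterization by (z − z₀)ⁿ f(z) suggests a quick numerical check of a pole's order: multiply by increasing powers of (z − z₀) until the product stops blowing up. A short sketch with a hypothetical function chosen to have a double pole at 0:

```python
# f has a double pole at z0 = 0 and a simple pole at z = 1.
# (z - 0)**n * f(z) approaches a finite nonzero limit exactly when n = 2.
def f(z):
    return 1.0 / (z ** 2 * (z - 1.0))

z = 1e-5                    # a point close to the suspected pole
for n in range(4):
    print(n, z ** n * f(z))  # huge for n < 2, ~ -1 for n = 2, tiny for n = 3
```

For n = 2 the product is approximately −1 (the value of 1/(z − 1) at z = 0), confirming a pole of order 2.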
This characterization of zeros and poles implies that zeros and poles are isolated, that is, every zero or pole has a neighbourhood that does not contain any other zero or pole. Because the order of zeros and poles is defined as a non-negative number n, and because of the symmetry between them, it is often useful to consider a pole of order n as a zero of order −n and a zero of order n as a pole of order −n. In this case a point that is neither a pole nor a zero is viewed as a pole (or zero) of order 0. A meromorphic function may have infinitely many zeros and poles. This is the case for the gamma function, which is meromorphic in the whole complex plane, and has a simple pole at every non-positive integer. The Riemann zeta function is also meromorphic in the whole complex plane, with a single pole of order 1 at z = 1. Its zeros in the left halfplane are all the negative even integers, and the Riemann hypothesis is the conjecture that all other zeros are along the line Re(z) = 1/2. In a neighbourhood of a point z₀, a nonzero meromorphic function f is the sum of a Laurent series with at most finite p
https://en.wikipedia.org/wiki/Proportionality%20%28mathematics%29
In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called the coefficient of proportionality (or proportionality constant) and its reciprocal is known as the constant of normalization (or normalizing constant). Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality. This definition is commonly extended to related varying quantities, which are often called variables. This meaning of variable is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons. Two functions f(x) and g(x) are proportional if their ratio f(x)/g(x) is a constant function. If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion (for details see Ratio). Proportionality is closely related to linearity. Direct proportionality Given an independent variable x and a dependent variable y, y is directly proportional to x if there is a non-zero constant k such that y = kx. The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha) or "~": y ∝ x (or y ~ x). For x ≠ 0 the proportionality constant can be expressed as the ratio k = y/x. It is also called the constant of variation or constant of proportionality. A direct proportionality can also be viewed as a linear equation in two variables with a y-intercept of 0 and a slope of k. This corresponds to linear growth. Examples If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality. The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π.
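The constant-speed example translates directly into a ratio test (and its inverse-proportion counterpart into a product test). The sketch below uses hypothetical measurements, invented for illustration:

```python
# Direct proportion: constant ratio y/x.  Inverse proportion: constant
# product x*y.  Hypothetical distance readings at a constant 60 km/h:
times = [1.0, 2.0, 3.5, 5.0]            # hours
dists = [60.0, 120.0, 210.0, 300.0]     # kilometres

ratios = [d / t for d, t in zip(dists, times)]
assert max(ratios) - min(ratios) < 1e-9   # constant ratio: proportional
k = ratios[0]                             # coefficient of proportionality
print(k)                                  # 60.0 (km/h)

# Speed and travel time for a fixed 300 km trip are inversely
# proportional: their product is the constant 300.
speeds = [50.0, 60.0, 100.0]
hours = [6.0, 5.0, 3.0]
products = [s * h for s, h in zip(speeds, hours)]
assert max(products) - min(products) < 1e-9
```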
On a map of a sufficiently small geographical area, drawn to scale, the distance between any two points on the map is directly proportional to the beeline distance between the two locations represented by those points; the constant of proportionality is the scale of the map. The gravitational force acting on a small object, due to a nearby large extended mass, is directly proportional to the object's mass; the constant of proportionality between the force and the mass is known as gravitational acceleration. The net force acting on an object is proportional to the acceleration of that object with respect to an inertial frame of reference. The constant of proportionality in this law, Newton's second law, is the classical mass of the object. Inverse proportionality The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value o
https://en.wikipedia.org/wiki/Coordinate%20system
In geometry, a coordinate system is a system that uses one or more numbers, or coordinates, to uniquely determine the position of the points or other geometric elements on a manifold such as Euclidean space. The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics, but may be complex numbers or elements of a more abstract system such as a commutative ring. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. Common coordinate systems Number line The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P, where the signed distance is the distance taken as positive or negative depending on which side of O the point P lies. Each point is given a unique coordinate and each real number is the coordinate of a unique point. Cartesian coordinate system The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually orthogonal planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the three-dimensional system may be a right-handed or a left-handed system. Polar coordinate system Another common coordinate system for the plane is the polar coordinate system.
A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ (measured counterclockwise from the axis to the line). Then, for a given number r, there is a unique point on this line whose signed distance from the origin is r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates. For example, (r, θ), (r, θ+2π) and (−r, θ+π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. Cylindrical and spherical coordinate systems There are two common methods for extending the polar coordinate system to three dimensions. In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) t
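The non-uniqueness of polar coordinates can be verified by converting each representation to Cartesian coordinates (a short sketch; the function name is invented):

```python
import math

# (r, θ), (r, θ + 2π) and (−r, θ + π) all name the same point,
# as conversion to Cartesian coordinates confirms.
def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

p = polar_to_cartesian(2.0, math.pi / 6)
q = polar_to_cartesian(2.0, math.pi / 6 + 2 * math.pi)
s = polar_to_cartesian(-2.0, math.pi / 6 + math.pi)
print(p, q, s)   # three identical (x, y) pairs, up to rounding
```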
https://en.wikipedia.org/wiki/Factorization
In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, 3 × 5 is an integer factorization of 15, and (x − 2)(x + 2) is a polynomial factorization of x² − 4. Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any x can be trivially written as (xy) × (1/y) whenever y is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator. Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem of arithmetic, which asserts that every positive integer may be factored into a product of prime numbers, which cannot be further factored into integers greater than 1. Moreover, this factorization is unique up to the order of the factors. Although integer factorization is a sort of inverse to multiplication, it is much more difficult algorithmically, a fact which is exploited in the RSA cryptosystem to implement public-key cryptography. Polynomial factorization has also been studied for centuries. In elementary algebra, factoring a polynomial reduces the problem of finding its roots to finding the roots of the factors. Polynomials with coefficients in the integers or in a field possess the unique factorization property, a version of the fundamental theorem of arithmetic with prime numbers replaced by irreducible polynomials. In particular, a univariate polynomial with complex coefficients admits a unique (up to ordering) factorization into linear polynomials: this is a version of the fundamental theorem of algebra. In this case, the factorization can be done with root-finding algorithms.
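A minimal sketch of integer factorization by trial division (far too slow for cryptographic-size integers, which is the point of RSA, but it makes the fundamental theorem of arithmetic concrete):

```python
def prime_factors(n):
    """Factor a positive integer by trial division; the fundamental
    theorem of arithmetic guarantees the resulting multiset of primes
    is unique up to order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as often as it fits
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
```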
The case of polynomials with integer coefficients is fundamental for computer algebra. There are efficient computer algorithms for computing (complete) factorizations within the ring of polynomials with rational number coefficients (see factorization of polynomials). A commutative ring possessing the unique factorization property is called a unique factorization domain. There are number systems, such as certain rings of algebraic integers, which are not unique factorization domains. However, rings of algebraic integers satisfy the weaker property of Dedekind domains: ideals factor uniquely into prime ideals. Factorization may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix with all diagonal entries equal to
https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt%20process
In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process or Gram–Schmidt algorithm is a method for orthonormalizing a set of vectors in an inner product space, most commonly the Euclidean space Rⁿ equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set of vectors S = {v₁, …, v_k} for k ≤ n and generates an orthogonal set S′ = {u₁, …, u_k} that spans the same k-dimensional subspace of Rⁿ as S. The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. In the theory of Lie group decompositions, it is generalized by the Iwasawa decomposition. The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix). The Gram–Schmidt process The vector projection of a vector v on a nonzero vector u is defined as proj_u(v) = (⟨v, u⟩ / ⟨u, u⟩) u, where ⟨v, u⟩ denotes the inner product of the vectors v and u. This means that proj_u(v) is the orthogonal projection of v onto the line spanned by u. If u is the zero vector, then proj_u(v) is defined as the zero vector. Given the vectors v₁, …, v_k, the Gram–Schmidt process defines the vectors u₁, …, u_k as follows: u₁ = v₁, u₂ = v₂ − proj_{u₁}(v₂), and in general u_i = v_i − Σ_{j<i} proj_{u_j}(v_i). The sequence u₁, …, u_k is the required system of orthogonal vectors, and the normalized vectors e_i = u_i / ‖u_i‖ form an orthonormal set. The calculation of the sequence u₁, …, u_k is known as Gram–Schmidt orthogonalization, and the calculation of the sequence e₁, …, e_k is known as Gram–Schmidt orthonormalization. To check that these formulas yield an orthogonal sequence, first compute ⟨u₁, u₂⟩ by substituting the above formula for u₂: we get zero. Then use this to compute ⟨u₁, u₃⟩ again by substituting the formula for u₃: we get zero. The general proof proceeds by mathematical induction. Geometrically, this method proceeds as follows: to compute u_i, it projects v_i orthogonally onto the subspace U generated by u₁, …, u_{i−1}, which is the same as the subspace generated by v₁, …, v_{i−1}.
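The process translates directly into code. The sketch below implements the "modified" variant, which subtracts each projection from the running remainder for better numerical behaviour; the function name and the 1e-12 tolerance are choices made for this illustration:

```python
def gram_schmidt(vectors):
    """Orthonormalize a list of vectors (given as lists of floats) by
    the modified Gram-Schmidt process; (near-)dependent inputs are
    dropped rather than emitted as zero vectors."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            # subtract the projection of the remainder onto u
            # (u is already normalized, so proj_u(w) = <w, u> u)
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:
            basis.append([wi / norm for wi in w])
    return basis

e = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
print(e)   # two orthonormal vectors spanning the plane
```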
The vector u_i is then defined to be the difference between v_i and this projection, guaranteed to be orthogonal to all of the vectors in the subspace U. The Gram–Schmidt process also applies to a linearly independent countably infinite sequence {v_i}. The result is an orthogonal (or orthonormal) sequence {u_i} such that for every natural number i, the algebraic span of v₁, …, v_i is the same as that of u₁, …, u_i. If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the zero vector on the ith step, assuming that v_i is a linear combination of v₁, …, v_{i−1}. If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm will then be the dimension of the space spanned by the original inputs. A variant of the Gram–Schmidt process using transfinite recursion applied to a (possibly uncountably) infinite sequence of vectors yields a set of orthonormal vectors with such that for any , the completio
https://en.wikipedia.org/wiki/Gnomon
A gnomon is the part of a sundial that casts a shadow. The term is used for a variety of purposes in mathematics and other fields. History A painted stick dating from 2300 BC that was excavated at the astronomical site of Taosi is the oldest gnomon known in China. The gnomon was widely used in ancient China from the second millennium BC onward in order to determine the changes in seasons, orientation, and geographical latitude. The ancient Chinese used shadow measurements for creating calendars that are mentioned in several ancient texts. According to the collection of Zhou Chinese poetic anthologies Classic of Poetry, one of the distant ancestors of King Wen of the Zhou dynasty used to measure gnomon shadow lengths to determine the orientation around the 14th century BC. The ancient Greek philosopher Anaximander (610–546 BC) is credited with introducing this Babylonian instrument to the Ancient Greeks. The ancient Greek mathematician and astronomer Oenopides used the phrase drawn gnomon-wise to describe a line drawn perpendicular to another. Later, the term was used for an L-shaped instrument like a steel square used to draw right angles. This shape may explain its use to describe a shape formed by cutting a smaller square from a larger one. Euclid extended the term to the plane figure formed by removing a similar parallelogram from a corner of a larger parallelogram. Indeed, the gnomon is the increment between two successive figurate numbers, including square and triangular numbers. Definition of Hero of Alexandria The ancient Greek mathematician and engineer Hero of Alexandria defined a gnomon as that which, when added or subtracted to an entity (number or shape), makes a new entity similar to the starting entity. In this sense Theon of Smyrna used it to describe a number which added to a polygonal number produces the next one of the same type.
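Hero's definition can be illustrated with square numbers, where the gnomon of n² is the next odd number:

```python
# The gnomon in Hero's sense: what must be added to a figurate number
# to produce the next one of the same type.  For squares this is the
# next odd number, since (n + 1)**2 - n**2 == 2*n + 1.
squares = [n * n for n in range(8)]
gnomons = [b - a for a, b in zip(squares, squares[1:])]
print(gnomons)   # [1, 3, 5, 7, 9, 11, 13] -- consecutive odd numbers
```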
The most common use in this sense is an odd integer especially when seen as a figurate number between square numbers. Vitruvius Vitruvius mentions the gnomon as "" in the first sentence of chapter 3 in volume 1 of his book De Architectura. That Latin term "" leaves room for interpretation. Despite its similarity to "" (or its feminine form ""), it appears unlikely that Vitruvius refers to judgement on the one hand or to the design of sundials on the other. It appears to be more appropriate to assume that he refers to geometry, a science upon which gnomons rely heavily. In those days, calculations were carried out geometrically, in stark contrast to the algebraic methods in use today. Thus, it seems that he indirectly refers to mathematics and geodesy. Pinhole gnomons Perforated gnomons projecting a pinhole image of the Sun whose location can be measured to tell the time of day and year were described in the Chinese Zhoubi Suanjing, possibly dating as early as the early Zhou (11th century BC) but surviving only in forms dating to the Eastern Han (3rd century). In the Middle East and Eur
https://en.wikipedia.org/wiki/Tychonoff%27s%20theorem
In mathematics, Tychonoff's theorem states that the product of any collection of compact topological spaces is compact with respect to the product topology. The theorem is named after Andrey Nikolayevich Tikhonov (whose surname sometimes is transcribed Tychonoff), who proved it first in 1930 for powers of the closed unit interval and in 1935 stated the full theorem along with the remark that its proof was the same as for the special case. The earliest known published proof is contained in a 1935 article of Tychonoff, A., "Über einen Funktionenraum", Mathematische Annalen, 111, pp. 762–766 (1935). (This reference is mentioned in "Topology" by Hocking and Young, Dover Publications, Inc.) Tychonoff's theorem is often considered perhaps the single most important result in general topology (along with Urysohn's lemma). The theorem is also valid for topological spaces based on fuzzy sets. Topological definitions The theorem depends crucially upon the precise definitions of compactness and of the product topology; in fact, Tychonoff's 1935 paper defines the product topology for the first time. Conversely, part of its importance is to give confidence that these particular definitions are the most useful (i.e. most well-behaved) ones. Indeed, the Heine–Borel definition of compactness—that every covering of a space by open sets admits a finite subcovering—is relatively recent. More popular in the 19th and early 20th centuries was the Bolzano–Weierstrass criterion that every bounded infinite sequence admits a convergent subsequence, now called sequential compactness. These conditions are equivalent for metrizable spaces, but neither one implies the other in the class of all topological spaces. It is almost trivial to prove that the product of two sequentially compact spaces is sequentially compact—one passes to a subsequence for the first component and then a subsubsequence for the second component.
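The two-factor argument just sketched can be written out explicitly (a standard proof sketch, not quoted from any particular source):

```latex
% Sequential compactness of a product of two sequentially compact spaces.
Let $(x_n, y_n)_{n \in \mathbb{N}}$ be a sequence in $X \times Y$, with $X$
and $Y$ sequentially compact. First choose a subsequence $(x_{n_k})$ with
$x_{n_k} \to x$ in $X$; then choose a further subsequence $(y_{n_{k_j}})$ of
$(y_{n_k})$ with $y_{n_{k_j}} \to y$ in $Y$. Along the indices $n_{k_j}$
both coordinates converge, so $(x_{n_{k_j}}, y_{n_{k_j}}) \to (x, y)$ in
the product topology.
```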
An only slightly more elaborate "diagonalization" argument establishes the sequential compactness of a countable product of sequentially compact spaces. However, the product of continuum many copies of the closed unit interval (with its usual topology) fails to be sequentially compact with respect to the product topology, even though it is compact by Tychonoff's theorem. This is a critical failure: if X is a completely regular Hausdorff space, there is a natural embedding from X into $[0,1]^{C(X,[0,1])}$, where C(X,[0,1]) is the set of continuous maps from X to [0,1]. The compactness of $[0,1]^{C(X,[0,1])}$ thus shows that every completely regular Hausdorff space embeds in a compact Hausdorff space (or, can be "compactified"). This construction is the Stone–Čech compactification. Conversely, all subspaces of compact Hausdorff spaces are completely regular Hausdorff, so this characterizes the completely regular Hausdorff spaces as those that can be compactified. Such spaces are now called Tychonoff spaces. Applications Tychonoff's theorem has been
https://en.wikipedia.org/wiki/Euclidean%20planes%20in%20three-dimensional%20space
In Euclidean geometry, a plane is a flat two-dimensional surface that extends indefinitely. Euclidean planes often arise as subspaces of three-dimensional space $\mathbb{R}^3$. A prototypical example is one of a room's walls, infinitely extended and assumed infinitesimally thin. While a pair of real numbers suffices to describe points on a plane, the relationship with out-of-plane points requires special consideration for their embedding in the ambient space $\mathbb{R}^3$. Derived concepts A plane segment (or simply "plane", in lay use) is a planar surface region; it is analogous to a line segment. A bivector is an oriented plane segment, analogous to directed line segments. A face is a plane segment bounding a solid object. A slab is a region bounded by two parallel planes. A parallelepiped is a region bounded by three pairs of parallel planes. Occurrence in nature A plane serves as a mathematical model for many physical phenomena, such as specular reflection in a plane mirror or wavefronts in a traveling plane wave. The free surface of undisturbed liquids tends to be nearly flat (see flatness). The flattest surface ever manufactured is a quantum-stabilized atom mirror. In astronomy, various reference planes are used to define positions in orbit. Anatomical planes may be lateral ("sagittal"), frontal ("coronal") or transversal. In geology, beds (layers of sediments) often are planar. Planes are involved in different forms of imaging, such as the focal plane, picture plane, and image plane. Background Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms (called common notions) and postulates (or axioms) which he then used to prove various geometrical statements. Although the plane in its modern sense is not directly given a definition anywhere in the Elements, it may be thought of as part of the common notions. Euclid never used numbers to measure length, angle, or area.
The Euclidean plane equipped with a chosen Cartesian coordinate system is called a Cartesian plane; a non-Cartesian Euclidean plane equipped with a polar coordinate system would be called a polar plane. A plane is a ruled surface. Euclidean plane Representation This section is solely concerned with planes embedded in three dimensions: specifically, in $\mathbb{R}^3$. Determination by contained points and lines In a Euclidean space of any number of dimensions, a plane is uniquely determined by any of the following:
Three non-collinear points (points not on a single line).
A line and a point not on that line.
Two distinct but intersecting lines.
Two distinct but parallel lines.
Properties The following statements hold in three-dimensional Euclidean space but not in higher dimensions, though they have higher-dimensional analogues:
Two distinct planes are either parallel or they intersect in a line.
A line is either parallel to a plane, intersects it at a single point, or is contained in the plane.
Two distinct lin
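The first determination rule (three non-collinear points) can be sketched in code; the function name and the normal-form convention n · x + d = 0 are my own choices for illustration:

```python
def plane_through(p1, p2, p3):
    """Plane through three non-collinear 3D points, returned as (n, d)
    with the plane defined by n . x + d = 0."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Normal vector: cross product of two in-plane direction vectors.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    if n == [0, 0, 0]:
        raise ValueError("the three points are collinear")
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

# The plane through the three unit points has normal (1, 1, 1):
n, d = plane_through((1, 0, 0), (0, 1, 0), (0, 0, 1))
# i.e. the plane x + y + z = 1
```

The collinearity check corresponds to the "non-collinear" hypothesis: a zero cross product means the three points fail to determine a unique plane.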
https://en.wikipedia.org/wiki/Continuum
Continuum may refer to:
Continuum (measurement), theories or models that explain gradual transitions from one condition to another without abrupt changes

Mathematics
Continuum (set theory), the real line or the corresponding cardinal number
Linear continuum, any ordered set that shares certain properties of the real line
Continuum (topology), a nonempty compact connected metric space (sometimes Hausdorff space)
Continuum hypothesis, the hypothesis that no infinite sets are larger than the integers but smaller than the real numbers
Cardinality of the continuum, a cardinal number that represents the size of the set of real numbers

Science
Continuum morphology, in plant morphology, underlining the continuum between morphological categories
Continuum concept, in psychology
Continuum mechanics, in physics, deals with continuous matter
Space-time continuum, any mathematical model that combines space and time into a single continuum
Continuum theory of specific heats of solids, see Debye model
Triune continuum, trinity of continual representations in general system modeling defined in the theory of triune continuum, used in the Triune continuum paradigm
Continuous spectrum, referred to simply as the continuum in contrast to discrete spectral lines

Arts and entertainment
Film and television
Continuum (film), or I'll Follow You Down, a 2013 Canadian film
Stargate: Continuum, a 2008 direct-to-DVD film in the Stargate franchise
Continuum (TV series), a 2012–2015 Canadian science fiction series
"Continuum" (American Horror Story), a 2013 television episode
Q Continuum, an extended universe in the fictional Star Trek universe

Games
Continuum (game client), a game client for the SubSpace computer game
Continuum (role-playing game), a time travel role-playing game
The Continuum, a browser-based fantasy collectible wargame
Alpha Waves, released in North America as Continuum, a 1990 3D video game
Command & Conquer: Continuum, a cancelled MMORPG in the Command & Conquer series

Music
Continuum (Ligeti), a composition for harpsichord by György Ligeti, 1968
Continuum Fingerboard, a continuous pitch performance controller developed by Haken Audio

Performers
Continuum (chamber ensemble), an American classical chamber music ensemble
Continuum (jazz group), with Slide Hampton, Jimmy Heath, Ron Carter, Art Taylor, Kenny Barron
Continuum (music project), a collaboration between Steven Wilson and Dirk Serries

Albums
Continuum (The Components album), 2018
Continuum (Fly to the Sky album), 2014
Continuum (John Mayer album), 2006
Continuum (Mentallo & The Fixer album) or the title song, 1995
Continuum (Nik Bärtsch album), 2016
Continuum (Prototype album), 2006
Continuum (Rainer Brüninghaus album) or the title song, 1983
Continuum (Ray Drummond album), 1994
The Continuum (album), by Ethnic Heritage Ensemble, or the title song, 1997

Songs
"Continuum", by At the Drive-In from In•ter a•li•a, 2017
"Continuum", by Defecation from Int
https://en.wikipedia.org/wiki/Toma%C5%BE%20Pisanski
Tomaž (Tomo) Pisanski (born 24 May 1949 in Ljubljana, Yugoslavia, which is now in Slovenia) is a Slovenian mathematician working mainly in discrete mathematics and graph theory. He is considered by many Slovenian mathematicians to be the "father of Slovenian discrete mathematics." Biography As a high school student, Pisanski competed in the 1966 and 1967 International Mathematical Olympiads as a member of the Yugoslav team, winning a bronze medal in 1967. He studied at the University of Ljubljana where he obtained a B.Sc, M.Sc and PhD in mathematics. His 1981 PhD thesis in topological graph theory was written under the guidance of Torrence Parsons. He also obtained an M.Sc. in computer science from Pennsylvania State University in 1979. Currently, Pisanski is a professor of discrete and computational mathematics and Head of the Department of Information Sciences and Technology at University of Primorska in Koper. In addition, he is a professor at the University of Ljubljana Faculty of Mathematics and Physics (FMF). He has been a member of the Institute of Mathematics, Physics and Mechanics (IMFM) in Ljubljana since 1980, and the leader of several IMFM research projects. In 1991 he established the Department of Theoretical Computer Science at IMFM, of which he has served as both head and deputy head. He has taught undergraduate and graduate courses in mathematics and computer science at the University of Ljubljana, University of Zagreb, University of Udine, University of Leoben, California State University, Chico, Simon Fraser University, University of Auckland and Colgate University. Pisanski has been an adviser for M.Sc and PhD students in both mathematics and computer science. Notable students include John Shawe-Taylor (B.Sc in Ljubljana), Vladimir Batagelj, Bojan Mohar, Sandi Klavžar, and Sandra Sattolo (M.Sc in Udine). 
Research Pisanski’s research interests span several areas of discrete and computational mathematics, including combinatorial configurations, abstract polytopes, maps on surfaces, chemical graph theory, and the history of mathematics and science. In 1980 he calculated the genus of the Cartesian product of any pair of connected, bipartite, d-valent graphs using a method that was later called the White–Pisanski method. In 1982 Vladimir Batagelj and Pisanski proved that the Cartesian product of a tree and a cycle is Hamiltonian if and only if no degree of the tree exceeds the length of the cycle. They also proposed a conjecture concerning cyclic Hamiltonicity of graphs. Their conjecture was proved in 2005. With Brigitte Servatius he is the co-author of the book Configurations from a Graphical Viewpoint (2013). Selected publications Pisanski, T. Genus of Cartesian products of regular bipartite graphs, Journal of Graph Theory 4 (1), 1980, 31-42. doi:10.1002/jgt.3190040105 Graovac, A., T. Pisanski. On the Wiener index of a graph, Journal of Mathematical Chemistry 8 (1),1991, 53-62. doi:10.1007/BF01166923 Boben, M., B. Gr
https://en.wikipedia.org/wiki/Magnitude
Magnitude may refer to:

Mathematics
Euclidean vector, a quantity defined by both its magnitude and its direction
Magnitude (mathematics), the relative size of an object
Norm (mathematics), a term for the size or length of a vector
Order of magnitude, the class of scale having a fixed value ratio to the preceding class
Scalar (mathematics), a quantity defined only by its magnitude

Astronomy
Absolute magnitude, the brightness of a celestial object corrected to a standard luminosity distance
Apparent magnitude, the calibrated apparent brightness of a celestial object
Instrumental magnitude, the uncalibrated apparent magnitude of a celestial object
Limiting magnitude, the faintest apparent magnitude of a celestial body that is detectable or detected by a given instrument
Magnitude (astronomy), a measure of brightness and brightness differences used in astronomy
Magnitude of eclipse or geometric magnitude, the size of the eclipsed part of the Sun during a solar eclipse or the Moon during a lunar eclipse
Photographic magnitude, the brightness of a celestial object corrected for photographic sensitivity, symbol mpg
Visual magnitude, the brightness of a celestial object in visible light, symbol mv

Seismology
Seismic magnitude scales, the energy in an earthquake, measures include:
Moment magnitude scale, based on seismic moment, supersedes the Richter scale
Richter magnitude scale, the energy of an earthquake, superseded by the moment magnitude scale
Surface-wave magnitude, based on measurements of Rayleigh surface waves
Seismic intensity scales, the local severity of a quake

Arts and media
Magnitude (Community), a recurring character from the television series Community
https://en.wikipedia.org/wiki/Vladimir%20Batagelj
Vladimir Batagelj (born June 14, 1948 in Idrija, Yugoslavia) is a Slovenian mathematician and an emeritus professor of mathematics at the University of Ljubljana. He is known for his work in discrete mathematics and combinatorial optimization, particularly the analysis of social networks and other large networks (blockmodeling). Education and career Vladimir Batagelj completed his Ph.D. at the University of Ljubljana in 1986 under the direction of Tomaž Pisanski. He stayed at the University of Ljubljana as a professor until his retirement, where he was a professor of sociology and statistics, while also chairing the Department of Sociology of the Faculty of Social Sciences. As a visiting professor, he taught at the University of Pittsburgh (1990–91) and at the University of Konstanz (2002). He was also a member of the editorial boards of two journals: Informatica and the Journal of Social Structure. His work has been cited over 11,000 times. His book Exploratory Social Network Analysis with Pajek on blockmodeling, coauthored with Wouter de Nooy and Andrej Mrvar, is Batagelj's most cited work and has over 3,300 citations. The book was translated into Chinese and Japanese. The revised and expanded third edition has been published by Cambridge University Press. He is particularly known for his work on Pajek, a freely available software package for the analysis and visualization of large networks. Batagelj began work on Pajek in 1996 with Andrej Mrvar, who was then his PhD student. In 1975, 11 years before completing his PhD, Batagelj published a solo paper in Communications of the ACM. Batagelj authored more than 20 textbooks in Slovenian, covering topics like TeX, combinatorics and discrete mathematics. He has also written extensively in the Slovenian popular science journal Presek. Batagelj has advised 9 Ph.D. students.
Awards and honors First prizes for contributions (with Andrej Mrvar) to Graph Drawing Contests in years: 1995, 1996, 1997, 1998, 1999, 2000 and 2005 / Graph Drawing Hall of Fame. In 2007 the book Generalized blockmodeling was awarded the Harrison White Outstanding Book Award by the Mathematical Sociology Section of American Sociological Association In 2007 he was awarded (together with Anuška Ferligoj) the Simmel Award by INSNA. In 2013, Vladimir Batagelj and Andrej Mrvar received the INSNA's William D. Richards Software award for their work on Pajek. Selected bibliography Vladimir Batagelj, Social Network Analysis, Large-Scale . in R.A. Meyers, ed., Encyclopedia of Complexity and Systems Science, Springer 2009: 8245–8265. Vladimir Batagelj, Complex Networks, Visualization of . in R.A. Meyers, ed., Encyclopedia of Complexity and Systems Science, Springer 2009: 1253–1268. Wouter de Nooy, Andrej Mrvar, Vladimir Batagelj, Mark Granovetter (Series Editor), Exploratory Social Network Analysis with Pajek (Structural Analysis in the Social Sciences), Cambridge University Press 2005 (). ESNA in Japanese, TDU, 2010. Patrick Doreian,
https://en.wikipedia.org/wiki/Kite%20%28geometry%29
In Euclidean geometry, a kite is a quadrilateral with reflection symmetry across a diagonal. Because of this symmetry, a kite has two equal angles and two pairs of adjacent equal-length sides. Kites are also known as deltoids, but the word deltoid may also refer to a deltoid curve, an unrelated geometric object sometimes studied in connection with quadrilaterals. A kite may also be called a dart, particularly if it is not convex. Every kite is an orthodiagonal quadrilateral (its diagonals are at right angles) and, when convex, a tangential quadrilateral (its sides are tangent to an inscribed circle). The convex kites are exactly the quadrilaterals that are both orthodiagonal and tangential. They include as special cases the right kites, with two opposite right angles; the rhombi, with two diagonal axes of symmetry; and the squares, which are also special cases of both right kites and rhombi. The quadrilateral with the greatest ratio of perimeter to diameter is a kite, with 60°, 75°, and 150° angles. Kites of two shapes (one convex and one non-convex) form the prototiles of one of the forms of the Penrose tiling. Kites also form the faces of several face-symmetric polyhedra and tessellations, and have been studied in connection with outer billiards, a problem in the advanced mathematics of dynamical systems. Definition and classification A kite is a quadrilateral with reflection symmetry across one of its diagonals. Equivalently, it is a quadrilateral whose four sides can be grouped into two pairs of adjacent equal-length sides. A kite can be constructed from the centers and crossing points of any two intersecting circles. Kites as described here may be either convex or concave, although some sources restrict kite to mean only convex kites. A quadrilateral is a kite if and only if any one of the following conditions is true: The four sides can be split into two pairs of adjacent equal-length sides. 
One diagonal crosses the midpoint of the other diagonal at a right angle, forming its perpendicular bisector. (In the concave case, the line through one of the diagonals bisects the other.) One diagonal is a line of symmetry. It divides the quadrilateral into two congruent triangles that are mirror images of each other. One diagonal bisects both of the angles at its two ends. Kite quadrilaterals are named for the wind-blown, flying kites, which often have this shape and which are in turn named for a hovering bird and the sound it makes. According to Olaus Henrici, the name "kite" was given to these shapes by James Joseph Sylvester. Quadrilaterals can be classified hierarchically, meaning that some classes of quadrilaterals include other classes, or partitionally, meaning that each quadrilateral is in only one class. Classified hierarchically, kites include the rhombi (quadrilaterals with four equal sides) and squares. All equilateral kites are rhombi, and all equiangular kites are squares. When classified partitionally, rhombi and squares would not
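The side-pairing condition in the definition above (two pairs of adjacent equal-length sides) can be checked numerically; the function name and vertex convention are mine, for illustration only:

```python
import math

def is_kite(quad, tol=1e-9):
    """True if quadrilateral quad = [p0, p1, p2, p3] (vertices in cyclic
    order) has two pairs of adjacent equal-length sides."""
    s = [math.dist(quad[i], quad[(i + 1) % 4]) for i in range(4)]
    eq = lambda a, b: abs(a - b) <= tol
    # Either sides (s0, s1) and (s2, s3) pair up, or (s1, s2) and (s3, s0) do.
    return (eq(s[0], s[1]) and eq(s[2], s[3])) or \
           (eq(s[1], s[2]) and eq(s[3], s[0]))

assert is_kite([(0, 3), (2, 0), (0, -1), (-2, 0)])    # classic kite shape
assert is_kite([(1, 0), (0, 1), (-1, 0), (0, -1)])    # rhombus, a special case
assert not is_kite([(0, 0), (3, 0), (3, 1), (0, 1)])  # non-square rectangle
```

Note that the rhombus test passing reflects the hierarchical classification discussed above, under which rhombi (and squares) count as kites.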
https://en.wikipedia.org/wiki/HCN
HCN may refer to:

Science and mathematics
HCN channel, a cellular ion channel
Highly composite number, a type of integer
Hydrogen cyanide

Transportation
Halcyonair, a Cape Verdean airline
Headcorn railway station, in England
Hengchun Airport, in Taiwan

Other
Health Communication Network, an Australian software company
High Country News, an American newspaper
https://en.wikipedia.org/wiki/Lah%20number
In mathematics, the (signed and unsigned) Lah numbers are coefficients expressing rising factorials in terms of falling factorials and vice versa. They were discovered by Ivo Lah in 1954. Explicitly, the unsigned Lah numbers are given by the formula involving the binomial coefficient
$L(n,k) = \binom{n-1}{k-1} \frac{n!}{k!}$
for $n \geq k \geq 1$; the signed Lah numbers are $L'(n,k) = (-1)^n L(n,k)$. Unsigned Lah numbers have an interesting meaning in combinatorics: they count the number of ways a set of $n$ elements can be partitioned into $k$ nonempty linearly ordered subsets. Lah numbers are related to Stirling numbers. For $n \geq 1$, the Lah number $L(n, 1)$ is equal to the factorial $n!$: in the interpretation above, the only partition of $\{1, 2, 3\}$ into 1 set can have its set ordered in 6 ways: $(1,2,3)$, $(1,3,2)$, $(2,1,3)$, $(2,3,1)$, $(3,1,2)$, $(3,2,1)$. $L(3, 2)$ is equal to 6, because there are six partitions of $\{1, 2, 3\}$ into two ordered parts: $\{1,2\}\{3\}$, $\{2,1\}\{3\}$, $\{1,3\}\{2\}$, $\{3,1\}\{2\}$, $\{2,3\}\{1\}$, $\{3,2\}\{1\}$. $L(n, n)$ is always 1 because the only way to partition $\{1, \ldots, n\}$ into $n$ non-empty subsets results in subsets of size 1, that can only be permuted in one way. In the more recent literature, Karamata–Knuth style notation has taken over. Lah numbers are now often written as $L(n,k) = \left\lfloor {n \atop k} \right\rfloor$. Table of values Below is a table of values for the Lah numbers:

n \ k   0      1      2      3      4     5    6
0       1
1       0      1
2       0      2      1
3       0      6      6      1
4       0     24     36     12      1
5       0    120    240    120     20     1
6       0    720   1800   1200    300    30    1

The row sums are $1, 1, 3, 13, 73, 501, 4051, \ldots$ (sequence A000262 in the OEIS). Rising and falling factorials Let $x^{(n)}$ represent the rising factorial $x(x+1)(x+2) \cdots (x+n-1)$ and let $x_{(n)}$ represent the falling factorial $x(x-1)(x-2) \cdots (x-n+1)$. The Lah numbers are the coefficients that express each of these families of polynomials in terms of the other. Explicitly,
$x^{(n)} = \sum_{k=1}^{n} L(n,k)\, x_{(k)}$ and $x_{(n)} = \sum_{k=1}^{n} (-1)^{n-k} L(n,k)\, x^{(k)}.$
For example, $x^{(3)} = x(x+1)(x+2) = 6x + 6x(x-1) + x(x-1)(x-2)$, where the coefficients 6, 6, and 1 are exactly the Lah numbers $L(3,1)$, $L(3,2)$, and $L(3,3)$. Identities and relations The Lah numbers satisfy a variety of identities and relations. In Karamata–Knuth notation for Stirling numbers,
$L(n,k) = \sum_{j=k}^{n} \left[ {n \atop j} \right] \left\{ {j \atop k} \right\},$
where $\left[ {n \atop j} \right]$ are the Stirling numbers of the first kind and $\left\{ {j \atop k} \right\}$ are the Stirling numbers of the second kind. Also, $L(n,k) = \binom{n}{k} \frac{(n-1)!}{(k-1)!}$, for $n \geq k \geq 1$. Recurrence relations The Lah numbers satisfy the recurrence relations
$L(n+1,k) = (n+k)\, L(n,k) + L(n,k-1),$
where $L(n,0) = \delta_n$, the Kronecker delta, and $L(n,k) = 0$ for all $k > n$.
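The closed formula and the recurrence above can be cross-checked with a few lines of code (the function name is mine):

```python
from math import comb, factorial

def lah(n, k):
    """Unsigned Lah number L(n, k) = C(n-1, k-1) * n! / k!."""
    if n == k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

# L(3, 1) = 3! and L(3, 2) = 6, matching the partition counts above.
assert lah(3, 1) == 6 and lah(3, 2) == 6 and lah(3, 3) == 1

# Recurrence: L(n+1, k) = (n + k) L(n, k) + L(n, k - 1).
assert all(lah(n + 1, k) == (n + k) * lah(n, k) + lah(n, k - 1)
           for n in range(8) for k in range(1, n + 2))
```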
Exponential generating function For fixed $k$, the unsigned Lah numbers have the exponential generating function
$\sum_{n \geq k} L(n,k)\, \frac{x^n}{n!} = \frac{1}{k!} \left( \frac{x}{1-x} \right)^k.$
Ordinary generating function Derivative of exp(1/x) The n-th derivative of the function $e^{1/x}$ can be expressed with the Lah numbers, as follows:
$\frac{d^n}{dx^n}\, e^{1/x} = (-1)^n \sum_{k=1}^{n} \frac{L(n,k)}{x^{n+k}}\, e^{1/x}.$
For example, $\frac{d^2}{dx^2}\, e^{1/x} = \left( \frac{2}{x^3} + \frac{1}{x^4} \right) e^{1/x}$. Link to Laguerre polynomials Generalized Laguerre polynomials $L_n^{(\alpha)}(x)$ are linked to Lah numbers upon setting $\alpha = -1$:
$n!\, L_n^{(-1)}(-x) = \sum_{k=0}^{n} L(n,k)\, x^k.$
This formula is the default Laguerre polynomial in Umbral calculus convention. Practical application In recent years, Lah numbers have been used in steganography for hiding data in images. Compared to alternatives such as the DCT, DFT and DWT, this approach has lower computational complexity for the calculation of their integer coefficients. The Lah and Laguerre transforms naturally arise in the perturbative description of chromatic dispersion. In Lah–Laguerre optics, such an approach tremendously speeds up optimization problems. See also Stirling numbers Pascal matrix
https://en.wikipedia.org/wiki/Winding%20number
In mathematics, the winding number or winding index of a closed curve in the plane around a given point is an integer representing the total number of times that curve travels counterclockwise around the point, i.e., the curve's number of turns. For certain open plane curves, the number of turns may be non-integer. The winding number depends on the orientation of the curve, and it is negative if the curve travels around the point clockwise. Winding numbers are fundamental objects of study in algebraic topology, and they play an important role in vector calculus, complex analysis, geometric topology, differential geometry, and physics (such as in string theory). Intuitive description Suppose we are given a closed, oriented curve in the xy plane. We can imagine the curve as the path of motion of some object, with the orientation indicating the direction in which the object moves. Then the winding number of the curve is equal to the total number of counterclockwise turns that the object makes around the origin. When counting the total number of turns, counterclockwise motion counts as positive, while clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise, and then circles the origin once clockwise, then the total winding number of the curve is three. Using this scheme, a curve that does not travel around the origin at all has winding number zero, while a curve that travels clockwise around the origin has negative winding number. Therefore, the winding number of a curve may be any integer. The following pictures show curves with winding numbers between −2 and 3: Formal definition Let $\gamma : [0,1] \to \mathbb{R}^2 \setminus \{x_0\}$ be a continuous closed path on the plane minus one point $x_0$. The winding number of $\gamma$ around $x_0$ is the integer
$\operatorname{wind}(\gamma, x_0) = \frac{\theta(1) - \theta(0)}{2\pi},$
where $(r, \theta)$ is the path $\gamma$ written in polar coordinates centered at $x_0$, i.e.
the lifted path through the covering map $p : \mathbb{R}_{>0} \times \mathbb{R} \to \mathbb{R}^2 \setminus \{0\},\ (\rho, \varphi) \mapsto (\rho \cos \varphi, \rho \sin \varphi)$. The winding number is well defined because of the existence and uniqueness of the lifted path (given the starting point in the covering space) and because all the fibers of $p$ are of the form $\{\rho\} \times (\varphi + 2\pi\mathbb{Z})$ (so the above expression does not depend on the choice of the starting point). It is an integer because the path is closed. Alternative definitions Winding number is often defined in different ways in various parts of mathematics. All of the definitions below are equivalent to the one given above: Alexander numbering A simple combinatorial rule for defining the winding number was proposed by August Ferdinand Möbius in 1865 and again independently by James Waddell Alexander II in 1928. Any curve partitions the plane into several connected regions, one of which is unbounded. The winding numbers of the curve around two points in the same region are equal. The winding number around (any point in) the unbounded region is zero. Finally, the winding numbers for any two adjacent regions differ by exactly 1; the region with the larger winding number appears on the left side of the curve (with respect to motion down
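The polar-angle definition can be discretized for a closed polygonal path: sum the signed angles subtended at the point by successive vertices and divide by 2π. A small sketch (names are mine):

```python
import math

def winding_number(vertices, point):
    """Winding number of the closed polygonal path through `vertices`
    around `point`, by summing signed turning angles (a discrete form
    of (theta(1) - theta(0)) / (2 pi))."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1 = vertices[i][0] - point[0]
        y1 = vertices[i][1] - point[1]
        x2 = vertices[(i + 1) % n][0] - point[0]
        y2 = vertices[(i + 1) % n][1] - point[1]
        # Signed angle in (-pi, pi] between successive position vectors.
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # counterclockwise
assert winding_number(square, (0, 0)) == 1      # origin inside: one CCW turn
assert winding_number(square[::-1], (0, 0)) == -1
assert winding_number(square, (5, 0)) == 0      # point outside
```

The three assertions reproduce the behavior described above: positive for counterclockwise loops, negated by reversing orientation, and zero when the point lies in the unbounded region.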
https://en.wikipedia.org/wiki/Dann
Dann is an English surname. It is a toponymic surname which came from Middle English and Old English , "valley". Variant spellings include Dan and Dane. According to statistics compiled by Patrick Hanks on the basis of the 2011 United Kingdom census and the Census of Ireland 2011, 2,666 people on the island of Great Britain and 54 on the island of Ireland bore the surname Dann as of 2011. In the 1881 United Kingdom census there had been 1,858 people with the surname Dann, primarily at Kent, Sussex, London, and Norfolk. The 2010 United States Census found 3,735 people with the surname Dann, making it the 8,775th-most-common name in the country. This represented a decrease from 4,062 (7,550th most-common) in the 2000 Census. In both censuses, nearly nine-tenths of the bearers of the surname identified as non-Hispanic white, and about six percent as non-Hispanic Black or African American. People with this surname include:

Artists and musicians
Hollis Dann (1861–1939), American music educator and choral director
Stan Dann (1931–2008), American wood carver
Georgie Dann (1940–2021), French singer
Larry Dann (born 1941), British film and television actor
Steven Dann (born 1953), Canadian violist
Penny Dann (1964–2014), British children's book illustrator
Sophie-Louise Dann (born 1969), British musical theatre actress
Lance Dann (), British sound and radio artist

Sportspeople
Reg Dann (1916–1948), English football midfielder
Gordon Dann (born 1944), Australian rules footballer
Donald Dann (1949–2005), Australian Paralympic athlete and table tennis player
Kevin Dann (1958–2021), Australian rugby league footballer
Scott Dann (boxer) (born 1974), English amateur boxer
Scott Dann (born 1987), English football centre-back
Thomas Dann (born 1981), English cricketer
Walter Dann (), Canadian Paralympic athlete

Writers
George Landen Dann (1904–1977), Australian playwright, writer, and draftsman
Colin Dann (born 1943), British writer of children's books
Jack Dann (born 1945), American science fiction writer
Trevor Dann (born 1951), British writer and broadcaster
Patty Dann (born 1953), American novelist and nonfiction writer

Others
Christian Adam Dann (1758–1837), German Lutheran pastor
Wallace Dann (1847–1934), American local politician in Norwalk, Connecticut
Alf Dann (1893–1953), British trade union leader
Belinda Dann (1900–2007), Indigenous Australian woman known as a member of the Stolen Generation
Bob Dann (1914–2008), Anglican Archbishop of Melbourne, Australia
Michael Dann (1921–2016), American television executive
Mary Dann and Carrie Dann (respectively 1923–2005 and 1931–2021), Native American activists
Laurie Dann (1957–1988), American murderer
Marc Dann (born 1962), American politician
Tim Dann, British voice actor

See also
Dan (disambiguation)
Danu (disambiguation)
Wimm-Bill-Dann Foods, a Russian food producer

References

English-language surnames
https://en.wikipedia.org/wiki/Glottochronology
Glottochronology (from Attic Greek γλῶττα tongue, language and χρόνος time) is the part of lexicostatistics which involves comparative linguistics and deals with the chronological relationship between languages. The idea was developed by Morris Swadesh in the 1950s in his article on Salish internal relationships. He developed the idea under two assumptions: there indeed exists a relatively stable basic vocabulary (referred to as Swadesh lists) in all languages of the world; and, any replacements happen in a way analogous to radioactive decay, in a constant percentage per time elapsed. Using mathematics and statistics, Swadesh developed an equation to determine when languages separated and to give an approximate time for when the separation occurred. His methods aimed to aid linguistic anthropologists by giving them a definitive way to determine a separation date between two languages. The formula provides an approximate number of centuries since two languages were supposed to have separated from a single common ancestor. His methods also purported to provide information on when ancient languages may have existed. Despite the many studies of and extensive literature on glottochronology, it is not widely used today and is surrounded by controversy. Glottochronology tracks language separation from thousands of years ago, but many linguists are skeptical of the concept because it delivers a 'probability' rather than a 'certainty.' On the other hand, some linguists may say that glottochronology is gaining traction because of its relatedness to archaeological dates. Glottochronology is not as accurate as archaeological data, but some linguists still believe that it can provide a solid estimate. Over time many different extensions of the Swadesh method evolved; however, Swadesh's original method is so well known that 'glottochronology' is usually associated with him.
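Swadesh's separation formula is usually stated as t = log c / (2 log r), where c is the proportion of cognates shared by the two languages and r is the assumed retention rate of the basic vocabulary per millennium (commonly quoted as about 86% for the 100-word list). The formula's shape and the calibration constant are standard statements of the method, not taken from this article; a sketch:

```python
import math

def separation_millennia(c, r=0.86):
    """Estimated millennia since two languages diverged, given the
    fraction c of shared cognates on the basic-vocabulary list and an
    assumed per-millennium retention rate r (calibration constant)."""
    return math.log(c) / (2 * math.log(r))

# Two languages sharing 74% cognates on the 100-word list are estimated
# to have separated roughly one millennium ago.
t = separation_millennia(0.74)
```

The controversy noted above attaches precisely to the constants here: the method's estimate is only as good as the assumption that r is uniform across languages and epochs.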
Methodology Word list The original method of glottochronology presumed that the core vocabulary of a language is replaced at a constant (or constant average) rate across all languages and cultures and so can be used to measure the passage of time. The process makes use of a list of lexical terms and morphemes which are similar to multiple languages. Lists were compiled by Morris Swadesh and assumed to be resistant against borrowing (originally designed in 1952 as a list of 200 items, but the refined 100-word list in Swadesh (1955) is much more common among modern day linguists). The core vocabulary was designed to encompass concepts common to every human language such as personal pronouns, body parts, heavenly bodies and living beings, verbs of basic actions, numerals, basic adjectives, kin terms, and natural occurrences and events. Through a basic word list, one eliminates concepts that are specific to a particular culture or time period. It has been found through differentiating word lists that the ideal is really impossible and that the meaning set may need to be tail
https://en.wikipedia.org/wiki/Snake%20lemma
The snake lemma is a tool used in mathematics, particularly homological algebra, to construct long exact sequences. The snake lemma is valid in every abelian category and is a crucial tool in homological algebra and its applications, for instance in algebraic topology. Homomorphisms constructed with its help are generally called connecting homomorphisms. Statement In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram: where the rows are exact sequences and 0 is the zero object. Then there is an exact sequence relating the kernels and cokernels of a, b, and c: ker a → ker b → ker c → coker a → coker b → coker c, where d, the map from ker c to coker a, is a homomorphism known as the connecting homomorphism. Furthermore, if the morphism f is a monomorphism, then so is the induced morphism ker a → ker b, and if g' is an epimorphism, then so is the induced morphism coker b → coker c. The cokernels here are: coker a = A'/im a, coker b = B'/im b, coker c = C'/im c. Explanation of the name To see where the snake lemma gets its name, expand the diagram above as follows: and then the exact sequence that is the conclusion of the lemma can be drawn on this expanded diagram in the reversed "S" shape of a slithering snake. Construction of the maps The maps between the kernels and the maps between the cokernels are induced in a natural manner by the given (horizontal) maps because of the diagram's commutativity. The exactness of the two induced sequences follows in a straightforward way from the exactness of the rows of the original diagram. The important statement of the lemma is that a connecting homomorphism d exists which completes the exact sequence. In the case of abelian groups or modules over some ring, the map d can be constructed as follows: Pick an element x in ker c and view it as an element of C; since g is surjective, there exists y in B with g(y) = x. Because of the commutativity of the diagram, we have g'(b(y)) = c(g(y)) = c(x) = 0 (since x is in the kernel of c), and therefore b(y) is in the kernel of g' . 
Since the bottom row is exact, we find an element z in A' with f '(z) = b(y). z is unique by injectivity of f '. We then define d(x) = z + im(a). Now one has to check that d is well-defined (i.e., d(x) only depends on x and not on the choice of y), that it is a homomorphism, and that the resulting long sequence is indeed exact. One may routinely verify the exactness by diagram chasing (see the proof of Lemma 9.1 in ). Once that is done, the theorem is proven for abelian groups or modules over a ring. For the general case, the argument may be rephrased in terms of properties of arrows and cancellation instead of elements. Alternatively, one may invoke Mitchell's embedding theorem. Naturality In the applications, one often needs to show that long exact sequences are "natural" (in the sense of natural transformations). This follows from the naturality of the sequence produced by the snake lemma. If is a commutative diagram with exact rows, then the snake lemma can be applied twice, to the "front" and to the "back",
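In symbols, with the rows A → B → C → 0 and 0 → A' → B' → C', the conclusion of the lemma can be written as the exact sequence:

```latex
\ker a \longrightarrow \ker b \longrightarrow \ker c
  \overset{d}{\longrightarrow}
  \operatorname{coker} a \longrightarrow \operatorname{coker} b \longrightarrow \operatorname{coker} c
```

When f is a monomorphism a 0 may be appended on the left (the first kernel map is then monic), and when g' is an epimorphism a 0 may be appended on the right, giving the full eight-term form.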
https://en.wikipedia.org/wiki/Connectedness
In mathematics, connectedness is used to refer to various properties meaning, in some sense, "all one piece". When a mathematical object has such a property, we say it is connected; otherwise it is disconnected. When a disconnected object can be split naturally into connected pieces, each piece is usually called a component (or connected component). Connectedness in topology A topological space is said to be connected if it is not the union of two disjoint nonempty open sets. A set is open if it contains no point lying on its boundary; thus, in an informal, intuitive sense, the fact that a space can be partitioned into disjoint open sets suggests that the boundary between the two sets is not part of the space, and thus splits it into two separate pieces. Other notions of connectedness Fields of mathematics are typically concerned with special kinds of objects. Often such an object is said to be connected if, when it is considered as a topological space, it is a connected space. Thus, manifolds, Lie groups, and graphs are all called connected if they are connected as topological spaces, and their components are the topological components. Sometimes it is convenient to restate the definition of connectedness in such fields. For example, a graph is said to be connected if each pair of vertices in the graph is joined by a path. This definition is equivalent to the topological one, as applied to graphs, but it is easier to deal with in the context of graph theory. Graph theory also offers a context-free measure of connectedness, called the clustering coefficient. Other fields of mathematics are concerned with objects that are rarely considered as topological spaces. Nonetheless, definitions of connectedness often reflect the topological meaning in some way. For example, in category theory, a category is said to be connected if each pair of objects in it is joined by a sequence of morphisms. Thus, a category is connected if it is, intuitively, all one piece. 
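The graph-theoretic definition mentioned above can be checked algorithmically: a finite graph is connected exactly when a search started at any one vertex reaches every other vertex. A minimal sketch with breadth-first search (the adjacency lists are made-up examples):

```python
from collections import deque

def is_connected(adj):
    """adj: dict mapping each vertex to an iterable of its neighbours.
    Every vertex must appear as a key, even if it has no neighbours."""
    if not adj:
        return True  # the empty graph, by convention
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    # connected iff the search reached every vertex
    return len(seen) == len(adj)

print(is_connected({1: [2], 2: [1, 3], 3: [2]}))  # True: a path 1-2-3
print(is_connected({1: [2], 2: [1], 3: []}))      # False: 3 is isolated
```

The components of a disconnected graph are exactly the sets `seen` produced by restarting the search from each not-yet-visited vertex.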
There may be different notions of connectedness that are intuitively similar, but different as formally defined concepts. We might wish to call a topological space connected if each pair of points in it is joined by a path. However this condition turns out to be stronger than standard topological connectedness; in particular, there are connected topological spaces for which this property does not hold. Because of this, different terminology is used; spaces with this property are said to be path connected. While not all connected spaces are path connected, all path connected spaces are connected. Terms involving connected are also used for properties that are related to, but clearly different from, connectedness. For example, a path-connected topological space is simply connected if each loop (path from a point to itself) in it is contractible; that is, intuitively, if there is essentially only one way to get from any point to any other point. Thus, a sphere and a disk are each simply connecte
https://en.wikipedia.org/wiki/Naive%20Bayes%20classifier
In statistics, naive Bayes classifiers are a family of linear "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve high accuracy levels. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method. Introduction Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. There is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable. For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of any possible correlations between the color, roundness, and diameter features. 
In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods. Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification. Probabilistic model Abstractly, naive Bayes is a conditional probability model: it assigns probabilities for each of the possible outcomes or classes given a problem instance to be classified, represented by a vector encoding some features (independent variables). The pr
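As a sketch of the probabilistic model, here is a from-scratch Gaussian naive Bayes classifier with maximum-likelihood parameter estimates: each class gets a prior, and each feature is modeled independently given the class. The two features and the toy data are invented for illustration:

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (feature_vector, label) pairs.
    Returns per-class (prior, feature means, feature variances),
    all estimated by maximum likelihood."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    n = len(samples)
    model = {}
    for y, xs in by_class.items():
        means = [sum(col) / len(xs) for col in zip(*xs)]
        variances = [sum((v - m) ** 2 for v in col) / len(xs)
                     for col, m in zip(zip(*xs), means)]
        model[y] = (len(xs) / n, means, variances)
    return model

def predict(model, x):
    def log_gauss(v, m, var):
        return -0.5 * (math.log(2 * math.pi * var) + (v - m) ** 2 / var)
    def score(y):  # log prior + sum of per-feature log-likelihoods
        prior, means, variances = model[y]
        return math.log(prior) + sum(
            log_gauss(v, m, var) for v, m, var in zip(x, means, variances))
    return max(model, key=score)

# toy data: (diameter in cm, redness in [0, 1]) -> fruit
samples = [([9.5, 0.9], "apple"), ([10.2, 0.8], "apple"),
           ([4.0, 0.2], "lime"), ([4.5, 0.3], "lime")]
model = train(samples)
print(predict(model, [10.0, 0.85]))  # apple
```

Working in log space, as here, avoids underflow when many features are multiplied; a production implementation would also smooth zero variances.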
https://en.wikipedia.org/wiki/Graph%20of%20a%20function
In mathematics, the graph of a function f is the set of ordered pairs (x, f(x)), where x ranges over the domain of f. In the common case where x and f(x) are real numbers, these pairs are Cartesian coordinates of points in two-dimensional space and thus form a subset of this plane. In the case of functions of two variables, that is functions whose domain consists of pairs (x, y), the graph usually refers to the set of ordered triples (x, y, f(x, y)) instead of the pairs ((x, y), f(x, y)) as in the definition above. This set is a subset of three-dimensional space; for a continuous real-valued function of two real variables, it is a surface. In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes. In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see Plot (graphics) for details. A graph of a function is a special case of a relation. In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph. However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also of which set is the domain and which set is the codomain. For example, to say that a function is onto (surjective) or not, the codomain should be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms function and graph of a function since even if considered the same object, they indicate viewing it from a different perspective. Definition Given a mapping f : X → Y, in other words a function f together with its domain X and codomain Y, the graph of the mapping is the set G(f) = {(x, f(x)) : x ∈ X}, which is a subset of X × Y. In the abstract definition of a function, f is actually equal to G(f). One can observe that, if f : X1 × X2 → Y, then the graph G(f) is a subset of X1 × X2 × Y (strictly speaking it is (X1 × X2) × Y, but one can embed it with the natural isomorphism). 
Examples Functions of one variable The graph of the function defined by is the subset of the set. From the graph, the domain is recovered as the set of first components of each pair in the graph. Similarly, the range can be recovered as the set of second components. The codomain, however, cannot be determined from the graph alone. The graph of the cubic polynomial on the real line is If this set is plotted on a Cartesian plane, the result is a curve (see figure). Functions of two variables The graph of the trigonometric function is If this set is plotted on a three-dimensional Cartesian coordinate system, the result is a surface (see figure). Often it is helpful to show, along with the graph, the gradient of the function and several level curves. The level curves can be mapped on the function surface or can be projected on the bottom plane. The second figure shows such a drawing of the graph of the function: See also Asymptote Chart Concave function Convex function Contour plot Critical point Derivative Epigraph Normal to a graph Slope Stationary point Tetraview Vertical translation y-intercept Referen
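The recoverability claims above are easy to see with a finite domain; a small sketch (the cubic and the domain are made-up examples):

```python
def graph(f, domain):
    """The graph of f restricted to a finite domain: a set of ordered pairs."""
    return {(x, f(x)) for x in domain}

g = graph(lambda x: x ** 3, range(-2, 3))
# the domain and range are recoverable from the graph...
domain = {x for x, _ in g}   # {-2, -1, 0, 1, 2}
range_ = {y for _, y in g}   # {-8, -1, 0, 1, 8}
# ...but nothing in g says whether the codomain was the integers or the reals
```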
https://en.wikipedia.org/wiki/Additive%20function
In number theory, an additive function is an arithmetic function f(n) of the positive integer variable n such that whenever a and b are coprime, the function applied to the product ab is the sum of the values of the function applied to a and b: f(ab) = f(a) + f(b). Completely additive An additive function f(n) is said to be completely additive if f(ab) = f(a) + f(b) holds for all positive integers a and b, even when they are not coprime. Totally additive is also used in this sense by analogy with totally multiplicative functions. If f is a completely additive function then f(1) = 0. Every completely additive function is additive, but not vice versa. Examples Examples of arithmetic functions which are completely additive are: The restriction of the logarithmic function to the positive integers. The multiplicity of a prime factor p in n, that is the largest exponent m for which p^m divides n. a0(n) – the sum of primes dividing n counting multiplicity, sometimes called sopfr(n), the potency of n or the integer logarithm of n. For example: a0(4) = 2 + 2 = 4 a0(20) = a0(2^2 · 5) = 2 + 2 + 5 = 9 a0(27) = 3 + 3 + 3 = 9 a0(144) = a0(2^4 · 3^2) = a0(2^4) + a0(3^2) = 8 + 6 = 14 a0(2000) = a0(2^4 · 5^3) = a0(2^4) + a0(5^3) = 8 + 15 = 23 a0(2003) = 2003 a0(54,032,858,972,279) = 1240658 a0(54,032,858,972,302) = 1780417 a0(20,802,650,704,327,415) = 1240681 The function Ω(n), defined as the total number of prime factors of n, counting multiple factors multiple times, sometimes called the "Big Omega function". For example: Ω(1) = 0, since 1 has no prime factors Ω(4) = 2 Ω(16) = Ω(2·2·2·2) = 4 Ω(20) = Ω(2·2·5) = 3 Ω(27) = Ω(3·3·3) = 3 Ω(144) = Ω(2^4 · 3^2) = Ω(2^4) + Ω(3^2) = 4 + 2 = 6 Ω(2000) = Ω(2^4 · 5^3) = Ω(2^4) + Ω(5^3) = 4 + 3 = 7 Ω(2001) = 3 Ω(2002) = 4 Ω(2003) = 1 Ω(54,032,858,972,279) = Ω(11 ⋅ 1993^2 ⋅ 1236661) = 4; Ω(54,032,858,972,302) = Ω(2 ⋅ 7^2 ⋅ 149 ⋅ 2081 ⋅ 1778171) = 6 Ω(20,802,650,704,327,415) = Ω(5 ⋅ 7 ⋅ 11^2 ⋅ 1993^2 ⋅ 1236661) = 7. 
Examples of arithmetic functions which are additive but not completely additive are: ω(n), defined as the total number of distinct prime factors of n. For example: ω(4) = 1 ω(16) = ω(2^4) = 1 ω(20) = ω(2^2 · 5) = 2 ω(27) = ω(3^3) = 1 ω(144) = ω(2^4 · 3^2) = ω(2^4) + ω(3^2) = 1 + 1 = 2 ω(2000) = ω(2^4 · 5^3) = ω(2^4) + ω(5^3) = 1 + 1 = 2 ω(2001) = 3 ω(2002) = 4 ω(2003) = 1 ω(54,032,858,972,279) = 3 ω(54,032,858,972,302) = 5 ω(20,802,650,704,327,415) = 5 a1(n) – the sum of the distinct primes dividing n, sometimes called sopf(n). For example: a1(1) = 0 a1(4) = 2 a1(20) = 2 + 5 = 7 a1(27) = 3 a1(144) = a1(2^4 · 3^2) = a1(2^4) + a1(3^2) = 2 + 3 = 5 a1(2000) = a1(2^4 · 5^3) = a1(2^4) + a1(5^3) = 2 + 5 = 7 a1(2001) = 55 a1(2002) = 33 a1(2003) = 2003 a1(54,032,858,972,279) = 1238665 a1(54,032,858,972,302) = 1780410 a1(20,802,650,704,327,415) = 1238677 Multiplicative functions From any additive function f(n) it is possible to create a related multiplicative function g(n), that is a function with the property that whenever a and b are coprime then: g(ab) = g(a) · g(b). One such example is g(n) = 2^f(n). Summatory functions Given an additive function f, let
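The additivity properties of Ω and ω can be checked numerically; a sketch using trial-division factorization:

```python
def prime_factors(n):
    """Prime factorization of n as a list of (prime, exponent) pairs."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def big_omega(n):      # Omega(n): completely additive
    return sum(e for _, e in prime_factors(n))

def little_omega(n):   # omega(n): additive, but not completely additive
    return len(prime_factors(n))

# additivity on coprime arguments: 144 = 16 * 9, gcd(16, 9) = 1
assert big_omega(144) == big_omega(16) + big_omega(9) == 6
assert little_omega(144) == little_omega(16) + little_omega(9) == 2
# complete additivity fails for omega on non-coprime arguments: 16 = 4 * 4
assert little_omega(16) != little_omega(4) + little_omega(4)
```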
https://en.wikipedia.org/wiki/List%20of%20U.S.%20states%20and%20territories%20by%20population
The states and territories included in the United States Census Bureau's statistics for the United States population, ethnicity, religion, and most other categories include the 50 states and Washington, D.C. Separate statistics are maintained for the five permanently inhabited territories of the United States: Puerto Rico, Guam, the U.S. Virgin Islands, American Samoa, and the Northern Mariana Islands. As of April 1, 2010, the date of the 2010 United States Census, the nine most populous U.S. states contain slightly more than half of the total population. The 25 least populous states contain less than one-sixth of the total population. California, the most populous state, contains more people than the 21 least populous states combined, and Wyoming, the least populous state, has a population less than any of the 31 most populous U.S. cities. Method The United States Census counts the persons residing in the United States including citizens, non-citizen permanent residents and non-citizen long-term visitors. Civilian and military federal employees serving abroad and their dependents are counted in their home state. Electoral apportionment Every 10 years, the U.S. Census Bureau is charged with making an actual count of all residents by state and territory. The accuracy of this count is then tested after the fact, and sometimes statistically significant undercounts or overcounts occur. For example, for the 2020 decennial census, 14 states had significant miscounts ranging from 1.5% to 6.6%. While these adjustments may be reflected in government programs over the following decade, the 10-year representative apportionments discussed below are not changed to reflect the miscount. 
House of Representatives Based on this decennial census, each state is allocated a portion of the 435 fixed seats in the United States House of Representatives (until the early 20th century, the apportionment process generally increased the size of the House based on the results of the census until the size of the House was capped by the Reapportionment Act of 1929), with each state guaranteed at least one Representative. The allocation is based on each state's proportion of the combined population of the fifty states (not including the District of Columbia, Guam, American Samoa, the Northern Mariana Islands, Puerto Rico, or the United States Virgin Islands). Electoral College The Electoral College, every four years, elects the President and Vice President of the United States based on the popular vote in each state and the District of Columbia. Each state's number of votes in the Electoral College is equal to its number of members in the Senate plus members in the House of Representatives. The Twenty-third Amendment to the United States Constitution additionally grants the District of Columbia (D.C.), which is not part of any state, as many Electoral College votes as it would have if it were a state, while having no more votes than the least populous state (currentl
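The seat allocation described above is carried out today by the Huntington–Hill "method of equal proportions" (in use since the 1940 census, though not named in the text): each state first receives one seat, and the remaining seats go one at a time to the state with the highest priority value pop/√(n(n+1)), where n is its current seat count. A sketch with invented populations:

```python
import heapq
from math import sqrt

def apportion(populations, seats):
    """Huntington-Hill apportionment: every state gets 1 seat, then each
    remaining seat goes to the state with the highest priority value
    pop / sqrt(n * (n + 1)), n being that state's current seat count."""
    alloc = {state: 1 for state in populations}
    # max-heap via negated priorities
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        alloc[state] += 1
        n = alloc[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return alloc

# hypothetical mini-country: 10 seats among three states
print(apportion({"A": 6_000_000, "B": 3_000_000, "C": 1_000_000}, 10))
# -> {'A': 6, 'B': 3, 'C': 1}
```

The square-root divisor is what distinguishes this method from the older Webster and Jefferson divisor methods; it slightly favors smaller states near the rounding boundary.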
https://en.wikipedia.org/wiki/G.%20H.%20Hardy
Godfrey Harold Hardy (7 February 1877 – 1 December 1947) was an English mathematician, known for his achievements in number theory and mathematical analysis. In biology, he is known for the Hardy–Weinberg principle, a basic principle of population genetics. G. H. Hardy is usually known by those outside the field of mathematics for his 1940 essay A Mathematician's Apology, often considered one of the best insights into the mind of a working mathematician written for the layperson. Starting in 1914, Hardy was the mentor of the Indian mathematician Srinivasa Ramanujan, a relationship that has become celebrated. Hardy almost immediately recognised Ramanujan's extraordinary albeit untutored brilliance, and Hardy and Ramanujan became close collaborators. In an interview by Paul Erdős, when Hardy was asked what his greatest contribution to mathematics was, Hardy unhesitatingly replied that it was the discovery of Ramanujan. In a lecture on Ramanujan, Hardy said that "my association with him is the one romantic incident in my life". Early life and career G. H. Hardy was born on 7 February 1877, in Cranleigh, Surrey, England, into a teaching family. His father was Bursar and Art Master at Cranleigh School; his mother had been a senior mistress at Lincoln Training College for teachers. Both of his parents were mathematically inclined, though neither had a university education. Hardy's own natural affinity for mathematics was perceptible at an early age. When just two years old, he wrote numbers up to millions, and when taken to church he amused himself by factorising the numbers of the hymns. After schooling at Cranleigh, Hardy was awarded a scholarship to Winchester College for his mathematical work. In 1896, he entered Trinity College, Cambridge. After only two years of preparation under his coach, Robert Alfred Herman, Hardy was fourth in the Mathematics Tripos examination. 
Years later, he sought to abolish the Tripos system, as he felt that it was becoming more an end in itself than a means to an end. While at university, Hardy joined the Cambridge Apostles, an elite, intellectual secret society. Hardy cited as his most important influence his independent study of Cours d'analyse de l'École Polytechnique by the French mathematician Camille Jordan, through which he became acquainted with the more precise mathematics tradition in continental Europe. In 1900 he passed part II of the Tripos, and in the same year he was elected to a Prize Fellowship at Trinity College. In 1903 he earned his M.A., which was the highest academic degree at English universities at that time. When his Prize Fellowship expired in 1906 he was appointed to the Trinity staff as a lecturer in mathematics, where teaching six hours per week left him time for research. In 1919 he left Cambridge to take the Savilian Chair of Geometry (and thus become a Fellow of New College) at Oxford in the aftermath of the Bertrand Russell affair during World War I. Hardy spent the academic year
https://en.wikipedia.org/wiki/Ratio
In mathematics, a ratio shows how many times one number contains another. For example, if there are eight oranges and six lemons in a bowl of fruit, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to the ratio 4:3). Similarly, the ratio of lemons to oranges is 6:8 (or 3:4) and the ratio of oranges to the total amount of fruit is 8:14 (or 4:7). The numbers in a ratio may be quantities of any kind, such as counts of people or objects, or such as measurements of lengths, weights, time, etc. In most contexts, both numbers are restricted to be positive. A ratio may be specified either by giving both constituting numbers, written as "a to b" or "a:b", or by giving just the value of their quotient a/b. Equal quotients correspond to equal ratios. A statement expressing the equality of two ratios is called a proportion. Consequently, a ratio may be considered as an ordered pair of numbers, a fraction with the first number in the numerator and the second in the denominator, or as the value denoted by this fraction. Ratios of counts, given by (non-zero) natural numbers, are rational numbers, and may sometimes be natural numbers. A more specific definition adopted in physical sciences (especially in metrology) for ratio is the dimensionless quotient between two physical quantities measured with the same unit. A quotient of two quantities that are measured with different units may be called a rate. Notation and terminology The ratio of numbers A and B can be expressed as: the ratio of A to B A:B A is to B (when followed by "as C is to D"; see below) a fraction with A as numerator and B as denominator that represents the quotient (i.e., A divided by B, or A/B). This can be expressed as a simple or a decimal fraction, or as a percentage, etc. When a ratio is written in the form A:B, the two-dot character is sometimes the colon punctuation mark. In Unicode, this is U+003A COLON, although Unicode also provides a dedicated ratio character, U+2236 RATIO. 
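Reducing a ratio to an equivalent one in lowest terms, as with 8:6 → 4:3 above, is a matter of dividing both terms by their greatest common divisor:

```python
from math import gcd

def simplify(a, b):
    """Reduce the ratio a:b of positive integers to lowest terms."""
    g = gcd(a, b)
    return a // g, b // g

assert simplify(8, 6) == (4, 3)    # oranges : lemons
assert simplify(6, 8) == (3, 4)    # lemons : oranges
assert simplify(8, 14) == (4, 7)   # oranges : all fruit
```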
The numbers A and B are sometimes called terms of the ratio, with A being the antecedent and B being the consequent. A statement expressing the equality of two ratios A:B and C:D is called a proportion, written as A:B = C:D or A:B∷C:D. This latter form, when spoken or written in the English language, is often expressed as (A is to B) as (C is to D). A, B, C and D are called the terms of the proportion. A and D are called its extremes, and B and C are called its means. The equality of three or more ratios, like A:B = C:D = E:F, is called a continued proportion. Ratios are sometimes used with three or even more terms, e.g., the proportion for the edge lengths of a "two by four" that is ten inches long is therefore 2:4:10 (unplaned measurements; the first two numbers are reduced slightly when the wood is planed smooth); a good concrete mix (in volume units) is sometimes quoted as 1:2:4. For a (rather dry) mixture of 4/1 parts in volume of cement to water, it could be said that the ratio of cement to water is 4:1, th
https://en.wikipedia.org/wiki/Sharkovskii%27s%20theorem
In mathematics, Sharkovskii's theorem (also spelled Sharkovsky's theorem, Sharkovskiy's theorem, Šarkovskii's theorem or Sarkovskii's theorem), named after Oleksandr Mykolayovych Sharkovsky, who published it in 1964, is a result about discrete dynamical systems. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. Statement For some interval I, suppose that f : I → I is a continuous function. The number x is called a periodic point of period m if f^m(x) = x, where f^m denotes the iterated function obtained by composition of m copies of f. The number x is said to have least period m if, in addition, f^k(x) ≠ x for all 0 < k < m. Sharkovskii's theorem concerns the possible least periods of periodic points of f. Consider the following ordering of the positive integers, sometimes called the Sharkovskii ordering. It consists of: the odd numbers in increasing order, 2 times the odd numbers in increasing order, 4 times the odd numbers in increasing order, 8 times the odd numbers, etc., and finally, the powers of two in decreasing order. This ordering is a total order: every positive integer appears exactly once somewhere on this list. However, it is not a well-order. In a well-order, every subset would have an earliest element, but in this order there is no earliest power of two. Sharkovskii's theorem states that if f has a periodic point of least period m, and m precedes n in the above ordering, then f also has a periodic point of least period n. One consequence is that if f has only finitely many periodic points, then they must all have periods that are powers of two. Furthermore, if there is a periodic point of period three, then there are periodic points of all other periods. Sharkovskii's theorem does not state that there are stable cycles of those periods, just that there are cycles of those periods. 
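The ordering can be realized as a sort key: write n = 2^k · m with m odd. Numbers with m > 1 come first, ordered by (k, m), and pure powers of two come last in decreasing order. A sketch:

```python
def sharkovskii_key(n):
    """Sort key realizing the Sharkovskii ordering of positive integers."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    if n > 1:             # n was 2^k * m with m odd and m > 1
        return (0, k, n)  # odds first, then 2*odds, then 4*odds, ...
    return (1, -k, 0)     # n was a pure power of two: descending order

nums = sorted([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 16], key=sharkovskii_key)
# -> [3, 5, 7, 9, 6, 10, 12, 16, 8, 4, 2, 1]
```

The tail `16, 8, 4, 2, 1` makes the non-well-order remark concrete: extending the list with ever larger powers of two pushes elements arbitrarily far forward, so the set of powers of two has no earliest member.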
For systems such as the logistic map, the bifurcation diagram shows a range of parameter values for which apparently the only cycle has period 3. In fact, there must be cycles of all periods there, but they are not stable and therefore not visible on the computer-generated picture. The assumption of continuity is important. Without this assumption, the discontinuous piecewise linear function defined as: for which every value has period 3, would be a counterexample. Similarly essential is the assumption of being defined on an interval. Otherwise , which is defined on real numbers except the one: and for which every non-zero value has period 3, would be a counterexample. Generalizations and related results Sharkovskii also proved the converse theorem: every upper set of the above order is the set of periods for some continuous function from an interval to itself. In fact all such sets of periods are achieved by the family of functions , for , except for the empty set of periods which is achieved by , . On the other hand, with additional information on the
https://en.wikipedia.org/wiki/Safe%20and%20Sophie%20Germain%20primes
In number theory, a prime number p is a Sophie Germain prime if 2p + 1 is also prime. The number 2p + 1 associated with a Sophie Germain prime is called a safe prime. For example, 11 is a Sophie Germain prime and 2 × 11 + 1 = 23 is its associated safe prime. Sophie Germain primes are named after French mathematician Sophie Germain, who used them in her investigations of Fermat's Last Theorem. One attempt by Germain to prove Fermat's Last Theorem was to let p be a prime number of the form 8k + 7 and to let n = p − 1. In this case, x^n + y^n = z^n is unsolvable. Germain's proof, however, remained unfinished. Through her attempts to solve Fermat's Last Theorem, Germain developed a result now known as Germain's Theorem, which states that if p is an odd prime and 2p + 1 is also prime, then in any solution of x^p + y^p = z^p, p must divide x, y, or z. This case where p does not divide x, y, or z is called the first case. Sophie Germain's work was the most progress achieved on Fermat's Last Theorem at that time. Later work by Kummer and others always divided the problem into first and second cases. Sophie Germain primes and safe primes have applications in public key cryptography and primality testing. It has been conjectured that there are infinitely many Sophie Germain primes, but this remains unproven. Individual numbers The first few Sophie Germain primes (those less than 1000) are 2, 3, 5, 11, 23, 29, 41, 53, 83, 89, 113, 131, 173, 179, 191, 233, 239, 251, 281, 293, 359, 419, 431, 443, 491, 509, 593, 641, 653, 659, 683, 719, 743, 761, 809, 911, 953, ... Hence, the first few safe primes are 5, 7, 11, 23, 47, 59, 83, 107, 167, 179, 227, 263, 347, 359, 383, 467, 479, 503, 563, 587, 719, 839, 863, 887, 983, 1019, 1187, 1283, 1307, 1319, 1367, 1439, 1487, 1523, 1619, 1823, 1907, ... In cryptography much larger Sophie Germain primes like 1,846,389,521,368 + 11^600 are required. Two distributed computing projects, PrimeGrid and Twin Prime Search, include searches for large Sophie Germain primes. 
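The defining test is direct to implement; a trial-division sketch that reproduces the start of the lists above:

```python
def is_prime(n):
    """Trial division; fine for small n, not for cryptographic sizes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_sophie_germain(p):
    return is_prime(p) and is_prime(2 * p + 1)

sg = [p for p in range(1000) if is_sophie_germain(p)]
# begins 2, 3, 5, 11, 23, 29, 41, 53, 83, 89, ...
safe = [2 * p + 1 for p in sg]
# begins 5, 7, 11, 23, 47, 59, ...
```

Note that 7 is absent from the first list even though it is prime, because 2 × 7 + 1 = 15 = 3 × 5 is composite.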
Some of the largest known Sophie Germain primes are given in the following table. On 2 Dec 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé, and Paul Zimmermann announced the computation of a discrete logarithm modulo the 240-digit (795 bit) prime RSA-240 + 49204 (the first safe prime above RSA-240) using a number field sieve algorithm; see Discrete logarithm records. Properties There is no special primality test for safe primes the way there is for Fermat primes and Mersenne primes. However, Pocklington's criterion can be used to prove the primality of 2p + 1 once one has proven the primality of p. Just as every term except the last one of a Cunningham chain of the first kind is a Sophie Germain prime, so every term except the first of such a chain is a safe prime. Safe primes ending in 7, that is, of the form 10n + 7, are the last terms in such chains when they occur, since 2(10n + 7) + 1 = 20n + 15 is divisible by 5. Modular restrictions With the
https://en.wikipedia.org/wiki/Curve
In mathematics, a curve (also called a curved line in older texts) is an object similar to a line, but that does not have to be straight. Intuitively, a curve may be thought of as the trace left by a moving point. This is the definition that appeared more than 2000 years ago in Euclid's Elements: "The [curved] line is […] the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width." This definition of a curve has been formalized in modern mathematics as: A curve is the image of an interval to a topological space by a continuous function. In some contexts, the function that defines the curve is called a parametrization, and the curve is a parametric curve. In this article, these curves are sometimes called topological curves to distinguish them from more constrained curves such as differentiable curves. This definition encompasses most curves that are studied in mathematics; notable exceptions are level curves (which are unions of curves and isolated points), and algebraic curves (see below). Level curves and algebraic curves are sometimes called implicit curves, since they are generally defined by implicit equations. Nevertheless, the class of topological curves is very broad, and contains some curves that do not look as one may expect for a curve, or even cannot be drawn. This is the case of space-filling curves and fractal curves. For ensuring more regularity, the function that defines a curve is often supposed to be differentiable, and the curve is then said to be a differentiable curve. A plane algebraic curve is the zero set of a polynomial in two indeterminates. More generally, an algebraic curve is the zero set of a finite set of polynomials, which satisfies the further condition of being an algebraic variety of dimension one. 
If the coefficients of the polynomials belong to a field k, the curve is said to be defined over k. In the common case of a real algebraic curve, where k is the field of real numbers, an algebraic curve is a finite union of topological curves. When complex zeros are considered, one has a complex algebraic curve, which, from the topological point of view, is not a curve, but a surface, and is often called a Riemann surface. Although not being curves in the common sense, algebraic curves defined over other fields have been widely studied. In particular, algebraic curves over a finite field are widely used in modern cryptography. History Interest in curves began long before they were the subject of mathematical study. This can be seen in numerous examples of their decorative use in art and on everyday objects dating back to prehistoric times. Curves, or at least their graphical representations, are simple to create, for example with a stick on the sand on a beach. Historically, the term line was used in place of the more modern term curve.
https://en.wikipedia.org/wiki/Frustum
In geometry, a frustum (plural: frusta or frustums) is the portion of a solid (normally a pyramid or a cone) that lies between two parallel planes cutting the solid. In the case of a pyramid, the base faces are polygonal and the side faces are trapezoidal. A right frustum is a right pyramid or a right cone truncated perpendicularly to its axis; otherwise, it is an oblique frustum. In a truncated cone or truncated pyramid, the truncation plane is necessarily parallel to the cone's base, as in a frustum. If all its edges are forced to become of the same length, then a frustum becomes a prism (possibly oblique or/and with irregular bases). Elements, special cases, and related concepts A frustum's axis is that of the original cone or pyramid. A frustum is circular if it has circular bases; it is right if the axis is perpendicular to both bases, and oblique otherwise. The height of a frustum is the perpendicular distance between the planes of the two bases. Cones and pyramids can be viewed as degenerate cases of frusta, where one of the cutting planes passes through the apex (so that the corresponding base reduces to a point). The pyramidal frusta are a subclass of prismatoids. Two frusta with two congruent bases joined at these congruent bases make a bifrustum. Formulas Volume The formula for the volume of a pyramidal square frustum was introduced in ancient Egyptian mathematics in what is called the Moscow Mathematical Papyrus, written in the 13th dynasty: V = h(a^2 + ab + b^2)/3, where a and b are the base and top side lengths, and h is the height. The Egyptians knew the correct formula for the volume of such a truncated square pyramid, but no proof of this equation is given in the Moscow papyrus. The volume of a conical or pyramidal frustum is the volume of the solid before slicing its "apex" off, minus the volume of this "apex": V = (h1 B1 − h2 B2)/3, where B1 and B2 are the base and top areas, and h1 and h2 are the perpendicular heights from the apex to the base and top planes. 
Considering that the cross-sectional area is proportional to the square of the distance from the apex, B1/h1^2 = B2/h2^2 = α, the formula for the volume can be expressed as a third of the product of this proportionality α and of the difference of the cubes of the heights h1 and h2 only: V = α(h1^3 − h2^3)/3. By using the identity a^3 − b^3 = (a − b)(a^2 + ab + b^2), one gets: V = α(h1 − h2)(h1^2 + h1 h2 + h2^2)/3, where h = h1 − h2 is the height of the frustum. Distributing α and substituting from its definition, the Heronian mean of the areas B1 and B2 is obtained: (B1 + √(B1 B2) + B2)/3; the alternative formula is therefore: V = h(B1 + √(B1 B2) + B2)/3. Heron of Alexandria is noted for deriving this formula, and with it, encountering the imaginary unit: the square root of negative one. In particular: The volume of a circular cone frustum is: V = πh(r1^2 + r1 r2 + r2^2)/3, where r1 and r2 are the base and top radii. The volume of a pyramidal frustum whose bases are regular n-gons is: V = nh(a1^2 + a1 a2 + a2^2)cot(π/n)/12, where a1 and a2 are the base and top side lengths. Surface area For a right circular conical frustum, the lateral surface area is π(r1 + r2)s and the total surface area is π((r1 + r2)s + r1^2 + r2^2), where r1 and r2 are the base and top radii respectively, and s = √((r1 − r2)^2 + h^2) is the slant height of the frustum. The surface area of a right frustum whose bases are similar regular n-sided polygons is A = (n/4)((a1^2 + a2^2)cot(π/n) + √((a1^2 − a2^2)^2 cot^2(π/n) + 4h^2(a1 + a2)^2)), where a1 and a2 are the sides of the two bases.
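The Heronian-mean form of the volume formula is straightforward to check numerically; a minimal sketch (function names are illustrative), verified against the special case of a circular cone frustum:

```python
import math

def frustum_volume(base_area: float, top_area: float, height: float) -> float:
    """V = h/3 * (B1 + sqrt(B1*B2) + B2): volume from the two base areas."""
    return height / 3 * (base_area + math.sqrt(base_area * top_area) + top_area)

def cone_frustum_volume(r1: float, r2: float, height: float) -> float:
    """V = pi*h/3 * (r1^2 + r1*r2 + r2^2) for a circular cone frustum."""
    return math.pi * height / 3 * (r1**2 + r1 * r2 + r2**2)
```

The two agree because with B = πr² the Heronian cross term √(B1 B2) becomes exactly πr1r2; setting the top area to zero recovers the familiar cone volume (1/3)·B·h.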
https://en.wikipedia.org/wiki/Inequation
In mathematics, an inequation is a statement that an inequality holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Some examples of inequations are: a < b, x + y + z ≤ 1, n > 1, x ≠ 0. In some cases, the term "inequation" can be considered synonymous to the term "inequality", while in other cases, an inequation is reserved only for statements whose inequality relation is "not equal to" (≠). Chains of inequations A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain 0 ≤ a < b is shorthand for 0 ≤ a and a < b, which also implies that 0 < b. In rare cases, chains without such implications about distant terms are used. For example, i ≠ 0 ≠ j is shorthand for i ≠ 0 and 0 ≠ j, which does not imply i ≠ j. Similarly, a < b > c is shorthand for a < b and b > c, which does not imply any order of a and c. Solving inequations Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more generally, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions such that, when they are substituted for the unknowns, make the inequations true propositions. Often, an additional objective expression (i.e., an optimization equation) is given, that is to be minimized or maximized by an optimal solution. 
For example, is a conjunction of inequations, partly written as chains (where ∧ can be read as "and"); the set of its solutions is shown in blue in the picture (the red, green, and orange line corresponding to the 1st, 2nd, and 3rd conjunct, respectively). For a larger example, see Linear programming#Example. Computer support in solving inequations is described in constraint programming; in particular, the simplex algorithm finds optimal solutions of linear inequations. The programming language Prolog III also supports solving algorithms for particular classes of inequalities (and other relations) as a basic language feature. For more, see constraint logic programming. Combinations of meanings Usually because of the properties of certain functions (like square roots), some inequations are equivalent to a combination of multiple others. For example, the inequation √(f(x)) < g(x) is logically equivalent to the following three inequations combined: f(x) ≥ 0, g(x) > 0, and f(x) < (g(x))^2. See also Apartness relation — a form of inequality in constructive mathematics Equation Equals sign Inequality (mathematics) Relational operator
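The chained shorthand discussed above maps directly onto Python's chained comparison syntax, where `0 <= x <= 1` is evaluated as the conjunction `(0 <= x) and (x <= 1)`:

```python
def in_unit_interval(x: float) -> bool:
    # The chain 0 <= x <= 1 is shorthand for (0 <= x) and (x <= 1).
    return 0 <= x <= 1

def middle_is_largest(a, b, c) -> bool:
    # A chain like a < b > c asserts a < b and b > c, but says nothing
    # about the relative order of a and c.
    return a < b > c
```

Note that Python, like the mathematical shorthand, draws no conclusion about distant terms: `middle_is_largest` is true for both (1, 5, 3) and (3, 5, 1).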
https://en.wikipedia.org/wiki/Inequality%20%28mathematics%29
In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. There are several different notations used to represent different kinds of inequalities: The notation a < b means that a is less than b. The notation a > b means that a is greater than b. In either case, a is not equal to b. These relations are known as strict inequalities, meaning that a is strictly less than or strictly greater than b. Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict: The notation a ≤ b or a ⩽ b means that a is less than or equal to b (or, equivalently, at most b, or not greater than b). The notation a ≥ b or a ⩾ b means that a is greater than or equal to b (or, equivalently, at least b, or not less than b). The relation not greater than can also be represented by a ≯ b, the symbol for "greater than" bisected by a slash, "not". The same is true for not less than and a ≮ b. The notation a ≠ b means that a is not equal to b; this inequation is sometimes considered a form of strict inequality. It does not say that one is greater than the other; it does not even require a and b to be members of an ordered set. In engineering sciences, less formal use of the notation is to state that one quantity is "much greater" than another, normally by several orders of magnitude. The notation a ≪ b means that a is much less than b. The notation a ≫ b means that a is much greater than b. This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of ultrarelativistic limit in physics). In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc. Properties on the number line Inequalities are governed by the following properties. 
All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and — in the case of applying a function — monotonic functions are limited to strictly monotonic functions. Converse The relations ≤ and ≥ are each other's converse, meaning that for any real numbers a and b: a ≤ b if and only if b ≥ a. Transitivity The transitive property of inequality states that for any real numbers a, b, c: if a ≤ b and b ≤ c, then a ≤ c. If either of the premises is a strict inequality, then the conclusion is a strict inequality: if a ≤ b and b < c, then a < c; if a < b and b ≤ c, then a < c. Addition and subtraction A common constant c may be added to or subtracted from both sides of an inequality. So, for any real numbers a, b, c: if a ≤ b, then a + c ≤ b + c and a − c ≤ b − c. In other words, the inequality relation is preserved under addition (or subtraction) and the real numbers are an ordered group under addition. Multiplication and division The properties that deal with multiplication and division state that for any real numbers, a, b and non-zero c: if a ≤ b and c > 0, then ac ≤ bc and a/c ≤ b/c; if a ≤ b and c < 0, then ac ≥ bc and a/c ≥ b/c. In other words, the inequality relation is
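The addition and multiplication properties, in particular the reversal of the relation when both sides are multiplied by a negative constant, can be spot-checked directly:

```python
a, b = 3, 7          # a < b
c = 5                # a positive constant

# Addition and subtraction preserve the relation.
assert a + c < b + c
assert a - c < b - c

# Multiplication and division by positive c preserve it.
assert a * c < b * c
assert a / c < b / c

# Multiplication by negative c reverses it.
assert a * (-c) > b * (-c)
```

These asserts all pass; they illustrate the ordered-group structure described above rather than prove it.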
https://en.wikipedia.org/wiki/Primitive
Primitive may refer to: Mathematics Primitive element (field theory) Primitive element (finite field) Primitive cell (crystallography) Primitive notion, axiomatic systems Primitive polynomial (disambiguation), one of two concepts Primitive function or antiderivative, a function F whose derivative F′ equals a given function f Primitive permutation group Primitive root of unity; see Root of unity Primitive triangle, an integer triangle whose sides have no common prime factor Sciences Primitive (phylogenetics), characteristic of an early stage of development or evolution Primitive equations, a set of nonlinear differential equations that are used to approximate atmospheric flow Primitive change, a general term encompassing a number of basic molecular alterations in the course of a chemical reaction Computing Cryptographic primitives, low-level cryptographic algorithms frequently used to build computer security systems Geometric primitive, the simplest kinds of figures in computer graphics Language primitive, the simplest element provided by a programming language Primitive data type, a datatype provided by a programming language Art and entertainment Naïve art, created by untrained artists Neo-primitivism, an early 20th-century Russian art movement that looks to early human history, folk art and non-Western or children's art for inspiration Primitivism, an early 20th-century art movement that looks to early human history, folk art and non-Western or children's art for inspiration Primitive decorating, a style of decorating using primitive folk art style that is characteristic of a historic or early Americana time period Primitive, a novel by J. F. 
Gonzalez Music The Primitives, a British indie rock band Primitive Radio Gods, an American alternative rock band Albums Primitive (Neil Diamond album), by Neil Diamond 1984 Primitive (Soulfly album), by Soulfly 2000 Songs "Primitive", by Accept from the 1996 album Predator "Primitive", by The Groupies and covered by The Cramps "Primitive", by Killing Joke from the 1980 album Killing Joke "Primitive", by Cyndi Lauper from A Night to Remember "Primitive", by Annie Lennox from the 1992 album Diva "Primitive", by Róisín Murphy from the 2007 album Overpowered Religion Primitive Church, another name for early Christianity Restorationism, also described as Christian primitivism, is the belief that Christianity should be restored along the lines of what is known about the apostolic early church Primitive Baptist, a religious movement seeking to retain or restore early Christian practices Primitive Methodism Other uses Primitive (philately) Anarcho-primitivism, an anarchist critique of the origins and progress of civilization Noble savage, a particular stock character in literature, i.e., a person uncorrupted by the influences of civilization Pre-industrial society Primitive communism, a pre-agrarian form of communism according to Karl Marx and Friedrich Engels Primitive Culture, an 1871 book by Edward Burnett Tylor.
https://en.wikipedia.org/wiki/Equality%20%28mathematics%29
In mathematics, equality is a relationship between two quantities or, more generally, two mathematical expressions, asserting that the quantities have the same value, or that the expressions represent the same mathematical object. The equality between A and B is written A = B, and pronounced "A equals B". The symbol "=" is called an "equals sign". Two objects that are not equal are said to be distinct. For example: x = y means that x and y denote the same object. The identity (x + 1)^2 = x^2 + 2x + 1 means that if x is any number, then the two expressions have the same value. This may also be interpreted as saying that the two sides of the equals sign represent the same function. {x : P(x)} = {x : Q(x)} if and only if P(x) ⇔ Q(x) for every x. This assertion, which uses set-builder notation, means that if the elements satisfying the property P(x) are the same as the elements satisfying Q(x), then the two uses of the set-builder notation define the same set. This property is often expressed as "two sets that have the same elements are equal." It is one of the usual axioms of set theory, called axiom of extensionality. Etymology The etymology of the word is from the Latin aequālis ("equal", "like", "comparable", "similar") from aequus ("equal", "level", "fair", "just"). Basic properties Substitution: if a = b, then a may be replaced by b in any expression. Reflexivity: a = a. Symmetry: if a = b, then b = a. Transitivity: if a = b and b = c, then a = c. These last three properties make equality an equivalence relation. They were originally included among the Peano axioms for natural numbers. Although the symmetric and transitive properties are often seen as fundamental, they can be deduced from substitution and reflexive properties. Equality as predicate When A and B are not fully specified or depend on some variables, equality is a proposition, which may be true for some values and false for other values. Equality is a binary relation (i.e., a two-argument predicate) which may produce a truth value (false or true) from its arguments. In computer programming, its computation from the two expressions is known as comparison. 
Identities When A and B may be viewed as functions of some variables, then A = B means that A and B define the same function. Such an equality of functions is sometimes called an identity. An example is (x + 1)^2 = x^2 + 2x + 1. Sometimes, but not always, an identity is written with a triple bar: (x + 1)^2 ≡ x^2 + 2x + 1. Equations An equation is a problem of finding values of some variables, called unknowns, for which the specified equality is true. The term "equation" may also refer to an equality relation that is satisfied only for the values of the variables that one is interested in. For example, x^2 + y^2 = 1 is the equation of the unit circle. There is no standard notation that distinguishes an equation from an identity, or other use of the equality relation: one has to guess an appropriate interpretation from the semantics of expressions and the context. An identity is asserted to be true for all values of variables in a given domain. An "equation" may sometimes mean an identity, but more often than not, it specifies a subset of the variable space to be the subset where the equation is true. Approximate equality There are some logic systems that do not have any notio
https://en.wikipedia.org/wiki/Geodesic
In geometry, a geodesic is a curve representing in some sense the shortest path (arc) between two points in a surface, or more generally in a Riemannian manifold. The term also has meaning in any differentiable manifold with a connection. It is a generalization of the notion of a "straight line". The noun geodesic and the adjective geodetic come from geodesy, the science of measuring the size and shape of Earth, though many of the underlying principles can be applied to any ellipsoidal geometry. In the original sense, a geodesic was the shortest route between two points on the Earth's surface. For a spherical Earth, it is a segment of a great circle (see also great-circle distance). The term has since been generalized to more abstract mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph. In a Riemannian manifold or submanifold, geodesics are characterised by the property of having vanishing geodesic curvature. More generally, in the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. Applying this to the Levi-Civita connection of a Riemannian metric recovers the previous notion. Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free falling test particles. Introduction A locally shortest path between two given points in a curved space, assumed to be a Riemannian manifold, can be defined by using the equation for the length of a curve (a function f from an open interval of R to the space), and then minimizing this length between the points using the calculus of variations. This has some minor technical problems because there is an infinite-dimensional space of different ways to parameterize the shortest path. 
It is simpler to restrict the set of curves to those that are parameterized "with constant speed" 1, meaning that the distance from f(s) to f(t) along the curve equals |s−t|. Equivalently, a different quantity may be used, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimization). Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its width, and in so doing will minimize its energy. The resulting shape of the band is a geodesic. It is possible that several different curves between two points minimize the distance, as is the case for two diametrically opposite points on a sphere. In such a case, any of these curves is a geodesic. A contiguous segment of a geodesic is again a geodesic. In general, geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parameterized with "const
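On a sphere, the geodesic distance mentioned above is the great-circle distance, which can be computed in closed form. A sketch using the haversine formula (the function name and the Earth-radius constant are illustrative; a spherical Earth is an approximation):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; the sphere itself is an approximation

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle (geodesic) distance between two lat/lon points, in km,
    via the haversine formula on a sphere of radius EARTH_RADIUS_KM."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))
```

The antipodal case (pole to pole) illustrates the non-uniqueness discussed above: the distance is half the circumference, but every meridian realizes it.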
https://en.wikipedia.org/wiki/Angle%20trisection
Angle trisection is a classical problem of straightedge and compass construction of ancient Greek mathematics. It concerns construction of an angle equal to one third of a given arbitrary angle, using only two tools: an unmarked straightedge and a compass. In 1837, Pierre Wantzel proved that the problem, as stated, is impossible to solve for arbitrary angles. However, some special angles can be trisected: for example, it is trivial to trisect a right angle (that is, to construct an angle of 30 degrees). It is possible to trisect an arbitrary angle by using tools other than straightedge and compass. For example, neusis construction, also known to ancient Greeks, involves simultaneous sliding and rotation of a marked straightedge, which cannot be achieved with the original tools. Other techniques were developed by mathematicians over the centuries. Because it is defined in simple terms, but complex to prove unsolvable, the problem of angle trisection is a frequent subject of pseudomathematical attempts at solution by naive enthusiasts. These "solutions" often involve mistaken interpretations of the rules, or are simply incorrect. Background and problem statement Using only an unmarked straightedge and a compass, Greek mathematicians found means to divide a line into an arbitrary set of equal segments, to draw parallel lines, to bisect angles, to construct many polygons, and to construct squares of equal or twice the area of a given polygon. Three problems proved elusive, specifically, trisecting the angle, doubling the cube, and squaring the circle. The problem of angle trisection reads: Construct an angle equal to one-third of a given arbitrary angle (or divide it into three equal angles), using only two tools: an unmarked straightedge, and a compass. Proof of impossibility Pierre Wantzel published a proof of the impossibility of classically trisecting an arbitrary angle in 1837. 
Wantzel's proof, restated in modern terminology, uses the concept of field extensions, a topic now typically combined with Galois theory. However, Wantzel published these results earlier than Évariste Galois (whose work, written in 1830, was published only in 1846) and did not use the concepts introduced by Galois. The problem of constructing an angle of a given measure θ is equivalent to constructing two segments such that the ratio of their lengths is cos θ. From a solution to one of these two problems, one may pass to a solution of the other by a compass and straightedge construction. The triple-angle formula gives an expression relating the cosines of the original angle and its trisection: cos θ = 4 cos^3(θ/3) − 3 cos(θ/3). It follows that, given a segment that is defined to have unit length, the problem of angle trisection is equivalent to constructing a segment whose length is the root of a cubic polynomial. This equivalence reduces the original geometric problem to a purely algebraic problem. Every rational number is constructible. Every irrational number that is constructible in a sin
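For the concrete case θ = 60°, the triple-angle identity turns trisection into constructing x = cos 20°, a root of 8x^3 − 6x − 1 = 0. The numerical facts behind the argument are easy to check (a sketch; it illustrates, but does not constitute, the impossibility proof):

```python
import math

# Verify the triple-angle identity cos(3t) = 4cos^3(t) - 3cos(t) numerically.
for t in (0.1, 0.7, 1.3):
    assert math.isclose(math.cos(3 * t), 4 * math.cos(t) ** 3 - 3 * math.cos(t))

# Trisecting 60 degrees means constructing x = cos(20 degrees),
# which satisfies 8x^3 - 6x - 1 = 0.
x = math.cos(math.radians(20))
assert math.isclose(8 * x**3 - 6 * x, 1.0)

# By the rational root theorem, a rational root p/q would have p | 1 and q | 8.
# None of the candidates is a root, so the cubic is irreducible over Q.
candidates = (1, -1, 0.5, -0.5, 0.25, -0.25, 0.125, -0.125)
assert all(abs(8 * c**3 - 6 * c - 1) > 1e-9 for c in candidates)
```

The irreducibility of this cubic is exactly what rules out cos 20° lying in any tower of quadratic extensions, and hence rules out its compass-and-straightedge construction.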
https://en.wikipedia.org/wiki/Fermat%20number
In mathematics, a Fermat number, named after Pierre de Fermat, the first known to have studied them, is a positive integer of the form Fn = 2^(2^n) + 1, where n is a non-negative integer. The first few Fermat numbers are: 3, 5, 17, 257, 65537, 4294967297, 18446744073709551617, ... . If 2^k + 1 is prime and k > 0, then k itself must be a power of 2, so 2^k + 1 is a Fermat number; such primes are called Fermat primes. The only known Fermat primes are F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537; heuristics suggest that there are no more. Basic properties The Fermat numbers satisfy the following recurrence relations: Fn = (Fn−1 − 1)^2 + 1 for n ≥ 1, and Fn = F0 F1 ⋯ Fn−1 + 2 for n ≥ 1. Each of these relations can be proved by mathematical induction. From the second equation, we can deduce Goldbach's theorem (named after Christian Goldbach): no two Fermat numbers share a common integer factor greater than 1. To see this, suppose that i < j, and that Fi and Fj have a common factor a > 1. Then a divides both F0 F1 ⋯ Fj−1 and Fj; hence a divides their difference, 2. Since a > 1, this forces a = 2. This is a contradiction, because each Fermat number is clearly odd. As a corollary, we obtain another proof of the infinitude of the prime numbers: for each Fn, choose a prime factor pn; then the sequence is an infinite sequence of distinct primes. Further properties No Fermat prime can be expressed as the difference of two pth powers, where p is an odd prime. With the exception of F0 and F1, the last digit of a Fermat number is 7. The sum of the reciprocals of all the Fermat numbers is irrational. (Solomon W. Golomb, 1963) Primality Fermat numbers and Fermat primes were first studied by Pierre de Fermat, who conjectured that all Fermat numbers are prime. Indeed, the first five Fermat numbers F0, ..., F4 are easily shown to be prime. Fermat's conjecture was refuted by Leonhard Euler in 1732 when he showed that F5 = 2^32 + 1 = 4294967297 = 641 × 6700417. Euler proved that every factor of Fn must have the form k × 2^(n+1) + 1 (later improved to k × 2^(n+2) + 1 by Lucas) for n ≥ 2. That 641 is a factor of F5 can be deduced from the equalities 641 = 2^7 × 5 + 1 and 641 = 2^4 + 5^4. 
It follows from the first equality that 2^7 × 5 ≡ −1 (mod 641) and therefore (raising to the fourth power) that 2^28 × 5^4 ≡ 1 (mod 641). On the other hand, the second equality implies that 5^4 ≡ −2^4 (mod 641). These congruences imply that 2^32 ≡ −1 (mod 641). Fermat was probably aware of the form of the factors later proved by Euler, so it seems curious that he failed to follow through on the straightforward calculation to find the factor. One common explanation is that Fermat made a computational mistake. There are no other known Fermat primes Fn with n > 4, but little is known about Fermat numbers for large n. In fact, each of the following is an open problem: Is Fn composite for all n > 4? Are there infinitely many Fermat primes? (Eisenstein 1844) Are there infinitely many composite Fermat numbers? Does a Fermat number exist that is not square-free? It is known that Fn is composite for 5 ≤ n ≤ 32, although of these, complete factorizations of Fn are known only for 0 ≤ n ≤ 11, and there are no known prime factors for n = 20 and n = 24.
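Both Euler's factorization of F5 and Goldbach's coprimality theorem can be checked directly with a few lines of Python:

```python
import math

def fermat(n: int) -> int:
    """F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

# Euler's 1732 counterexample: F_5 is composite.
assert fermat(5) == 4294967297 == 641 * 6700417

# The key congruence from the classical argument: 2^32 ≡ −1 (mod 641).
assert pow(2, 32, 641) == 641 - 1

# Goldbach's theorem: distinct Fermat numbers are pairwise coprime.
fs = [fermat(n) for n in range(8)]
assert all(math.gcd(fs[i], fs[j]) == 1
           for i in range(len(fs)) for j in range(i + 1, len(fs)))
```

Python's built-in `pow` with a modulus argument performs modular exponentiation efficiently, which is also how Euler-form factor candidates k × 2^(n+2) + 1 are tested in practice.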
https://en.wikipedia.org/wiki/Andrey%20Kolmogorov
Andrey Nikolaevich Kolmogorov (25 April 1903 – 20 October 1987) was a Soviet mathematician who contributed to the mathematics of probability theory, topology, intuitionistic logic, turbulence, classical mechanics, algorithmic information theory and computational complexity. Biography Early life Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His unmarried mother, Maria Yakovlevna Kolmogorova, died giving birth to him. Andrey was raised by two of his aunts in Tunoshna (near Yaroslavl) at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father. He was supposedly named Nikolai Matveyevich Katayev and had been an agronomist. Katayev had been exiled from Saint Petersburg to the Yaroslavl province after his participation in the revolutionary movement against the tsars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts and mathematical papers were printed in the school journal "The Swallow of Spring". Andrey (at the age of five) was the "editor" of the mathematical section of this journal. Kolmogorov's first mathematical discovery was published in this journal: at the age of five he noticed the regularity in the sum of the series of odd numbers: 1 = 1^2, 1 + 3 = 2^2, 1 + 3 + 5 = 3^2, etc. In 1910, his aunt adopted him, and they moved to Moscow, where he graduated from high school in 1920. Later that same year, Kolmogorov began to study at Moscow State University and at the same time Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a fair knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." 
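The regularity the five-year-old Kolmogorov noticed, that the sum of the first n odd numbers is n², is a one-liner to verify:

```python
def sum_first_odds(n: int) -> int:
    """Sum of the first n odd numbers: 1 + 3 + ... + (2n - 1)."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# 1 = 1^2, 1 + 3 = 2^2, 1 + 3 + 5 = 3^2, ...
assert all(sum_first_odds(n) == n * n for n in range(1, 100))
```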
Kolmogorov gained a reputation for his wide-ranging erudition. While an undergraduate student in college, he attended the seminars of the Russian historian S. V. Bakhrushin, and he published his first research paper on the fifteenth and sixteenth centuries' landholding practices in the Novgorod Republic. During the same period (1921–22), Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. Adulthood In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere. Around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin. He formed a lifelong close friendship with Pavel Alexandrov, a fellow student of Luzin; indeed, several researchers have concluded that the two friends were involved in a homosexual relationship, although neither acknowledged this openly during their lifetimes. Kolmogorov (together with Al
https://en.wikipedia.org/wiki/Bartel%20Leendert%20van%20der%20Waerden
Bartel Leendert van der Waerden (; 2 February 1903 – 12 January 1996) was a Dutch mathematician and historian of mathematics. Biography Education and early career Van der Waerden learned advanced mathematics at the University of Amsterdam and the University of Göttingen, from 1919 until 1926. He was much influenced by Emmy Noether at Göttingen, Germany. Amsterdam awarded him a Ph.D. for a thesis on algebraic geometry, supervised by Hendrick de Vries. Göttingen awarded him the habilitation in 1928. In that year, at the age of 25, he accepted a professorship at the University of Groningen. In his 27th year, Van der Waerden published his Moderne Algebra, an influential two-volume treatise on abstract algebra, still cited, and perhaps the first treatise to treat the subject as a comprehensive whole. This work systematized an ample body of research by Emmy Noether, David Hilbert, Richard Dedekind, and Emil Artin. In the following year, 1931, he was appointed professor at the University of Leipzig. In July 1929 he married the sister of mathematician Franz Rellich, Camilla Juliana Anna, and they had three children. Nazi Germany After the Nazis seized power, and through World War II, Van der Waerden remained at Leipzig, and passed up opportunities to leave Nazi Germany for Princeton and Utrecht. However, he was critical of the Nazis and refused to give up his Dutch nationality, both of which led to difficulties for him. Postwar career Following the war, Van der Waerden was repatriated to the Netherlands rather than returning to Leipzig (then under Soviet control), but struggled to find a position in the Dutch academic system, in part because his time in Germany made his politics suspect and in part due to Brouwer's opposition to Hilbert's school of mathematics. After a year visiting Johns Hopkins University and two years as a part-time professor, in 1950, Van der Waerden filled the chair in mathematics at the University of Amsterdam. 
In 1951, he moved to the University of Zurich, where he spent the rest of his career, supervising more than 40 Ph.D. students. In 1949, Van der Waerden became member of the Royal Netherlands Academy of Arts and Sciences, in 1951 this was changed to a foreign membership. In 1973 he received the Pour le Mérite. Contributions Van der Waerden is mainly remembered for his work on abstract algebra. He also wrote on algebraic geometry, topology, number theory, geometry, combinatorics, analysis, probability and statistics, and quantum mechanics (he and Heisenberg had been colleagues at Leipzig). In later years, he turned to the history of mathematics and science. His historical writings include Ontwakende wetenschap (1950), which was translated into English as Science Awakening (1954), Sources of Quantum Mechanics (1967), Geometry and Algebra in Ancient Civilizations (1983), and A History of Algebra (1985). Van der Waerden has over 1000 academic descendants, most of them through three of his students, David van Dantzig (Ph
https://en.wikipedia.org/wiki/Linearity
In mathematics, the term linear is used in two distinct senses for two different properties: linearity of a function (or mapping); linearity of a polynomial. An example of a linear function is the function defined by f(x) = (ax, bx) that maps the real line to a line in the Euclidean plane R2 that passes through the origin. An example of a linear polynomial in the variables X and Y is aX + bY + c. Linearity of a mapping is closely related to proportionality. Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as between velocity and kinetic energy, are nonlinear. Generalized for functions in more than one dimension, linearity means the property of a function of being compatible with addition and scaling, also known as the superposition principle. Linearity of a polynomial means that its degree is less than two. The use of the term for polynomials stems from the fact that the graph of a polynomial in one variable is a straight line. In the term "linear equation", the word refers to the linearity of the polynomials involved. Because a function such as f(x) = ax + b is defined by a linear polynomial in its argument, it is sometimes also referred to as being a "linear function", and the relationship between the argument and the function value may be referred to as a "linear relationship". This is potentially confusing, but usually the intended meaning will be clear from the context. The word linear comes from Latin linearis, "pertaining to or resembling a line". In mathematics Linear maps In mathematics, a linear map or linear function f(x) is a function that satisfies the two properties: Additivity: f(x + y) = f(x) + f(y). Homogeneity of degree 1: f(αx) = α f(x) for all α. These properties are known as the superposition principle. In this definition, x is not necessarily a real number, but can in general be an element of any vector space. 
A more restricted definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics (see below). Additivity alone implies homogeneity for rational α, since f(x + y) = f(x) + f(y) implies f(nx) = nf(x) for any natural number n by mathematical induction, and then nf(x/n) = f(x) implies f((m/n)x) = (m/n)f(x). The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear. The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions. Linear polynomials In a different usage to the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a stra
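In code, the two defining properties can be spot-checked on sample inputs. A minimal Python sketch (the functions and sample values here are illustrative, not from the article):

```python
# Spot-check additivity f(x+y) = f(x) + f(y) and homogeneity f(a*x) = a*f(x)
# on finitely many points. Passing is necessary but not sufficient for linearity.

def is_linear(f, samples, alphas, tol=1e-9):
    for x in samples:
        for y in samples:
            if abs(f(x + y) - (f(x) + f(y))) > tol:
                return False
        for a in alphas:
            if abs(f(a * x) - a * f(x)) > tol:
                return False
    return True

linear = lambda x: 3.0 * x        # a genuine linear map
affine = lambda x: 3.0 * x + 1.0  # degree-1 polynomial, but not a linear map

pts = [-2.0, -0.5, 0.0, 1.0, 2.5]
coeffs = [-1.0, 0.5, 2.0]
print(is_linear(linear, pts, coeffs))  # True
print(is_linear(affine, pts, coeffs))  # False: f(0) = 1, so additivity fails
```

The affine example illustrates the terminological trap discussed above: its graph is a straight line, yet it violates additivity since f(0) ≠ 0.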
https://en.wikipedia.org/wiki/Lambert%20W%20function
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the converse relation of the function f(w) = we^w, where w is any complex number and e^w is the exponential function. For each integer k there is one branch, denoted by W_k, which is a complex-valued function of one complex argument. W_0 is known as the principal branch. These functions have the following property: if z and w are any complex numbers, then we^w = z holds if and only if w = W_k(z) for some integer k. When dealing with real numbers only, the two branches W_0 and W_−1 suffice: for real numbers x and y the equation ye^y = x can be solved for y only if x ≥ −1/e; we get y = W_0(x) if x ≥ 0 and the two values y = W_0(x) and y = W_−1(x) if −1/e ≤ x < 0. The Lambert W relation cannot be expressed in terms of elementary functions. It is useful in combinatorics, for instance, in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y′(t) = ay(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function. Terminology The Lambert W function is named after Johann Heinrich Lambert. The principal branch W_0 is denoted Wp in the Digital Library of Mathematical Functions, and the branch W_−1 is denoted Wm there. The notation convention chosen here (with W_0 and W_−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth. The name "product logarithm" can be understood as this: since the inverse function of f(w) = e^w is called the logarithm, it makes sense to call the inverse "function" of the product we^w the "product logarithm". (Technical note: like the complex logarithm, it is multivalued and thus W is described as the converse relation rather than the inverse function.) It is related to the Omega constant, which is equal to W_0(1). 
History Lambert first considered the related Lambert's Transcendental Equation in 1758, which led to an article by Leonhard Euler in 1783 that discussed the special case of . The equation Lambert considered was Euler transformed this equation into the form Both authors derived a series solution for their equations. Once Euler had solved this equation, he considered the case . Taking limits, he derived the equation He then put and obtained a convergent series solution for the resulting equation, expressing x in terms of c. After taking derivatives with respect to and some manipulation, the standard form of the Lambert function is obtained. In 1993, it was reported that the Lambert function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges—a fundamental problem in physics. Prompted by this, Rob Corless and developers of the Maple computer algebra system realized that "the Lambert W function has been widely used in many fields,
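Since the relation cannot be expressed in elementary functions, W is evaluated numerically in practice. A rough Python sketch of the principal branch on the real axis, using Newton's method on f(w) = we^w − z (an illustrative implementation, not a robust library routine):

```python
import math

def lambert_w0(z, tol=1e-12, max_iter=50):
    # Principal branch W_0(z) via Newton's method on f(w) = w*exp(w) - z.
    # Sketch only: assumes the real branch, i.e. z >= -1/e.
    if z < -1.0 / math.e:
        raise ValueError("W_0 is real only for z >= -1/e")
    w = 0.0 if z < 0 else math.log1p(z)  # crude starting point
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

omega = lambert_w0(1.0)            # the Omega constant W_0(1)
print(omega)                       # ≈ 0.5671432904097838
print(omega * math.exp(omega))     # ≈ 1.0, since W(z) * e^{W(z)} = z
```

The value W_0(1) computed here is the Omega constant mentioned above; the check on the last line verifies the defining relation we^w = z.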
https://en.wikipedia.org/wiki/Omar%20Khayyam
Ghiyāth al-Dīn Abū al-Fatḥ ʿUmar ibn Ibrāhīm Nīsābūrī (18 May 1048 – 4 December 1131), commonly known as Omar Khayyam, was a polymath, known for his contributions to mathematics, astronomy, philosophy, and poetry. He was born in Nishapur, the initial capital of the Seljuk Empire. As a scholar, he was contemporary with the rule of the Seljuk dynasty around the time of the First Crusade. As a mathematician, he is most notable for his work on the classification and solution of cubic equations, where he provided geometric solutions by the intersection of conics. Khayyam also contributed to the understanding of the parallel axiom. As an astronomer, he calculated the duration of the solar year with remarkable precision and accuracy, and designed the Jalali calendar, a solar calendar with a very precise 33-year intercalation cycle, which provided the basis for the Persian calendar that is still in use after nearly a millennium. There is a tradition of attributing poetry to Omar Khayyam, written in the form of quatrains (rubāʿiyāt). This poetry became widely known to the English-reading world in a translation by Edward FitzGerald (Rubaiyat of Omar Khayyam, 1859), which enjoyed great success in the Orientalism of the fin de siècle. Life Omar Khayyam was born in Nishapur, a metropolis in Khorasan province, of Persian ancestry, in 1048. In medieval Persian texts he is usually simply called Omar Khayyam. Although open to doubt, it has often been assumed that his forebears followed the trade of tent-making, since Khayyam means tent-maker in Arabic. The historian Bayhaqi, who was personally acquainted with Khayyam, provides the full details of his horoscope: "he was Gemini, the sun and Mercury being in the ascendant[...]". This was used by modern scholars to establish his date of birth as 18 May 1048. Khayyam's boyhood was spent in Nishapur, a leading metropolis under the Great Seljuq Empire, and it had been a major center of the Zoroastrian religion. 
His full name, as it appears in the Arabic sources, was Abu’l Fath Omar ibn Ibrahim al-Khayyam. His gifts were recognized by his early tutors, who sent him to study under Imam Muwaffaq Nishaburi, the greatest teacher of the Khorasan region, who tutored the children of the highest nobility, and Khayyam developed a firm friendship with him through the years. Khayyam might have met and studied with Bahmanyar, a disciple of Avicenna. After studying science, philosophy, mathematics and astronomy at Nishapur, about the year 1068 he traveled to the province of Bukhara, where he frequented the renowned library of the Ark. In about 1070 he moved to Samarkand, where he started to compose his famous Treatise on Algebra under the patronage of Abu Tahir Abd al-Rahman ibn ʿAlaq, the governor and chief judge of the city. Khayyam was kindly received by the Karakhanid ruler Shams al-Mulk Nasr, who according to Bayhaqi, would "show him the greatest honour, so much so that he would seat [Khayyam] beside him on his
https://en.wikipedia.org/wiki/Solid%20angle
In geometry, a solid angle (symbol: Ω) is a measure of the amount of the field of view from some particular point that a given object covers. That is, it is a measure of how large the object appears to an observer looking from that point. The point from which the object is viewed is called the apex of the solid angle, and the object is said to subtend its solid angle at that point. In the International System of Units (SI), a solid angle is expressed in a dimensionless unit called a steradian (symbol: sr). One steradian corresponds to one unit of area on the unit sphere surrounding the apex, so an object that blocks all rays from the apex would cover a number of steradians equal to the total surface area of the unit sphere, 4π. Solid angles can also be measured in squares of angular measures such as degrees, minutes, and seconds. A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth. Indeed, as viewed from any point on Earth, both objects have approximately the same solid angle (and therefore apparent size). This is evident during a solar eclipse. Definition and properties An object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the apex, that the object covers. Giving the area of a segment of a unit sphere in steradians is analogous to giving the length of an arc of a unit circle in radians. Just like a planar angle in radians is the ratio of the length of an arc to its radius, a solid angle in steradians is the ratio of the area covered on a sphere by an object to the area given by the square of the radius of said sphere. The formula is Ω = A/r², where A is the spherical surface area and r is the radius of the considered sphere. Solid angles are often used in astronomy, physics, and in particular astrophysics. 
The solid angle of an object that is very far away is roughly proportional to the ratio of area to squared distance. Here "area" means the area of the object when projected along the viewing direction. The solid angle of a sphere measured from any point in its interior is 4π sr, and the solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or 2π/3 sr. Solid angles can also be measured in square degrees (1 sr = (180/π)² square degrees), in square arc-minutes and square arc-seconds, or in fractions of the sphere (1 sr = 1/(4π) of the sphere), also known as spat (1 sp = 4π sr). In spherical coordinates there is a formula for the differential, dΩ = sin θ dθ dφ, where θ is the colatitude (angle from the North Pole) and φ is the longitude. The solid angle for an arbitrary oriented surface S subtended at a point P is equal to the solid angle of the projection of the surface S to the unit sphere with center P, which can be calculated as the surface integral: Ω = ∬_S (r̂ · n̂ / r²) dS, where r̂ is the unit vector corresponding to r, the position vector of an infinitesimal area of surface dS with respect to point
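The sphere and cube-face values quoted above can be checked against two closed-form special cases: a cone of half-angle θ subtends 2π(1 − cos θ) sr, and an axis-aligned rectangle viewed from a point on its axis has a standard arctangent formula. A Python sketch (the function names are illustrative, not from the article):

```python
import math

def cap_solid_angle(theta):
    # Solid angle of a cone (spherical cap) of half-angle theta: 2*pi*(1 - cos theta)
    return 2.0 * math.pi * (1.0 - math.cos(theta))

def rectangle_solid_angle(a, b, d):
    # Solid angle of a 2a x 2b rectangle (half-widths a, b) seen on-axis
    # from distance d: 4 * atan(a*b / (d * sqrt(a^2 + b^2 + d^2)))
    return 4.0 * math.atan(a * b / (d * math.sqrt(a * a + b * b + d * d)))

print(cap_solid_angle(math.pi))        # full sphere: 4*pi ≈ 12.566
print(rectangle_solid_angle(1, 1, 1))  # cube face from center: 2*pi/3 ≈ 2.094
```

Six cube faces tile the full sphere of directions, so six times the second value recovers 4π, matching the one-sixth relation in the text.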
https://en.wikipedia.org/wiki/Stirling%20number
In mathematics, Stirling numbers arise in a variety of analytic and combinatorial problems. They are named after James Stirling, who introduced them in a purely algebraic setting in his book Methodus differentialis (1730). They were rediscovered and given a combinatorial meaning by Masanobu Saka in 1782. Two different sets of numbers bear this name: the Stirling numbers of the first kind and the Stirling numbers of the second kind. Additionally, Lah numbers are sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them. A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions of n elements into k non-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear). Notation Several different notations for Stirling numbers are in use. Ordinary (signed) Stirling numbers of the first kind are commonly denoted: Unsigned Stirling numbers of the first kind, which count the number of permutations of n elements with k disjoint cycles, are denoted: Stirling numbers of the second kind, which count the number of ways to partition a set of n elements into k nonempty subsets: Abramowitz and Stegun use an uppercase and a blackletter , respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy to binomial coefficients, was introduced in 1935 by Jovan Karamata and promoted later by Donald Knuth. (The bracket notation conflicts with a common notation for Gaussian coefficients.) The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page for Stirling numbers and exponential generating functions. Another infrequent notation is and . 
Expansions of falling and rising factorials Stirling numbers express coefficients in expansions of falling and rising factorials (also known as the Pochhammer symbol) as polynomials. That is, the falling factorial, defined as (x)_n = x(x − 1)⋯(x − n + 1), is a polynomial in x of degree n whose expansion is (x)_n = Σ_k s(n, k) x^k, with (signed) Stirling numbers of the first kind s(n, k) as coefficients. Note that (x)_0 = 1 because it is an empty product. Alternative notations (an underlined exponent n for the falling factorial and an overlined exponent n for the rising factorial) are also often used. (Confusingly, the Pochhammer symbol that many use for falling factorials is used in special functions for rising factorials.) Similarly, the rising factorial, defined as x^(n) = x(x + 1)⋯(x + n − 1), is a polynomial in x of degree n whose expansion is x^(n) = Σ_k [n k] x^k, with unsigned Stirling numbers of the first kind [n k] as coefficients. One of these expansions can be derived from the other by observing that x^(n) = (−1)^n (−x)_n. Stirling numbers of the second kind express the reverse relations: x^n = Σ_k {n k} (x)_k and x^n = Σ_k (−1)^(n−k) {n k} x^(k). As change of basis coefficients Considering the set of polynomia
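Both kinds satisfy simple two-term recurrences, and the expansions above can be verified numerically. A Python sketch (the function names are illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1_unsigned(n, k):
    # Unsigned first kind: permutations of n elements with k disjoint cycles.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Second kind: partitions of an n-element set into k nonempty subsets.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def falling(x, k):
    # Falling factorial (x)_k = x(x-1)...(x-k+1); empty product for k = 0.
    out = 1
    for i in range(k):
        out *= x - i
    return out

x, n = 5, 4
rising = 1
for i in range(n):
    rising *= x + i  # x^(n) = x(x+1)...(x+n-1) = 5*6*7*8 = 1680

# Rising factorial expands with unsigned first-kind coefficients:
assert rising == sum(stirling1_unsigned(n, k) * x**k for k in range(n + 1))
# Powers expand in falling factorials with second-kind coefficients:
assert x**n == sum(stirling2(n, k) * falling(x, k) for k in range(n + 1))
print("both identities hold for x = 5, n = 4")
```

The asserts check the two expansions for one sample point; since both sides are polynomials of degree n, checking n + 1 distinct points would prove the identity for that n.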
https://en.wikipedia.org/wiki/Maxima%20%28software%29
Maxima () is a computer algebra system (CAS) based on a 1982 version of Macsyma. It is written in Common Lisp and runs on all POSIX platforms such as macOS, Unix, BSD, and Linux, as well as under Microsoft Windows and Android. It is free software released under the terms of the GNU General Public License (GPL). History Maxima is based on a 1982 version of Macsyma, which was developed at MIT with funding from the United States Department of Energy and other government agencies. A version of Macsyma was maintained by Bill Schelter from 1982 until his death in 2001. In 1998, Schelter obtained permission from the Department of Energy to release his version under the GPL. That version, now called Maxima, is maintained by an independent group of users and developers. Maxima does not include any of the many modifications and enhancements made to the commercial version of Macsyma during 1982–1999. Though the core functionality remains similar, code depending on these enhancements may not work on Maxima, and bugs which were fixed in Macsyma may still be present in Maxima, and vice versa. Maxima participated in Google Summer of Code in 2019 under International Neuroinformatics Coordinating Facility. Symbolic calculations Like most computer algebra systems, Maxima supports a variety of ways of reorganizing symbolic algebraic expressions, such as polynomial factorization, polynomial greatest common divisor calculation, expansion, separation into real and imaginary parts, and transformation of trigonometric functions to exponential and vice versa. It has a variety of techniques for simplifying algebraic expressions involving trigonometric functions, roots, and exponential functions. It can calculate symbolic antiderivatives ("indefinite integrals"), definite integrals, and limits. It can derive closed-form series expansions as well as terms of Taylor-Maclaurin-Laurent series. It can perform matrix manipulations with symbolic entries. 
Maxima is a general-purpose system, and special-case calculations such as factorization of large numbers, manipulation of extremely large polynomials, etc. are sometimes better done in specialized systems. Numeric calculations Maxima specializes in symbolic operations, but it also offers numerical capabilities such as arbitrary-precision integer, rational number, and floating-point numbers, limited only by space and time constraints. Programming Maxima includes a complete programming language with ALGOL-like syntax but Lisp-like semantics. It is written in Common Lisp and can be accessed programmatically and extended, as the underlying Lisp can be called from Maxima. It uses gnuplot for drawing. For calculations using floating point and arrays heavily, Maxima has translators from the Maxima language to other programming languages (notably Fortran), which may execute more efficiently. Interfaces Various graphical user interfaces (GUIs) are available for Maxima: wxMaxima is a graphical front-end using wxWidgets. There is
https://en.wikipedia.org/wiki/Isaac%20Barrow
Isaac Barrow (October 1630 – 4 May 1677) was an English Christian theologian and mathematician who is generally given credit for his early role in the development of infinitesimal calculus; in particular, for his proof of the fundamental theorem of calculus. His work centered on the properties of the tangent; Barrow was the first to calculate the tangents of the kappa curve. He is also notable for being the inaugural holder of the prestigious Lucasian Professorship of Mathematics, a post later held by his student, Isaac Newton. Life Early life and education Barrow was born in London. He was the son of Thomas Barrow, a linen draper by trade. In 1624, Thomas married Ann, daughter of William Buggin of North Cray, Kent, and their son Isaac was born in 1630. It appears that Barrow was the only child of this union—certainly the only child to survive infancy. Ann died around 1634, and the widowed father sent the lad to his grandfather, Isaac, the Cambridgeshire J.P., who resided at Spinney Abbey. Within two years, however, Thomas remarried; the new wife was Katherine Oxinden, sister of Henry Oxinden of Maydekin, Kent. From this marriage, he had at least one daughter, Elizabeth (born 1641), and a son, Thomas, who was apprenticed to Edward Miller, skinner, and won his release in 1647, emigrating to Barbados in 1680. Early career Isaac went to school first at Charterhouse (where he was so turbulent and pugnacious that his father was heard to pray that, if it pleased God to take any of his children, he could best spare Isaac), and subsequently to Felsted School, where he settled and learned under the brilliant puritan headmaster Martin Holbeach, who ten years previously had educated John Wallis. 
Having learnt Greek, Hebrew, Latin and logic at Felsted, in preparation for university studies, he continued his education at Trinity College, Cambridge; he enrolled there because of an offer of support from an unspecified member of the Walpole family, "an offer that was perhaps prompted by the Walpoles' sympathy for Barrow's adherence to the Royalist cause." His uncle and namesake Isaac Barrow, afterwards Bishop of St Asaph, was a Fellow of Peterhouse. He took to hard study, distinguishing himself in classics and mathematics; after taking his degree in 1648, he was elected to a fellowship in 1649. Barrow received an MA from Cambridge in 1652 as a student of James Duport; he then resided for a few years in college, and became candidate for the Greek Professorship at Cambridge, but in 1655 having refused to sign the Engagement to uphold the Commonwealth, he obtained travel grants to go abroad. Travel He spent the next four years travelling across France, Italy, Smyrna and Constantinople, and after many adventures returned to England in 1659. He was known for his courageousness. Particularly noted is the occasion of his having saved the ship he was upon, by the merits of his own prowess, from capture by pirates. He is described as "low in stature, lean, and of a pale c
https://en.wikipedia.org/wiki/Jules%20Richard%20%28mathematician%29
Jules Richard (12 August 1862 – 14 October 1956) was a French mathematician who worked mainly in geometry, but his name is most commonly associated with Richard's paradox. Life and works Richard was born in Blet, in the Cher département. He taught at the lycées of Tours, Dijon and Châteauroux. He obtained his doctorate, at the age of 39, from the Faculté des Sciences in Paris. His thesis of 126 pages concerns Fresnel's wave-surface. Richard worked mainly on the foundations of mathematics and geometry, relating to works by Hilbert, von Staudt and Méray. In a more philosophical treatise about the nature of the axioms of geometry, Richard discusses and rejects the following basic principles: (1) Geometry is founded on arbitrarily chosen axioms; there are infinitely many equally true geometries. (2) Experience provides the axioms of geometry; the basis is experimental, the development deductive. (3) The axioms of geometry are definitions (in contrast to (1)). (4) Axioms are neither experimental nor arbitrary; they force themselves on us, since without them experience is not possible. The latter approach was essentially that proposed by Kant. Richard arrived at the result that the notion of identity of two objects, and the invariability of an object, are too vague and need to be specified more precisely; this should be done by axioms. Further, according to Richard, it is the aim of science to explain the material universe. And although non-Euclidean geometry had not yet found any applications (Albert Einstein finished his general theory of relativity only in 1915), Richard already stated clairvoyantly: Richard corresponded with Giuseppe Peano and Henri Poincaré. He became known to more than a small group of specialists by formulating his paradox, which was extensively used by Poincaré to attack set theory, whereupon the advocates of set theory had to refute these attacks. He died in 1956 in Châteauroux, in the Indre département, at the age of 94. 
Richard's paradox The paradox was first stated in 1905 in a letter to Louis Olivier, director of the Revue générale des sciences pures et appliquées. It was published in 1905 in the article Les Principes des mathématiques et le problème des ensembles. The Principia Mathematica by Alfred North Whitehead and Bertrand Russell quote it together with six other paradoxes concerning the problem of self-reference. In one of the most important compendia of mathematical logic, compiled by Jean van Heijenoort, Richard's article is translated into English. The paradox can be interpreted as an application of Cantor's diagonal argument. It inspired Kurt Gödel and Alan Turing to their famous works. Kurt Gödel considered his incompleteness theorem as analogous to Richard's paradox which, in the original version runs as follows: Let E be the set of real numbers that can be defined by a finite number of words. This set is denumerable. Let p be the nth decimal of the nth number of the set E; we form a number N having zero for the integral part a
https://en.wikipedia.org/wiki/Richard%27s%20paradox
In logic, Richard's paradox is a semantical antinomy of set theory and natural language first described by the French mathematician Jules Richard in 1905. The paradox is ordinarily used to motivate the importance of distinguishing carefully between mathematics and metamathematics. Kurt Gödel specifically cites Richard's antinomy as a semantical analogue to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The paradox was also a motivation for the development of predicative mathematics. Description The original statement of the paradox, due to Richard (1905), is strongly related to Cantor's diagonal argument on the uncountability of the set of real numbers. The paradox begins with the observation that certain expressions of natural language define real numbers unambiguously, while other expressions of natural language do not. For example, "The real number the integer part of which is 17 and the nth decimal place of which is 0 if n is even and 1 if n is odd" defines the real number 17.1010101... = 1693/99, whereas the phrase "the capital of England" does not define a real number, nor the phrase "the smallest positive integer not definable in under sixty letters" (see Berry's paradox). There is an infinite list of English phrases (such that each phrase is of finite length, but the list itself is of infinite length) that define real numbers unambiguously. We first arrange this list of phrases by increasing length, then order all phrases of equal length lexicographically, so that the ordering is canonical. This yields an infinite list of the corresponding real numbers: r1, r2, ... . Now define a new real number r as follows. The integer part of r is 0, the nth decimal place of r is 1 if the nth decimal place of rn is not 1, and the nth decimal place of r is 2 if the nth decimal place of rn is 1. 
The preceding paragraph is an expression in English that unambiguously defines a real number r. Thus r must be one of the numbers rn. However, r was constructed so that it cannot equal any of the rn (thus, r is an undefinable number). This is the paradoxical contradiction. Analysis and relationship with metamathematics Richard's paradox results in an untenable contradiction, which must be analyzed to find an error. The proposed definition of the new real number r clearly includes a finite sequence of characters, and hence it seems at first to be a definition of a real number. However, the definition refers to definability-in-English itself. If it were possible to determine which English expressions actually do define a real number, and which do not, then the paradox would go through. Thus the resolution of Richard's paradox is that there is not any way to unambiguously determine exactly which English sentences are definitions of real numbers (see Good 1966). That is, there is not any way to describe in a finite number of words how to tell whet
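The diagonal construction itself is entirely mechanical once a (finite) list of expansions is fixed; only the notion of "definable in English" resists formalization. A Python sketch of Richard's digit rule (the names and sample list are illustrative):

```python
def diagonal_number(decimals):
    # Cantor-style diagonalization as used in Richard's construction: given a
    # finite list of decimal digit strings, build a number whose nth decimal
    # differs from the nth decimal of the nth listed number (1 -> 2, else -> 1).
    digits = ['2' if d[n] == '1' else '1' for n, d in enumerate(decimals)]
    return '0.' + ''.join(digits)

listed = ['1111111111', '0123456789', '9999999999']
r = diagonal_number(listed)
print(r)  # 0.221: differs from the nth entry at the nth decimal place
```

Because r disagrees with the nth listed expansion in its nth decimal place, r cannot appear anywhere in the list; the paradox arises only when the list is claimed to exhaust all English-definable reals.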
https://en.wikipedia.org/wiki/Row%20and%20column%20spaces
In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation. Let F be a field. The column space of an m × n matrix with components from F is a linear subspace of the m-space F^m. The dimension of the column space is called the rank of the matrix and is at most min(m, n). A definition for matrices over a ring is also possible. The row space is defined similarly. The row space and the column space of a matrix are sometimes denoted as and respectively. This article considers matrices of real numbers. The row and column spaces are subspaces of the real spaces R^n and R^m respectively. Overview Let A be an m-by-n matrix. Then rank(A) = number of pivots in any echelon form of A = the maximum number of linearly independent rows or columns of A. If one considers the matrix as a linear transformation from R^n to R^m, then the column space of the matrix equals the image of this linear transformation. The column space of a matrix A is the set of all linear combinations of the columns in A. If A = [a1 ⋯ an], then colsp(A) = span{a1, …, an}. The concept of row space generalizes to matrices over the field of complex numbers, or over any field. Intuitively, given a matrix A, the action of the matrix on a vector x will return a linear combination of the columns of A weighted by the coordinates of x as coefficients. Another way to look at this is that it will (1) first project x into the row space of A, (2) perform an invertible transformation, and (3) place the resulting vector in the column space of A. Thus the result must reside in the column space of A. See singular value decomposition for more details on this second interpretation. Example Given a matrix : the rows are , , , . Consequently, the row space of is the subspace of spanned by . Since these four row vectors are linearly independent, the row space is 4-dimensional. 
Moreover, in this case it can be seen that they are all orthogonal to the vector , so it can be deduced that the row space consists of all vectors in that are orthogonal to . Column space Definition Let be a field of scalars. Let be an matrix, with column vectors . A linear combination of these vectors is any vector of the form where are scalars. The set of all possible linear combinations of is called the column space of . That is, the column space of is the span of the vectors . Any linear combination of the column vectors of a matrix can be written as the product of with a column vector: Therefore, the column space of consists of all possible products , for . This is the same as the image (or range) of the corresponding matrix transformation. Example If , then the column vectors are and . A linear combination of v1 and v2 is any vector of the form The set of all such vectors is the column space of . In this case, the column space is precisely the set of vectors satisfying the
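The pivot-counting characterization of rank, and the equality of row rank and column rank, can be checked mechanically. A Python sketch using exact rational arithmetic (the matrix shown is an illustrative example, not the one from the article):

```python
from fractions import Fraction

def rank(rows):
    # Rank of a matrix by Gaussian elimination over the rationals
    # (exact arithmetic, so no floating-point tolerance is needed).
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot at or below row r in this column
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1  # one pivot found -> rank increases by one
    return r

A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 2]]          # third row = first row + second row
print(rank(A))           # 2: the number of pivots
At = [list(c) for c in zip(*A)]
print(rank(At))          # 2: transposing swaps row and column spaces, rank is unchanged
```

Since transposing A exchanges its row space and column space, the equality rank(A) = rank(Aᵀ) is exactly the statement that row rank equals column rank.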
https://en.wikipedia.org/wiki/Flexagon
In geometry, flexagons are flat models, usually constructed by folding strips of paper, that can be flexed or folded in certain ways to reveal faces besides the two that were originally on the back and front. Flexagons are usually square or rectangular (tetraflexagons) or hexagonal (hexaflexagons). A prefix can be added to the name to indicate the number of faces that the model can display, including the two faces (back and front) that are visible before flexing. For example, a hexaflexagon with a total of six faces is called a hexahexaflexagon. In hexaflexagon theory (that is, concerning flexagons with six sides), flexagons are usually defined in terms of pats. Two flexagons are equivalent if one can be transformed to the other by a series of pinches and rotations. Flexagon equivalence is an equivalence relation. History Discovery and introduction of the hexaflexagon The discovery of the first flexagon, a trihexaflexagon, is credited to the British mathematician Arthur H. Stone, while a student at Princeton University in the United States in 1939. His new American paper would not fit in his English binder, so he cut off the ends of the paper and began folding them into different shapes. One of these formed a trihexaflexagon. Stone's colleagues Bryant Tuckerman, Richard Feynman, and John Tukey became interested in the idea and formed the Princeton Flexagon Committee. Tuckerman worked out a topological method, called the Tuckerman traverse, for revealing all the faces of a flexagon. Tuckerman traverses are shown as a diagram. Flexagons were introduced to the general public by Martin Gardner in the December 1956 issue of Scientific American in an article so well-received that it launched Gardner's "Mathematical Games" column, which then ran in that magazine for the next twenty-five years. In 1974, the magician Doug Henning included a construct-your-own hexaflexagon with the original cast recording of his Broadway show The Magic Show. 
Attempted commercial development In 1955, Russell Rogers and Leonard D'Andrea of Homestead Park, Pennsylvania applied for a patent, and in 1959 they were granted U.S. Patent number 2,883,195 for the hexahexaflexagon, under the title "Changeable Amusement Devices and the Like." Their patent imagined possible applications of the device "as a toy, as an advertising display device, or as an educational geometric device." A few such novelties were produced by the Herbick & Held Printing Company, the printing company in Pittsburgh where Rogers worked, but the device, marketed as the "Hexmo", failed to catch on. Varieties Tetraflexagons Tritetraflexagon The tritetraflexagon is the simplest tetraflexagon (flexagon with square sides). The "tri" in the name means it has three faces, two of which are visible at any given time if the flexagon is pressed flat. The construction of the tritetraflexagon is similar to the mechanism used in the traditional Jacob's Ladder children's toy, in Rubik's Magic and in the magic wal
https://en.wikipedia.org/wiki/Magma%20%28computer%20algebra%20system%29
Magma is a computer algebra system designed to solve problems in algebra, number theory, geometry and combinatorics. It is named after the algebraic structure magma. It runs on Unix-like operating systems, as well as Windows. Introduction Magma is produced and distributed by the Computational Algebra Group within the School of Mathematics and Statistics at the University of Sydney. In late 2006, the book Discovering Mathematics with Magma was published by Springer as volume 19 of the Algorithms and Computations in Mathematics series. The Magma system is used extensively within pure mathematics. The Computational Algebra Group maintain a list of publications that cite Magma, and as of 2010 there are about 2600 citations, mostly in pure mathematics, but also including papers from areas as diverse as economics and geophysics. History The predecessor of the Magma system was named Cayley (1982–1993), after Arthur Cayley. Magma was officially released in August 1993 (version 1.0). Version 2.0 of Magma was released in June 1996 and subsequent versions of 2.X have been released approximately once per year. In 2013, the Computational Algebra Group finalized an agreement with the Simons Foundation, whereby the Simons Foundation will underwrite all costs of providing Magma to all U.S. nonprofit, non-governmental scientific research or educational institutions. All students, researchers and faculty associated with a participating institution will be able to access Magma for free, through that institution. Mathematical areas covered by the system Group theory Magma includes permutation, matrix, finitely presented, soluble, abelian (finite or infinite), polycyclic, braid and straight-line program groups. Several databases of groups are also included. Number theory Magma contains asymptotically fast algorithms for all fundamental integer and polynomial operations, such as the Schönhage–Strassen algorithm for fast multiplication of integers and polynomials. 
Integer factorization algorithms include the Elliptic Curve Method, the Quadratic sieve and the Number field sieve.

Algebraic number theory

Magma includes the KANT computer algebra system for comprehensive computations in algebraic number fields. A special type also allows one to compute in the algebraic closure of a field.

Module theory and linear algebra

Magma contains asymptotically fast algorithms for all fundamental dense matrix operations, such as Strassen multiplication.

Sparse matrices

Magma contains the structured Gaussian elimination and Lanczos algorithms for reducing sparse systems which arise in index calculus methods, while Magma uses Markowitz pivoting for several other sparse linear algebra problems.

Lattices and the LLL algorithm

Magma has a provable implementation of fpLLL, which is an LLL algorithm for integer matrices which uses floating-point numbers for the Gram–Schmidt coefficients, but such that the result is rigorously proven to be LLL-reduced.

Commutative algebr
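As a rough illustration of what an integer-factorization routine does, here is a plain-Python sketch of Pollard's rho method, a far simpler algorithm than the sieves named above; this is not Magma code, and the function names are ours:

```python
import math

def pollard_rho(n: int) -> int:
    """Return a non-trivial factor of a composite n (Pollard's rho, Floyd cycle)."""
    if n % 2 == 0:
        return 2
    x, y, c, d = 2, 2, 1, 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d                          # in practice, retry with another c if d == n

def factor(n: int) -> list[int]:
    """Full factorisation by recursive splitting, with a crude trial-division
    primality check that is fine for small demonstration inputs."""
    if n == 1:
        return []
    if all(n % p for p in range(2, math.isqrt(n) + 1)):
        return [n]                    # n is prime
    d = pollard_rho(n)
    return sorted(factor(d) + factor(n // d))
```

For example, `factor(8051)` splits the classic textbook semiprime 8051 into 83 × 97.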
https://en.wikipedia.org/wiki/Coset
In mathematics, specifically group theory, a subgroup H of a group G may be used to decompose the underlying set of G into disjoint, equal-size subsets called cosets. There are left cosets and right cosets. Cosets (both left and right) have the same number of elements (cardinality) as does H. Furthermore, H itself is both a left coset and a right coset. The number of left cosets of H in G is equal to the number of right cosets of H in G. This common value is called the index of H in G and is usually denoted by [G : H]. Cosets are a basic tool in the study of groups; for example, they play a central role in Lagrange's theorem that states that for any finite group G, the number of elements of every subgroup H of G divides the number of elements of G. Cosets of a particular type of subgroup (a normal subgroup) can be used as the elements of another group called a quotient group or factor group. Cosets also appear in other areas of mathematics such as vector spaces and error-correcting codes.

Definition

Let H be a subgroup of the group G whose operation is written multiplicatively (juxtaposition denotes the group operation). Given an element g of G, the left cosets of H in G are the sets obtained by multiplying each element of H by a fixed element g of G (where g is the left factor). In symbols these are,

gH = {gh : h an element of H} for each g in G.

The right cosets are defined similarly, except that the element g is now a right factor, that is,

Hg = {hg : h an element of H} for each g in G.

As g varies through the group, it would appear that many cosets (right or left) would be generated. Nevertheless, it turns out that any two left cosets (respectively right cosets) are either disjoint or are identical as sets. If the group operation is written additively, as is often the case when the group is abelian, the notation used changes to g + H or H + g, respectively.

First example

Let G be the dihedral group of order six. Its elements may be represented by {I, a, a^2, b, ab, a^2 b}. In this group, a^3 = b^2 = I and ba = a^2 b. This is enough information to fill in the entire Cayley table. Let H be the subgroup {I, b}. The (distinct) left cosets of H are:

H = {I, b},
aH = {a, ab}, and
a^2 H = {a^2, a^2 b}.
Since all the elements of G have now appeared in one of these cosets, generating any more can not give new cosets, since a new coset would have to have an element in common with one of these and therefore be identical to one of these cosets. For instance, abH = {ab, a} = aH. The right cosets of H are:

H = {I, b},
Ha = {a, ba} = {a, a^2 b}, and
Ha^2 = {a^2, ba^2} = {a^2, ab}.

In this example, except for H, no left coset is also a right coset. Let K be the subgroup {I, a, a^2}. The left cosets of K are K = {I, a, a^2} and bK = {b, ab, a^2 b}. The right cosets of K are K and Kb = {b, ab, a^2 b}. In this case, every left coset of K is also a right coset of K. Let H be a subgroup of a group G and suppose that x, y ∈ G. The following statements are equivalent:

xH = yH;
Hx^(−1) = Hy^(−1);
xH ⊆ yH;
y ∈ xH;
x^(−1)y ∈ H.

Properties

The disjointness of non-identical cosets is a result of the fact that if x belongs to gH then gH = xH. For if x ∈ gH then there must exist an a ∈ H such that ga = x. Thus xH = (ga)H = g(aH). Moreover, since H is a group, left multiplication by a is a bijection, and aH = H. Thus every element of G belongs to exactly one left coset of the subgroup H, and H is itself a left coset (and the one that contains the identity). Two
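The coset computations in the worked example can be checked mechanically. The sketch below uses permutations of three objects (the dihedral group of order six is isomorphic to the symmetric group S3) and a two-element subgroup generated by a single transposition:

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # all six elements of S3 ≅ D3
H = [(0, 1, 2), (1, 0, 2)]         # subgroup {identity, swap of 0 and 1}

# collect the distinct cosets as sets of frozensets
left_cosets  = {frozenset(compose(g, h) for h in H) for g in G}
right_cosets = {frozenset(compose(h, g) for h in H) for g in G}
```

Both families partition the six elements into three cosets of two elements each, but the left and right families are different, since this subgroup is not normal.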
https://en.wikipedia.org/wiki/Spanning%20Tree%20Protocol
The Spanning Tree Protocol (STP) is a network protocol that builds a loop-free logical topology for Ethernet networks. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include backup links providing fault tolerance if an active link fails. As the name suggests, STP creates a spanning tree that characterizes the relationship of nodes within a network of connected layer-2 bridges, and disables those links that are not part of the spanning tree, leaving a single active path between any two network nodes. STP is based on an algorithm that was invented by Radia Perlman while she was working for Digital Equipment Corporation. In 2001, the IEEE introduced Rapid Spanning Tree Protocol (RSTP) as 802.1w. RSTP provides significantly faster recovery in response to network changes or failures, introducing new convergence behaviors and bridge port roles to do this. RSTP was designed to be backwards-compatible with standard STP. STP was originally standardized as IEEE 802.1D but the functionality of spanning tree (802.1D), rapid spanning tree (802.1w), and multiple spanning tree (802.1s) has since been incorporated into IEEE 802.1Q-2014. While STP is still in use today, in most modern networks its primary use is as a loop-protection mechanism rather than a fault-tolerance mechanism. Link aggregation protocols such as LACP will bond two or more links to provide fault tolerance while simultaneously increasing overall link capacity.

Protocol operation

The need for the Spanning Tree Protocol (STP) arose because switches in local area networks (LANs) are often interconnected using redundant links to improve resilience should one connection fail. However, this connection configuration creates a switching loop, resulting in broadcast radiation and MAC table instability. If redundant links are used to connect switches, then switching loops need to be avoided.
To avoid the problems associated with redundant links in a switched LAN, STP is implemented on switches to monitor the network topology. Every link between switches, and in particular every redundant link, is catalogued. The spanning-tree algorithm then blocks forwarding on redundant links by setting up one preferred link between switches in the LAN. This preferred link is used for all Ethernet frames unless it fails, in which case a non-preferred redundant link is enabled. When implemented in a network, STP designates one layer-2 switch as root bridge. All switches then select their best connection towards the root bridge for forwarding and block other redundant links. All switches constantly communicate with their neighbors in the LAN using bridge protocol data units (BPDUs). Provided there is more than one link between two switches, the STP root bridge calculates the cost of each path based on bandwidth. STP will select the path with the lowest cost, that is the highest bandwidth, as the preferred link. S
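A heavily simplified model of this process: elect the switch with the lowest bridge ID as root, keep the least-cost path from the root to every other switch, and block the remaining links. Real STP does this distributively via BPDUs, with tie-breaking on bridge and port IDs and per-port states; the Python below is a centralized sketch with made-up bridge IDs and path costs:

```python
import heapq

# Hypothetical LAN: three switches (bridge IDs 1-3) and STP path costs
# (higher bandwidth => lower cost).  Links are unordered switch pairs.
links = {frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 19}

def spanning_tree(links):
    """Return the forwarding links: least-cost paths from the root bridge."""
    nodes = set().union(*links)
    root = min(nodes)                        # root bridge = lowest bridge ID
    dist = {root: 0}
    via = {}                                 # switch -> link used to reach it
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                         # stale queue entry
        for link, cost in links.items():
            if u in link:
                v = next(iter(link - {u}))
                if d + cost < dist.get(v, float("inf")):
                    dist[v] = d + cost
                    via[v] = link
                    heapq.heappush(pq, (dist[v], v))
    return set(via.values())

forwarding = spanning_tree(links)
blocked = set(links) - forwarding            # redundant links put in blocking state
```

With these costs, the two direct links from switch 1 are kept and the expensive 2-3 link is blocked; it would only be unblocked if an active link failed.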
https://en.wikipedia.org/wiki/Lagrange%20%28disambiguation%29
Joseph-Louis Lagrange was an Italian mathematician, physicist and astronomer. Lagrange or La Grange may also refer to:

Lagrange (surname), list of people with this name

Mathematics and physics

Lagrange multiplier, a mathematical technique
Lagrange's theorem (group theory), or Lagrange's lemma, an important result in group theory
Lagrange's theorem (number theory), about prime numbers
Lagrangian point, in physics and astronomy
Lagrange polynomial
Lagrangian mechanics

Places

Australia

Lagrange Bay, Western Australia
La Grange (originally La Grange Mission), a dual name for Bidyadanga Community, Western Australia

France

La Grange, Doubs
Lagrange, Hautes-Pyrénées
Lagrange, Landes
Lagrange, Territoire de Belfort

United States

LaGrange, Arkansas
La Grange, California
La Grange (Glasgow, Delaware)
LaGrange, Georgia
LaGrange Commercial Historic District, in the National Register of Historic Places listings in Troup County, Georgia
La Grange, Illinois, village in Cook County
La Grange Village Historic District, in the National Register of Historic Places listings in Cook County, Illinois
LaGrange, Indiana
LaGrange, Tippecanoe County, Indiana
La Grange, Kentucky
Lagrange, Maine
La Grange (La Plata, Maryland)
LaGrange (Cambridge, Maryland)
La Grange, Missouri
LaGrange, New York
La Grange, North Carolina
La Grange Historic District (North Carolina)
LaGrange (Harris Crossroads, North Carolina)
LaGrange, Ohio
La Grange, Tennessee
La Grange, Texas, a county seat
Lagrange, Virginia
La Grange, Monroe County, Wisconsin, a town
La Grange, Walworth County, Wisconsin, a town
La Grange (community), Wisconsin, an unincorporated community
La Grange, Wyoming, a town
LaGrange County, Indiana
Lagrange Township, Bond County, Illinois
LaGrange Township, Michigan
LaGrange Township, Lorain County, Ohio
La Grange, U.S. Virgin Islands

Moon

Lagrange (crater)

Other uses

Lagrange: The Flower of Rin-ne, a 2011 mecha anime
La Grange expedition, Australia
"La Grange" (song), released on the 1973 ZZ Top album Tres Hombres
Château Lagrange, a wine from Bordeaux, France

See also

Fond La Grange, a village in Haiti
Château de la Grange-Bléneau, a castle in France
Lagrangian (disambiguation)
Grange (disambiguation)
La Grange Historic District (disambiguation)
https://en.wikipedia.org/wiki/J%C3%A1nos%20Bolyai
János Bolyai (15 December 1802 – 27 January 1860) or Johann Bolyai, was a Hungarian mathematician who developed absolute geometry—a geometry that includes both Euclidean geometry and hyperbolic geometry. The discovery of a consistent alternative geometry that might correspond to the structure of the universe helped to free mathematicians to study abstract concepts irrespective of any possible connection with the physical world.

Early life

Bolyai was born in the Transylvanian town of Kolozsvár, Grand Principality of Transylvania (now Cluj-Napoca in Romania), the son of Zsuzsanna Benkő and the well-known mathematician Farkas Bolyai. By the age of 13, he had mastered calculus and other forms of analytical mechanics, receiving instruction from his father. He studied at the Imperial and Royal Military Academy (TherMilAk) in Vienna from 1818 to 1822, and received his commission as a sub-lieutenant. At the age of 21, he was already a lieutenant, at the age of 22, a first lieutenant and at the age of 24, a captain.

Career

Bolyai became so obsessed with Euclid's parallel postulate that his father, who had pursued the same subject for many years, wrote to him in 1820: "You must not attempt this approach to parallels. I know this way to the very end. I have traversed this bottomless night, which extinguished all light and joy in my life. I entreat you, leave the science of parallels alone...Learn from my example." János, however, persisted in his quest and eventually came to the conclusion that the postulate is independent of the other axioms of geometry and that different consistent geometries can be constructed on its negation. In 1823, he wrote to his father: "I have discovered such wonderful things that I was amazed...out of nothing I have created a strange new universe." Between 1820 and 1823 he had prepared a treatise on parallel lines that he called absolute geometry. Bolyai's work was published in 1832 as an appendix to a mathematics textbook by his father.
Carl Friedrich Gauss, on reading the Appendix, wrote to a friend saying "I regard this young geometer Bolyai as a genius of the first order." To Farkas Bolyai, however, Gauss wrote: "To praise it would amount to praising myself. For the entire content of the work...coincides almost exactly with my own meditations which have occupied my mind for the past thirty or thirty-five years." In 1848 Bolyai learned that Nikolai Ivanovich Lobachevsky had published a similar piece of work in 1829. Though Lobachevsky published his work a few years earlier than Bolyai, it contained only hyperbolic geometry. Working independently, Bolyai and Lobachevsky pioneered the investigation of non-Euclidean geometry. In addition to his work in geometry, Bolyai developed a rigorous geometric concept of complex numbers as ordered pairs of real numbers. Although he never published more than the 24 pages of the Appendix, he left more than 20,000 pages of mathematical manuscripts when he died. These can now be f
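Bolyai's ordered-pair construction of the complex numbers can be stated in a few lines. The class below is a sketch of the standard formulation of that idea, not Bolyai's own notation: a complex number is just a pair (a, b) of reals with the product rule (a, b)(c, d) = (ac − bd, ad + bc), and "i squared equals minus one" becomes a theorem about pairs with no imaginary unit assumed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    """A complex number as an ordered pair (a, b) of reals."""
    a: float
    b: float

    def __add__(self, other):
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b)(c, d) = (ac - bd, ad + bc)
        return Pair(self.a * other.a - self.b * other.b,
                    self.a * other.b + self.b * other.a)

i = Pair(0, 1)
assert i * i == Pair(-1, 0)   # "i^2 = -1", derived purely from the pair rule
```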
https://en.wikipedia.org/wiki/Algebraic%20integer
In algebraic number theory, an algebraic integer is a complex number which is integral over the integers. That is, an algebraic integer is a complex root of some monic polynomial (a polynomial whose leading coefficient is 1) whose coefficients are integers. The set A of all algebraic integers is closed under addition, subtraction and multiplication and therefore is a commutative subring of the complex numbers. The ring of integers of a number field K, denoted by O_K, is the intersection of K and A: it can also be characterised as the maximal order of the field K. Each algebraic integer belongs to the ring of integers of some number field. A number α is an algebraic integer if and only if the ring Z[α] is finitely generated as an abelian group, which is to say, as a Z-module.

Definitions

The following are equivalent definitions of an algebraic integer. Let K be a number field (i.e., a finite extension of Q, the field of rational numbers), in other words, K = Q(θ) for some algebraic number θ by the primitive element theorem.

α ∈ K is an algebraic integer if there exists a monic polynomial f(x) ∈ Z[x] such that f(α) = 0.
α ∈ K is an algebraic integer if the minimal monic polynomial of α over Q is in Z[x].
α ∈ K is an algebraic integer if Z[α] is a finitely generated Z-module.
α ∈ K is an algebraic integer if there exists a non-zero finitely generated Z-submodule M ⊂ C such that αM ⊆ M.

Algebraic integers are a special case of integral elements of a ring extension. In particular, an algebraic integer is an integral element of a finite extension K/Q.

Examples

The only algebraic integers which are found in the set of rational numbers are the integers. In other words, the intersection of Q and A is exactly Z. The rational number a/b is not an algebraic integer unless b divides a. Note that the leading coefficient of the polynomial bx − a is the integer b. As another special case, the square root √n of a nonnegative integer n is an algebraic integer, but is irrational unless n is a perfect square. If d is a square-free integer then the extension K = Q(√d) is a quadratic field of rational numbers.
The ring of algebraic integers O_K contains √d since this is a root of the monic polynomial x^2 − d. Moreover, if d ≡ 1 (mod 4), then the element (1 + √d)/2 is also an algebraic integer. It satisfies the polynomial x^2 − x + (1 − d)/4, where the constant term (1 − d)/4 is an integer. The full ring of integers is generated by √d or (1 + √d)/2 respectively. See Quadratic integer for more. The ring of integers of a pure cubic field also has an explicit integral basis, written in terms of two square-free coprime integers h and k with radicand m = hk^2; the basis depends on whether m ≡ ±1 (mod 9). If ζ is a primitive nth root of unity, then the ring of integers of the cyclotomic field Q(ζ) is precisely Z[ζ]. If α is an algebraic integer then its nth root is another algebraic integer: a polynomial for the nth root of α is obtained by substituting x^n into the polynomial for α.

Non-example

If p(x) is a primitive polynomial which has integer coefficients but is not monic, and p is irreducible over Q, then none of the roots of p are algebraic integers (but are algebraic numbers). Here primitive is used in the sense that the highest common factor of the coefficients of p is 1; this i
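For quadratic fields the test is easy to mechanize: a number p + q√d (d square-free, q ≠ 0) has monic minimal polynomial x^2 − 2px + (p^2 − dq^2), so it is an algebraic integer exactly when both coefficients are integers. A small sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def is_quadratic_algebraic_integer(p: Fraction, q: Fraction, d: int) -> bool:
    """Is x = p + q*sqrt(d) (d square-free, not a perfect square) an
    algebraic integer?  For q != 0 its minimal polynomial is the monic
    quadratic x^2 - 2p x + (p^2 - d q^2); for q == 0 it is x - p."""
    if q == 0:
        return p.denominator == 1          # rational case: integer iff p is
    trace = 2 * p                           # negated coefficient of x
    norm = p * p - d * q * q                # constant term
    return trace.denominator == 1 and norm.denominator == 1
```

The golden ratio (1 + √5)/2 passes the test (its polynomial is x^2 − x − 1), while (1 + √3)/2 and the rational 1/2 fail, matching the d ≡ 1 (mod 4) criterion above.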
https://en.wikipedia.org/wiki/Shape%20of%20the%20universe
In physical cosmology, the shape of the universe refers to both its local and global geometry. Local geometry is defined primarily by its curvature, while the global geometry is characterised by its topology (which itself is constrained by curvature). General relativity explains how spatial curvature (local geometry) is constrained by gravity. The global topology of the universe cannot be deduced from measurements of curvature inferred from observations within the family of homogeneous general relativistic models alone, due to the existence of locally indistinguishable spaces with varying global topological characteristics. For example, a multiply connected space like a 3-torus has everywhere zero curvature but is finite in extent, whereas a flat simply connected space is infinite in extent (Euclidean space). Current observational evidence (WMAP, BOOMERanG, and Planck, for example) implies that the observable universe is flat to within a 0.4% margin of error of the curvature density parameter, with an unknown global topology. It is currently unknown whether the universe is simply connected like Euclidean space or multiply connected like a torus. To date, no compelling evidence has been found suggesting the universe has a non-trivial (i.e., not simply connected) topology, though it has not been ruled out by astronomical observations.

Shape of the observable universe

The universe's structure can be examined from two angles: Local geometry: This relates to the curvature of the universe, primarily concerning what we can observe. Global geometry: This pertains to the universe's overall shape and structure. The observable universe is like a sphere extending 46.5 billion light-years in all directions from any observer. It appears older and more redshifted the deeper we look into space. In theory, we could look all the way back to the Big Bang, but in practice, we can only see up to the cosmic microwave background (CMB) as anything beyond that is opaque.
Studies show that the observable universe is isotropic and homogeneous on the largest scales. If the observable universe encompasses the entire universe, we might determine its structure through observation. However, if the observable universe is smaller, we can only grasp a portion of it, making it impossible to deduce the global geometry through observation. Different mathematical models of the universe's global geometry can be constructed, all consistent with current observations. Hence, it is unclear whether the observable universe matches the entire universe or is significantly smaller. The universe may be compact in some dimensions and not in others, similar to how a cuboid is longer in one dimension than the others. Scientists test these models by looking for novel implications – phenomena not yet observed but necessary if the model is accurate. For instance, a small closed universe would produce multiple images of the same object in the sky, though not necessarily of the same age. As of 2023 cur
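The "multiple images" idea for a toroidal universe comes down to the minimum-image convention: on a flat 3-torus each coordinate wraps around, so light can reach an observer along several wrapped paths, and the shortest separation between two points is taken over all periodic copies. A toy sketch (box size and units are arbitrary):

```python
import math

def torus_distance(p, q, L):
    """Shortest distance between points p and q on a flat 3-torus of side L:
    each coordinate wraps, so take the smaller of the direct and wrapped
    displacements (the minimum-image convention)."""
    deltas = []
    for a, b in zip(p, q):
        d = abs(a - b) % L
        deltas.append(min(d, L - d))     # go directly, or wrap around
    return math.sqrt(sum(d * d for d in deltas))
```

In a box of side 10, points at x = 1 and x = 9 are only 2 apart through the "wrap", not 8; an observer would also see a second, longer-path image of the same object.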
https://en.wikipedia.org/wiki/Convergent
Convergent is an adjective for things that converge. It is commonly used in mathematics and may refer to: Convergent boundary, a type of plate tectonic boundary Convergent (continued fraction) Convergent evolution Convergent series Convergent may also refer to: Convergent Books, an imprint of Crown Publishing Group Convergent Technologies, a computer company Convergents, a Catalan political party See also Convergence (disambiguation)
https://en.wikipedia.org/wiki/Exponentiation
In mathematics, exponentiation is an operation involving two numbers, the base and the exponent or power. Exponentiation is written as b^n, where b is the base and n is the power; this is pronounced as "b (raised) to the (power of) n". When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, b^n is the product of multiplying n bases:

b^n = b × b × ⋯ × b (n factors).

The exponent is usually shown as a superscript to the right of the base. In that case, b^n is called "b raised to the nth power", "b (raised) to the power of n", "the nth power of b", "b to the nth power", or most briefly as "b to the nth". Starting from the basic fact stated above that, for any positive integer n, b^n is n occurrences of b all multiplied by each other, several other properties of exponentiation directly follow. In particular:

b^(m+n) = b^m · b^n.

In other words, when multiplying a base raised to one exponent by the same base raised to another exponent, the exponents add. From this basic rule that exponents add, we can derive that b^0 must be equal to 1 for any b ≠ 0, as follows. For any n, b^0 · b^n = b^(0+n) = b^n. Dividing both sides by b^n gives b^0 = 1. The fact that b^1 = b can similarly be derived from the same rule. For example, (b^1)^3 = b^1 · b^1 · b^1 = b^(1+1+1) = b^3. Taking the cube root of both sides gives b^1 = b. The rule that multiplying makes exponents add can also be used to derive the properties of negative integer exponents. Consider the question of what b^(−1) should mean. In order to respect the "exponents add" rule, it must be the case that b^(−1) · b^1 = b^(−1+1) = b^0 = 1. Dividing both sides by b^1 gives b^(−1) = b^0 / b^1, which can be more simply written as b^(−1) = 1/b, using the result from above that b^0 = 1. By a similar argument, b^(−n) = 1/b^n. The properties of fractional exponents also follow from the same rule. For example, suppose we consider √b and ask if there is some suitable exponent, which we may call r, such that b^r = √b. From the definition of the square root, we have that √b · √b = b. Therefore, the exponent r must be such that b^r · b^r = b. Using the fact that multiplying makes exponents add gives b^r · b^r = b^(r+r) = b^(2r) = b. The b on the right-hand side can also be written as b^1, giving b^(2r) = b^1.
Equating the exponents on both sides, we have 2r = 1. Therefore, r = 1/2, so √b = b^(1/2). The definition of exponentiation can be extended to allow any real or complex exponent. Exponentiation by integer exponents can also be defined for a wide variety of algebraic structures, including matrices. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.

Etymology

The term exponent originates from the Latin exponentem, the present participle of exponere, meaning "to put forth". The term power (Latin: potentia) is a mistranslation of the ancient Greek δύναμις (dúnamis, here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios.

History

Antiquity

The Sand Reckoner

In The Sand Reckoner, Archimedes proved the law of exponents, 10^a · 10^b = 10^(a+b), necessary to manipulate powers of 10. He then used powers
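The derivations above are easy to spot-check numerically. The following assertions restate each rule for one sample base, using math.isclose for the floating-point comparisons:

```python
import math

b = 7.0
# exponents add: b^m * b^n == b^(m+n)
assert math.isclose(b**2 * b**3, b**5)
# b^0 = 1 follows from b^0 * b^n = b^n
assert b**0 == 1
# negative exponents are reciprocals: b^(-n) = 1 / b^n
assert math.isclose(b**-3, 1 / b**3)
# fractional exponents: b^(1/2) * b^(1/2) = b^1, so b^(1/2) is the square root
assert math.isclose(b**0.5 * b**0.5, b)
assert math.isclose(b**0.5, math.sqrt(b))
```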
https://en.wikipedia.org/wiki/Presentation%20of%20a%20group
In mathematics, a presentation is one method of specifying a group. A presentation of a group G comprises a set S of generators—so that every element of the group can be written as a product of powers of some of these generators—and a set R of relations among those generators. We then say G has presentation ⟨S ∣ R⟩. Informally, G has the above presentation if it is the "freest group" generated by S subject only to the relations R. Formally, the group G is said to have the above presentation if it is isomorphic to the quotient of a free group on S by the normal subgroup generated by the relations R. As a simple example, the cyclic group of order n has the presentation ⟨a ∣ a^n = 1⟩, where 1 is the group identity. This may be written equivalently as ⟨a ∣ a^n⟩, thanks to the convention that terms that do not include an equals sign are taken to be equal to the group identity. Such terms are called relators, distinguishing them from the relations that do include an equals sign. Every group has a presentation, and in fact many different presentations; a presentation is often the most compact way of describing the structure of the group. A closely related but different concept is that of an absolute presentation of a group.

Background

A free group on a set S is a group where each element can be uniquely described as a finite length product of the form

s1^(a1) s2^(a2) ⋯ sn^(an)

where the si are elements of S, adjacent si are distinct, and the ai are non-zero integers (but n may be zero). In less formal terms, the group consists of words in the generators and their inverses, subject only to canceling a generator with an adjacent occurrence of its inverse. If G is any group, and S is a generating subset of G, then every element of G is also of the above form; but in general, these products will not uniquely describe an element of G. For example, the dihedral group D8 of order sixteen can be generated by a rotation, r, of order 8; and a flip, f, of order 2; and certainly any element of D8 is a product of rs and fs.
However, we have, for example, r^8 = 1, f^2 = 1, and frf = r^(−1), so such products are not unique in D8. Each such product equivalence can be expressed as an equality to the identity, such as r^8 = 1, f^2 = 1, or frfr = 1. Informally, we can consider these products on the left hand side as being elements of the free group F = ⟨r, f⟩, and can consider the subgroup R of F which is generated by these strings; each of which would also be equivalent to 1 when considered as products in D8. If we then let N be the subgroup of F generated by all conjugates x^(−1)Rx of R, then it follows by definition that every element of N is a finite product x1^(−1)r1x1 ⋯ xm^(−1)rmxm of members of such conjugates. It follows that each element of N, when considered as a product in D8, will also evaluate to 1; and thus that N is a normal subgroup of F. Thus D8 is isomorphic to the quotient group F/N. We then say that D8 has presentation

⟨r, f ∣ r^8 = 1, f^2 = 1, (rf)^2 = 1⟩.

Here the set of generators is S = {r, f}, and the set of relations is R = {r^8 = 1, f^2 = 1, (rf)^2 = 1}. We often see R abbreviated, giving the presentation

⟨r, f ∣ r^8 = f^2 = (rf)^2 = 1⟩.

An even shorter form drops t
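The D8 discussion can be made concrete. The dihedral relations let every word in r and f be rewritten into a normal form r^i f^j with 0 ≤ i < 8 and j ∈ {0, 1}, using f r = r^(−1) f to push rotations past flips, which is exactly why the presented group has sixteen elements. A sketch (the function is ours):

```python
def normal_form(word: str, n: int = 8):
    """Reduce a word in the letters 'r' and 'f' to the normal form r^i f^j
    (0 <= i < n, j in {0, 1}) using the dihedral relations
    r^n = f^2 = (rf)^2 = 1, i.e. the rewrite rule f r = r^(-1) f."""
    i, j = 0, 0                       # current prefix in normal form r^i f^j
    for letter in word:
        if letter == "r":
            # appending r: if a flip is pending, r moves past it inverted
            i = (i + 1) % n if j == 0 else (i - 1) % n
        elif letter == "f":
            j ^= 1                    # flips cancel in pairs (f^2 = 1)
        else:
            raise ValueError(f"unexpected letter {letter!r}")
    return (i, j)
```

For instance, `normal_form("rfrf")` and `normal_form("r" * 8)` both reduce to the identity (0, 0), and distinct words such as "fr" and "rrrrrrrf" name the same element, illustrating why products in D8 are not unique.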
https://en.wikipedia.org/wiki/Hyperplane
In geometry, a hyperplane is a subspace whose dimension is one less than that of its ambient space. For example, if a space is 3-dimensional then its hyperplanes are the 2-dimensional planes, while if the space is 2-dimensional, its hyperplanes are the 1-dimensional lines. This notion can be used in any general space in which the concept of the dimension of a subspace is defined. In different settings, hyperplanes may have different properties. For instance, a hyperplane of an n-dimensional affine space is a flat subset with dimension n − 1, and it separates the space into two half-spaces, while a hyperplane of an n-dimensional projective space does not have this property. The difference in dimension between a subspace S and its ambient space V is known as the codimension of S with respect to V. Therefore, a necessary and sufficient condition for S to be a hyperplane in V is for S to have codimension one in V.

Technical description

In geometry, a hyperplane of an n-dimensional space V is a subspace of dimension n − 1, or equivalently, of codimension 1 in V. The space V may be a Euclidean space or more generally an affine space, or a vector space or a projective space, and the notion of hyperplane varies correspondingly since the definition of subspace differs in these settings; in all cases however, any hyperplane can be given in coordinates as the solution of a single (due to the "codimension 1" constraint) algebraic equation of degree 1. If V is a vector space, one distinguishes "vector hyperplanes" (which are linear subspaces, and therefore must pass through the origin) and "affine hyperplanes" (which need not pass through the origin; they can be obtained by translation of a vector hyperplane). A hyperplane in a Euclidean space separates that space into two half-spaces, and defines a reflection that fixes the hyperplane and interchanges those two half-spaces.
Special types of hyperplanes

Several specific types of hyperplanes are defined with properties that are well suited for particular purposes. Some of these specializations are described here.

Affine hyperplanes

An affine hyperplane is an affine subspace of codimension 1 in an affine space. In Cartesian coordinates, such a hyperplane can be described with a single linear equation of the following form (where at least one of the ai is non-zero and b is an arbitrary constant):

a1 x1 + a2 x2 + ⋯ + an xn = b.

In the case of a real affine space, in other words when the coordinates are real numbers, this affine space separates the space into two half-spaces, which are the connected components of the complement of the hyperplane, and are given by the inequalities

a1 x1 + a2 x2 + ⋯ + an xn < b and a1 x1 + a2 x2 + ⋯ + an xn > b.

As an example, a point is a hyperplane in 1-dimensional space, a line is a hyperplane in 2-dimensional space, and a plane is a hyperplane in 3-dimensional space. A line in 3-dimensional space is not a hyperplane, and does not separate the space into two parts (the complement of such a line is connected). Any hyperplane of a Euclidean space has exactly two unit nor
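The defining linear equation also classifies points into the two half-spaces by the sign of a·x − b; a minimal sketch:

```python
def side(a, b, x):
    """Which side of the affine hyperplane a·x = b does point x lie on?
    Returns 'on', 'positive' (a·x > b) or 'negative' (a·x < b)."""
    s = sum(ai * xi for ai, xi in zip(a, x)) - b
    if s == 0:
        return "on"
    return "positive" if s > 0 else "negative"
```

For example, for the plane z = 1 in 3-space (a = (0, 0, 1), b = 1), the point (0, 0, 3) lies in the positive half-space, (0, 0, 0) in the negative one, and (4, 2, 1) on the hyperplane itself.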
https://en.wikipedia.org/wiki/Generating%20set%20of%20a%20group
In abstract algebra, a generating set of a group is a subset S of the group set such that every element of the group can be expressed as a combination (under the group operation) of finitely many elements of the subset and their inverses. In other words, if S is a subset of a group G, then ⟨S⟩, the subgroup generated by S, is the smallest subgroup of G containing every element of S, which is equal to the intersection over all subgroups containing the elements of S; equivalently, ⟨S⟩ is the subgroup of all elements of G that can be expressed as the finite product of elements in S and their inverses. (Note that inverses are only needed if the group is infinite; in a finite group, the inverse of an element can be expressed as a power of that element.) If G = ⟨S⟩, then we say that S generates G, and the elements in S are called generators or group generators. If S is the empty set, then ⟨S⟩ is the trivial group {e}, since we consider the empty product to be the identity. When there is only a single element g in S, ⟨S⟩ is usually written as ⟨g⟩. In this case, ⟨g⟩ is the cyclic subgroup of the powers of g, a cyclic group, and we say this group is generated by g. Equivalent to saying an element g generates a group is saying that ⟨g⟩ equals the entire group G. For finite groups, it is also equivalent to saying that g has order |G|. A group may need an infinite number of generators. For example, the additive group of rational numbers is not finitely generated. It is generated by the inverses of all the integers, but any finite number of these generators can be removed from the generating set without it ceasing to be a generating set. In a case like this, all the elements in a generating set are nevertheless "non-generating elements", as are in fact all the elements of the whole group − see Frattini subgroup below. If G is a topological group then a subset S of G is called a set of topological generators if ⟨S⟩ is dense in G, i.e. the closure of ⟨S⟩ is the whole group G.

Finitely generated group

If S is finite, then the group G = ⟨S⟩ is called finitely generated.
The structure of finitely generated abelian groups in particular is easily described. Many theorems that are true for finitely generated groups fail for groups in general. It has been proven that if a finite group is generated by a subset S, then each group element may be expressed as a word from the alphabet S of length less than or equal to the order of the group. Every finite group is finitely generated since ⟨G⟩ = G. The integers under addition are an example of an infinite group which is finitely generated by both 1 and −1, but the group of rationals under addition cannot be finitely generated. No uncountable group can be finitely generated; for example, the group of real numbers under addition, (R, +). Different subsets of the same group can be generating subsets. For example, if p and q are integers with gcd(p, q) = 1, then {p, q} also generates the group of integers under addition by Bézout's identity. While it is true that every quotient of a finitely generated group is finitely generated
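For a finite group, the subgroup ⟨S⟩ can be computed by closure: repeatedly multiply known elements by the generators and their inverses until the set stabilizes. A sketch on the integers mod 12, where ⟨8⟩ is exactly the set of multiples of gcd(8, 12) = 4:

```python
def generated_subgroup(generators, op, inverse, identity):
    """Closure of a generating set inside a finite group: keep multiplying
    by generators and their inverses until nothing new appears."""
    gens = set(generators) | {inverse(g) for g in generators}
    subgroup = {identity}
    frontier = {identity}
    while frontier:
        new = {op(x, g) for x in frontier for g in gens} - subgroup
        subgroup |= new
        frontier = new
    return subgroup

# Z/12 under addition: <8> = {0, 4, 8}, the multiples of gcd(8, 12) = 4
H = generated_subgroup({8}, lambda x, y: (x + y) % 12, lambda x: (-x) % 12, 0)
```

The same routine, handed {7} instead, returns all twelve residues, since gcd(7, 12) = 1 makes 7 a generator of the whole group.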
https://en.wikipedia.org/wiki/Dihedral%20group
In mathematics, a dihedral group is the group of symmetries of a regular polygon, which includes rotations and reflections. Dihedral groups are among the simplest examples of finite groups, and they play an important role in group theory, geometry, and chemistry. The notation for the dihedral group differs in geometry and abstract algebra. In geometry, Dn or Dihn refers to the symmetries of the n-gon, a group of order 2n. In abstract algebra, D2n refers to this same dihedral group. This article uses the geometric convention, Dn.

Definition

Elements

A regular polygon with n sides has 2n different symmetries: n rotational symmetries and n reflection symmetries. Usually, we take n ≥ 3 here. The associated rotations and reflections make up the dihedral group Dn. If n is odd, each axis of symmetry connects the midpoint of one side to the opposite vertex. If n is even, there are n/2 axes of symmetry connecting the midpoints of opposite sides and n/2 axes of symmetry connecting opposite vertices. In either case, there are n axes of symmetry and 2n elements in the symmetry group. Reflecting in one axis of symmetry followed by reflecting in another axis of symmetry produces a rotation through twice the angle between the axes. The following picture shows the effect of the sixteen elements of D8 on a stop sign: The first row shows the effect of the eight rotations, and the second row shows the effect of the eight reflections, in each case acting on the stop sign with the orientation as shown at the top left.

Group structure

As with any geometric object, the composition of two symmetries of a regular polygon is again a symmetry of this object. With composition of symmetries to produce another as the binary operation, this gives the symmetries of a polygon the algebraic structure of a finite group. The following Cayley table shows the effect of composition in the group D3 (the symmetries of an equilateral triangle).
r0 denotes the identity; r1 and r2 denote counterclockwise rotations by 120° and 240° respectively, and s0, s1 and s2 denote reflections across the three lines shown in the adjacent picture. For example, s2s1 = r1, because the reflection s1 followed by the reflection s2 results in a rotation of 120°. The order of elements denoting the composition is right to left, reflecting the convention that the element acts on the expression to its right. The composition operation is not commutative. In general, the group Dn has elements r0, ..., rn−1 and s0, ..., sn−1, with composition given by the following formulae: ri rj = ri+j, ri sj = si+j, si rj = si−j, si sj = ri−j. In all cases, addition and subtraction of subscripts are to be performed using modular arithmetic with modulus n. Matrix representation If we center the regular polygon at the origin, then elements of the dihedral group act as linear transformations of the plane. This lets us represent elements of Dn as matrices, with composition being matrix multiplication. This is an example of a (2-dimensional) group representation. For example, the elements of the group D4 can be repres
https://en.wikipedia.org/wiki/List%20of%20group%20theory%20topics
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.

Structures and operations
Central extension
Direct product of groups
Direct sum of groups
Extension problem
Free abelian group
Free group
Free product
Generating set of a group
Group cohomology
Group extension
Presentation of a group
Product of group subsets
Schur multiplier
Semidirect product
Sylow theorems
Hall subgroup
Wreath product

Basic properties of groups
Butterfly lemma
Center of a group
Centralizer and normalizer
Characteristic subgroup
Commutator
Composition series
Conjugacy class
Conjugate closure
Conjugation of isometries in Euclidean space
Core (group)
Coset
Derived group
Euler's theorem
Fitting subgroup
Generalized Fitting subgroup
Hamiltonian group
Identity element
Lagrange's theorem
Multiplicative inverse
Normal subgroup
Perfect group
p-core
Schreier refinement theorem
Subgroup
Transversal (combinatorics)
Torsion subgroup
Zassenhaus lemma

Group homomorphisms
Automorphism
Automorphism group
Factor group
Fundamental theorem on homomorphisms
Group homomorphism
Group isomorphism
Homomorphism
Isomorphism theorem
Inner automorphism
Order automorphism
Outer automorphism group
Quotient group

Basic types of groups
Examples of groups
Abelian group
Cyclic group
Rank of an abelian group
Dicyclic group
Dihedral group
Divisible group
Finitely generated abelian group
Group representation
Klein four-group
List of small groups
Locally cyclic group
Nilpotent group
Non-abelian group
Solvable group
P-group
Pro-finite group

Simple groups and their classification
Classification of finite simple groups
Alternating group
Borel subgroup
Chevalley group
Conway group
Feit–Thompson theorem
Fischer group
General linear group
Group of Lie type
Group scheme
HN group
Janko group
Lie group
Simple Lie group
Linear algebraic group
List of finite simple groups
Mathieu group
Monster group
Baby Monster group
Bimonster
Parabolic subgroup
Projective group
Reductive group
Simple group
Quasisimple group
Special linear group
Symmetric group
Thompson group (finite)
Tits group
Weyl group

Permutation and symme
https://en.wikipedia.org/wiki/Legendre%20polynomials
In mathematics, Legendre polynomials, named after Adrien-Marie Legendre (1782), are a system of complete and orthogonal polynomials with a vast number of mathematical properties and numerous applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications. Closely related to the Legendre polynomials are associated Legendre polynomials, Legendre functions, Legendre functions of the second kind, big q-Legendre polynomials, and associated Legendre functions. Definition by construction as an orthogonal system In this approach, the polynomials are defined as an orthogonal system with respect to the weight function w(x) = 1 over the interval [−1, 1]. That is, Pn(x) is a polynomial of degree n, such that the integral of Pm(x)Pn(x) over [−1, 1] vanishes whenever m ≠ n. With the additional standardization condition Pn(1) = 1, all the polynomials can be uniquely determined. We then start the construction process: P0(x) = 1 is the only correctly standardized polynomial of degree 0. P1(x) must be orthogonal to P0, leading to P1(x) = x, and P2(x) is determined by demanding orthogonality to P0 and P1, and so on. Pn is fixed by demanding orthogonality to all Pm with m < n. This gives n conditions, which, along with the standardization Pn(1) = 1, fixes all the coefficients in Pn(x). With work, all the coefficients of every polynomial can be systematically determined, leading to the explicit representation in powers of x given below. This definition of the Pn's is the simplest one. It does not appeal to the theory of differential equations. Second, the completeness of the polynomials follows immediately from the completeness of the powers 1, x, x², x³, .... Finally, by defining them via orthogonality with respect to the most obvious weight function on a finite interval, it sets up the Legendre polynomials as one of the three classical orthogonal polynomial systems.
The other two are the Laguerre polynomials, which are orthogonal over the half line [0, ∞), and the Hermite polynomials, orthogonal over the full line (−∞, ∞), with weight functions that are the most natural analytic functions that ensure convergence of all integrals. Definition via generating function The Legendre polynomials can also be defined as the coefficients in a formal expansion in powers of t of the generating function 1/√(1 − 2xt + t²) = Σn Pn(x) tⁿ. The coefficient of tⁿ is a polynomial in x of degree n with |x| ≤ 1. Expanding up to t¹ gives P0(x) = 1 and P1(x) = x. Expansion to higher orders gets increasingly cumbersome, but is possible to do systematically, and again leads to one of the explicit forms given below. It is possible to obtain the higher Pn's without resorting to direct expansion of the Taylor series, however. The generating function is differentiated with respect to t on both sides and rearranged to obtain (x − t) Σn Pn(x) tⁿ = (1 − 2xt + t²) Σn n Pn(x) tⁿ⁻¹. Equating the coefficients of powers of t in the resulting expansion gives Bonnet's recursion formula (n + 1) Pn+1(x) = (2n + 1) x Pn(x) − n Pn−1(x). This relation, along with the first two polynomials P0(x) = 1 and P1(x) = x, allows all the rest to be generated
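Bonnet's recursion, (n + 1) Pn+1(x) = (2n + 1) x Pn(x) − n Pn−1(x), lends itself directly to computation. A minimal Python sketch (the function name `legendre` is our own choice, not from the article):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via Bonnet's recursion:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x  # P_0(x) = 1 and P_1(x) = x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

For example, `legendre(2, 0.5)` returns −0.125, matching P2(x) = (3x² − 1)/2, and the standardization Pn(1) = 1 holds for every n.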
https://en.wikipedia.org/wiki/Khan%20Yunis
Khan Yunis (, also spelled Khan Younis or Khan Yunus; translation: "Caravansary [of] Jonah") is a city in the southern Gaza Strip. According to the Palestinian Central Bureau of Statistics, Khan Yunis had a population of 205,125 in 2017. Khan Yunis, which lies only 4 kilometers (about 2.5 miles) east of the Mediterranean Sea, has a semi-arid climate with maximum temperatures of 30 degrees Celsius in summer and 10 degrees Celsius in winter, with an annual rainfall of approximately . The Constituency of Khan Yunis had five members on the Palestinian Legislative Council. Following the 2006 Palestinian legislative election, there were three Hamas members, including Yunis al-Astal, and two Fatah members, including Mohammed Dahlan. The city is now under the Hamas administration of Gaza. History Establishment by Mamluks Before the 14th century, Khan Yunis was a village known as "Salqah". To protect caravans, pilgrims and travellers, a vast caravanserai was constructed there in 1387–88 by the emir Yunus ibn Abdallah an-Noorzai ad-Dawadar, executive secretary and one of the high-ranking officials of the Mamluk sultan Barquq. The growing town surrounding it was named "Khan Yunis" after him. In 1389 Yunus was killed in battle. The town became an important center for trade, and its weekly Thursday market drew traders from neighboring regions. The khan served as a resting stop for couriers of the barid, the Mamluk postal network in Palestine and Syria. Ottoman period In late 1516 Khan Yunis was the site of a minor battle in which the Egypt-based Mamluks were defeated by Ottoman forces under the leadership of Sinan Pasha. The Ottoman sultan Selim I then arrived in the area, from where he led the Ottoman army across the Sinai Peninsula to conquer Egypt.
During the 17th and 18th centuries the Ottomans assigned an Asappes garrison associated with the Cairo Citadel to guard the fortress at Khan Yunis. Pierre Jacotin named the village Kan Jounes on his map from 1799, while in 1838, Robinson noted Khan Yunas as a Muslim village located in the Gaza district. In 1863 the French explorer Victor Guérin visited Khan Yunis. He found it had about a thousand inhabitants, and that many fruit trees, especially apricots, were planted in the vicinity. At the end of the 19th century the Ottomans established a municipal council to administer the affairs of Khan Yunis, which had become the second largest town in the Gaza District after Gaza itself. British Mandate In the 1922 census of Palestine conducted by the British Mandate authorities, Khan Yunis had a population of 3,890 inhabitants (3,866 Muslims, 23 Christians, and one Jew), decreasing in the 1931 census to 3,811 (3,767 Muslims, 41 Christians, and three Jews) in 717 houses in the urban area, plus 3,440 (3,434 Muslims and 6 Christians) in 566 houses in the suburbs. In the 1945 statistics Khan Yunis had a population of 11,220 (11,180 Mu
https://en.wikipedia.org/wiki/Euler%E2%80%93Jacobi%20pseudoprime
In number theory, an odd integer n is called an Euler–Jacobi probable prime (or, more commonly, an Euler probable prime) to base a, if a and n are coprime, and a^((n−1)/2) ≡ (a/n) (mod n), where (a/n) is the Jacobi symbol. If n is an odd composite integer that satisfies the above congruence, then n is called an Euler–Jacobi pseudoprime (or, more commonly, an Euler pseudoprime) to base a. Properties The motivation for this definition is the fact that all prime numbers n satisfy the above equation, as explained in the Euler's criterion article. The equation can be tested rather quickly, which can be used for probabilistic primality testing. These tests are over twice as strong as tests based on Fermat's little theorem. Every Euler–Jacobi pseudoprime is also a Fermat pseudoprime and an Euler pseudoprime. There are no numbers which are Euler–Jacobi pseudoprimes to all bases, as Carmichael numbers are. Solovay and Strassen showed that for every composite n, n fails to be an Euler–Jacobi pseudoprime for at least n/2 bases less than n. The smallest Euler–Jacobi pseudoprime base 2 is 561. There are 11347 Euler–Jacobi pseudoprimes base 2 that are less than 25·10⁹. In the literature, an Euler–Jacobi pseudoprime as defined above is often called simply an Euler pseudoprime. Implementation in Lua The following implementation assumes helper functions modPow (modular exponentiation) and Jacobi (the Jacobi symbol). Since Jacobi returns −1, 0 or 1 while modPow returns a residue in [0, k), the Jacobi value is reduced modulo k before the comparison.

function EulerJacobiTest(k)
  local a = 2
  if k == 1 then
    return false
  elseif k == 2 then
    return true
  elseif k % 2 == 0 then
    return false  -- even numbers greater than 2 are composite
  else
    -- compare a^((k-1)/2) mod k with the Jacobi symbol (a/k), reduced mod k
    return modPow(a, (k - 1) / 2, k) == Jacobi(a, k) % k
  end
end

See also Probable prime References Pseudoprimes
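The Lua version in the article assumes the helpers modPow and Jacobi. For comparison, here is a self-contained Python sketch that includes the Jacobi-symbol computation (all identifiers are our own):

```python
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):    # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def euler_jacobi_probable_prime(n, a=2):
    """Check a^((n-1)/2) = (a/n) (mod n) for odd n coprime to a."""
    if n < 3 or n % 2 == 0:
        return n == 2
    if gcd(a, n) != 1:
        return False
    return pow(a, (n - 1) // 2, n) == jacobi(a, n) % n
```

The composite 561 passes to base 2 (it is the smallest Euler–Jacobi pseudoprime to that base), while a composite such as 15 does not.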
https://en.wikipedia.org/wiki/Dirichlet%27s%20theorem%20on%20arithmetic%20progressions
In number theory, Dirichlet's theorem, also called the Dirichlet prime number theorem, states that for any two positive coprime integers a and d, there are infinitely many primes of the form a + nd, where n is also a positive integer. In other words, there are infinitely many primes that are congruent to a modulo d. The numbers of the form a + nd form an arithmetic progression, and Dirichlet's theorem states that this sequence contains infinitely many prime numbers. The theorem, named after Peter Gustav Lejeune Dirichlet, extends Euclid's theorem that there are infinitely many prime numbers. Stronger forms of Dirichlet's theorem state that for any such arithmetic progression, the sum of the reciprocals of the prime numbers in the progression diverges, and that different such arithmetic progressions with the same modulus have approximately the same proportions of primes. Equivalently, the primes are evenly distributed (asymptotically) among the congruence classes modulo d of residues a coprime to d. Examples The primes of the form 4n + 3 are 3, 7, 11, 19, 23, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107, 127, 131, 139, 151, 163, 167, 179, 191, 199, 211, 223, 227, 239, 251, 263, 271, 283, ... They correspond to the following values of n: 0, 1, 2, 4, 5, 7, 10, 11, 14, 16, 17, 19, 20, 25, 26, 31, 32, 34, 37, 40, 41, 44, 47, 49, 52, 55, 56, 59, 62, 65, 67, 70, 76, 77, 82, 86, 89, 91, 94, 95, ... The strong form of Dirichlet's theorem implies that 1/3 + 1/7 + 1/11 + 1/19 + 1/23 + ⋯ is a divergent series. Sequences dn + a with odd d are often ignored because half the numbers are even and the other half are the same numbers as a sequence with 2d, if we start with n = 0. For example, 6n + 1 produces the same primes as 3n + 1, while 6n + 5 produces the same as 3n + 2 except for the only even prime 2. The following table lists several arithmetic progressions with infinitely many primes and the first few ones in each of them.
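The progression 4n + 3 above can be generated directly; a short Python sketch using trial division (function names are our own):

```python
def is_prime(n):
    """Trial-division primality check, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes congruent to 3 modulo 4, i.e. of the form 4n + 3.
primes_4n_plus_3 = [p for p in range(3, 300, 4) if is_prime(p)]
```

The resulting list begins 3, 7, 11, 19, 23, 31, ..., matching the sequence quoted in the article.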
Distribution Since the primes thin out, on average, in accordance with the prime number theorem, the same must be true for the primes in arithmetic progressions. It is natural to ask about the way the primes are shared between the various arithmetic progressions for a given value of d (there are d of those, essentially, if we do not distinguish two progressions sharing almost all their terms). The answer is given in this form: the number of feasible progressions modulo d — those where a and d do not have a common factor > 1 — is given by Euler's totient function φ(d). Further, the proportion of primes in each of those is 1/φ(d). For example, if d is a prime number q, each of the q − 1 progressions qn + 1, qn + 2, ..., qn + (q − 1) (all except the trivial progression qn) contains a proportion 1/(q − 1) of the primes. When compared to each other, progressions with a quadratic nonresidue remainder typically have slightly more elements than those with a quadratic residue remainder (Chebyshev's bias). History In 1737, Euler related the study of prime numbers to what is known now as the Riemann zeta function: he showed that the value
https://en.wikipedia.org/wiki/MMT
MMT may refer to:

Economics
Modern Monetary Theory, a branch of economic theory

Geography
4QMMT (or MMT), one of the Dead Sea Scrolls
Myanmar Standard Time (UTC+6:30)

Mathematics
MacMahon Master theorem, a result in enumerative combinatorics and linear algebra

Technology
MMT (Eclipse), a software project
Multimode manual transmission, in a motor vehicle
MPEG media transport, a digital container standard

Science
Methylcyclopentadienyl manganese tricarbonyl, an organomanganese compound
MMT Observatory, an astronomical observatory in Arizona, USA
MMt, one million metric tons
Montmorillonite, a clay

Television
Miyagi Television Broadcasting, a Japanese commercial broadcaster

Transportation
Marshalltown Municipal Transit, in Marshalltown, Iowa
https://en.wikipedia.org/wiki/Diophantine%20set
In mathematics, a Diophantine equation is an equation of the form P(x1, ..., xj, y1, ..., yk) = 0 (usually abbreviated P(x̄, ȳ) = 0) where P(x̄, ȳ) is a polynomial with integer coefficients, where x1, ..., xj indicate parameters and y1, ..., yk indicate unknowns. A Diophantine set is a subset S of N^j, the set of all j-tuples of natural numbers, such that for some Diophantine equation P(x̄, ȳ) = 0, x̄ ∈ S if and only if there exist natural numbers ȳ with P(x̄, ȳ) = 0. That is, a parameter value is in the Diophantine set S if and only if the associated Diophantine equation is satisfiable under that parameter value. The use of natural numbers both in S and the existential quantification merely reflects the usual applications in computability theory and model theory. It does not matter whether natural numbers refer to the set of nonnegative integers or positive integers, since the two definitions for Diophantine sets are equivalent. We can also equally well speak of Diophantine sets of integers and freely replace quantification over natural numbers with quantification over the integers. It is also sufficient to assume P is a polynomial over the rationals and multiply P by the appropriate denominators to yield integer coefficients. However, whether quantification over rationals can also be substituted for quantification over the integers is a notoriously hard open problem. The MRDP theorem (so named for the initials of the four principal contributors to its solution) states that a set of integers is Diophantine if and only if it is computably enumerable. A set of integers S is computably enumerable if and only if there is an algorithm that, when given an integer, halts if that integer is a member of S and runs forever otherwise. This means that the concept of a general Diophantine set, apparently belonging to number theory, can be recast in logical or computability-theoretic terms. This is far from obvious, however, and represented the culmination of some decades of work. Matiyasevich's completion of the MRDP theorem settled Hilbert's tenth problem.
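Membership in a Diophantine set can be tested by searching for a witness for the unknowns. As a sketch, consider the classic equation x = (y1 + 1)(y2 + 1) over positive integers, which defines the set of composite numbers (identifiers are our own):

```python
def in_composite_set(x):
    """Does x = (y1 + 1)(y2 + 1) have a solution with y1, y2 >= 1?

    Equivalently: can x be written as a product of two integers > 1?"""
    for y1_plus_1 in range(2, x):
        if x % y1_plus_1 == 0 and x // y1_plus_1 >= 2:
            return True  # witness found: the equation is satisfiable
    return False
```

Running the check over 2, ..., 19 yields exactly the composites 4, 6, 8, 9, 10, 12, 14, 15, 16, 18.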
Hilbert's tenth problem was to find a general algorithm that can decide whether a given Diophantine equation has a solution among the integers. While Hilbert's tenth problem is not a formal mathematical statement as such, the nearly universal acceptance of the (philosophical) identification of a decision algorithm with a total computable predicate allows us to use the MRDP theorem to conclude that the tenth problem is unsolvable. Examples In the following examples, the natural numbers refer to the set of positive integers. The equation x = (y1 + 1)(y2 + 1) is an example of a Diophantine equation with a parameter x and unknowns y1 and y2. The equation has a solution in y1 and y2 precisely when x can be expressed as a product of two integers greater than 1, in other words x is a composite number. Namely, this equation provides a Diophantine definition of the set {4, 6, 8, 9, 10, 12, 14, 15, 16, 18, ...} consisting of the composite numbers. Other examples of Diophantine definitions are as follows: The equati
https://en.wikipedia.org/wiki/Degenerate%20distribution
In mathematics, a degenerate distribution is, according to some, a probability distribution in a space with support only on a manifold of lower dimension, and according to others a distribution with support only at a single point. By the latter definition, it is a deterministic distribution and takes only a single value. Examples include a two-headed coin and rolling a die whose sides all show the same number. This distribution satisfies the definition of "random variable" even though it does not appear random in the everyday sense of the word; hence it is considered degenerate. In the case of a real-valued random variable, the degenerate distribution is a one-point distribution, localized at a point k0 on the real line. The probability mass function equals 1 at this point and 0 elsewhere. The degenerate univariate distribution can be viewed as the limiting case of a continuous distribution whose variance goes to 0, causing the probability density function to be a delta function at k0, with infinite height there but area equal to 1. The cumulative distribution function of the univariate degenerate distribution is F(x) = 0 for x < k0 and F(x) = 1 for x ≥ k0. Constant random variable In probability theory, a constant random variable is a discrete random variable that takes a constant value, regardless of any event that occurs. This is technically different from an almost surely constant random variable, which may take other values, but only on events with probability zero. Constant and almost surely constant random variables, which have a degenerate distribution, provide a way to deal with constant values in a probabilistic framework. Let X: Ω → R be a random variable defined on a probability space (Ω, P).
Then X is an almost surely constant random variable if there exists k0 such that Pr(X = k0) = 1, and is furthermore a constant random variable if X(ω) = k0 for every ω ∈ Ω. A constant random variable is almost surely constant, but not necessarily vice versa, since if X is almost surely constant then there may exist γ ∈ Ω such that X(γ) ≠ k0 (but then necessarily Pr({γ}) = 0, in fact Pr(X ≠ k0) = 0). For practical purposes, the distinction between X being constant or almost surely constant is unimportant, since the cumulative distribution function F(x) of X does not depend on whether X is constant or 'merely' almost surely constant. In either case, F(x) = 0 for x < k0 and F(x) = 1 for x ≥ k0. The function F(x) is a step function; in particular it is a translation of the Heaviside step function. Higher dimensions Degeneracy of a multivariate distribution in n random variables arises when the support lies in a space of dimension less than n. This occurs when at least one of the variables is a deterministic function of the others. For example, in the 2-variable case suppose that Y = aX + b for scalar random variables X and Y and scalar constants a ≠ 0 and b; here knowing the value of one of X or Y gives exact knowledge of the value of the other. All the possible points (x, y) fall on the one-dimensional line y = ax + b. In general when one or m
https://en.wikipedia.org/wiki/Perturbation%20theory
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter ε. The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of ε usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction. Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines. Description Perturbation theory develops an expression for the desired solution in terms of a formal power series, known as a perturbation series, in some "small" parameter that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution A a series in the small parameter (here called ε), like the following: A = A0 + εA1 + ε²A2 + ε³A3 + ⋯ In this example, A0 would be the known solution to the exactly solvable initial problem, and the terms A1, A2, A3, ... represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small ε these higher-order terms in the series generally (but not always) become successively smaller.
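As a toy illustration (our own example, not from the article), consider the positive root of x² + εx − 1 = 0. Substituting a series x = x0 + εx1 + ε²x2 and matching powers of ε gives x ≈ 1 − ε/2 + ε²/8, which can be compared with the exact root:

```python
from math import sqrt

def exact_root(eps):
    """Exact positive root of x**2 + eps*x - 1 = 0 (quadratic formula)."""
    return (-eps + sqrt(eps * eps + 4)) / 2

def perturbative_root(eps):
    """Second-order perturbation series x0 + eps*x1 + eps**2*x2,
    with x0 = 1, x1 = -1/2, x2 = 1/8 found by matching powers of eps."""
    return 1 - eps / 2 + eps ** 2 / 8
```

At ε = 0.1 the truncated series already agrees with the exact root to roughly one part in a million, illustrating how the error shrinks with the order of truncation.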
An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction: A ≈ A0 + εA1. Some authors use big O notation to indicate the order of the error in the approximate solution: A = A0 + εA1 + O(ε²). If the power series in ε converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem. In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution. However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at a point at which its elements are minimum. This is called an asymptotic series. If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers of ε or negative powers of ε) then the perturbation problem i
https://en.wikipedia.org/wiki/Orthogonality
In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity. Orthogonality is also used with various meanings that are often weakly related, or not related at all, to the mathematical meanings. Etymology The word comes from the Ancient Greek orthós, meaning "upright", and gōnía, meaning "angle". The Ancient Greek orthogṓnion and Classical Latin orthogonium originally denoted a rectangle. Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle. Mathematics Physics Optics In optics, polarization states are said to be orthogonal when they propagate independently of each other, as in vertical and horizontal linear polarization or right- and left-handed circular polarization. Special relativity In special relativity, a time axis determined by a rapidity of motion is hyperbolic-orthogonal to a space axis of simultaneous events, also determined by the rapidity. The theory features relativity of simultaneity. Hyperbolic orthogonality Quantum mechanics In quantum mechanics, a sufficient (but not necessary) condition that two eigenstates of a Hermitian operator, ψm and ψn, are orthogonal is that they correspond to different eigenvalues. This means, in Dirac notation, that ⟨ψm|ψn⟩ = 0 if ψm and ψn correspond to different eigenvalues. This follows from the fact that Schrödinger's equation is a Sturm–Liouville equation (in Schrödinger's formulation) or that observables are given by Hermitian operators (in Heisenberg's formulation). Art In art, the perspective (imaginary) lines pointing to the vanishing point are referred to as "orthogonal lines". The term "orthogonal line" often has a quite different meaning in the literature of modern art criticism.
Many works by painters such as Piet Mondrian and Burgoyne Diller are noted for their exclusive use of "orthogonal lines" — not, however, with reference to perspective, but rather referring to lines that are straight and exclusively horizontal or vertical, forming right angles where they intersect. For example, an essay at the web site of the Thyssen-Bornemisza Museum states that "Mondrian ... dedicated his entire oeuvre to the investigation of the balance between orthogonal lines and primary colours." Computer science Orthogonality in programming language design is the ability to use various language features in arbitrary combinations with consistent results. This usage was introduced by Van Wijngaarden in the design of Algol 68: The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement. On the other hand, these concepts have been applied “orthogonally” in order to maximize the expressive power of the language while trying to avoid deleterious superfluities. Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propa
https://en.wikipedia.org/wiki/Log-normal%20distribution
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln X has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics). The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton. The log-normal distribution has also been associated with other names, such as McAlister, Gibrat and Cobb–Douglas. A log-normal process is the statistical realization of the multiplicative product of many independent random variables, each of which is positive. This is justified by considering the central limit theorem in the log domain (sometimes called Gibrat's law). The log-normal distribution is the maximum entropy probability distribution for a random variate X for which the mean and variance of ln X are specified. Definitions Generation and parameters Let Z be a standard normal variable, and let μ and σ > 0 be two real numbers. Then, the distribution of the random variable X = e^(μ + σZ) is called the log-normal distribution with parameters μ and σ. These are the expected value (or mean) and standard deviation of the variable's natural logarithm, not the expectation and standard deviation of X itself. This relationship is true regardless of the base of the logarithmic or exponential function: if log_a(X) is normally distributed, then so is log_b(X), for any two positive numbers a, b ≠ 1. Likewise, if e^Y is log-normally distributed, then so is a^Y, where 0 < a ≠ 1.
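Python's standard library can generate log-normal samples directly via random.lognormvariate. The sketch below checks the well-known mean formula E[X] = e^(μ + σ²/2) empirically; the seed, sample size and tolerance are our own choices:

```python
import random
from math import exp

random.seed(42)
mu, sigma = 0.0, 0.5

# X = exp(mu + sigma * Z) with Z standard normal
samples = [random.lognormvariate(mu, sigma) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
theoretical_mean = exp(mu + sigma ** 2 / 2)
```

Every sample is positive, and with 200,000 draws the empirical mean lands close to e^(0.125) ≈ 1.133.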
In order to produce a distribution with desired mean m and variance v, one uses μ = ln(m² / √(m² + v)) and σ² = ln(1 + v/m²). Alternatively, the "multiplicative" or "geometric" parameters μ* = e^μ and σ* = e^σ can be used. They have a more direct interpretation: μ* is the median of the distribution, and σ* is useful for determining "scatter" intervals, see below. Probability density function A positive random variable X is log-normally distributed (i.e., X ~ Lognormal(μ, σ²)) if the natural logarithm of X is normally distributed with mean μ and variance σ²: ln X ~ N(μ, σ²). Let Φ and φ be respectively the cumulative probability distribution function and the probability density function of the N(0,1) distribution; then the probability density function of X is f(x) = φ((ln x − μ)/σ) / (σx). Cumulative distribution function The cumulative distribution function is F(x) = Φ((ln x − μ)/σ), where Φ is the cumulative distribution function of the standard normal distribution (i.e., N(0,1)). This may also be expressed as follows: F(x) = ½ erfc(−(ln x − μ)/(σ√2)), where erfc is the complementary error function. Multivariate log-normal If X ~ N(μ, Σ) is a multivariate normal distribution, then Y = e^X has a multivariate log-normal distribution. The exponential is applied element
https://en.wikipedia.org/wiki/John%20Forbes%20Nash%20Jr.
John Forbes Nash, Jr. (June 13, 1928 – May 23, 2015), known and published as John Nash, was an American mathematician who made fundamental contributions to game theory, real algebraic geometry, differential geometry, and partial differential equations. Nash and fellow game theorists John Harsanyi and Reinhard Selten were awarded the 1994 Nobel Memorial Prize in Economics. In 2015, he and Louis Nirenberg were awarded the Abel Prize for their contributions to the field of partial differential equations. As a graduate student in the Princeton University Department of Mathematics, Nash introduced a number of concepts (including Nash equilibrium and the Nash bargaining solution) which are now considered central to game theory and its applications in various sciences. In the 1950s, Nash discovered and proved the Nash embedding theorems by solving a system of nonlinear partial differential equations arising in Riemannian geometry. This work, also introducing a preliminary form of the Nash–Moser theorem, was later recognized by the American Mathematical Society with the Leroy P. Steele Prize for Seminal Contribution to Research. Ennio De Giorgi and Nash found, with separate methods, a body of results paving the way for a systematic understanding of elliptic and parabolic partial differential equations. Their De Giorgi–Nash theorem on the smoothness of solutions of such equations resolved Hilbert's nineteenth problem on regularity in the calculus of variations, which had been a well-known open problem for almost sixty years. In 1959, Nash began showing clear signs of mental illness, and spent several years at psychiatric hospitals being treated for schizophrenia. After 1970, his condition slowly improved, allowing him to return to academic work by the mid-1980s. 
Nash was the subject of Sylvia Nasar's 1998 biography A Beautiful Mind, and his struggles with his illness and his recovery became the basis for a film of the same name directed by Ron Howard, in which Nash was portrayed by Russell Crowe. Early life and education John Forbes Nash Jr. was born on June 13, 1928, in Bluefield, West Virginia. His father and namesake, John Forbes Nash Sr., was an electrical engineer for the Appalachian Electric Power Company. His mother, Margaret Virginia (née Martin) Nash, had been a schoolteacher before she was married. He was baptized in the Episcopal Church. He had a younger sister, Martha (born November 16, 1930). Nash attended kindergarten and public school, and he learned from books provided by his parents and grandparents. Nash's parents pursued opportunities to supplement their son's education, and arranged for him to take advanced mathematics courses at a local community college (Bluefield College) during his final year of high school. He attended Carnegie Institute of Technology (which later became Carnegie Mellon University) on a full scholarship, the George Westinghouse Scholarship, initially majoring in chemical engineering. He switched to a chemistry
https://en.wikipedia.org/wiki/Bernoulli%20process
In probability and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables Xi are identically distributed and independent. Prosaically, a Bernoulli process is repeated coin flipping, possibly with an unfair coin (but with consistent unfairness). Every variable Xi in the sequence is associated with a Bernoulli trial or experiment. They all have the same Bernoulli distribution. Much of what can be said about the Bernoulli process can also be generalized to more than two outcomes (such as the process for a six-sided die); this generalization is known as the Bernoulli scheme. The problem of determining the process, given only a limited sample of Bernoulli trials, may be called the problem of checking whether a coin is fair. Definition A Bernoulli process is a finite or infinite sequence of independent random variables X1, X2, X3, ..., such that for each i, the value of Xi is either 0 or 1; for all values of i, the probability p that Xi = 1 is the same. In other words, a Bernoulli process is a sequence of independent identically distributed Bernoulli trials. Independence of the trials implies that the process is memoryless. Given that the probability p is known, past outcomes provide no information about future outcomes. (If p is unknown, however, the past informs about the future indirectly, through inferences about p.) If the process is infinite, then from any point the future trials constitute a Bernoulli process identical to the whole process, the fresh-start property. Interpretation The two possible values of each Xi are often called "success" and "failure". Thus, when expressed as a number 0 or 1, the outcome may be called the number of successes on the ith "trial". Two other common interpretations of the values are true or false and yes or no.
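The memoryless property can be illustrated with a small simulation: the empirical success rate immediately after a success should match the overall rate, since past outcomes carry no information. This is a sketch; the choice p = 0.3 and the variable names are ours.

```python
import random

random.seed(0)
p = 0.3
# One long realization of a Bernoulli process with parameter p.
trials = [1 if random.random() < p else 0 for _ in range(100_000)]

# Overall empirical success rate.
overall = sum(trials) / len(trials)

# Success rate on trials that immediately follow a success.
after_success = [trials[i + 1] for i in range(len(trials) - 1) if trials[i] == 1]
conditional = sum(after_success) / len(after_success)

print(overall, conditional)  # both close to 0.3
```

That the two rates agree (up to sampling noise) is exactly the statement that, with p known, conditioning on the past changes nothing.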
Under any interpretation of the two values, the individual variables Xi may be called Bernoulli trials with parameter p. In many applications time passes between trials, as the index i increases. In effect, the trials X1, X2, ..., Xi, ... happen at "points in time" 1, 2, ..., i, .... That passage of time and the associated notions of "past" and "future" are not necessary, however. Most generally, any Xi and Xj in the process are simply two from a set of random variables indexed by {1, 2, ..., n} in the finite case, or by {1, 2, 3, ...} in the infinite case. One experiment with only two possible outcomes, often referred to as "success" and "failure", usually encoded as 1 and 0, can be modeled as a Bernoulli distribution. Several random variables and probability distributions besides the Bernoullis may be derived from the Bernoulli process: The number of successes in the first n trials, which has a binomial distribution B(n, p) The number of failures needed to get r successes, which has a negative bino
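The derived distributions listed above can be checked empirically by repeatedly running a Bernoulli process and counting (a sketch; the parameter choices p = 0.25, n = 20 are illustrative):

```python
import random

random.seed(1)
p, n, reps = 0.25, 20, 50_000

# Number of successes in the first n trials: Binomial(n, p), with mean n*p.
binom_draws = [sum(random.random() < p for _ in range(n)) for _ in range(reps)]
print(sum(binom_draws) / reps)  # close to n*p = 5.0

# Number of failures before the first success: geometric, with mean (1-p)/p.
def failures_before_success(p):
    k = 0
    while random.random() >= p:
        k += 1
    return k

geom_draws = [failures_before_success(p) for _ in range(reps)]
print(sum(geom_draws) / reps)  # close to (1-p)/p = 3.0
```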
https://en.wikipedia.org/wiki/Bernoulli%20trial
In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. It is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed them in his Ars Conjectandi (1713). The mathematical formalisation of the Bernoulli trial is known as the Bernoulli process. This article offers an elementary introduction to the concept, whereas the article on the Bernoulli process offers a more advanced treatment. Since a Bernoulli trial has only two possible outcomes, it can be framed as some "yes or no" question. For example: Is the top card of a shuffled deck an ace? Was the newborn child a girl? (See human sex ratio.) Therefore, success and failure are merely labels for the two outcomes, and should not be construed literally. The term "success" in this sense consists in the result meeting specified conditions; it is not a value judgement. More generally, given any probability space, for any event (set of outcomes), one can define a Bernoulli trial, corresponding to whether the event occurred or not (event or complementary event). Examples of Bernoulli trials include: Flipping a coin. In this context, obverse ("heads") conventionally denotes success and reverse ("tails") denotes failure. A fair coin has the probability of success 0.5 by definition. In this case, there are exactly two possible outcomes. Rolling a die, where a six is "success" and everything else a "failure". In this case, there are six possible outcomes, and the event is a six; the complementary event "not a six" corresponds to the other five possible outcomes. In conducting a political opinion poll, choosing a voter at random to ascertain whether that voter will vote "yes" in an upcoming referendum. Definition Independent repeated trials of an experiment with exactly two possible outcomes are called Bernoulli trials.
Call one of the outcomes "success" and the other outcome "failure". Let p be the probability of success in a Bernoulli trial, and q be the probability of failure. Then the probability of success and the probability of failure sum to one, since these are complementary events: "success" and "failure" are mutually exclusive and exhaustive. Thus, one has the following relations: p = 1 − q, q = 1 − p, p + q = 1. Alternatively, these can be stated in terms of odds: given probability p of success and q of failure, the odds for are p : q and the odds against are q : p. These can also be expressed as numbers, by dividing, yielding the odds for, o_f = p/q, and the odds against, o_a = q/p. These are multiplicative inverses, so they multiply to 1, with the following relations: o_f = 1/o_a, o_a = 1/o_f, o_f · o_a = 1. In the case that a Bernoulli trial is representing an event from finitely many equally likely outcomes, where S of the outcomes are success and F of the outcomes are failure, the odds for are S : F and the odds against are F : S. This yields the following formulas for probability and odds: p = S/(S + F), q = F/(S + F), o_f = S/F, o_a = F/S. Here the odds
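These probability–odds conversions can be sketched in a few lines (the function names are ours; exact rational arithmetic via the fractions module avoids rounding):

```python
from fractions import Fraction

def odds_for(p):
    """Odds for the event, expressed as a number: p / q with q = 1 - p."""
    return p / (1 - p)

def prob_from_counts(successes, failures):
    """With S of S+F equally likely outcomes a success, p = S / (S + F)."""
    return Fraction(successes, successes + failures)

p = prob_from_counts(1, 5)   # rolling a six: 1 success, 5 failures
print(p)                     # 1/6
print(odds_for(p))           # odds for are 1:5, i.e. 1/5
print(odds_for(p) * (1 / odds_for(p)))  # odds for and against multiply to 1
```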
https://en.wikipedia.org/wiki/Graded%20ring
In mathematics, in particular abstract algebra, a graded ring is a ring R such that the underlying additive group is a direct sum of abelian groups R = ⊕ R_i such that R_i R_j ⊆ R_{i+j}. The index set is usually the set of nonnegative integers or the set of integers, but can be any monoid. The direct sum decomposition is usually referred to as gradation or grading. A graded module is defined similarly (see below for the precise definition). It generalizes graded vector spaces. A graded module that is also a graded ring is called a graded algebra. A graded ring could also be viewed as a graded Z-algebra. The associativity is not important (in fact not used at all) in the definition of a graded ring; hence, the notion applies to non-associative algebras as well; e.g., one can consider a graded Lie algebra. First properties Generally, the index set of a graded ring is assumed to be the set of nonnegative integers, unless otherwise explicitly specified. This is the case in this article. A graded ring is a ring that is decomposed into a direct sum R = R_0 ⊕ R_1 ⊕ R_2 ⊕ ⋯ of additive groups, such that R_m R_n ⊆ R_{m+n} for all nonnegative integers m and n. A nonzero element of R_n is said to be homogeneous of degree n. By definition of a direct sum, every nonzero element a of R can be uniquely written as a sum a = a_0 + a_1 + ⋯ + a_n where each a_i is either 0 or homogeneous of degree i. The nonzero a_i are the homogeneous components of a. Some basic properties are: R_0 is a subring of R; in particular, the multiplicative identity 1 is a homogeneous element of degree zero. For any n, R_n is a two-sided R_0-module, and the direct sum decomposition is a direct sum of R_0-modules. R is an associative R_0-algebra. An ideal I is homogeneous, if for every a in I, the homogeneous components of a also belong to I. (Equivalently, if it is a graded submodule of R.) The intersection of a homogeneous ideal I with R_n is an R_0-submodule of I called the homogeneous part of degree n of I. A homogeneous ideal is the direct sum of its homogeneous parts.
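The defining containment R_m R_n ⊆ R_{m+n} can be illustrated with the polynomial ring in two variables graded by total degree (a minimal sketch; the dict-based representation of polynomials is ours):

```python
from collections import defaultdict

# A polynomial in x, y as {(i, j): coeff}; the total degree of x^i y^j is i + j.
def homogeneous_components(poly):
    """Decompose poly into its homogeneous components, keyed by degree."""
    comps = defaultdict(dict)
    for (i, j), c in poly.items():
        comps[i + j][(i, j)] = c
    return dict(comps)

def multiply(p, q):
    """Multiply two polynomials; exponents add, so degrees add."""
    prod = defaultdict(int)
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            prod[(i + k, j + l)] += a * b
    return dict(prod)

f = {(1, 0): 1, (0, 1): 2}   # x + 2y, homogeneous of degree 1
g = {(2, 0): 3, (1, 1): 1}   # 3x^2 + xy, homogeneous of degree 2

fg = multiply(f, g)
# The product of degree-1 and degree-2 elements lies entirely in degree 3,
# witnessing R_1 R_2 contained in R_3 for this grading.
print(homogeneous_components(fg))
```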
If I is a two-sided homogeneous ideal in R, then R/I is also a graded ring, decomposed as R/I = ⊕ (R_n + I)/I, where (R_n + I)/I is the homogeneous part of degree n of R/I. Basic examples Any (non-graded) ring R can be given a gradation by letting R_0 = R, and R_i = 0 for i ≠ 0. This is called the trivial gradation on R. The polynomial ring is graded by degree: it is a direct sum of the R_i consisting of homogeneous polynomials of degree i. Let S be the set of all nonzero homogeneous elements in a graded integral domain R. Then the localization of R with respect to S is a Z-graded ring. If I is an ideal in a commutative ring R, then ⊕_n I^n/I^{n+1} is a graded ring called the associated graded ring of R along I; geometrically, it is the coordinate ring of the normal cone along the subvariety defined by I. Let X be a topological space, H^i(X; R) the ith cohomology group with coefficients in a ring R. Then H*(X; R), the cohomology ring of X with coefficients in R, is a graded ring whose underlying group is ⊕_i H^i(X; R), with the multiplicative structure given by the cup product. Graded module The corresponding idea in mod
https://en.wikipedia.org/wiki/Outer%20product
In linear algebra, the outer product of two coordinate vectors is the matrix whose entries are all products of an element in the first vector with an element in the second vector. If the two coordinate vectors have dimensions n and m, then their outer product is an n × m matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra. The outer product contrasts with: The dot product (a special case of "inner product"), which takes a pair of coordinate vectors as input and produces a scalar The Kronecker product, which takes a pair of matrices as input and produces a block matrix Standard matrix multiplication Definition Given two vectors u and v of size m × 1 and n × 1 respectively, their outer product, denoted u ⊗ v, is defined as the m × n matrix obtained by multiplying each element of u by each element of v: u ⊗ v = u v^T. Or, in index notation: (u ⊗ v)_{ij} = u_i v_j. Denoting the dot product by ⋅, if given an n × 1 vector w, then (u ⊗ v) w = (v ⋅ w) u. If given a 1 × m vector x, then x (u ⊗ v) = (x ⋅ u) v^T. If u and v are vectors of the same dimension bigger than 1, then det(u ⊗ v) = 0. The outer product u ⊗ v is equivalent to a matrix multiplication u v^T provided that u is represented as an m × 1 column vector and v as an n × 1 column vector (which makes v^T a row vector). For instance, for m = 4 and n = 3 the outer product is the 4 × 3 matrix whose (i, j) entry is u_i v_j. For complex vectors, it is often useful to take the conjugate transpose of v, denoted v^H or (v^T)^*: u ⊗ v = u v^H. Contrast with Euclidean inner product If m = n, then one can take the matrix product the other way, yielding a scalar (or 1 × 1 matrix): v^T u = u ⋅ v, which is the standard inner product for Euclidean vector spaces, better known as the dot product. The dot product is the trace of the outer product. Unlike the dot product, the outer product is not commutative. Multiplication of a vector w by the matrix u ⊗ v can be written in terms of the inner product, using the relation (u ⊗ v) w = u ⟨v, w⟩.
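A short numerical illustration of these identities (assuming NumPy; np.outer computes the outer product, and the column-times-row matrix product gives the same result):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # m = 3
v = np.array([4.0, 5.0])        # n = 2

outer = np.outer(u, v)          # (u v^T)_{ij} = u_i v_j, a 3 x 2 matrix
print(outer)

# The outer product is the column-vector times row-vector matrix product.
assert np.array_equal(outer, u[:, None] @ v[None, :])

# For same-length vectors, the dot product is the trace of the outer product.
w = np.array([6.0, 7.0, 8.0])
assert np.isclose(np.dot(u, w), np.trace(np.outer(u, w)))

# The outer product is not commutative: np.outer(v, u) is 2 x 3, the transpose.
assert np.array_equal(np.outer(v, u), outer.T)
```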
The outer product of tensors Given two tensors u and v with dimensions (k_1, ..., k_q) and (l_1, ..., l_r), their outer product u ⊗ v is a tensor with dimensions (k_1, ..., k_q, l_1, ..., l_r) and entries (u ⊗ v)_{i_1...i_q j_1...j_r} = u_{i_1...i_q} v_{j_1...j_r}. For example, if A is of order 3 and B is of order 2, then their outer product C = A ⊗ B is of order 5, its dimensions being those of A followed by those of B. If A has a component A_{abc} and B has a component B_{de}, then the component of C formed by the outer product is C_{abcde} = A_{abc} B_{de}. Connection with the Kronecker product The outer product and Kronecker product are closely related; in fact the same symbol is commonly used to denote both operations. In the case of column vectors, the Kronecker product can be viewed as a form of vectorization (or flattening) of the outer product. In particular, for two column vectors u and v, we can write: u ⊗_Kron v = vec(v u^T). (The order of the vectors is reversed on the right side of the equation.) Another similar identity that further highlights the similarity between the operations is u ⊗_Kron v^T = u v^T = u ⊗_outer v, where the order of vectors need not be flipped. The middle expression uses matrix multiplication, where the vectors are considered as column/row matrices. Connection with the matrix product Given a pa
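The vector case of the Kronecker–outer relationship can be verified directly (assuming NumPy; for 1-D inputs np.kron returns exactly the row-major flattening of the outer product):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5])

# For vectors, the Kronecker product is a flattening of the outer product.
assert np.array_equal(np.kron(u, v), np.outer(u, v).ravel())

# With u as a column and v as a row, the Kronecker and outer products agree:
# kron(u, v^T) = u v^T = outer(u, v).
col, row = u.reshape(-1, 1), v.reshape(1, -1)
assert np.array_equal(np.kron(col, row), np.outer(u, v))

print(np.kron(u, v))  # [ 4  5  8 10 12 15]
```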
https://en.wikipedia.org/wiki/Distributive%20property
In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality x · (y + z) = (x · y) + (x · z) is always true in elementary algebra. For example, in elementary arithmetic, one has 2 · (1 + 3) = (2 · 1) + (2 · 3). Therefore, one would say that multiplication distributes over addition. This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted ∧) and the logical or (denoted ∨) distributes over the other. Definition Given a set S and two binary operators ∗ and + on S: the operation ∗ is left-distributive over + (or with respect to +) if, given any elements x, y, z of S, x ∗ (y + z) = (x ∗ y) + (x ∗ z); the operation ∗ is right-distributive over + if, given any elements x, y, z of S, (y + z) ∗ x = (y ∗ x) + (z ∗ x); and the operation ∗ is distributive over + if it is left- and right-distributive. When ∗ is commutative, the three conditions above are logically equivalent. Meaning The operators used for examples in this section are those of the usual addition and multiplication. If the operation denoted · is not commutative, there is a distinction between left-distributivity, a · (b ± c) = a · b ± a · c, and right-distributivity, (a ± b) · c = a · c ± b · c. In either case, the distributive property can be described in words as: To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted). If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply of distributivity. One example of an operation that is "only" right-distributive is division, which is not commutative: (a ± b) ÷ c = a ÷ c ± b ÷ c. In this case, left-distributivity does not apply: a ÷ (b + c) ≠ a ÷ b + a ÷ c. The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers).
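A small check of these laws, including the one-sided distributivity of division, can be done with exact rational arithmetic (the example values 1, 2, 4 are ours):

```python
from fractions import Fraction as F

a, b, c = F(1), F(2), F(4)

# Multiplication distributes over addition on both sides.
assert c * (a + b) == c * a + c * b
assert (a + b) * c == a * c + b * c

# Division is only right-distributive:
assert (a + b) / c == a / c + b / c    # right-distributivity holds
assert c / (a + b) != c / a + c / b    # left-distributivity fails

print("division distributes on the right only")
```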
Here multiplication is distributive over addition, but addition is not distributive over multiplication. Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra. Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products. Examples Real numbers In the following examples, the use of the distributive law on the set of real numbers is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law. Matrices The distributive law is valid for matrix multiplication. More precisely, (A + B) · C = A · C + B · C for all l × m matrices A, B and m × n matrices C, as well as A · (C + D) = A · C + A · D for all l × m matrices A and m × n matrices C, D. Because the commuta
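The matrix case can be verified numerically (assuming NumPy; the shapes and random integer entries are illustrative, and integer arrays make the equality checks exact):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))   # two 2 x 3 matrices
B = rng.integers(-5, 5, size=(2, 3))
C = rng.integers(-5, 5, size=(3, 4))   # two 3 x 4 matrices
D = rng.integers(-5, 5, size=(3, 4))

# Right and left distributivity of matrix multiplication over addition.
assert np.array_equal((A + B) @ C, A @ C + B @ C)
assert np.array_equal(A @ (C + D), A @ C + A @ D)

# Matrix multiplication is not commutative in general, which is why the two
# distributive laws must be stated separately.
print("matrix product distributes over addition")
```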
https://en.wikipedia.org/wiki/Vladimir%20Voevodsky
Vladimir Alexandrovich Voevodsky (4 June 1966 – 30 September 2017) was a Russian-American mathematician. His work in developing a homotopy theory for algebraic varieties and formulating motivic cohomology led to the award of a Fields Medal in 2002. He is also known for the proof of the Milnor conjecture and motivic Bloch–Kato conjectures and for the univalent foundations of mathematics and homotopy type theory. Early life and education Vladimir Voevodsky's father, Aleksander Voevodsky, was head of the Laboratory of High Energy Leptons in the Institute for Nuclear Research at the Russian Academy of Sciences. His mother Tatyana was a chemist. Voevodsky attended Moscow State University for a while, but was forced to leave without a diploma for refusing to attend classes and failing academically. He received his Ph.D. in mathematics from Harvard University in 1992 after being recommended without even applying, following several independent publications; he was advised there by David Kazhdan. While he was a first year undergraduate, he was given a copy of Esquisse d'un Programme (submitted a few months earlier by Alexander Grothendieck to CNRS) by his advisor George Shabat. He learned the French language "with the sole purpose of being able to read this text" and started his research on some of the themes mentioned there. Work Voevodsky's work was in the intersection of algebraic geometry with algebraic topology. Along with Fabien Morel, Voevodsky introduced a homotopy theory for schemes. He also formulated what is now believed to be the correct form of motivic cohomology, and used this new tool to prove Milnor's conjecture relating the Milnor K-theory of a field to its étale cohomology. For the above, he received the Fields Medal at the 24th International Congress of Mathematicians held in Beijing, China. In 1998 he gave a plenary lecture (A1-Homotopy Theory) at the International Congress of Mathematicians in Berlin. He coauthored (with Andrei Suslin and Eric M.
Friedlander) Cycles, Transfers and Motivic Homology Theories, which develops the theory of motivic cohomology in some detail. From 2002, Voevodsky was a professor at the Institute for Advanced Study in Princeton, New Jersey. In January 2009, at an anniversary conference in honor of Alexander Grothendieck, held at the Institut des Hautes Études Scientifiques, Voevodsky announced a proof of the full Bloch–Kato conjectures. In 2009, he constructed the univalent model of Martin-Löf type theory in simplicial sets. This led to important advances in type theory and in the development of new univalent foundations of mathematics that Voevodsky worked on in his final years. He worked on a Coq library UniMath using univalent ideas. In April 2016, the University of Gothenburg awarded an honorary doctorate to Voevodsky. Death and legacy Voevodsky died on 30 September 2017 at his home in Princeton, New Jersey, aged 51, from an aneurysm. He was survived by his daughters, Diana Yasmine Voe
https://en.wikipedia.org/wiki/Laurent%20Lafforgue
Laurent Lafforgue (born 6 November 1966) is a French mathematician. He has made outstanding contributions to Langlands' program in the fields of number theory and analysis, and in particular proved the Langlands conjectures for the automorphism group of a function field. The crucial contribution by Lafforgue to solve this question is the construction of compactifications of certain moduli stacks of shtukas. The proof was the result of more than six years of concentrated effort. In 2002, at the 24th International Congress of Mathematicians in Beijing, China, he received the Fields Medal together with Vladimir Voevodsky. Biography Laurent Lafforgue has two brothers, Thomas and Vincent, both mathematicians. Thomas is now a teacher in a classe préparatoire aux grandes écoles at Lycée Louis le Grand in Paris, and Vincent is a CNRS directeur de recherches at the Institut Fourier in Grenoble. He won two silver medals at the International Mathematical Olympiad (IMO) in 1984 and 1985. He entered the École Normale Supérieure in 1986. In 1994 he received his Ph.D. under the direction of Gérard Laumon in the Arithmetic and Algebraic Geometry team at the Université de Paris-Sud. He is a research director of the CNRS. From 2000 to 2021 he was a permanent professor of mathematics at the Institut des Hautes Études Scientifiques (IHÉS) in Bures-sur-Yvette, France. In 2021, he left his IHÉS position and moved to Huawei. Lafforgue is a devout Catholic and has never married. Career He received the Clay Research Award in 2000 and a prize of the French Academy of Sciences in 2001, and was awarded the Fields Medal in 2002. His younger brother Vincent Lafforgue is also a notable mathematician. On 22 May 2011 Lafforgue was awarded an honorary Doctor of Science degree from the University of Notre Dame. Views Lafforgue is a critic of what he calls the "pedagogically correct" in France's educational system.
In 2005, he was forced to resign from the Haut conseil de l'éducation after he expressed these views in a private letter to Bruno Racine, president of the HCE, which was later made public. Works Expository articles Lafforgue, L. Chtoucas de Drinfeld et applications. [Drinfelʹd shtukas and applications] Proceedings of the International Congress of Mathematicians, Vol. II (Berlin, 1998). Doc. Math. 1998, Extra Vol. II, 563–570. Lafforgue, Laurent. Chtoucas de Drinfeld, formule des traces d'Arthur-Selberg et correspondance de Langlands. [Drinfelʹd shtukas, Arthur-Selberg trace formula and Langlands correspondence] Proceedings of the International Congress of Mathematicians, Vol. I (Beijing, 2002), 383–400, Higher Ed. Press, Beijing, 2002. Research articles Lafforgue, Laurent. Chtoucas de Drinfeld et correspondance de Langlands. [Drinfelʹd shtukas and Langlands correspondence] Invent. Math. 147 (2002), no. 1, 1–241. Notes References Gérard Laumon, The Work of Laurent Lafforgue, Proceedings of the ICM, Beijing 2002, vol. 1, 91–97 Gérard Laumon, La correspondance
https://en.wikipedia.org/wiki/Partition%20%28number%20theory%29
In number theory and combinatorics, a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways: 4, 3 + 1, 2 + 2, 2 + 1 + 1, 1 + 1 + 1 + 1. The only partition of zero is the empty sum, having no parts. The order-dependent composition 1 + 3 is the same partition as 3 + 1, and the two distinct compositions 1 + 2 and 2 + 1 represent the same partition as 2 + 1. An individual summand in a partition is called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general. Examples The seven partitions of 5 are: 5; 4 + 1; 3 + 2; 3 + 1 + 1; 2 + 2 + 1; 2 + 1 + 1 + 1; 1 + 1 + 1 + 1 + 1. Some authors treat a partition as a decreasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form 2^2 1, where the superscript indicates the number of repetitions of a part. This multiplicity notation for a partition can be written alternatively as 1^{m_1} 2^{m_2} 3^{m_3} ⋯, where m_1 is the number of 1's, m_2 is the number of 2's, etc. (Components with m_i = 0 may be omitted.) For example, in this notation, the partitions of 5 are written 5, 4 1, 3 2, 3 1^2, 2^2 1, 2 1^3, and 1^5. Diagrammatic representations of partitions There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after Alfred Young. Both have several possible conventions; here, we use English notation, with diagrams aligned in the upper-left corner.
Ferrers diagram The partition 6 + 4 + 3 + 1 of the number 14 can be represented by a diagram of 14 circles lined up in 4 rows, each row having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 follow the same scheme. Young diagram An alternative visual representation of an integer partition is its Young diagram (often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 consists of rows of 5, 4, and 1 boxes, while the Ferrers diagram for the same partition consists of rows of 5, 4, and 1 dots. While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated obj
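The partition function p(n) can be computed by a standard recursion on the largest allowed part (a sketch; the function name is ours): a partition of n with parts at most k either uses a part of size k, leaving n − k to partition with parts at most k, or uses only parts smaller than k.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_partitions(n, max_part=None):
    """Number of partitions of n into parts of size at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1          # the empty sum is the only partition of zero
    if n < 0 or max_part == 0:
        return 0
    # Either use a part of size max_part, or restrict to smaller parts.
    return (count_partitions(n - max_part, max_part)
            + count_partitions(n, max_part - 1))

print([count_partitions(n) for n in range(1, 8)])  # [1, 2, 3, 5, 7, 11, 15]
```

In particular p(4) = 5 and p(5) = 7, matching the partitions listed above.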
https://en.wikipedia.org/wiki/Solvable%20group
In mathematics, more specifically in the field of group theory, a solvable group or soluble group is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup. Motivation Historically, the word "solvable" arose from Galois theory and the proof of the general unsolvability of the quintic equation. Specifically, a polynomial equation is solvable in radicals if and only if the corresponding Galois group is solvable (note this theorem holds only in characteristic 0). This means that associated to a polynomial f there is a tower of field extensions F = F_0 ⊆ F_1 ⊆ ⋯ ⊆ F_m = K such that each F_i is obtained from F_{i−1} by adjoining a root of a radical equation x^n = a with a in F_{i−1}, and K contains a splitting field for f. Example For example, the smallest Galois field extension of Q containing the element gives a solvable group. It has associated field extensions giving a solvable group of Galois extensions containing the following composition factors: with group action, and minimal polynomial. with group action, and minimal polynomial. with group action, and minimal polynomial containing the 5th roots of unity excluding 1. with group action, and minimal polynomial. The last factor is generated by the identity permutation. All of the defining group actions change a single extension while keeping all of the other extensions fixed. A general element in the group can be written as a product of these generators, for a total of 80 elements. It is worth noting that this group is not abelian itself; in this group, some pairs of elements do not commute. The solvable group is isomorphic to a group defined using the semidirect product and direct product of cyclic groups. In the solvable group, the corresponding factor is not a normal subgroup.
Definition A group G is called solvable if it has a subnormal series whose factor groups (quotient groups) are all abelian, that is, if there are subgroups 1 = G0 < G1 < ⋅⋅⋅ < Gk = G such that Gj−1 is normal in Gj, and Gj /Gj−1 is an abelian group, for j = 1, 2, …, k. Or equivalently, if its derived series G ≥ G^(1) ≥ G^(2) ≥ ⋯, the descending normal series where every subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup of G. These two definitions are equivalent, since for every group H and every normal subgroup N of H, the quotient H/N is abelian if and only if N includes the commutator subgroup of H. The least n such that G^(n) = 1 is called the derived length of the solvable group G. For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and every simple abelian group is cyclic of prime order. The Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups corresp
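The derived-series definition can be tested directly on small permutation groups. The following self-contained sketch (all helper names are ours) generates a group from permutations, repeatedly takes the subgroup generated by commutators [a, b] = a⁻¹b⁻¹ab, and reports the derived length:

```python
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens, n):
    """Subgroup of S_n generated by gens (finite, so products suffice)."""
    identity = tuple(range(n))
    elems, frontier = {identity}, {identity}
    while frontier:
        new = {compose(g, h) for g in frontier for h in gens} - elems
        elems |= new
        frontier = new
    return elems

def commutator_subgroup(group, n):
    comms = {compose(compose(inverse(a), inverse(b)), compose(a, b))
             for a in group for b in group}
    return closure(comms, n)

def derived_length(gens, n, max_steps=10):
    """Length of the derived series, or None if it does not reach 1."""
    g = closure(set(gens), n)
    steps = 0
    while len(g) > 1 and steps < max_steps:
        g = commutator_subgroup(g, n)
        steps += 1
    return steps if len(g) == 1 else None

# S3, generated by a transposition and a 3-cycle: derived series
# S3 > A3 > 1, so derived length 2 and S3 is solvable.
print(derived_length([(1, 0, 2), (1, 2, 0)], 3))
```

Running the same check on S4 gives derived length 3 (S4 > A4 > V4 > 1), in line with the statement that these Galois groups of low-degree polynomials are solvable.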