9,109
https://en.wikipedia.org/wiki/Diophantine%20equation
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents. Diophantine problems have fewer equations than unknowns and involve finding integers that solve all equations simultaneously. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations (beyond the case of linear and quadratic equations) was an achievement of the twentieth century.

Examples

In the following Diophantine equations, x, y, and z are the unknowns and the other letters are given constants.

Linear Diophantine equations

One equation

The simplest linear Diophantine equation takes the form ax + by = c, where a, b and c are given integers. The solutions are described by the following theorem: This Diophantine equation has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.

Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have

a(x + kv) + b(y − ku) = ax + by + k(av − bu) = ax + by + k(udv − vdu) = ax + by,

showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that ax_1 + by_1 = ax_2 + by_2 = c, one deduces that u(x_2 − x_1) + v(y_2 − y_1) = 0. As u and v are coprime, Euclid's lemma shows that v divides x_2 − x_1, and thus that there exists an integer k such that both x_2 − x_1 = kv and y_2 − y_1 = −ku. Therefore, x_2 = x_1 + kv and y_2 = y_1 − ku, which completes the proof.

Chinese remainder theorem

The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n_1, ..., n_k be pairwise coprime integers greater than one, a_1, ..., a_k be arbitrary integers, and N be the product n_1 ··· n_k. The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution (x, x_1, ..., x_k) such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N:

x = a_1 + n_1 x_1
...
x = a_k + n_k x_k
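The proof above is constructive: the extended Euclidean algorithm produces the Bézout coefficients e and f, a particular solution follows by scaling, and the quotients u and v generate all other solutions. Here is a minimal Python sketch of that procedure (the function names are illustrative, not from any particular library):

```python
def extended_gcd(a, b):
    """Return (d, e, f) with d = gcd(a, b) and a*e + b*f = d (Bezout's identity)."""
    if b == 0:
        return (a, 1, 0)
    d, e, f = extended_gcd(b, a % b)
    # gcd(a, b) = gcd(b, a % b); back-substitute the Bezout coefficients.
    return (d, f, e - (a // b) * f)

def solve_linear_diophantine(a, b, c):
    """Solve a*x + b*y = c over the integers, assuming (a, b) != (0, 0).

    Returns (x0, y0, u, v) describing the solution family
    (x0 + k*v, y0 - k*u) for integer k, or None if no solution exists.
    """
    d, e, f = extended_gcd(a, b)
    if c % d != 0:
        return None              # c must be a multiple of gcd(a, b)
    h = c // d
    x0, y0 = e * h, f * h        # particular solution from Bezout's identity
    u, v = a // d, b // d        # quotients of a and b by the gcd
    return (x0, y0, u, v)

# Example: 6x + 10y = 8 (gcd(6, 10) = 2 divides 8, so solutions exist).
x0, y0, u, v = solve_linear_diophantine(6, 10, 8)
for k in range(-2, 3):
    assert 6 * (x0 + k * v) + 10 * (y0 - k * u) == 8
```

The same routine can also be used on the Chinese remainder system above: eliminating x from any two of the congruences leaves a linear Diophantine equation in the two remaining unknowns, so the congruences can be combined pairwise.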
System of linear Diophantine equations

More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation, every system of linear Diophantine equations may be written AX = C, where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers.

The computation of the Smith normal form of A provides two unimodular matrices (that is, matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix B = [b_{i,j}] = UAV is such that b_{i,i} is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as B(V⁻¹X) = UC. Calling y_i the entries of V⁻¹X and d_i those of D = UC, this leads to the system

b_{i,i} y_i = d_i for 1 ≤ i ≤ k,
0 = d_i for k < i ≤ m.

This system is equivalent to the given one in the following sense: a column matrix of integers X is a solution of the given system if and only if X = VY for some column matrix of integers Y such that BY = D. It follows that the system has a solution if and only if b_{i,i} divides d_i for i ≤ k and d_i = 0 for i > k. If this condition is fulfilled, the solutions of the given system are

X = V (d_1/b_{1,1}, ..., d_k/b_{k,k}, h_{k+1}, ..., h_n)^T,

where h_{k+1}, ..., h_n are arbitrary integers.

Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form."

Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that also include inequalities. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.

Homogeneous equations

A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem

x^d + y^d − z^d = 0.

As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface. Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent to testing whether a rational number is the d-th power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved. For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem). For degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation.

Degree two

Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced. For proving that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation

x^2 + y^2 = 3z^2

does not have any other solution than the trivial solution (0, 0, 0).
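The next passage proves this by reduction modulo 4; since reduction leaves only finitely many cases, the obstruction can also be checked mechanically, as in this small Python sketch (illustrative, standard library only):

```python
# Enumerate all residues (x, y, z) mod 4 with x^2 + y^2 == 3z^2 (mod 4):
# every case forces x, y and z to be even, so a coprime solution is
# impossible and only the trivial solution (0, 0, 0) remains.
solutions_mod_4 = [
    (x, y, z)
    for x in range(4) for y in range(4) for z in range(4)
    if (x * x + y * y - 3 * z * z) % 4 == 0
]
assert all(x % 2 == 0 and y % 2 == 0 and z % 2 == 0
           for x, y, z in solutions_mod_4)
```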
In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius sqrt(3) centered at the origin. More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists. If a non-trivial integer solution is known, one may produce all other solutions in the following way.

Geometric interpretation

Let Q(x_1, ..., x_n) = 0 be a homogeneous Diophantine equation, where Q(x_1, ..., x_n) is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all x_i are zero. If (a_1, ..., a_n) is a non-trivial integer solution of this equation, then (a_1, ..., a_n) are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if (p_1/q, ..., p_n/q) are homogeneous coordinates of a rational point of this hypersurface, where q, p_1, ..., p_n are integers, then (p_1, ..., p_n) is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form

(k p_1/d, ..., k p_n/d),

where k is any integer, and d is the greatest common divisor of the p_i. It follows that solving the Diophantine equation Q(x_1, ..., x_n) = 0 is completely reduced to finding the rational points of the corresponding projective hypersurface.

Parameterization

Let now A = (a_1, ..., a_n) be an integer solution of the equation Q(x_1, ..., x_n) = 0. As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.

More precisely, one may proceed as follows. By permuting the indices, one may suppose, without loss of generality, that a_n ≠ 0. Then one may pass to the affine case by considering the affine hypersurface defined by

q(x_1, ..., x_{n−1}) = Q(x_1, ..., x_{n−1}, 1),

which has the rational point R = (r_1, ..., r_{n−1}) with r_i = a_i/a_n. If this rational point is a singular point, that is, if all partial derivatives are zero at R, then all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables y_i = x_i − r_i does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.

If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case.

In the general case, consider the parametric equation of a line passing through R:

x_2 = r_2 + t_2(x_1 − r_1)
...
x_{n−1} = r_{n−1} + t_{n−1}(x_1 − r_1).

Substituting this in q, one gets a polynomial of degree two in x_1, that is zero for x_1 = r_1. It is thus divisible by x_1 − r_1. The quotient is linear in x_1, and may be solved for expressing x_1 as a quotient of two polynomials of degree at most two in t_2, ..., t_{n−1}, with integer coefficients:

x_1 = f_1(t_2, ..., t_{n−1}) / f_n(t_2, ..., t_{n−1}).

Substituting this in the expressions for x_2, ..., x_{n−1}, one gets, for i = 1, ..., n − 1,

x_i = f_i(t_2, ..., t_{n−1}) / f_n(t_2, ..., t_{n−1}),

where f_1, ..., f_n are polynomials of degree at most two with integer coefficients. Then, one can return to the homogeneous case.
Let, for i = 1, ..., n,

F_i(t_1, t_2, ..., t_{n−1})

be the homogenization of f_i(t_2, ..., t_{n−1}). These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q:

x_1 = F_1(t_1, ..., t_{n−1})
...
x_n = F_n(t_1, ..., t_{n−1}).

A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t_1, ..., t_{n−1}. As F_1, ..., F_n are homogeneous polynomials, the point is not changed if all t_i are multiplied by the same rational number. Thus, one may suppose that t_1, ..., t_{n−1} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences (x_1, ..., x_n) where, for i = 1, ..., n,

x_i = k F_i(t_1, ..., t_{n−1}) / d,

where k is an integer, t_1, ..., t_{n−1} are coprime integers, and d is the greatest common divisor of the n integers F_i(t_1, ..., t_{n−1}). One could hope that the coprimality of the t_i could imply that d = 1. Unfortunately this is not the case, as shown in the next section.

Example of Pythagorean triples

The equation

x^2 + y^2 = z^2

is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.

For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope:

y = t(x + 1).

Putting this in the circle equation x^2 + y^2 = 1, one gets

x^2 − 1 + t^2(x + 1)^2 = 0.

Dividing by x + 1 results in

x − 1 + t^2(x + 1) = 0,

which is easy to solve in x:

x = (1 − t^2)/(1 + t^2).

It follows

y = t(x + 1) = 2t/(1 + t^2).

Homogenizing as described above, one gets all solutions as

x = k(s^2 − t^2)/d, y = k(2st)/d, z = k(s^2 + t^2)/d,

where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even. The primitive triples are the solutions where k = 1 and s > t > 0. This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y.
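Translated into code, the parameterization gives a generator of primitive triples. The following Python sketch (function name illustrative) enumerates coprime s > t > 0 of opposite parity, which is exactly the case k = 1, d = 1 above:

```python
from math import gcd

def primitive_triples(bound):
    """Primitive Pythagorean triples (x, y, z) with z <= bound, from
    x = s^2 - t^2, y = 2st, z = s^2 + t^2 over coprime s > t > 0 of
    opposite parity (the case k = 1, d = 1 in the text)."""
    triples = []
    s = 2
    while s * s + 1 <= bound:
        for t in range(1, s):
            if gcd(s, t) == 1 and (s - t) % 2 == 1:   # coprime, opposite parity
                x, y, z = s * s - t * t, 2 * s * t, s * s + t * t
                if z <= bound:
                    triples.append((x, y, z))
        s += 1
    return triples

for x, y, z in primitive_triples(50):
    assert x * x + y * y == z * z and gcd(gcd(x, y), z) == 1
# -> [(3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), ...]
```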
Diophantine analysis

Typical questions

The questions asked in Diophantine analysis include: Are there any solutions? Are there any solutions beyond some that are easily found by inspection? Are there finitely or infinitely many solutions? Can all solutions be found in theory? Can one in practice compute a full list of solutions? These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles.

Typical problem

The given information is that a father's age is 1 less than twice that of his son, and that the digits making up the father's age are reversed in the son's age (i.e. if the father's age is 10x + y, the son's age is 10y + x). This leads to the equation 10x + y = 2(10y + x) − 1, thus 19y − 8x = 1. Inspection gives the result x = 7, y = 3, and thus the father's age equals 73 years and the son's equals 37 years. One may easily show that there is not any other solution with x and y positive integers less than 10. Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts.

17th and 18th centuries

In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation a^n + b^n = c^n has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.

In 1657, Fermat attempted to solve the Diophantine equation 61x^2 + 1 = y^2 (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method).

Hilbert's tenth problem

In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.

Diophantine geometry

Diophantine geometry is the application of techniques from algebraic geometry to Diophantine equations, considering equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is not algebraically closed.

Modern research

The oldest general method for solving a Diophantine equation, or for proving that there is no solution, is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements, these methods cannot solve most Diophantine equations. The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist. During the 20th century, a new approach has been deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates. This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.

Infinite Diophantine equations

An example of an infinite Diophantine equation is

n = a^2 + 2b^2 + 3c^2 + 4d^2 + 5e^2 + ...,

which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n; a similar infinite equation, by contrast, does not always have a solution for positive n.

Exponential Diophantine equations

If a Diophantine equation has as an additional variable or variables occurring as exponents, it is an exponential Diophantine equation. Examples include:

the Ramanujan–Nagell equation, 2^n − 7 = x^2;
the equation of the Fermat–Catalan conjecture and Beal's conjecture, a^m + b^n = c^k, with inequality restrictions on the exponents;
the Erdős–Moser equation, 1^k + 2^k + ... + (m − 1)^k = m^k.

A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error.
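For a concrete feel of exponential Diophantine equations, the Ramanujan–Nagell equation 2^n − 7 = x^2 can be searched by brute force over a bounded range of exponents. This sketch recovers the five known solutions; the bound is arbitrary, and proving that the list is complete requires algebraic number theory, not enumeration:

```python
from math import isqrt

# Search 2^n - 7 = x^2 for small n; the unknown n occurs as an exponent,
# which is what makes the equation "exponential" Diophantine.
solutions = []
for n in range(3, 100):
    m = 2 ** n - 7
    x = isqrt(m)
    if x * x == m:
        solutions.append((x, n))
print(solutions)  # [(1, 3), (3, 4), (5, 5), (11, 7), (181, 15)]
```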
See also

Kuṭṭaka, Aryabhata's algorithm for solving linear Diophantine equations in two unknowns

Notes

References

Further reading

Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka, 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhäuser, Basel/Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC, 1997.
Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré", Historia Mathematica 8 (1981), 393–416.
Bashmakova, Izabella G., Slavutin, E. I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka, 1984 [in Russian].
Bashmakova, Izabella G. "Diophantine Equations and the Evolution of Algebra", American Mathematical Society Translations 147 (2), 1990, pp. 85–100. Translated by A. Shenitzer and H. Grant.
Grechuk, Bogdan (2024). Polynomial Diophantine Equations: A Systematic Approach. Springer.
Rashed, Roshdi. Histoire de l'analyse diophantienne classique : D'Abū Kāmil à Fermat. Berlin, New York: Walter de Gruyter.

External links

Diophantine Equation. From MathWorld at Wolfram Research.
Dario Alpern's Online Calculator. Retrieved 18 March 2009
Diophantine equation
[ "Mathematics" ]
4,038
[ "Diophantine equations", "Mathematical objects", "Equations", "Number theory" ]
9,110
https://en.wikipedia.org/wiki/Diophantus
Diophantus of Alexandria (born between AD 200 and 214; died between AD 284 and 298) was a Greek mathematician, who was the author of two main works: On Polygonal Numbers, which survives incomplete, and the Arithmetica in thirteen books, most of it extant, made up of arithmetical problems that are solved through algebraic equations. His Arithmetica influenced the development of algebra by Arabs, and his equations influenced modern work in both abstract algebra and computer science. The first five books of his work are purely algebraic. Furthermore, recent studies of Diophantus's work have revealed that the method of solution taught in his Arithmetica matches later medieval Arabic algebra in its concepts and overall procedure.

Diophantus was among the earliest mathematicians who recognized positive rational numbers as numbers, by allowing fractions for coefficients and solutions. He coined the term παρισότης (parisotēs) to refer to an approximate equality. This term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima for functions and tangent lines to curves. Although not the earliest, the Arithmetica has the best-known use of algebraic notation to solve arithmetical problems coming from Greek antiquity, and some of its problems served as inspiration for later mathematicians working in analysis and number theory. In modern use, Diophantine equations are algebraic equations with integer coefficients for which integer solutions are sought. Diophantine geometry and Diophantine approximations are two other subareas of number theory that are named after him.

Biography

Diophantus was born into a Greek family and is known to have lived in Alexandria, Egypt, during the Roman era, between AD 200 and 214 to 284 or 298. Much of our knowledge of the life of Diophantus is derived from a 5th-century Greek anthology of number games and puzzles created by Metrodorus. One of the problems (sometimes called his epitaph) states:

Here lies Diophantus, the wonder behold. Through art algebraic, the stone tells how old: 'God gave him his boyhood one-sixth of his life, One twelfth more as youth while whiskers grew rife; And then yet one-seventh ere marriage begun; In five years there came a bouncing new son. Alas, the dear child of master and sage After attaining half the measure of his father's life chill fate took him. After consoling his fate by the science of numbers for four years, he ended his life.'

This puzzle implies that Diophantus' age x can be expressed as

x = x/6 + x/12 + x/7 + 5 + x/2 + 4,

which gives a value of 84 years. However, the accuracy of the information cannot be confirmed. In popular culture, this puzzle was Puzzle No. 142 in Professor Layton and Pandora's Box, one of the hardest puzzles in the game, which needed to be unlocked by solving other puzzles first.
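The epitaph reduces to elementary fraction arithmetic, which is easy to verify mechanically; here is a minimal Python check using exact rationals from the standard library:

```python
from fractions import Fraction

# x = x/6 + x/12 + x/7 + 5 + x/2 + 4  =>  (1 - 1/6 - 1/12 - 1/7 - 1/2) x = 9.
coeff = 1 - (Fraction(1, 6) + Fraction(1, 12) + Fraction(1, 7) + Fraction(1, 2))
age = Fraction(9, 1) / coeff
assert age == 84   # Diophantus died at 84, by the puzzle's reckoning
```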
Arithmetica

Arithmetica is the major work of Diophantus and the most prominent work on premodern algebra in Greek mathematics. It is a collection of problems giving numerical solutions of both determinate and indeterminate equations. Of the original thirteen books of which Arithmetica consisted, only six have survived, though there are some who believe that four Arabic books discovered in 1968 are also by Diophantus. Some Diophantine problems from Arithmetica have been found in Arabic sources. It should be mentioned here that Diophantus never used general methods in his solutions. Hermann Hankel, a renowned German mathematician, made the following remark regarding Diophantus:

[In] our author (Diophantos) not the slightest trace of a general, comprehensive method is discernible; each problem calls for some special method which refuses to work even for the most closely related problems. For this reason it is difficult for the modern scholar to solve the 101st problem even after having studied 100 of Diophantos's solutions.

History

Like many other Greek mathematical treatises, Diophantus was forgotten in Western Europe during the Dark Ages, since the study of ancient Greek, and literacy in general, had greatly declined. The portion of the Greek Arithmetica that survived, however, was, like all ancient Greek texts transmitted to the early modern world, copied by, and thus known to, medieval Byzantine scholars. Scholia on Diophantus by the Byzantine Greek scholar John Chortasmenos (1370–1437) are preserved together with a comprehensive commentary written by the earlier Greek scholar Maximos Planudes (1260–1305), who produced an edition of Diophantus within the library of the Chora Monastery in Byzantine Constantinople. In addition, some portion of the Arithmetica probably survived in the Arab tradition (see above). In 1463 the German mathematician Regiomontanus wrote:

No one has yet translated from the Greek into Latin the thirteen books of Diophantus, in which the very flower of the whole of arithmetic lies hidden.

Arithmetica was first translated from Greek into Latin by Bombelli in 1570, but the translation was never published. However, Bombelli borrowed many of the problems for his own book Algebra. The editio princeps of Arithmetica was published in 1575 by Xylander. The Latin translation of Arithmetica by Bachet in 1621 became the first Latin edition that was widely available. Pierre de Fermat owned a copy, studied it and made notes in the margins. A later 1895 Latin translation by Paul Tannery was said to be an improvement by Thomas L. Heath, who used it in the 1910 second edition of his English translation.

Margin-writing by Fermat and Chortasmenos

The 1621 edition of Arithmetica by Bachet gained fame after Pierre de Fermat wrote his famous "Last Theorem" in the margins of his copy:

If an integer n is greater than 2, then a^n + b^n = c^n has no solutions in non-zero integers a, b, and c. I have a truly marvelous proof of this proposition which this margin is too narrow to contain.

Fermat's proof was never found, and the problem of finding a proof for the theorem went unsolved for centuries. A proof was finally found in 1994 by Andrew Wiles after working on it for seven years. It is believed that Fermat did not actually have the proof he claimed to have. Although the original copy in which Fermat wrote this is lost today, Fermat's son edited the next edition of Diophantus, published in 1670. Even though the text is otherwise inferior to the 1621 edition, Fermat's annotations, including the "Last Theorem", were printed in this version. Fermat was not the first mathematician so moved to write in his own marginal notes to Diophantus; the Byzantine scholar John Chortasmenos (1370–1437) had written "Thy soul, Diophantus, be with Satan because of the difficulty of your other theorems and particularly of the present theorem" next to the same problem.

Other works

Diophantus wrote several other books besides Arithmetica, but only a few of them have survived.
The Porisms

Diophantus himself refers to a work which consists of a collection of lemmas called The Porisms (or Porismata), but this book is entirely lost. Although The Porisms is lost, we know three lemmas contained there, since Diophantus refers to them in the Arithmetica. One lemma states that the difference of the cubes of two rational numbers is equal to the sum of the cubes of two other rational numbers; i.e. given any a and b, with a > b, there exist c and d, all positive and rational, such that a^3 − b^3 = c^3 + d^3.

Polygonal numbers and geometric elements

Diophantus is also known to have written on polygonal numbers, a topic of great interest to Pythagoras and the Pythagoreans. Fragments of a book dealing with polygonal numbers are extant. A book called Preliminaries to the Geometric Elements has been traditionally attributed to Hero of Alexandria. It has been studied recently by Wilbur Knorr, who suggested that the attribution to Hero is incorrect, and that the true author is Diophantus.

Influence

Diophantus' work has had a large influence in history. Editions of Arithmetica exerted a profound influence on the development of algebra in Europe in the late sixteenth and through the 17th and 18th centuries. Diophantus and his works also influenced Arab mathematics and were of great fame among Arab mathematicians. Diophantus' work created a foundation for work on algebra, and in fact much of advanced mathematics is based on algebra. How much he affected India is a matter of debate. Diophantus has been considered "the father of algebra" because of his contributions to number theory, mathematical notations and the earliest known use of syncopated notation in his book series Arithmetica. However, this is usually debated, because Al-Khwarizmi was also given the title of "the father of algebra"; nevertheless both mathematicians were responsible for paving the way for algebra today.

Diophantine analysis

Today, Diophantine analysis is the area of study where integer (whole-number) solutions are sought for equations, and Diophantine equations are polynomial equations with integer coefficients to which only integer solutions are sought. It is usually rather difficult to tell whether a given Diophantine equation is solvable. Most of the problems in Arithmetica lead to quadratic equations. Diophantus looked at three different types of quadratic equations: ax^2 + bx = c, ax^2 = bx + c, and ax^2 + c = bx. The reason why there were three cases to Diophantus, while today we have only one case, is that he did not have any notion for zero and he avoided negative coefficients by considering the given numbers a, b, c to all be positive in each of the three cases above. Diophantus was always satisfied with a rational solution and did not require a whole number, which means he accepted fractions as solutions to his problems. Diophantus considered negative or irrational square root solutions "useless", "meaningless", and even "absurd". To give one specific example, he calls the equation 4 = 4x + 20 'absurd' because it would lead to a negative value for x. One solution was all he looked for in a quadratic equation. There is no evidence that suggests Diophantus even realized that there could be two solutions to a quadratic equation. He also considered simultaneous quadratic equations.
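Diophantus' acceptance criteria are easy to mimic in code: take a root of the quadratic, and reject it unless it is positive and rational. A hypothetical Python sketch of this filter (the function name and interface are illustrative, and the quadratic formula itself is of course the modern one, not his method):

```python
from fractions import Fraction
from math import isqrt

def diophantus_root(a, b, c):
    """One positive rational root of a*x^2 + b*x = c, else None.

    Mirrors the practice described above: a single root suffices, and
    negative or irrational roots are rejected as 'absurd'.
    """
    disc = b * b + 4 * a * c          # discriminant of a*x^2 + b*x - c = 0
    if disc < 0:
        return None
    r = isqrt(disc)
    if r * r != disc:
        return None                   # irrational root: rejected
    for numerator in (-b + r, -b - r):
        x = Fraction(numerator, 2 * a)
        if x > 0:
            return x
    return None                       # no positive root

assert diophantus_root(1, 2, 3) == 1                # x^2 + 2x = 3
assert diophantus_root(1, 1, 1) is None             # x^2 + x = 1: irrational roots
assert diophantus_root(6, 5, 1) == Fraction(1, 6)   # 6x^2 + 5x = 1
```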
Mathematical notation

Diophantus made important advances in mathematical notation, becoming the first person known to use algebraic notation and symbolism. Before him everyone wrote out equations completely. Diophantus introduced an algebraic symbolism that used an abridged notation for frequently occurring operations, and an abbreviation for the unknown and for the powers of the unknown. Mathematical historian Kurt Vogel states:

The symbolism that Diophantus introduced for the first time, and undoubtedly devised himself, provided a short and readily comprehensible means of expressing an equation... Since an abbreviation is also employed for the word 'equals', Diophantus took a fundamental step from verbal algebra towards symbolic algebra.

Although Diophantus made important advances in symbolism, he still lacked the necessary notation to express more general methods. This caused his work to be more concerned with particular problems rather than general situations. Some of the limitations of Diophantus' notation are that he only had notation for one unknown and, when problems involved more than a single unknown, Diophantus was reduced to expressing "first unknown", "second unknown", etc. in words. He also lacked a symbol for a general number n. Where we would write (12 + 6n)/(n^2 − 3), Diophantus has to resort to constructions like: "... a sixfold number increased by twelve, which is divided by the difference by which the square of the number exceeds three". Algebra still had a long way to go before very general problems could be written down and solved succinctly.

See also

Erdős–Diophantine graph
Diophantus II.VIII
Polynomial Diophantine equation

Notes

References

Sources

Allard, A. "Les scolies aux arithmétiques de Diophante d'Alexandrie dans le Matritensis Bibl. Nat. 4678 et les Vatican Gr. 191 et 304", Byzantion 53. Brussels, 1983: 682–710.
Bachet de Méziriac, C. G. Diophanti Alexandrini Arithmeticorum libri sex et De numeris multangulis liber unus. Paris: Lutetiae, 1621.
Bashmakova, Izabella G. Diophantos. Arithmetica and the Book of Polygonal Numbers. Introduction and Commentary. Translation by I. N. Veselovsky. Moscow: Nauka [in Russian].
Christianidis, J. "Maxime Planude sur le sens du terme diophantien 'plasmatikon'", Historia Scientiarum 6 (1996), 37–41.
Christianidis, J. "Une interpretation byzantine de Diophante", Historia Mathematica 25 (1998), 22–28.
Czwalina, Arthur. Arithmetik des Diophantos von Alexandria. Göttingen, 1952.
Heath, Sir Thomas. Diophantos of Alexandria: A Study in the History of Greek Algebra. Cambridge: Cambridge University Press, 1885, 1910.
Robinson, D. C. and Luke Hodgkin. History of Mathematics. King's College London, 2003.
Rashed, Roshdi. L'Art de l'Algèbre de Diophante. éd. arabe. Le Caire: Bibliothèque Nationale, 1975.
Rashed, Roshdi. Diophante. Les Arithmétiques. Volume III: Book IV; Volume IV: Books V–VII, app., index. Collection des Universités de France. Paris (Société d'Édition "Les Belles Lettres"), 1984.
Sesiano, Jacques. The Arabic text of Books IV to VII of Diophantus' translation and commentary. Thesis. Providence: Brown University, 1975.
Sesiano, Jacques. Books IV to VII of Diophantus' Arithmetica in the Arabic translation attributed to Qusṭā ibn Lūqā. Heidelberg: Springer-Verlag, 1982.
Σταμάτης, Ευάγγελος Σ. Διοφάντου Αριθμητικά. Η άλγεβρα των αρχαίων Ελλήνων. Αρχαίον κείμενον – μετάφρασις – επεξηγήσεις. Αθήναι, Οργανισμός Εκδόσεως Διδακτικών Βιβλίων, 1963.
Tannery, P. L. Diophanti Alexandrini Opera omnia: cum Graecis commentariis. Lipsiae: In aedibus B. G. Teubneri, 1893–1895 (online: vol. 1, vol. 2).
Ver Eecke, P. Diophante d'Alexandrie: Les Six Livres Arithmétiques et le Livre des Nombres Polygones. Bruges: Desclée, De Brouwer, 1921.
Wertheim, G. Die Arithmetik und die Schrift über Polygonalzahlen des Diophantus von Alexandria. Übersetzt und mit Anmerkungen von G. Wertheim. Leipzig, 1890.
Further reading

Bashmakova, Izabella G. "Diophante et Fermat", Revue d'Histoire des Sciences 19 (1966), pp. 289–306.
Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka, 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhäuser, Basel/Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC, 1997.
Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré", Historia Mathematica 8 (1981), 393–416.
Bashmakova, Izabella G., Slavutin, E. I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka, 1984 [in Russian].
Rashed, Roshdi, Houzel, Christian. Les Arithmétiques de Diophante : Lecture historique et mathématique. Berlin, New York: Walter de Gruyter, 2013.
Rashed, Roshdi. Histoire de l'analyse diophantienne classique : D'Abū Kāmil à Fermat. Berlin, New York: Walter de Gruyter.

External links

Diophantus's Riddle: Diophantus' epitaph, by E. Weisstein
Norbert Schappacher (2005). Diophantus of Alexandria: a Text and its History.
Review of Sesiano's Diophantus: Review of J. Sesiano, Books IV to VII of Diophantus' Arithmetica, by Jan P. Hogendijk
Latin translation from 1575 by Wilhelm Xylander
Scans of Tannery's edition of Diophantus at wilbourhall.org

3rd-century births 3rd-century deaths 3rd-century Greek people 3rd-century Egyptian people Roman-era Alexandrians Diophantus of Alexandria Ancient Greeks in Egypt Egyptian mathematicians 3rd-century writers 3rd-century mathematicians
Diophantus
[ "Mathematics" ]
3,740
[ "Number theorists", "Number theory" ]
9,146
https://en.wikipedia.org/wiki/Dolly%20%28sheep%29
Dolly (5 July 1996 – 14 February 2003) was a female Finn Dorset sheep and the first mammal cloned from an adult somatic cell. She was cloned by associates of the Roslin Institute in Scotland, using the process of nuclear transfer from a cell taken from a mammary gland. Her cloning proved that a cloned organism could be produced from a mature cell from a specific body part. Contrary to popular belief, she was not the first animal to be cloned. The employment of adult somatic cells in lieu of embryonic stem cells for cloning emerged from the foundational work of John Gurdon, who cloned African clawed frogs in 1958 with this approach. The successful cloning of Dolly led to widespread advancements within stem cell research, including the discovery of induced pluripotent stem cells. Dolly lived at the Roslin Institute throughout her life and produced several lambs. She was euthanised at the age of six years due to a progressive lung disease. No cause which linked the disease to her cloning was found. Dolly's body was preserved and donated by the Roslin Institute in Scotland to the National Museum of Scotland, where it has been regularly exhibited since 2003.

Genesis

Dolly was cloned by Keith Campbell, Ian Wilmut and colleagues at the Roslin Institute, part of the University of Edinburgh, Scotland, and the biotechnology company PPL Therapeutics, based near Edinburgh. The funding for Dolly's cloning was provided by PPL Therapeutics and the Ministry of Agriculture. She was born on 5 July 1996. She has been called "the world's most famous sheep" by sources including BBC News and Scientific American. The cell used as the donor for the cloning of Dolly was taken from a mammary gland, and the production of a healthy clone therefore proved that a cell taken from a specific part of the body could recreate a whole individual. On Dolly's name, Wilmut stated "Dolly is derived from a mammary gland cell and we couldn't think of a more impressive pair of glands than Dolly Parton's."

Birth

Dolly was born on 5 July 1996 and had three mothers: one provided the egg, another the DNA, and a third carried the cloned embryo to term. She was created using the technique of somatic cell nuclear transfer, where the cell nucleus from an adult cell is transferred into an unfertilized oocyte (developing egg cell) that has had its cell nucleus removed. The hybrid cell is then stimulated to divide by an electric shock, and when it develops into a blastocyst it is implanted in a surrogate mother. Dolly was the first clone produced from a cell taken from an adult mammal. The production of Dolly showed that genes in the nucleus of such a mature differentiated somatic cell are still capable of reverting to an embryonic totipotent state, creating a cell that can then go on to develop into any part of an animal.

Dolly's existence was announced to the public on 22 February 1997. It gained much attention in the media. A commercial with Scottish scientists playing with sheep was aired on TV, and a special report in Time magazine featured Dolly. Science featured Dolly as the breakthrough of the year. Even though Dolly was not the first animal cloned, she received media attention because she was the first cloned from an adult cell.

Life

Dolly lived her entire life at the Roslin Institute in Midlothian. There she was bred with a Welsh Mountain ram and produced six lambs in total. Her first lamb, named Bonnie, was born in April 1998.
The next year, Dolly produced twin lambs Sally and Rosie, and she gave birth to triplets Lucy, Darcy and Cotton in 2000. In late 2001, at the age of four, Dolly developed arthritis and started to have difficulty walking. This was treated with anti-inflammatory drugs.

Death

On 14 February 2003, Dolly was euthanised because she had a progressive lung disease and severe arthritis. A Finn Dorset such as Dolly has a life expectancy of around 11 to 12 years, but Dolly lived 6.5 years. A post-mortem examination showed she had a form of lung cancer called ovine pulmonary adenocarcinoma, also known as Jaagsiekte, which is a fairly common disease of sheep and is caused by the retrovirus JSRV. Roslin scientists stated that they did not think there was a connection with Dolly being a clone, and that other sheep in the same flock had died of the same disease. Such lung diseases are a particular danger for sheep kept indoors, and Dolly had to sleep inside for security reasons.

Some in the press speculated that a contributing factor to Dolly's death was that she could have been born with a genetic age of six years, the same age as the sheep from which she was cloned. One basis for this idea was the finding that Dolly's telomeres were short, which is typically a result of the aging process. The Roslin Institute stated that intensive health screening did not reveal any abnormalities in Dolly that could have come from advanced aging. In 2016, scientists reported no defects in thirteen cloned sheep, including four from the same cell line as Dolly. In the first study to review the long-term health outcomes of cloning, the authors found no evidence of late-onset, non-communicable diseases other than some minor examples of osteoarthritis and concluded "We could find no evidence, therefore, of a detrimental long-term effect of cloning by SCNT on the health of aged offspring among our cohort."

After her death Dolly's body was preserved via taxidermy and is currently on display at the National Museum of Scotland in Edinburgh.

Legacy

After cloning was successfully demonstrated through the production of Dolly, many other large mammals were cloned, including pigs, deer, horses and bulls. The attempt to clone argali (mountain sheep) did not produce viable embryos. The attempt to clone a banteng bull was more successful, as were the attempts to clone mouflon (a form of wild sheep), both resulting in viable offspring. The reprogramming process that cells need to go through during cloning is not perfect, and embryos produced by nuclear transfer often show abnormal development. Making cloned mammals was highly inefficient: in 1996, Dolly was the only lamb that survived to adulthood from 277 attempts. By 2014, Chinese scientists were reported to have 70–80% success rates cloning pigs, and in 2016, a Korean company, Sooam Biotech, was producing 500 cloned embryos a day. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.

Cloning may have uses in preserving endangered species, and may become a viable tool for reviving extinct species. In January 2009, scientists from the Centre of Food Technology and Research of Aragon in northern Spain announced the cloning of the Pyrenean ibex, a form of wild mountain goat, which was officially declared extinct in 2000.
Although the newborn ibex died shortly after birth due to physical defects in its lungs, it is the first time an extinct animal has been cloned, and may open doors for saving endangered and newly extinct species by resurrecting them from frozen tissue. In July 2016, four identical clones of Dolly (Daisy, Debbie, Dianna, and Denise) were alive and healthy at nine years old. Scientific American concluded in 2016 that the main legacy of Dolly has not been the cloning of animals but advances in stem cell research. Gene targeting was added in 2000, when researchers cloned a female lamb, Diana, from sheep DNA altered to contain the human gene for alpha-1 antitrypsin. The human gene was specifically activated in the ewe's mammary gland, so Diana produced milk containing human alpha-1 antitrypsin. After Dolly, researchers realised that ordinary cells could be reprogrammed to induced pluripotent stem cells, which can be grown into any tissue.

The first successful cloning of a primate species was reported in January 2018, using the same method which produced Dolly. Two identical clones of a macaque monkey, Zhong Zhong and Hua Hua, were created by researchers in China and were born in late 2017. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, again using this method, and the gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies, Lulu and Nana. The monkey clones were made in order to study several medical diseases.

Dolly in popular culture

In 2003, the Belgian artist Dominique Goblet published a short comic strip about Dolly the cloned sheep with the title "2004 Apparition de Dolly dans la campagne anglaise" ("2004: Appearance of Dolly in the English countryside"). "Dolly The Sheep" was initially released on November 13, 2012, as a flash game developed by the small game development company Pozirk Games, in which Dolly the cloned sheep is being chased by evil scientists. For some time the game was available to play online as well as on mobile devices. As of June 14, 2023, it is only available online for desktop/laptop computers.

See also

In re Roslin Institute (Edinburgh) – US court decision that determined that Dolly could not be patented
List of cloned animals

References

External links

Dolly the Sheep at the National Museum of Scotland, Edinburgh
Cloning Dolly the Sheep
Dolly the Sheep and the importance of animal research
Animal cloning and Dolly
Episode where several items appertaining to Dolly, including wool from a shearing and scientific instruments, were appraised.

1996 animal births 2003 animal deaths 1996 in biotechnology 1996 in Scotland 2003 in Scotland Animal world record holders Cloned sheep Cloning Individual animals in the United Kingdom Collection of National Museums Scotland History of Midlothian Dolly Parton Individual animals in Scotland Individual taxidermy exhibits
Dolly (sheep)
[ "Engineering", "Biology" ]
2,041
[ "Cloning", "Genetic engineering" ]
9,165
https://en.wikipedia.org/wiki/Directed%20set
In mathematics, a directed set (or a directed preorder or a filtered set) is a nonempty set A together with a reflexive and transitive binary relation ≤ (that is, a preorder), with the additional property that every pair of elements has an upper bound. In other words, for any a and b in A there must exist c in A with a ≤ c and b ≤ c. A directed set's preorder is called a direction.

The notion defined above is sometimes called an upward directed set. A downward directed set is defined analogously, meaning that every pair of elements is bounded below. Some authors (and this article) assume that a directed set is directed upward, unless otherwise stated. Other authors call a set directed if and only if it is directed both upward and downward.

Directed sets are a generalization of nonempty totally ordered sets. That is, all totally ordered sets are directed sets (contrast partially ordered sets, which need not be directed). Join-semilattices (which are partially ordered sets) are directed sets as well, but not conversely. Likewise, lattices are directed sets both upward and downward.

In topology, directed sets are used to define nets, which generalize sequences and unite the various notions of limit used in analysis. Directed sets also give rise to direct limits in abstract algebra and (more generally) category theory.

Equivalent definition

In addition to the definition above, there is an equivalent definition. A directed set is a set A with a preorder such that every finite subset of A has an upper bound. In this definition, the existence of an upper bound of the empty subset implies that A is nonempty.

Examples

The set of natural numbers N with the ordinary order ≤ is one of the most important examples of a directed set. Every totally ordered set is a directed set, including (N, ≤) and (R, ≤). A (trivial) example of a partially ordered set that is not directed is the set {a, b}, in which the only order relations are a ≤ a and b ≤ b. A less trivial example is like the following example of the "reals directed towards x_0" but in which the ordering rule only applies to pairs of elements on the same side of x_0 (that is, if one takes an element a to the left of x_0, and b to its right, then a and b are not comparable, and the subset {a, b} has no upper bound).

Product of directed sets

Let D_1 and D_2 be directed sets. Then the Cartesian product set D_1 × D_2 can be made into a directed set by defining (n_1, n_2) ≤ (m_1, m_2) if and only if n_1 ≤ m_1 and n_2 ≤ m_2. In analogy to the product order, this is the product direction on the Cartesian product. For example, the set of pairs of natural numbers can be made into a directed set by defining (n_0, n_1) ≤ (m_0, m_1) if and only if n_0 ≤ m_0 and n_1 ≤ m_1.

Directed towards a point

If x_0 is a real number then the set I := R \ {x_0} can be turned into a directed set by defining a ≤ b if |a − x_0| ≥ |b − x_0| (so "greater" elements are closer to x_0). We then say that the reals have been directed towards x_0. This is an example of a directed set that is neither partially ordered nor totally ordered. This is because antisymmetry breaks down for every pair a and b equidistant from x_0, where a and b are on opposite sides of x_0. Explicitly, this happens when {a, b} = {x_0 − r, x_0 + r} for some real r ≠ 0, in which case a ≤ b and b ≤ a even though a ≠ b. Had this preorder been defined on R instead of R \ {x_0}, it would still form a directed set but it would now have a (unique) greatest element, specifically x_0; however, it still wouldn't be partially ordered. This example can be generalized to a metric space (X, d) by defining on X or X \ {x_0} the preorder a ≤ b if and only if d(a, x_0) ≥ d(b, x_0).
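For finite sets, the definition can be checked exhaustively. The Python sketch below (function name illustrative) tests reflexivity, transitivity, and the upper-bound property, and applies the test to a finite sample of the "reals directed towards 0":

```python
from itertools import product

def is_directed(elements, leq):
    """Exhaustively check that leq is a preorder on elements under which
    every pair has an upper bound, i.e. that (elements, leq) is directed."""
    if not elements:
        return False
    reflexive = all(leq(a, a) for a in elements)
    transitive = all(leq(a, c) or not (leq(a, b) and leq(b, c))
                     for a, b, c in product(elements, repeat=3))
    has_upper_bounds = all(any(leq(a, c) and leq(b, c) for c in elements)
                           for a, b in product(elements, repeat=2))
    return reflexive and transitive and has_upper_bounds

# A finite sample of the 'reals directed towards 0':
# a <= b  iff  |a - 0| >= |b - 0|, so "greater" means closer to 0.
sample = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
assert is_directed(sample, lambda a, b: abs(a) >= abs(b))
# Antisymmetry fails (e.g. -0.5 <= 0.5 and 0.5 <= -0.5), so this is a
# directed preorder that is not a partial order.
# By contrast, two incomparable elements with no upper bound fail the test:
assert not is_directed([1, 2], lambda a, b: a == b)
```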
Maximal and greatest elements

An element m of a preordered set (I, ≤) is a maximal element if for every j ∈ I, m ≤ j implies j ≤ m. It is a greatest element if for every j ∈ I, j ≤ m. Any preordered set with a greatest element is a directed set with the same preorder. For instance, in a poset P, every lower closure of an element, that is, every subset of the form {a ∈ P : a ≤ x} where x is a fixed element from P, is directed. Every maximal element of a directed preordered set is a greatest element. Indeed, a directed preordered set is characterized by equality of the (possibly empty) sets of maximal and of greatest elements.

Subset inclusion

The subset inclusion relation ⊆, along with its dual ⊇, define partial orders on any given family of sets. A non-empty family of sets is a directed set with respect to the partial order ⊇ (respectively, ⊆) if and only if the intersection (respectively, union) of any two of its members contains as a subset (respectively, is contained as a subset of) some third member. In symbols, a family I of sets is directed with respect to ⊇ (respectively, ⊆) if and only if for all A, B ∈ I there exists some C ∈ I such that C ⊆ A and C ⊆ B (respectively, A ⊆ C and B ⊆ C), or equivalently, for all A, B ∈ I there exists some C ∈ I such that C ⊆ A ∩ B (respectively, A ∪ B ⊆ C).

Many important examples of directed sets can be defined using these partial orders. For example, by definition, a prefilter or filter base is a non-empty family of sets that is a directed set with respect to the partial order ⊇ and that also does not contain the empty set (this condition prevents triviality because otherwise, the empty set would then be a greatest element with respect to ⊇). Every π-system, which is a non-empty family of sets that is closed under the intersection of any two of its members, is a directed set with respect to ⊇. Every λ-system is a directed set with respect to ⊆. Every filter, topology, and σ-algebra is a directed set with respect to both ⊆ and ⊇.

Tails of nets

By definition, a net is a function from a directed set and a sequence is a function from the natural numbers N. Every sequence canonically becomes a net by endowing N with ≤. If (x_i)_{i ∈ I} is any net from a directed set (I, ≤), then for any index i ∈ I, the set {x_j : j ≥ i} is called the tail of (I, ≤) starting at i. The family of all tails is a directed set with respect to ⊇; in fact, it is even a prefilter.

Neighborhoods

If T is a topological space and x_0 is a point in T, the set of all neighbourhoods of x_0 can be turned into a directed set by writing U ≤ V if and only if U contains V. For every U, V, and W:

U ≤ U, since U contains itself.
If U ≤ V and V ≤ W, then U ⊇ V and V ⊇ W, which implies U ⊇ W. Thus U ≤ W.
Because x_0 ∈ U ∩ V, and since both U ⊇ U ∩ V and V ⊇ U ∩ V, we have U ≤ U ∩ V and V ≤ U ∩ V.

Finite subsets

The set of all finite subsets of a set S is directed with respect to ⊆, since given any two finite subsets A and B, their union A ∪ B is a finite upper bound of A and B. This particular directed set is used to define the sum of a generalized series of an S-indexed collection of numbers (r_i)_{i ∈ S} (or more generally, the sum of elements in an abelian topological group, such as vectors in a topological vector space) as the limit of the net of partial sums F ↦ Σ_{i ∈ F} r_i over finite subsets F of S; that is:

Σ_{i ∈ S} r_i := lim_F Σ_{i ∈ F} r_i.

Logic

Let S be a formal theory, which is a set of sentences with certain properties (details of which can be found in the article on the subject). For instance, S could be a first-order theory (like Zermelo–Fraenkel set theory) or a simpler zeroth-order theory. Order S by declaring A ≤ B when B logically implies A; the preordered set (S, ≤) is a directed set because if A, B ∈ S and if C := A ∧ B denotes the sentence formed by logical conjunction, then A ≤ C and B ≤ C, where C ∈ S. If S is replaced by the Lindenbaum–Tarski algebra associated with it, then the result is a partially ordered set that is also a directed set.

Contrast with semilattices

Directed set is a more general concept than (join) semilattice: every join-semilattice is a directed set, as the join or least upper bound of two elements is the desired upper bound. The converse does not hold, however; witness the directed set {1000, 0001, 1101, 1011, 1111} ordered bitwise (for example, 1000 ≤ 1101 holds, but 0001 ≤ 1000 does not, since in the last bit 1 > 0), where {1000, 0001} has three upper bounds but no least upper bound. (Also note that without 1111, the set would not be directed.)
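The counterexample is small enough to verify exhaustively; this Python sketch checks that the five-element set is directed, that {1000, 0001} has exactly three upper bounds with no least one, and that removing 1111 destroys directedness:

```python
S = ["1000", "0001", "1101", "1011", "1111"]
leq = lambda a, b: all(x <= y for x, y in zip(a, b))  # bitwise order

# {1000, 0001} has exactly three upper bounds in S ...
uppers = [c for c in S if leq("1000", c) and leq("0001", c)]
assert sorted(uppers) == ["1011", "1101", "1111"]
# ... none of which is a least upper bound (below both of the others):
assert not any(all(leq(u, v) for v in uppers) for u in uppers)
# Every pair in S has an upper bound, so S is directed ...
assert all(any(leq(a, c) and leq(b, c) for c in S) for a in S for b in S)
# ... but without 1111 the pair {1101, 1011} loses its only upper bound:
T = [s for s in S if s != "1111"]
assert not all(any(leq(a, c) and leq(b, c) for c in T) for a in T for b in T)
```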
Directed subsets

The order relation in a directed set is not required to be antisymmetric, and therefore directed sets are not always partial orders. However, the term directed set is also used frequently in the context of posets. In this setting, a subset A of a partially ordered set (P, ≤) is called a directed subset if it is a directed set according to the same partial order: in other words, it is not the empty set, and every pair of elements has an upper bound. Here the order relation on the elements of A is inherited from P; for this reason, reflexivity and transitivity need not be required explicitly.

A directed subset of a poset is not required to be downward closed; a subset of a poset is directed if and only if its downward closure is an ideal. While the definition of a directed set is for an "upward-directed" set (every pair of elements has an upper bound), it is also possible to define a downward-directed set in which every pair of elements has a common lower bound. A subset of a poset is downward-directed if and only if its upper closure is a filter.

Directed subsets are used in domain theory, which studies directed-complete partial orders. These are posets in which every upward-directed subset is required to have a least upper bound. In this context, directed subsets again provide a generalization of convergent sequences.

See also

Notes

Footnotes

Works cited

Binary relations General topology Order theory
Directed set
[ "Mathematics" ]
1,819
[ "General topology", "Binary relations", "Topology", "Mathematical relations", "Order theory" ]
9,225
https://en.wikipedia.org/wiki/Electronic%20paper
Electronic paper, or intelligent paper, is a display device that reflects ambient light, mimicking the appearance of ordinary ink on paper, unlike conventional flat-panel displays which need additional energy to emit their own light. This may make e-paper more comfortable to read, and give it a wider viewing angle than most light-emitting displays. The contrast ratio in electronic displays available as of 2008 approaches that of newspaper, and newly developed displays are slightly better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade. Technologies include Gyricon, electrophoretics, electrowetting, interferometry, and plasmonics. Many electronic paper technologies hold static text and images indefinitely without electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. Applications of e-paper include electronic shelf labels and digital signage, bus station timetables, electronic billboards, smartphone displays, and e-readers able to display digital versions of books and magazines.

Technologies

Gyricon

Electronic paper was first developed in the 1970s by Nick Sheridon at Xerox's Palo Alto Research Center. The first electronic paper, called Gyricon, consisted of polyethylene spheres between 75 and 106 micrometers across. Each sphere is a Janus particle composed of negatively charged black plastic on one side and positively charged white plastic on the other (each bead is thus a dipole). The spheres are embedded in a transparent silicone sheet, with each sphere suspended in a bubble of oil so that it can rotate freely. The polarity of the voltage applied to each pair of electrodes then determines whether the white or black side is face-up, thus giving the pixel a white or black appearance. At the FPD 2008 exhibition, the Japanese company Soken demonstrated a wall with electronic wallpaper using this technology. In 2007, the Estonian company Visitret Displays was developing this kind of display using polyvinylidene fluoride (PVDF) as the material for the spheres, dramatically improving the video speed and decreasing the control voltage needed.

Electrophoretic

An electrophoretic display (EPD) forms images by rearranging charged pigment particles with an applied electric field. In the simplest implementation of an EPD, titanium dioxide (titania) particles approximately one micrometer in diameter are dispersed in a hydrocarbon oil. A dark-colored dye is also added to the oil, along with surfactants and charging agents that cause the particles to take on an electric charge. This mixture is placed between two parallel, conductive plates separated by a gap of 10 to 100 micrometers. When a voltage is applied across the two plates, the particles migrate electrophoretically to the plate that bears the opposite charge from that on the particles. When the particles are located at the front (viewing) side of the display, it appears white, because the light is scattered back to the viewer by the high-index titania particles. When the particles are located at the rear side of the display, it appears dark, because the light is absorbed by the colored dye. If the rear electrode is divided into a number of small picture elements (pixels), then an image can be formed by applying the appropriate voltage to each region of the display to create a pattern of reflecting and absorbing regions. EPDs are typically addressed using MOSFET-based thin-film transistor (TFT) technology.
TFTs are often used to form a high-density image in an EPD. A common application for TFT-based EPDs is e-readers. Electrophoretic displays are considered prime examples of the electronic paper category, because of their paper-like appearance and low power consumption. Examples of commercial electrophoretic displays include the high-resolution active matrix displays used in the Amazon Kindle, Barnes & Noble Nook, Sony Reader, Kobo eReader, and iRex iLiad e-readers. These displays are constructed from an electrophoretic imaging film manufactured by E Ink Corporation. A mobile phone that used the technology is the Motorola Fone. Electrophoretic display technology has also been developed by SiPix and Bridgestone/Delta. SiPix is now part of E Ink Corporation. The SiPix design uses a flexible 0.15 mm Microcup architecture, instead of E Ink's 0.04 mm diameter microcapsules. Bridgestone Corp.'s Advanced Materials Division cooperated with Delta Optoelectronics Inc. in developing Quick Response Liquid Powder Display technology. Electrophoretic displays can be manufactured using the Electronics on Plastic by Laser Release (EPLaR) process, developed by Philips Research, to enable existing AM-LCD manufacturing plants to create flexible plastic displays. Microencapsulated electrophoretic display In the 1990s another type of electronic ink based on a microencapsulated electrophoretic display was conceived and prototyped by a team of undergraduates at MIT as described in their Nature paper. J.D. Albert, Barrett Comiskey, Joseph Jacobson, Jeremy Rubin and Russ Wilcox co-founded E Ink Corporation in 1997 to commercialize the technology. E Ink subsequently formed a partnership with Philips Components two years later to develop and market the technology. In 2005, Philips sold the electronic paper business as well as its related patents to Prime View International. "It has for many years been an ambition of researchers in display media to create a flexible low-cost system that is the electronic analog of paper. In this context, microparticle-based displays have long intrigued researchers. Switchable contrast in such displays is achieved by the electromigration of highly scattering or absorbing microparticles (in the size range 0.1–5 μm), quite distinct from the molecular-scale properties that govern the behavior of the more familiar liquid-crystal displays. Micro-particle-based displays possess intrinsic bistability, exhibit extremely low power d.c. field addressing and have demonstrated high contrast and reflectivity. These features, combined with a near-lambertian viewing characteristic, result in an 'ink on paper' look. But such displays have to date suffered from short lifetimes and difficulty in manufacture. Here we report the synthesis of an electrophoretic ink based on the microencapsulation of an electrophoretic dispersion. The use of a microencapsulated electrophoretic medium solves the lifetime issues and permits the fabrication of a bistable electronic display solely by means of printing. This system may satisfy the practical requirements of electronic paper." This used tiny microcapsules filled with electrically charged white particles suspended in a colored oil. In early versions, the underlying circuitry controlled whether the white particles were at the top of the capsule (so it looked white to the viewer) or at the bottom of the capsule (so the viewer saw the color of the oil). 
This was essentially a reintroduction of the well-known electrophoretic display technology, but microcapsules meant the display could be made on flexible plastic sheets instead of glass. One early version of the electronic paper consists of a sheet of very small transparent capsules, each about 40 micrometers across. Each capsule contains an oily solution containing black dye (the electronic ink), with numerous white titanium dioxide particles suspended within. The particles are slightly negatively charged, and each one is naturally white. The screen holds microcapsules in a layer of liquid polymer, sandwiched between two arrays of electrodes, the upper of which is transparent. The two arrays are aligned to divide the sheet into pixels, and each pixel corresponds to a pair of electrodes situated on either side of the sheet. The sheet is laminated with transparent plastic for protection, resulting in an overall thickness of 80 micrometers, or twice that of ordinary paper. The network of electrodes connects to display circuitry, which turns the electronic ink 'on' and 'off' at specific pixels by applying a voltage to specific electrode pairs. A negative charge to the surface electrode repels the particles to the bottom of local capsules, forcing the black dye to the surface and turning the pixel black. Reversing the voltage has the opposite effect: it forces the particles to the surface, turning the pixel white. A more recent implementation of this concept requires only one layer of electrodes beneath the microcapsules. These are commercially referred to as Active Matrix Electrophoretic Displays (AMEPD). Reflective LCD This technology is similar to a common LCD, but the backlight panel is replaced by a reflective surface. A comparable effect is also obtainable in backlit LCDs by deactivating the backlight control in software or hardware. Electrowetting An electrowetting display (EWD) is based on controlling the shape of a confined water/oil interface by an applied voltage. With no voltage applied, the (colored) oil forms a flat film between the water and a hydrophobic (water-repellent) insulating coating of an electrode, resulting in a colored pixel. When a voltage is applied between the electrode and the water, the interfacial tension between the water and the coating changes. As a result, the stacked state is no longer stable, causing the water to move the oil aside. This makes a partly transparent pixel, or, if a reflective white surface is under the switchable element, a white pixel. Because of the small pixel size, the user only experiences the average reflection, which provides a high-brightness, high-contrast switchable element. Displays based on electrowetting provide several attractive features. The switching between white and colored reflection is fast enough to display video content. It is a low-power, low-voltage technology, and displays based on the effect can be made flat and thin. The reflectivity and contrast are better than or equal to other reflective display types and approach the visual qualities of paper. In addition, the technology offers a unique path toward high-brightness full-color displays, leading to displays that are four times brighter than reflective LCDs and twice as bright as other emerging technologies. 
Instead of using red, green, and blue (RGB) filters or alternating segments of the three primary colors, which effectively result in only one-third of the display reflecting light in the desired color, electrowetting allows for a system in which one sub-pixel can switch two different colors independently. This results in the availability of two-thirds of the display area to reflect light in any desired color. This is achieved by building up a pixel with a stack of two independently controllable colored oil films plus a color filter. The colors are cyan, magenta, and yellow, which is a subtractive system, comparable to the principle used in inkjet printing. Compared to LCD, brightness is gained because no polarisers are required. Electrofluidic Electrofluidic display is a variation of an electrowetting display that places an aqueous pigment dispersion inside a tiny reservoir. The reservoir comprises only 5–10% of the viewable pixel area; the pigment is therefore substantially hidden from view. Voltage is used to electromechanically pull the pigment out of the reservoir and spread it as a film directly behind the viewing substrate. As a result, the display takes on color and brightness similar to that of conventional pigments printed on paper. When voltage is removed, liquid surface tension causes the pigment dispersion to rapidly recoil into the reservoir. The technology can potentially provide greater than 85% white state reflectance for electronic paper. The core technology was invented at the Novel Devices Laboratory at the University of Cincinnati and there are working prototypes developed in collaboration with Sun Chemical, Polymer Vision and Gamma Dynamics. It has a wide margin in critical aspects such as brightness, color saturation and response time. Because the optically active layer can be less than 15 micrometres thick, there is strong potential for rollable displays. Interferometric modulator (Mirasol) This technology is used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid-crystal displays (LCD). Plasmonic electronic display Plasmonic nanostructures with conductive polymers have also been suggested as one kind of electronic paper. The material has two parts. The first part is a highly reflective metasurface made of metal-insulator-metal films tens of nanometers in thickness, including nanoscale holes. The metasurfaces can reflect different colors depending on the thickness of the insulator. The standard RGB color schema can be used as pixels for full-color displays. The second part is a polymer with optical absorption controllable by an electrochemical potential. After growing the polymer on the plasmonic metasurfaces, the reflection of the metasurfaces can be modulated by the applied voltage. This technology presents a broad range of colors, high polarization-independent reflection (>50%), strong contrast (>30%), fast response times (hundreds of ms), and long-term stability. In addition, it has ultralow power consumption (< 0.5 mW/cm2) and potential for high resolution (>10000 dpi). Since the ultrathin metasurfaces are flexible and the polymer is soft, the whole system can be bent. Desired future improvements for this technology include bistability, cheaper materials and implementation with TFT arrays. 
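The subtractive cyan/magenta/yellow principle used by the stacked electrowetting design described above can be illustrated with a short Python sketch. Each switched-in oil film removes one additive primary from incident white light, and the reflected color is whatever survives; this is a statement of the principle only, not a model of any specific device.

```python
# Illustrative subtractive (CMY) color mixing, as in inkjet printing or a
# stacked electrowetting pixel. Light components are on a 0..1 scale.

def reflected_rgb(cyan: bool, magenta: bool, yellow: bool) -> tuple:
    r, g, b = 1.0, 1.0, 1.0          # incident white light
    if cyan:    r = 0.0              # a cyan film absorbs red
    if magenta: g = 0.0              # a magenta film absorbs green
    if yellow:  b = 0.0              # a yellow film absorbs blue
    return (r, g, b)

print(reflected_rgb(False, False, False))  # (1.0, 1.0, 1.0): white pixel
print(reflected_rgb(True, True, False))    # (0.0, 0.0, 1.0): blue from cyan + magenta
print(reflected_rgb(True, True, True))     # (0.0, 0.0, 0.0): black
```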
Other technologies Other research efforts into e-paper have involved using organic transistors embedded into flexible substrates, including attempts to build them into conventional paper. Simple color e-paper consists of a thin colored optical filter added to the monochrome technology described above. The array of pixels is divided into triads, typically consisting of the standard cyan, magenta and yellow, in the same way as CRT monitors (although using subtractive primary colors as opposed to additive primary colors). The display is then controlled like any other electronic color display. History E Ink Corporation of E Ink Holdings Inc. released the first colored E Ink displays to be used in a marketed product. The Ectaco jetBook Color was released in 2012 as the first colored electronic ink device, which used E Ink's Triton display technology. E Ink in early 2015 also announced another color electronic ink technology called Prism. This new technology is a color-changing film that can be used for e-readers, but Prism is also marketed as a film that can be integrated into architectural design such as "wall, ceiling panel, or entire room instantly." The disadvantage of these current color displays is that they are considerably more expensive than standard E Ink displays. The jetBook Color costs roughly nine times more than other popular e-readers such as the Amazon Kindle. As of January 2015, Prism had not been announced for use in any e-reader devices. Applications Several companies are simultaneously developing electronic paper and ink. While the technologies used by each company provide many of the same features, each has its own distinct technological advantages. All electronic paper technologies face the following general challenges: A method for encapsulation An ink or active material to fill the encapsulation Electronics to activate the ink Electronic ink can be applied to flexible or rigid materials. For flexible displays, the base requires a thin, flexible material tough enough to withstand considerable wear, such as extremely thin plastic. The method by which the inks are encapsulated and then applied to the substrate is what distinguishes each company from the others. These processes are complex and are carefully guarded industry secrets. Nevertheless, making electronic paper is less complex and costly than LCDs. There are many approaches to electronic paper, with many companies developing technology in this area. Other technologies being applied to electronic paper include modifications of liquid-crystal displays, electrochromic displays, and the electronic equivalent of an Etch A Sketch at Kyushu University. Advantages of electronic paper include low power usage (power is only drawn when the display is updated), flexibility and better readability than most displays. Electronic ink can be printed on any surface, including walls, billboards, product labels and T-shirts. The ink's flexibility would also make it possible to develop rollable displays for electronic devices. Wristwatches In December 2005, Seiko released the first electronic ink-based watch, the Spectrum SVRD001 wristwatch, which has a flexible electrophoretic display; in March 2010 Seiko released a second generation of this watch with an active-matrix display. The Pebble smart watch (2013) uses a low-power memory LCD manufactured by Sharp for its e-paper display. 
In 2019, Fossil launched a hybrid smartwatch called the Hybrid HR, integrating an always-on electronic ink display with physical hands and a dial to simulate the look of a traditional analog watch. E-book readers In 2004, Sony released the Librié in Japan, the first e-book reader with an electronic paper E Ink display. In September 2006, Sony released the PRS-500 Sony Reader e-book reader in the USA. On October 2, 2007, Sony announced the PRS-505, an updated version of the Reader. In November 2008, Sony released the PRS-700BC, which incorporated a backlight and a touchscreen. Mobile phones Motorola's low-cost mobile phone, the Motorola F3, uses an alphanumeric black-and-white electrophoretic display. The Samsung Alias 2 mobile phone incorporates electronic ink from E Ink into the keypad, which allows the keypad to change character sets and orientation while in different display modes. Smartphones On December 12, 2012, Yota Devices announced the first YotaPhone prototype, a unique double-display smartphone, which was later released in December 2013. It has a 4.3-inch HD LCD on the front and an electronic ink display on the back. In May and June 2020, Hisense released the Hisense A5C and A5 Pro CC, the first color electronic ink smartphones, each with a single color display and a togglable front light, running Android 9 and Android 10. Computer monitors Electronic paper is used on computer monitors such as the 13.3-inch Dasung Paperlike 3 HD and the 25.3-inch Paperlike 253. Laptop Some laptops, like the Lenovo ThinkBook Plus, use e-paper as a secondary screen. Other common laptops use reflective LCD panels with no backlight. Furthermore, some operating systems (e.g. Xubuntu, Kali Linux) provide a control to dim the LCD backlight to 0% on internal monitors, while the liquid crystals keep working, so that the display is lit by ambient light as if it were paper. In late 2007, Amazon began producing and marketing the Amazon Kindle, an e-book reader with an e-paper display. In February 2009, Amazon released the Kindle 2 and in May 2009 the larger Kindle DX was announced. In July 2010 the third-generation Kindle was announced, with notable design changes. The fourth generation of Kindle, called Touch, was announced in September 2011; it was the Kindle's first departure from keyboards and page-turn buttons in favor of touchscreens. In September 2012, Amazon announced the fifth generation of the Kindle, called the Paperwhite, which incorporates an LED frontlight and a higher-contrast display. In November 2009, Barnes and Noble launched the Barnes & Noble Nook, running an Android operating system. It differs from other e-readers in having a replaceable battery, and a separate touch-screen color LCD below the main electronic paper reading screen. In 2017, Sony and reMarkable offered e-book devices tailored for writing with a smart stylus. In 2020, Onyx released the first frontlit 13.3 inch electronic paper Android tablet, the Boox Max Lumi. At the end of the same year, Bigme released the first 10.3 inch color electronic paper Android tablet, the Bigme B1 Pro. This was also the first large electronic paper tablet to support 4G cellular data. Newspapers In February 2006, the Flemish daily De Tijd distributed an electronic version of the paper to select subscribers in a limited marketing study, using a pre-release version of the iRex iLiad. This was the first recorded application of electronic ink to newspaper publishing. 
The French daily Les Échos announced the official launch of an electronic version of the paper on a subscription basis in September 2007. Two offers were available, combining a one-year subscription and a reading device. The offer included either a light (176g) reading device (adapted for Les Echos by Ganaxa) or the iRex iLiad. Two different processing platforms were used to deliver readable information of the daily, one based on the newly developed GPP electronic ink platform from Ganaxa, and the other one developed internally by Les Echos. Displays embedded in smart cards Flexible display cards enable financial payment cardholders to generate a one-time password to reduce online banking and transaction fraud. Electronic paper offers a flat and thin alternative to existing key fob tokens for data security. The world's first ISO compliant smart card with an embedded display was developed by Innovative Card Technologies and nCryptone in 2005. The cards were manufactured by Nagra ID. Status displays Some devices, like USB flash drives, have used electronic paper to display status information, such as available storage space. Once the image on the electronic paper has been set, it requires no power to maintain, so the readout can be seen even when the flash drive is not plugged in. Electronic shelf labels E-paper based electronic shelf labels (ESL) are used to digitally display the prices of goods at retail stores. Electronic-paper-based labels are updated via two-way infrared or radio technology and powered by a rechargeable coin cell. Some variants use ZBD (zenithal bistable display), which is more similar to LCD but does not need power to retain an image. Public transport timetables E-paper displays at bus or tram stops can be remotely updated. Compared to LED or liquid-crystal displays (LCDs), they consume less energy, and the text or graphics remain visible during a power failure. Compared to LCDs, they are easily visible under full sunshine. Digital signage Because of its energy-saving properties, electronic paper has proved a technology suited to digital signage applications. Electronic tags Typically, e-paper electronic tags integrate e-ink technology with wireless interfaces like NFC or UHF. They are most commonly used as employees' ID cards or as production labels to track manufacturing changes and status. E-paper tags are also increasingly being used as shipping labels, especially in the case of reusable boxes. An interesting feature provided by some e-paper tag manufacturers is a batteryless design. This means that the power needed for a display's content update is provided wirelessly and the module itself doesn't contain any battery. Other Other proposed applications include clothes, digital photo frames, information boards, and keyboards. Keyboards with dynamically changeable keys are useful for less represented languages, non-standard keyboard layouts such as Dvorak, or for special non-alphabetical applications such as video editing or games. The reMarkable is a writing tablet for reading and taking notes. 
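Returning to the shelf labels and wireless tags described above, their update flow can be sketched in a few lines of Python. All names here (PriceUpdate, EslRadio, push_prices) are hypothetical, since real ESL systems use proprietary two-way infrared or radio protocols; the point illustrated is only that a label draws power briefly while redrawing, then retains the image indefinitely.

```python
# Hedged sketch of pushing price updates to e-paper shelf labels.
# The transport and message format are invented for illustration.

from dataclasses import dataclass

@dataclass
class PriceUpdate:
    label_id: str
    product: str
    price_cents: int

class EslRadio:                        # hypothetical wireless transport
    def send(self, update: PriceUpdate) -> None:
        print(f"TX -> {update.label_id}: {update.product} "
              f"@ {update.price_cents / 100:.2f}")

def push_prices(radio: EslRadio, updates: list) -> None:
    for u in updates:
        radio.send(u)                  # label wakes, redraws its e-paper, sleeps

push_prices(EslRadio(), [PriceUpdate("A1-0042", "Coffee 500g", 649)])
```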
See also E-book Embedded controller Electrofluidic Flexible display Flexible electronics Hardware Attached on Top (HAT) History of display technology Raspberry Pi/Arduino Raw display Serial Peripheral Interface References Further reading Electric paper, New Scientist, 2003 E-paper may offer video images, New Scientist, 2003 Paper comes alive, New Scientist, 2003 Most flexible electronic paper yet revealed, New Scientist, 2004 Roll-up digital displays move closer to market, New Scientist, 2005 External links Wired article on E Ink-Philips partnership, and background, retrieved 2007-08-26 MIT ePaper Project Fujitsu Develops World's First Film Substrate-based Bendable Color Electronic Paper featuring Image Memory Function American inventions Display technology Electronic engineering Electronic paper technology Paper
Electronic paper
[ "Technology", "Engineering" ]
4,913
[ "Electrical engineering", "Electronic engineering", "Computer engineering", "Display technology" ]
9,228
https://en.wikipedia.org/wiki/Earth
Earth is the third planet from the Sun and the only astronomical object known to harbor life. This is enabled by Earth being an ocean world, the only one in the Solar System sustaining liquid surface water. Almost all of Earth's water is contained in its global ocean, covering 70.8% of Earth's crust. The remaining 29.2% of Earth's crust is land, most of which is located in the form of continental landmasses within Earth's land hemisphere. Most of Earth's land is at least somewhat humid and covered by vegetation, while large sheets of ice at Earth's polar deserts retain more water than Earth's groundwater, lakes, rivers, and atmospheric water combined. Earth's crust consists of slowly moving tectonic plates, which interact to produce mountain ranges, volcanoes, and earthquakes. Earth has a liquid outer core that generates a magnetosphere capable of deflecting most of the destructive solar winds and cosmic radiation. Earth has a dynamic atmosphere, which sustains Earth's surface conditions and protects it from most meteoroids and UV light at entry. Its composition is primarily nitrogen and oxygen. Water vapor is widely present in the atmosphere, forming clouds that cover most of the planet. The water vapor acts as a greenhouse gas and, together with other greenhouse gases in the atmosphere, particularly carbon dioxide (CO2), creates the conditions for both liquid surface water and water vapor to persist via the capturing of energy from the Sun's light. This process maintains the current average surface temperature of , at which water is liquid under normal atmospheric pressure. Differences in the amount of captured energy between geographic regions (as with the equatorial region receiving more sunlight than the polar regions) drive atmospheric and ocean currents, producing a global climate system with different climate regions, and a range of weather phenomena such as precipitation, allowing components such as nitrogen to cycle. Earth is rounded into an ellipsoid with a circumference of about 40,000 km. It is the densest planet in the Solar System. Of the four rocky planets, it is the largest and most massive. Earth is about eight light-minutes away from the Sun and orbits it, taking a year (about 365.25 days) to complete one revolution. Earth rotates around its own axis in slightly less than a day (in about 23 hours and 56 minutes). Earth's axis of rotation is tilted with respect to the perpendicular to its orbital plane around the Sun, producing seasons. Earth is orbited by one permanent natural satellite, the Moon, which orbits Earth at 384,400 km (1.28 light seconds) and is roughly a quarter as wide as Earth. The Moon's gravity helps stabilize Earth's axis, causes tides and gradually slows Earth's rotation. Tidal locking has made the Moon always face Earth with the same side. Earth, like most other bodies in the Solar System, formed 4.5 billion years ago from gas and dust in the early Solar System. During the first billion years of Earth's history, the ocean formed and then life developed within it. Life spread globally and has been altering Earth's atmosphere and surface, leading to the Great Oxidation Event two billion years ago. Humans emerged 300,000 years ago in Africa and have spread across every continent on Earth. Humans depend on Earth's biosphere and natural resources for their survival, but have increasingly impacted the planet's environment. 
Humanity's current impact on Earth's climate and biosphere is unsustainable, threatening the livelihood of humans and many other forms of life, and causing widespread extinctions. Etymology The Modern English word Earth developed, via Middle English, from an Old English noun most often spelled eorðe. It has cognates in every Germanic language, and their ancestral root has been reconstructed as *erþō. In its earliest attestation, the word eorðe was used to translate the many senses of Latin and Greek γῆ gē: the ground, its soil, dry land, the human world, the surface of the world (including the sea), and the globe itself. As with Roman Terra/Tellūs and Greek Gaia, Earth may have been a personified goddess in Germanic paganism: late Norse mythology included Jörð ("Earth"), a giantess often given as the mother of Thor. Historically, "Earth" has been written in lowercase. Beginning with the use of Early Middle English, its definite sense as "the globe" was expressed as "the earth". By the era of Early Modern English, capitalization of nouns began to prevail, and the earth was also written the Earth, particularly when referenced along with other heavenly bodies. More recently, the name is sometimes simply given as Earth, by analogy with the names of the other planets, though "earth" and forms with "the earth" remain common. House styles now vary: Oxford spelling recognizes the lowercase form as the more common, with the capitalized form an acceptable variant. Another convention capitalizes "Earth" when appearing as a name, such as a description of the "Earth's atmosphere", but employs the lowercase when it is preceded by "the", such as "the atmosphere of the earth". It almost always appears in lowercase in colloquial expressions such as "what on earth are you doing?" The name Terra occasionally is used in scientific writing and especially in science fiction to distinguish humanity's inhabited planet from others, while in poetry Tellus has been used to denote personification of the Earth. Terra is also the name of the planet in some Romance languages, languages that evolved from Latin, like Italian and Portuguese, while in other Romance languages the word gave rise to names with slightly altered spellings, like the Spanish Tierra and the French Terre. The Latinate form Gæa or Gaea () of the Greek poetic name Gaia (; or ) is rare, though the alternative spelling Gaia has become common due to the Gaia hypothesis, in which case its pronunciation is rather than the more classical English . There are a number of adjectives for the planet Earth. The word "earthly" is derived from "Earth". From the Latin Terra comes terran , terrestrial , and (via French) terrene , and from the Latin Tellus comes tellurian and telluric. Natural history Formation The oldest material found in the Solar System is dated to Ga (billion years) ago. By the primordial Earth had formed. The bodies in the Solar System formed and evolved with the Sun. In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disk, and then the planets grow out of that disk with the Sun. A nebula contains gas, ice grains, and dust (including primordial nuclides). According to nebular theory, planetesimals formed by accretion, with the primordial Earth estimated to have taken anywhere from 70 to 100 million years to form. Estimates of the age of the Moon range from 4.5 Ga to significantly younger. 
A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object with about 10% of Earth's mass, named Theia, collided with Earth. It hit Earth with a glancing blow and some of its mass merged with Earth. Between approximately 4.0 and , numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth. After formation Earth's atmosphere and oceans were formed by volcanic activity and outgassing. Water vapor from these sources condensed into the oceans, augmented by water and ice from asteroids, protoplanets, and comets. Sufficient water to fill the oceans may have been on Earth since it formed. In this model, atmospheric greenhouse gases kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity. By , Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind. As the molten outer layer of Earth cooled it formed the first solid crust, which is thought to have been mafic in composition. The first continental crust, which was more felsic in composition, formed by the partial melting of this mafic crust. The presence of grains of the mineral zircon of Hadean age in Eoarchean sedimentary rocks suggests that at least some felsic crust existed as early as , only after Earth's formation. There are two main models of how this initial small volume of continental crust evolved to reach its current abundance: (1) a relatively steady growth up to the present day, which is supported by the radiometric dating of continental crust globally and (2) an initial rapid growth in the volume of continental crust during the Archean, forming the bulk of the continental crust that now exists, which is supported by isotopic evidence from hafnium in zircons and neodymium in sedimentary rocks. The two models and the data that support them can be reconciled by large-scale recycling of the continental crust, particularly during the early stages of Earth's history. New continental crust forms as a result of plate tectonics, a process ultimately driven by the continuous loss of heat from Earth's interior. Over the period of hundreds of millions of years, tectonic forces have caused areas of continental crust to group together to form supercontinents that have subsequently broken apart. At approximately , one of the earliest known supercontinents, Rodinia, began to break apart. The continents later recombined to form Pannotia at , then finally Pangaea, which also began to break apart at . The most recent pattern of ice ages began about , and then intensified during the Pleistocene about . High- and middle-latitude regions have since undergone repeated cycles of glaciation and thaw, repeating about every 21,000, 41,000 and 100,000 years. The Last Glacial Period, colloquially called the "last ice age", covered large parts of the continents, to the middle latitudes, in ice and ended about 11,700 years ago. Origin of life and evolution Chemical reactions led to the first self-replicating molecules about four billion years ago. A half billion years later, the last common ancestor of all current life arose. The evolution of photosynthesis allowed the Sun's energy to be harvested directly by life forms. The resultant molecular oxygen () accumulated in the atmosphere and due to interaction with ultraviolet solar radiation, formed a protective ozone layer () in the upper atmosphere. 
The incorporation of smaller cells within larger ones resulted in the development of complex cells called eukaryotes. True multicellular organisms formed as cells within colonies became increasingly specialized. Aided by the absorption of harmful ultraviolet radiation by the ozone layer, life colonized Earth's surface. Among the earliest fossil evidence for life is microbial mat fossils found in 3.48 billion-year-old sandstone in Western Australia, biogenic graphite found in 3.7 billion-year-old metasedimentary rocks in Western Greenland, and remains of biotic material found in 4.1 billion-year-old rocks in Western Australia. The earliest direct evidence of life on Earth is contained in 3.45 billion-year-old Australian rocks showing fossils of microorganisms. During the Neoproterozoic, , much of Earth might have been covered in ice. This hypothesis has been termed "Snowball Earth", and it is of particular interest because it preceded the Cambrian explosion, when multicellular life forms significantly increased in complexity. Following the Cambrian explosion, , there have been at least five major mass extinctions and many minor ones. Apart from the proposed current Holocene extinction event, the most recent was , when an asteroid impact triggered the extinction of non-avian dinosaurs and other large reptiles, but largely spared small animals such as insects, mammals, lizards and birds. Mammalian life has diversified over the past , and several million years ago, an African ape species gained the ability to stand upright. This facilitated tool use and encouraged communication that provided the nutrition and stimulation needed for a larger brain, which led to the evolution of humans. The development of agriculture, and then civilization, led to humans having an influence on Earth and the nature and quantity of other life forms that continues to this day. Future Earth's expected long-term future is tied to that of the Sun. Over the next , solar luminosity will increase by 10%, and over the next by 40%. Earth's increasing surface temperature will accelerate the inorganic carbon cycle, possibly reducing concentration to levels lethally low for current plants ( for C4 photosynthesis) in approximately . A lack of vegetation would result in the loss of oxygen in the atmosphere, making current animal life impossible. Due to the increased luminosity, Earth's mean temperature may reach in 1.5 billion years, and all ocean water will evaporate and be lost to space, which may trigger a runaway greenhouse effect, within an estimated 1.6 to 3 billion years. Even if the Sun were stable, a fraction of the water in the modern oceans will descend to the mantle, due to reduced steam venting from mid-ocean ridges. The Sun will evolve to become a red giant in about . Models predict that the Sun will expand to roughly , about 250 times its present radius. Earth's fate is less clear. As a red giant, the Sun will lose roughly 30% of its mass, so, without tidal effects, Earth will move to an orbit from the Sun when the star reaches its maximum radius; otherwise, with tidal effects, it may enter the Sun's atmosphere and be vaporized. Physical characteristics Size and shape Earth has a rounded shape, through hydrostatic equilibrium, with an average diameter of , making it the fifth-largest planetary-sized object and the largest terrestrial object of the Solar System. Due to Earth's rotation, it has the shape of an ellipsoid, bulging at its equator; its diameter is longer there than at its poles. 
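The size of this equatorial bulge can be checked against the standard WGS84 reference ellipsoid; the radii in the short Python sketch below are those conventional values, introduced here as assumptions of the sketch rather than figures taken from this article.

```python
# Flattening and equatorial bulge of Earth, using WGS84 reference radii.

a = 6378.137          # equatorial radius, km (WGS84)
b = 6356.752          # polar radius, km (WGS84)

flattening = (a - b) / a
print(f"flattening f = 1/{1/flattening:.1f}")                     # ~1/298.3
print(f"equatorial diameter exceeds polar by {2*(a-b):.0f} km")   # ~43 km
```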
Earth's shape also has local topographic variations; the largest of these, the Mariana Trench ( below local sea level), shortens Earth's average radius by 0.17%, while Mount Everest ( above local sea level) lengthens it by 0.14%. Since Earth's surface is farthest out from its center of mass at its equatorial bulge, the summit of the volcano Chimborazo in Ecuador () is its farthest point out. Parallel to the rigid land topography, the ocean exhibits a more dynamic topography. To measure the local variation of Earth's topography, geodesy employs an idealized Earth producing a geoid shape. Such a shape is obtained if the ocean is idealized, covering Earth completely and without any perturbations such as tides and winds. The result is a smooth but irregular geoid surface, providing a mean sea level (MSL) as a reference level for topographic measurements. Surface Earth's surface is the boundary between the atmosphere, and the solid Earth and oceans. Defined in this way, it has an area of about . Earth can be divided into two hemispheres: by latitude into the polar Northern and Southern hemispheres; or by longitude into the continental Eastern and Western hemispheres. Most of Earth's surface is ocean water: 70.8% or . This vast pool of salty water is often called the world ocean, and makes Earth with its dynamic hydrosphere a water world or ocean world. Indeed, in Earth's early history the ocean may have covered Earth completely. The world ocean is commonly divided into the Pacific Ocean, Atlantic Ocean, Indian Ocean, Antarctic or Southern Ocean, and Arctic Ocean, from largest to smallest. The ocean covers Earth's oceanic crust, with the shelf seas covering the shelves of the continental crust to a lesser extent. The oceanic crust forms large oceanic basins with features like abyssal plains, seamounts, submarine volcanoes, oceanic trenches, submarine canyons, oceanic plateaus, and a globe-spanning mid-ocean ridge system. At Earth's polar regions, the ocean surface is covered by seasonally variable amounts of sea ice that often connects with polar land, permafrost and ice sheets, forming polar ice caps. Earth's land covers 29.2%, or of Earth's surface. The land surface includes many islands around the globe, but most of the land surface is taken by the four continental landmasses, which are (in descending order): Africa-Eurasia, America (landmass), Antarctica, and Australia (landmass). These landmasses are further broken down and grouped into the continents. The terrain of the land surface varies greatly and consists of mountains, deserts, plains, plateaus, and other landforms. The elevation of the land surface varies from a low point of at the Dead Sea, to a maximum altitude of at the top of Mount Everest. The mean height of land above sea level is about . Land can be covered by surface water, snow, ice, artificial structures or vegetation. Most of Earth's land hosts vegetation, but considerable amounts of land are ice sheets (10%, not including the equally large area of land under permafrost) or deserts (33%). The pedosphere is the outermost layer of Earth's land surface and is composed of soil and subject to soil formation processes. Soil is crucial for land to be arable. Earth's total arable land is 10.7% of the land surface, with 1.3% being permanent cropland. Earth has an estimated of cropland and of pastureland. The land surface and the ocean floor form the top of Earth's crust, which together with parts of the upper mantle form Earth's lithosphere. 
Earth's crust may be divided into oceanic and continental crust. Beneath the ocean-floor sediments, the oceanic crust is predominantly basaltic, while the continental crust may include lower density materials such as granite, sediments and metamorphic rocks. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form about 5% of the mass of the crust. Earth's surface topography comprises both the topography of the ocean surface, and the shape of Earth's land surface. The submarine terrain of the ocean floor has an average bathymetric depth of 4 km, and is as varied as the terrain above sea level. Earth's surface is continually being shaped by internal plate tectonic processes including earthquakes and volcanism; by weathering and erosion driven by ice, water, wind and temperature; and by biological processes including the growth and decomposition of biomass into soil. Tectonic plates Earth's mechanically rigid outer layer of crust and upper mantle, the lithosphere, is divided into tectonic plates. These plates are rigid segments that move relative to each other at one of three types of boundaries: at convergent boundaries, two plates come together; at divergent boundaries, two plates are pulled apart; and at transform boundaries, two plates slide past one another laterally. Along these plate boundaries, earthquakes, volcanic activity, mountain-building, and oceanic trench formation can occur. The tectonic plates ride on top of the asthenosphere, the solid but less-viscous part of the upper mantle that can flow and move along with the plates. As the tectonic plates migrate, oceanic crust is subducted under the leading edges of the plates at convergent boundaries. At the same time, the upwelling of mantle material at divergent boundaries creates mid-ocean ridges. The combination of these processes recycles the oceanic crust back into the mantle. Due to this recycling, most of the ocean floor is less than old. The oldest oceanic crust is located in the Western Pacific and is estimated to be old. By comparison, the oldest dated continental crust is , although zircons have been found preserved as clasts within Eoarchean sedimentary rocks that give ages up to , indicating that at least some continental crust existed at that time. The seven major plates are the Pacific, North American, Eurasian, African, Antarctic, Indo-Australian, and South American. Other notable plates include the Arabian Plate, the Caribbean Plate, the Nazca Plate off the west coast of South America and the Scotia Plate in the southern Atlantic Ocean. The Australian Plate fused with the Indian Plate between . The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of and the Pacific Plate moving . At the other extreme, the slowest-moving plate is the South American Plate, progressing at a typical rate of . Internal structure Earth's interior, like that of the other terrestrial planets, is divided into layers by their chemical or physical (rheological) properties. The outer layer is a chemically distinct silicate solid crust, which is underlain by a highly viscous solid mantle. The crust is separated from the mantle by the Mohorovičić discontinuity. The thickness of the crust varies from about under the oceans to for the continents. The crust and the cold, rigid, top of the upper mantle are collectively known as the lithosphere, which is divided into independently moving tectonic plates. 
Beneath the lithosphere is the asthenosphere, a relatively low-viscosity layer on which the lithosphere rides. Important changes in crystal structure within the mantle occur at below the surface, spanning a transition zone that separates the upper and lower mantle. Beneath the mantle, an extremely low viscosity liquid outer core lies above a solid inner core. Earth's inner core may be rotating at a slightly higher angular velocity than the remainder of the planet, advancing by 0.1–0.5° per year, although both somewhat higher and much lower rates have also been proposed. The radius of the inner core is about one-fifth of that of Earth. The density increases with depth. Among the Solar System's planetary-sized objects, Earth is the object with the highest density. Chemical composition Earth's mass is approximately (). It is composed mostly of iron (32.1% by mass), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to gravitational separation, the core is primarily composed of the denser elements: iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements. The most common rock constituents of the crust are oxides. Over 99% of the crust is composed of various oxides of eleven elements, principally oxides containing silicon (the silicate minerals), aluminium, iron, calcium, magnesium, potassium, or sodium. Internal heat The major contributors to Earth's internal heat are primordial heat (heat left over from Earth's formation) and radiogenic heat (heat produced by radioactive decay). The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232. At the center, the temperature may be up to , and the pressure could reach . Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. At approximately , twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today. The mean heat loss from Earth is , for a global heat loss of . A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is through conduction through the lithosphere, the majority of which occurs under the oceans. Gravitational field The gravity of Earth is the acceleration that is imparted to objects due to the distribution of mass within Earth. Near Earth's surface, gravitational acceleration is approximately . Local differences in topography, geology, and deeper tectonic structure cause local and broad regional differences in Earth's gravitational field, known as gravity anomalies. Magnetic field The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. 
The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the magnetic equator, the field strength at the surface is , with a magnetic dipole moment of at epoch 2000, decreasing nearly 6% per century (although it still remains stronger than its long-time average). The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago. The extent of Earth's magnetic field in space defines the magnetosphere. Ions and electrons of the solar wind are deflected by the magnetosphere; solar wind pressure compresses the day-side of the magnetosphere, to about 10 Earth radii, and extends the night-side magnetosphere into a long tail. Because the velocity of the solar wind is greater than the speed at which waves propagate through the solar wind, a supersonic bow shock precedes the day-side magnetosphere within the solar wind. Charged particles are contained within the magnetosphere; the plasmasphere is defined by low-energy particles that essentially follow magnetic field lines as Earth rotates. The ring current is defined by medium-energy particles that drift relative to the geomagnetic field, but with paths that are still dominated by the magnetic field, and the Van Allen radiation belts are formed by high-energy particles whose motion is essentially random, but contained in the magnetosphere. During magnetic storms and substorms, charged particles can be deflected from the outer magnetosphere and especially the magnetotail, directed along field lines into Earth's ionosphere, where atmospheric atoms can be excited and ionized, causing an aurora. Orbit and rotation Rotation Earth's rotation period relative to the Sun—its mean solar day—is of mean solar time (). Because Earth's solar day is now slightly longer than it was during the 19th century due to tidal deceleration, each day varies between longer than the mean solar day. Earth's rotation period relative to the fixed stars, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is of mean solar time (UT1). Earth's rotation period relative to the precessing or moving mean March equinox (when the Sun is at 90° on the equator), its sidereal day, is of mean solar time (UT1). Thus the sidereal day is shorter than the stellar day by about 8.4 ms. Apart from meteors within the atmosphere and low-orbiting satellites, the main apparent motion of celestial bodies in Earth's sky is to the west at a rate of 15°/h = 15'/min. For bodies near the celestial equator, this is equivalent to an apparent diameter of the Sun or the Moon every two minutes; from Earth's surface, the apparent sizes of the Sun and the Moon are approximately the same. Orbit Earth orbits the Sun, making Earth the third-closest planet to the Sun and part of the inner Solar System. Earth's average orbital distance is about , which is the basis for the astronomical unit (AU) and is equal to roughly 8.3 light minutes or 380 times Earth's distance to the Moon. Earth orbits the Sun every 365.2564 mean solar days, or one sidereal year. 
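The relationship between the solar day, the sidereal day, and the Sun's apparent motion across the sky can be verified with a short calculation; the sketch below assumes a tropical year of about 365.2422 mean solar days, a standard value not quoted in this article.

```python
# In one solar day Earth rotates slightly more than 360°, because it has
# also advanced about 1° along its orbit; over a year it therefore completes
# one extra rotation relative to the stars.

solar_day = 86400.0                    # mean solar day, seconds
days_per_year = 365.2422               # solar days per tropical year (assumed)

sidereal_day = solar_day * days_per_year / (days_per_year + 1)
print(f"sidereal day ≈ {sidereal_day:.1f} s")              # ≈ 86164.1 s (23 h 56 m 4 s)

apparent_sun_motion = 360.0 / days_per_year                # eastward drift of the Sun
print(f"apparent solar motion ≈ {apparent_sun_motion:.3f}°/day")   # ≈ 0.986°/day
```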
The Sun appears to move eastward in Earth's sky at a rate of about 1°/day, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours—a solar day—for Earth to complete a full rotation about its axis so that the Sun returns to the meridian. The orbital speed of Earth averages about , which is fast enough to travel a distance equal to Earth's diameter, about , in seven minutes, and the distance from Earth to the Moon, , in about 3.5 hours. The Moon and Earth orbit a common barycenter every 27.32 days relative to the background stars. When combined with the Earth–Moon system's common orbit around the Sun, the period of the synodic month, from new moon to new moon, is 29.53 days. Viewed from the celestial north pole, the motion of Earth, the Moon, and their axial rotations are all counterclockwise. Viewed from a vantage point above the Sun and Earth's north poles, Earth orbits in a counterclockwise direction about the Sun. The orbital and axial planes are not precisely aligned: Earth's axis is tilted some 23.44 degrees from the perpendicular to the Earth–Sun plane (the ecliptic), and the Earth-Moon plane is tilted up to ±5.1 degrees against the Earth–Sun plane. Without this tilt, there would be an eclipse every two weeks, alternating between lunar eclipses and solar eclipses. The Hill sphere, or the sphere of gravitational influence, of Earth is about in radius. This is the maximum distance at which Earth's gravitational influence is stronger than that of the more distant Sun and planets. Objects must orbit Earth within this radius, or they can become unbound by the gravitational perturbation of the Sun. Earth, along with the Solar System, is situated in the Milky Way and orbits about 28,000 light-years from its center. It is about 20 light-years above the galactic plane in the Orion Arm. Axial tilt and seasons The axial tilt of Earth is approximately 23.439281° relative to the axis of its orbital plane, always pointing towards the celestial poles. Due to Earth's axial tilt, the amount of sunlight reaching any given point on the surface varies over the course of the year. This causes the seasonal change in climate, with summer in the Northern Hemisphere occurring when the Tropic of Cancer is facing the Sun, and in the Southern Hemisphere when the Tropic of Capricorn faces the Sun. In each instance, winter occurs simultaneously in the opposite hemisphere. During the summer, the day lasts longer, and the Sun climbs higher in the sky. In winter, the climate becomes cooler and the days shorter. Above the Arctic Circle and below the Antarctic Circle there is no daylight at all for part of the year, causing a polar night, and this night extends for several months at the poles themselves. These same latitudes also experience a midnight sun, where the Sun remains visible all day. By astronomical convention, the four seasons can be determined by the solstices—the points in the orbit of maximum axial tilt toward or away from the Sun—and the equinoxes, when Earth's rotational axis is aligned with its orbital axis. In the Northern Hemisphere, winter solstice currently occurs around 21 December; summer solstice is near 21 June, spring equinox is around 20 March and autumnal equinox is about 22 or 23 September. In the Southern Hemisphere, the situation is reversed, with the summer and winter solstices exchanged and the spring and autumnal equinox dates swapped. The angle of Earth's axial tilt is relatively stable over long periods of time. 
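How this tilt translates into day length, polar night, and the midnight sun follows from the standard spherical-geometry formula for the sunrise hour angle; the Python sketch below evaluates it for a few latitudes at the June solstice, when the solar declination equals the axial tilt. The example latitudes are illustrative choices.

```python
# Day length from latitude and solar declination, via the sunrise hour angle:
# cos(H) = -tan(latitude) * tan(declination). 15° of hour angle = 1 hour.

import math

def day_length_hours(latitude_deg: float, declination_deg: float) -> float:
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    x = -math.tan(lat) * math.tan(dec)     # cosine of the sunrise hour angle
    if x <= -1.0:
        return 24.0                        # Sun never sets (midnight sun)
    if x >= 1.0:
        return 0.0                         # Sun never rises (polar night)
    return 2 * math.degrees(math.acos(x)) / 15.0

print(day_length_hours(0, 23.44))      # equator at the June solstice: 12 h
print(day_length_hours(52, 23.44))     # mid-latitudes in summer: ~16.5 h
print(day_length_hours(75, 23.44))     # above the Arctic Circle: 24 h
```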
Earth's axial tilt does undergo nutation, a slight, irregular motion with a main period of 18.6 years. The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800-year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component of this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation. Earth's annual orbit is elliptical rather than circular, and its closest approach to the Sun is called perihelion. In modern times, Earth's perihelion occurs around 3 January, and its aphelion around 4 July. These dates shift over time due to precession and changes to the orbit, the latter of which follows cyclical patterns known as Milankovitch cycles. The annual change in the Earth–Sun distance causes an increase of about 6.8% in solar energy reaching Earth at perihelion relative to aphelion. Because the Southern Hemisphere is tilted toward the Sun at about the same time that Earth reaches the closest approach to the Sun, the Southern Hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. This effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of water in the Southern Hemisphere. Earth–Moon system Moon The Moon is a relatively large, terrestrial, planet-like natural satellite, with a diameter about one-quarter of Earth's. It is the largest moon in the Solar System relative to the size of its planet, although Charon is larger relative to the dwarf planet Pluto. The natural satellites of other planets are also referred to as "moons", after Earth's. The most widely accepted theory of the Moon's origin, the giant-impact hypothesis, states that it formed from the collision of a Mars-size protoplanet called Theia with the early Earth. This hypothesis explains the Moon's relative lack of iron and volatile elements and the fact that its composition is nearly identical to that of Earth's crust. Computer simulations suggest that two blob-like remnants of this protoplanet could be inside the Earth. The gravitational attraction between Earth and the Moon causes lunar tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases. Due to their tidal interaction, the Moon recedes from Earth at the rate of approximately . Over millions of years, these tiny modifications—and the lengthening of Earth's day by about 23 μs/yr—add up to significant changes. During the Ediacaran period, for example, (approximately ) there were 400±7 days in a year, with each day lasting 21.9±0.4 hours. The Moon may have dramatically affected the development of life by moderating the planet's climate. Paleontological evidence and computer simulations show that Earth's axial tilt is stabilized by tidal interactions with the Moon. 
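The perihelion-to-aphelion difference in received solar energy quoted above can be checked from the inverse-square law, assuming Earth's orbital eccentricity of about 0.0167 (a standard value, not stated in this article):

```python
# Irradiance scales as 1/r², and the perihelion and aphelion distances are
# a(1-e) and a(1+e) for an ellipse with semi-major axis a and eccentricity e.

e = 0.0167
ratio = ((1 + e) / (1 - e)) ** 2
print(f"perihelion/aphelion irradiance ratio ≈ {ratio:.3f}")
# ≈ 1.069, i.e. roughly 7% more energy at perihelion, consistent with the
# ~6.8% figure in the text.
```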
Some theorists think that without the Moon's stabilization against the torques applied by the Sun and planets to Earth's equatorial bulge, the rotational axis might be chaotically unstable, exhibiting large changes over millions of years, as is the case for Mars, though this is disputed. Viewed from Earth, the Moon is just far enough away to have almost the same apparent-sized disk as the Sun. The angular sizes (or solid angles) of these two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant. This allows total and annular solar eclipses to occur on Earth. Asteroids and artificial satellites Earth's co-orbital asteroid population consists of quasi-satellites, objects with horseshoe orbits, and trojans. There are at least seven quasi-satellites, including 469219 Kamoʻoalewa, ranging in diameter from 10 m to 5000 m. A trojan asteroid companion, , is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun. The tiny near-Earth asteroid makes close approaches to the Earth–Moon system roughly every twenty years. During these approaches, it can orbit Earth for brief periods of time. , there are 4,550 operational, human-made satellites orbiting Earth. There are also inoperative satellites, including Vanguard 1, the oldest satellite currently in orbit, and over 16,000 pieces of tracked space debris. Earth's largest artificial satellite is the International Space Station (ISS). Hydrosphere Earth's hydrosphere is the sum of Earth's water and its distribution. Most of Earth's hydrosphere consists of Earth's global ocean. Earth's hydrosphere also consists of water in the atmosphere and on land, including clouds, inland seas, lakes, rivers, and underground waters. The mass of the oceans is approximately 1.35×10^18 metric tons, or about 1/4400 of Earth's total mass. The oceans cover an area of with a mean depth of , resulting in an estimated volume of . If all of Earth's crustal surface were at the same elevation as a smooth sphere, the depth of the resulting world ocean would be . About 97.5% of the water is saline; the remaining 2.5% is fresh water. Most fresh water, about 68.7%, is present as ice in ice caps and glaciers. Of the rest, about 30% is ground water and 1% is surface water (covering only 2.8% of Earth's land), with the remainder held in other small forms of fresh water deposits such as permafrost, water vapor in the atmosphere, biological binding, etc. In Earth's coldest regions, snow survives over the summer and changes into ice. This accumulated snow and ice eventually forms into glaciers, bodies of ice that flow under the influence of their own gravity. Alpine glaciers form in mountainous areas, whereas vast ice sheets form over land in polar regions. The flow of glaciers erodes the surface, changing it dramatically, with the formation of U-shaped valleys and other landforms. Sea ice in the Arctic covers an area about as big as the United States, although it is quickly retreating as a consequence of climate change. The average salinity of Earth's oceans is about 35 grams of salt per kilogram of seawater (3.5% salt). Most of this salt was released from volcanic activity or extracted from cool igneous rocks. The oceans are also a reservoir of dissolved atmospheric gases, which are essential for the survival of many aquatic life forms. Sea water has an important influence on the world's climate, with the oceans acting as a large heat reservoir. 
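The hydrosphere figures above can be roughly cross-checked from commonly cited values for ocean area, mean depth, and seawater density; all three inputs in the sketch below are assumptions of that kind, not quantities taken from this article.

```python
# Rough cross-check of ocean volume, mass, and salt inventory.

area_m2 = 361.1e6 * 1e6        # assumed ocean area: 361.1 million km², in m²
mean_depth_m = 3.7e3           # assumed mean depth: 3.7 km
density = 1025.0               # assumed mean seawater density, kg/m³
earth_mass_kg = 5.972e24       # Earth's mass

volume_m3 = area_m2 * mean_depth_m
mass_kg = volume_m3 * density
print(f"ocean volume ≈ {volume_m3 / 1e9:.2e} km³")        # ≈ 1.34e9 km³
print(f"ocean mass ≈ {mass_kg:.2e} kg "
      f"(~1/{earth_mass_kg / mass_kg:.0f} of Earth's mass)")   # ≈ 1.37e21 kg, ~1/4400
print(f"dissolved salt ≈ {mass_kg * 0.035:.2e} kg at 3.5% salinity")
```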
Shifts in the oceanic temperature distribution can cause significant weather shifts, such as the El Niño–Southern Oscillation. The abundance of water, particularly liquid water, on Earth's surface is a unique feature that distinguishes it from other planets in the Solar System. Solar System planets with considerable atmospheres do partly host atmospheric water vapor, but they lack surface conditions for stable surface water. Despite some moons showing signs of large reservoirs of extraterrestrial liquid water, with possibly even more volume than Earth's ocean, all of them lie beneath a kilometers-thick frozen surface layer.
Atmosphere
The atmospheric pressure at Earth's sea level averages , with a scale height of about . A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules. Water vapor content varies between 0.01% and 4% but averages about 1%. Clouds cover around two-thirds of Earth's surface, more so over oceans than land. The height of the troposphere varies with latitude, ranging between at the poles to at the equator, with some variation resulting from weather and seasonal factors. Earth's biosphere has significantly altered its atmosphere. Oxygenic photosynthesis evolved , forming the primarily nitrogen–oxygen atmosphere of today. This change enabled the proliferation of aerobic organisms and, indirectly, the formation of the ozone layer due to the subsequent conversion of atmospheric oxygen into ozone. The ozone layer blocks ultraviolet solar radiation, permitting life on land. Other atmospheric functions important to life include transporting water vapor, providing useful gases, causing small meteors to burn up before they strike the surface, and moderating temperature. This last phenomenon is the greenhouse effect: trace molecules within the atmosphere serve to capture thermal energy emitted from the surface, thereby raising the average temperature. Water vapor, carbon dioxide, methane, nitrous oxide, and ozone are the primary greenhouse gases in the atmosphere. Without this heat-retention effect, the average surface temperature would be , in contrast to the current , and life on Earth probably would not exist in its current form.
Weather and climate
Earth's atmosphere has no definite boundary, gradually becoming thinner and fading into outer space. Three-quarters of the atmosphere's mass is contained within the first of the surface; this lowest layer is called the troposphere. Energy from the Sun heats this layer, and the surface below, causing expansion of the air. This lower-density air then rises and is replaced by cooler, higher-density air. The result is atmospheric circulation that drives the weather and climate through redistribution of thermal energy. The primary atmospheric circulation bands consist of the trade winds in the equatorial region below 30° latitude and the westerlies in the mid-latitudes between 30° and 60°. Ocean heat content and currents are also important factors in determining climate, particularly the thermohaline circulation that distributes thermal energy from the equatorial oceans to the polar regions. Earth receives 1361 W/m² of solar irradiance. The amount of solar energy that reaches Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere.
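Both effects can be sketched quantitatively. The calculation below assumes equinox noon geometry, where the solar zenith angle equals the latitude, and an illustrative clear-sky zenith transmittance of 0.7; only the 1361 W/m² figure comes from the text:

import math

# Sketch: noon surface insolation versus latitude at equinox. Two effects:
#  (1) the beam is spread over more area (cosine factor), and
#  (2) it crosses a thicker slab of atmosphere (air mass ~ 1/cos z).
S0 = 1361.0          # top-of-atmosphere solar irradiance, W/m^2 (from text)
TRANSMITTANCE = 0.7  # assumed clear-sky zenith transmittance (illustrative)

for lat in (0, 30, 60):
    z = math.radians(lat)                 # zenith angle = latitude at equinox
    air_mass = 1.0 / math.cos(z)          # plane-parallel approximation
    surface = S0 * math.cos(z) * TRANSMITTANCE ** air_mass
    print(f"{lat:2d} deg: {surface:6.0f} W/m^2")   # ~953, ~780, ~333 W/m^2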
As a result, the mean annual air temperature at sea level decreases by about per degree of latitude from the equator. Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates. Further factors that affect a location's climate are its proximity to oceans, the oceanic and atmospheric circulation, and topography. Places close to oceans typically have colder summers and warmer winters, because oceans can store large amounts of heat. The wind transports the cold or the heat of the ocean to the land. Atmospheric circulation also plays an important role: San Francisco and Washington, DC are both coastal cities at about the same latitude, yet San Francisco's climate is significantly more moderate because the prevailing wind direction there is from sea to land. Finally, temperatures decrease with height, causing mountainous areas to be colder than low-lying areas. Water vapor generated through surface evaporation is transported by circulatory patterns in the atmosphere. When atmospheric conditions permit an uplift of warm, humid air, this water condenses and falls to the surface as precipitation. Most of the water is then transported to lower elevations by river systems and usually returned to the oceans or deposited into lakes. This water cycle is a vital mechanism for supporting life on land and is a primary factor in the erosion of surface features over geological periods. Precipitation patterns vary widely, ranging from several meters of water per year to less than a millimeter. Atmospheric circulation, topographic features, and temperature differences determine the average precipitation that falls in each region. The commonly used Köppen climate classification system has five broad groups (humid tropics, arid, humid middle latitudes, continental and cold polar), which are further divided into more specific subtypes. The Köppen system rates regions based on observed temperature and precipitation. Surface air temperature can rise to around in hot deserts, such as Death Valley, and can fall as low as in Antarctica.
Upper atmosphere
The upper atmosphere, the atmosphere above the troposphere, is usually divided into the stratosphere, mesosphere, and thermosphere. Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind. Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. The Kármán line, defined as above Earth's surface, is a working definition for the boundary between the atmosphere and outer space. Thermal energy causes some of the molecules at the outer edge of the atmosphere to increase their velocity to the point where they can escape from Earth's gravity. This causes a slow but steady loss of the atmosphere into space. Because unfixed hydrogen has a low molecular mass, it can achieve escape velocity more readily, and it leaks into outer space at a greater rate than other gases. The leakage of hydrogen into space contributes to the shifting of Earth's atmosphere and surface from an initially reducing state to its current oxidizing one.
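The preferential escape of hydrogen can be illustrated by comparing mean thermal speeds with Earth's escape velocity; escape draws on the fast tail of the Maxwell distribution, so a higher mean speed relative to the escape velocity implies a vastly higher escape rate. The 1000 K exosphere temperature below is an assumed, illustrative value, not one from the text:

import math

# Why hydrogen escapes preferentially (illustrative sketch).
K_B = 1.380649e-23        # Boltzmann constant, J/K
V_ESC = 11_186.0          # Earth escape velocity, m/s (standard value)
T = 1000.0                # assumed exosphere temperature, K (not from text)

for name, mass_amu in [("H", 1.008), ("O", 15.999)]:
    m = mass_amu * 1.660539e-27                      # particle mass, kg
    v_mean = math.sqrt(8 * K_B * T / (math.pi * m))  # mean thermal speed
    print(f"{name}: v_mean = {v_mean:5.0f} m/s = {v_mean / V_ESC:.2f} x v_esc")
# Atomic hydrogen reaches ~0.41 x v_esc on average; oxygen only ~0.10 x v_esc.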
Photosynthesis provided a source of free oxygen, but the loss of reducing agents such as hydrogen is thought to have been a necessary precondition for the widespread accumulation of oxygen in the atmosphere. Hence the ability of hydrogen to escape from the atmosphere may have influenced the nature of life that developed on Earth. In the current, oxygen-rich atmosphere most hydrogen is converted into water before it has an opportunity to escape. Instead, most of the hydrogen loss comes from the destruction of methane in the upper atmosphere.
Life on Earth
Earth is the only known place that has ever been habitable for life. Life developed in Earth's early bodies of water some hundred million years after the planet formed. Life has shaped and inhabited many particular ecosystems on Earth, eventually expanding globally to form an overarching biosphere. In turn, life has impacted Earth, significantly altering its atmosphere and surface over long periods of time and causing changes like the Great Oxidation Event. Life has also greatly diversified over time, giving the biosphere distinct biomes inhabited by comparatively similar plants and animals. The different biomes developed at distinct elevations or water depths and at different latitudes, and on land they are further differentiated by humidity. Earth's species diversity and biomass reach a peak in shallow waters and in forests, particularly in equatorial, warm and humid conditions, while freezing polar regions, high altitudes and extremely arid areas are relatively barren of plant and animal life. Earth provides liquid water—an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain a metabolism. Plants and other organisms take up nutrients from water, soils and the atmosphere. These nutrients are constantly recycled between different species. Extreme weather, such as tropical cyclones (including hurricanes and typhoons), occurs over most of Earth's surface and has a large impact on life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year. Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, blizzards, floods, droughts, wildfires, and other calamities and disasters. Human impact is felt in many areas due to pollution of the air and water, acid rain, loss of vegetation (overgrazing, deforestation, desertification), loss of wildlife, species extinction, soil degradation, soil depletion and erosion. Human activities release greenhouse gases into the atmosphere which cause global warming. This is driving changes such as the melting of glaciers and ice sheets, a global rise in average sea levels, increased risk of drought and wildfires, and migration of species to colder areas.
Human geography
Having originated from earlier primates in Eastern Africa 300,000 years ago, humans have since been migrating and, with the advent of agriculture in the 10th millennium BC, increasingly settling Earth's land. In the 20th century, Antarctica became the last continent to see a first, and to this day limited, human presence. Since the 19th century, the human population has grown exponentially, reaching seven billion in the early 2010s, and is projected to peak at around ten billion in the second half of the 21st century. Most of the growth is expected to take place in sub-Saharan Africa.
The distribution and density of the human population vary greatly around the world, with the majority living in South and East Asia and 90% inhabiting the Northern Hemisphere, partly because of the hemispherical predominance of the world's land mass: 68% of the world's land lies in the Northern Hemisphere. Furthermore, since the 19th century humans have increasingly moved into urban areas, where the majority lived by the 21st century. Beyond Earth's surface, humans have lived only on a temporary basis, in a few special-purpose deep underground and underwater installations and a few space stations. The human population remains virtually entirely on Earth's surface, fully dependent on Earth and the environment it sustains. Since the second half of the 20th century, some hundreds of humans have temporarily stayed beyond Earth, a tiny fraction of whom have reached another celestial body, the Moon. Earth has been subject to extensive human settlement, and humans have developed diverse societies and cultures. Most of Earth's land has been territorially claimed since the 19th century by sovereign states (countries) separated by political borders, and 205 such states exist today, with only parts of Antarctica and a few small regions remaining unclaimed. Most of these states together form the United Nations, the leading worldwide intergovernmental organization, which extends human governance over the ocean and Antarctica, and therefore all of Earth.
Natural resources and land use
Earth has resources that have been exploited by humans. Those termed non-renewable resources, such as fossil fuels, are only replenished over geological timescales. Large deposits of fossil fuels are obtained from Earth's crust, consisting of coal, petroleum, and natural gas. These deposits are used by humans both for energy production and as feedstock for chemical production. Mineral ore bodies have also been formed within the crust through a process of ore genesis, resulting from actions of magmatism, erosion, and plate tectonics. These metals and other elements are extracted by mining, a process which often brings environmental and health damage. Earth's biosphere produces many useful biological products for humans, including food, wood, pharmaceuticals, oxygen, and the recycling of organic waste. The land-based ecosystem depends upon topsoil and fresh water, and the oceanic ecosystem depends on dissolved nutrients washed down from the land. In 2019, of Earth's land surface consisted of forest and woodlands, was shrub and grassland, were used for animal feed production and grazing, and were cultivated as croplands. Of the 12–14% of ice-free land that is used for croplands, 2 percentage points were irrigated in 2015. Humans use building materials to construct shelters.
Humans and the environment
Human activities have impacted Earth's environments. Through activities such as the burning of fossil fuels, humans have been increasing the amount of greenhouse gases in the atmosphere, altering Earth's energy budget and climate. It is estimated that global temperatures in the year 2020 were warmer than the preindustrial baseline. This increase in temperature, known as global warming, has contributed to the melting of glaciers, rising sea levels, increased risk of drought and wildfires, and migration of species to colder areas. The concept of planetary boundaries was introduced to quantify humanity's impact on Earth.
Of the nine identified boundaries, five have been crossed: biosphere integrity, climate change, chemical pollution, destruction of wild habitats and the nitrogen cycle are thought to have passed the safe threshold. As of 2018, no country meets the basic needs of its population without transgressing planetary boundaries. It is thought possible to provide all basic physical needs globally within sustainable levels of resource use.
Cultural and historical viewpoint
Human cultures have developed many views of the planet. The standard astronomical symbols of Earth are a quartered circle, , representing the four corners of the world, and a globus cruciger, . Earth is sometimes personified as a deity. In many cultures it is a mother goddess that is also the primary fertility deity. Creation myths in many religions involve the creation of Earth by a supernatural deity or deities. The Gaia hypothesis, developed in the mid-20th century, treated Earth's environments and life as a single self-regulating organism, leading to broad stabilization of the conditions of habitability. Images of Earth taken from space, particularly during the Apollo program, have been credited with altering the way that people viewed the planet that they lived on, called the overview effect, emphasizing its beauty, uniqueness and apparent fragility. In particular, this caused a realization of the scope of effects from human activity on Earth's environment. Enabled by science, particularly Earth observation, humans have started to take action on environmental issues globally, acknowledging the impact of humans and the interconnectedness of Earth's environments. Scientific investigation has resulted in several culturally transformative shifts in people's view of the planet. Initial belief in a flat Earth was gradually displaced in Ancient Greece by the idea of a spherical Earth, which was attributed to both the philosophers Pythagoras and Parmenides. Earth was generally believed to be the center of the universe until the 16th century, when scientists first concluded that it was a moving object, one of the planets of the Solar System. It was only during the 19th century that geologists realized Earth's age was at least many millions of years. Lord Kelvin used thermodynamics to estimate the age of Earth to be between 20 million and 400 million years in 1864, sparking a vigorous debate on the subject; it was only when radioactivity and radioactive dating were discovered in the late 19th and early 20th centuries that a reliable mechanism for determining Earth's age was established, proving the planet to be billions of years old.
See also
Notes
References
External links
Earth – Profile – Solar System Exploration – NASA
Earth Observatory – NASA
Earth – Videos – International Space Station: Video (01:02) on YouTube – Earth (time-lapse)
Video (00:27) on YouTube – Earth and auroras (time-lapse)
Google Earth 3D, interactive map
Interactive 3D visualization of the Sun, Earth and Moon system
GPlates Portal (University of Sydney)
Earth
[ "Astronomy" ]
11,379
[ "Outer space", "Solar System" ]
9,232
https://en.wikipedia.org/wiki/Eiffel%20Tower
The Eiffel Tower ( ; ) is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower from 1887 to 1889. Locally nicknamed "La dame de fer" (French for "Iron Lady"), it was constructed as the centerpiece of the 1889 World's Fair, and to crown the centennial anniversary of the French Revolution. Although initially criticised by some of France's leading artists and intellectuals for its design, it has since become a global cultural icon of France and one of the most recognisable structures in the world. The tower received 5,889,000 visitors in 2022. The Eiffel Tower is the most visited monument with an entrance fee in the world: 6.91 million people ascended it in 2015. It was designated a monument historique in 1964, and was named part of a UNESCO World Heritage Site ("Paris, Banks of the Seine") in 1991. The tower is tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by . Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. The tower has three levels for visitors, with restaurants on the first and second levels. The top level's upper platform is above the ground—the highest observation deck accessible to the public in the European Union. Tickets can be purchased to ascend by stairs or lift to the first and second levels. The climb from ground level to the first level is over 300 steps, as is the climb from the first level to the second, making the entire ascent a 600-step climb. Although there is a staircase to the top level, it is usually accessible only by lift. On this top, third level is a private apartment built for Gustave Eiffel's personal use. He decorated it with furniture by Jean Lachaise and invited friends such as Thomas Edison.
History
Origin
The design of the Eiffel Tower is attributed to Maurice Koechlin and Émile Nouguier, two senior engineers working for the Compagnie des Établissements Eiffel. It was envisioned after discussion about a suitable centerpiece for the proposed 1889 Exposition Universelle, a world's fair to celebrate the centennial of the French Revolution. In May 1884, working at home, Koechlin made a sketch of their idea, described by him as "a great pylon, consisting of four lattice girders standing apart at the base and coming together at the top, joined together by metal trusses at regular intervals". Eiffel initially showed little enthusiasm, but he did approve further study, and the two engineers then asked Stephen Sauvestre, the head of the company's architectural department, to contribute to the design. Sauvestre added decorative arches to the base of the tower, a glass pavilion to the first level, and other embellishments. The new version gained Eiffel's support: he bought the rights to the patent on the design which Koechlin, Nouguier, and Sauvestre had taken out, and the design was put on display at the Exhibition of Decorative Arts in the autumn of 1884 under the company name.
On 30 March 1885, Eiffel presented his plans to the ; after discussing the technical problems and emphasising the practical uses of the tower, he finished his talk by saying what the tower would symbolise. Little progress was made until 1886, when Jules Grévy was re-elected as president of France and Édouard Lockroy was appointed as minister for trade. A budget for the exposition was passed and, on 1 May, Lockroy announced an alteration to the terms of the open competition being held for a centrepiece to the exposition, which effectively made the selection of Eiffel's design a foregone conclusion, as entries had to include a study for a four-sided metal tower on the Champ de Mars. (A 300-metre tower was then considered a herculean engineering effort.) On 12 May, a commission was set up to examine Eiffel's scheme and its rivals, which, a month later, decided that all the proposals except Eiffel's were either impractical or lacking in details. After some debate about the exact location of the tower, a contract was signed on 8 January 1887. Eiffel signed it acting in his own capacity rather than as the representative of his company, the contract granting him 1.5 million francs toward the construction costs: less than a quarter of the estimated 6.5 million francs. Eiffel was to receive all income from the commercial exploitation of the tower during the exhibition and for the next 20 years. He later established a separate company to manage the tower, putting up half the necessary capital himself. A French bank, the Crédit Industriel et Commercial (CIC), helped finance the construction of the Eiffel Tower. During the period of the tower's construction, the CIC was acquiring funds from predatory loans to the National Bank of Haiti, some of which went towards the financing of the tower. These loans were connected to an indemnity controversy that saw France force Haiti's government to financially compensate French slaveowners for lost income as a result of the Haitian Revolution, and required Haiti to pay the CIC and its partner nearly half of all taxes collected on exports, "effectively choking off the nation's primary source of income". According to The New York Times, "[at] a time when the [CIC] was helping finance one of the world's best-known landmarks, the Eiffel Tower, as a monument to French liberty, it was choking Haiti's economy, taking much of the young nation's income back to Paris and impairing its ability to start schools, hospitals and the other building blocks of an independent country."
Artists' protest
The proposed tower had been a subject of controversy, drawing criticism from those who did not believe it was feasible and those who objected on artistic grounds. Prior to the Eiffel Tower's construction, no structure had ever been built to a height of 300 m, or even 200 m for that matter, and many people believed it was impossible. These objections were an expression of a long-standing debate in France about the relationship between architecture and engineering. It came to a head as work began at the Champ de Mars: a "Committee of Three Hundred" (one member for each metre of the tower's height) was formed, led by the prominent architect Charles Garnier and including some of the most important figures of the arts, such as William-Adolphe Bouguereau, Guy de Maupassant, Charles Gounod and Jules Massenet.
A petition called "Artists against the Eiffel Tower" was sent to the Minister of Works and Commissioner for the Exposition, Adolphe Alphand, and it was published by Le Temps on 14 February 1887. Gustave Eiffel responded to these criticisms by comparing his tower to the Egyptian pyramids: "My tower will be the tallest edifice ever erected by man. Will it not also be grandiose in its way? And why would something admirable in Egypt become hideous and ridiculous in Paris?" These criticisms were also dealt with by Édouard Lockroy in a letter of support written to Alphand, sardonically saying, "Judging by the stately swell of the rhythms, the beauty of the metaphors, the elegance of its delicate and precise style, one can tell this protest is the result of collaboration of the most famous writers and poets of our time", and he explained that the protest was irrelevant since the project had been decided upon months before, and construction on the tower was already under way. Garnier was a member of the Tower Commission that had examined the various proposals, and had raised no objection. Eiffel pointed out to a journalist that it was premature to judge the effect of the tower solely on the basis of the drawings, that the Champ de Mars was distant enough from the monuments mentioned in the protest for there to be little risk of the tower overwhelming them, and putting the aesthetic argument for the tower: "Do not the laws of natural forces always conform to the secret laws of harmony?" Some of the protesters changed their minds when the tower was built; others remained unconvinced. Guy de Maupassant supposedly ate lunch in the tower's restaurant every day because it was the one place in Paris where the tower was not visible. By 1918, it had become a symbol of Paris and of France after Guillaume Apollinaire wrote a nationalist poem in the shape of the tower (a calligram) to express his feelings about the war against Germany. Today, it is widely considered to be a remarkable piece of structural art, and is often featured in films and literature.
Construction
Work on the foundations started on 28 January 1887. Those for the east and south legs were straightforward, with each leg resting on four concrete slabs, one for each of the principal girders of each leg. The west and north legs, being closer to the river Seine, were more complicated: each slab needed two piles installed by using compressed-air caissons long and in diameter driven to a depth of to support the concrete slabs, which were thick. Each of these slabs supported a block of limestone with an inclined top to bear a supporting shoe for the ironwork. Each shoe was anchored to the stonework by a pair of bolts in diameter and long. The foundations were completed on 30 June, and the erection of the ironwork began. The visible work on-site was complemented by the enormous amount of exacting preparatory work that took place behind the scenes: the drawing office produced 1,700 general drawings and 3,629 detailed drawings of the 18,038 different parts needed. The task of drawing the components was complicated by the complex angles involved in the design and the degree of precision required: the position of rivet holes was specified to within and angles worked out to one second of arc.
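To get a feel for that angular tolerance, one second of arc can be converted into a linear offset over a member of given length. The linear rivet-hole tolerance itself is elided above, so the member lengths here are purely illustrative:

import math

# Scale of a one-arcsecond misalignment over an assumed member length.
ARCSEC_RAD = math.radians(1 / 3600)   # one second of arc in radians

for length_m in (1.0, 10.0, 300.0):
    offset_mm = ARCSEC_RAD * length_m * 1000
    print(f"{length_m:6.1f} m member -> {offset_mm:.3f} mm end offset")
# Even over the full 300 m height, one arcsecond corresponds to ~1.5 mm.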
The finished components, some already riveted together into sub-assemblies, arrived on horse-drawn carts from a factory in the nearby Parisian suburb of Levallois-Perret and were first bolted together, with the bolts being replaced with rivets as construction progressed. No drilling or shaping was done on site: if any part did not fit, it was sent back to the factory for alteration. In all, 18,038 pieces were joined using 2.5 million rivets. At first, the legs were constructed as cantilevers, but about halfway to the first level construction was paused to create a substantial timber scaffold. This renewed concerns about the structural integrity of the tower, and sensational headlines such as "Eiffel Suicide!" and "Gustave Eiffel Has Gone Mad: He Has Been Confined in an Asylum" appeared in the tabloid press. Several famous artists of the time, including Charles Garnier and Alexandre Dumas, thought poorly of the newly built tower. Garnier called it a "truly tragic street lamp", and Dumas likened it to an "odious shadow of the odious column built of rivets and iron plates, extending like a black blot". There were multiple protests over its style and over the reasoning behind placing it in the middle of Paris. At this stage, a small "creeper" crane designed to move up the tower was installed in each leg. They made use of the guides for the lifts which were to be fitted in the four legs. The critical stage of joining the legs at the first level was completed by the end of March 1888. Although the metalwork had been prepared with the utmost attention to detail, provision had been made to carry out small adjustments to precisely align the legs; hydraulic jacks were fitted to the shoes at the base of each leg, capable of exerting a force of 800 tonnes, and the legs were intentionally constructed at a slightly steeper angle than necessary, being supported by sandboxes on the scaffold. Although construction involved 300 on-site employees, due to Eiffel's safety precautions and the use of movable gangways, guardrails and screens, only one person died.
Inauguration and the 1889 exposition
The main structural work was completed at the end of March 1889 and, on 31 March, Eiffel celebrated by leading a group of government officials, accompanied by representatives of the press, to the top of the tower. Because the lifts were not yet in operation, the ascent was made by foot, and took over an hour, with Eiffel stopping frequently to explain various features. Most of the party chose to stop at the lower levels, but a few, including the structural engineer, Émile Nouguier, the head of construction, Jean Compagnon, the President of the City Council, and reporters from Le Figaro and Le Monde Illustré, completed the ascent. At 2:35 pm, Eiffel hoisted a large Tricolour to the accompaniment of a 25-gun salute fired at the first level. There was still work to be done, particularly on the lifts and facilities, and the tower was not opened to the public until nine days after the opening of the exposition on 6 May; even then, the lifts had not been completed. The tower was an instant success with the public, and nearly 30,000 visitors made the 1,710-step climb to the top before the lifts entered service on 26 May. Tickets cost 2 francs for the first level, 3 for the second, and 5 for the top, with half-price admission on Sundays, and by the end of the exhibition there had been 1,896,987 visitors.
After dark, the tower was lit by hundreds of gas lamps, and a beacon sent out three beams of red, white and blue light. Two searchlights mounted on a circular rail were used to illuminate various buildings of the exposition. The daily opening and closing of the exposition were announced by a cannon at the top. On the second level, the French newspaper Le Figaro had an office and a printing press, where a special souvenir edition, Le Figaro de la Tour, was made. At the top, there was a post office where visitors could send letters and postcards as a memento of their visit. Graffitists were also catered for: sheets of paper were mounted on the walls each day for visitors to record their impressions of the tower. Gustave Eiffel described the collection of responses as "truly curious". Famous visitors to the tower included the Prince of Wales, Sarah Bernhardt, "Buffalo Bill" Cody (his Wild West show was an attraction at the exposition) and Thomas Edison. Eiffel invited Edison to his private apartment at the top of the tower, where Edison presented him with one of his phonographs, a new invention and one of the many highlights of the exposition. Edison signed the guestbook with a message on September 10, 1889. Eiffel made use of his apartment at the top of the tower to carry out meteorological observations, and also used the tower to perform experiments on the action of air resistance on falling bodies.
Subsequent events
Eiffel had a permit for the tower to stand for 20 years. It was to be dismantled in 1909, when its ownership would revert to the City of Paris. The city had planned to tear it down (part of the original contest rules for designing a tower was that it should be easy to dismantle) but as the tower proved to be valuable for many innovations in the early 20th century, particularly radio telegraphy, it was allowed to remain after the expiry of the permit, and from 1910 it also became part of the International Time Service. For the 1900 Exposition Universelle, the lifts in the east and west legs were replaced by lifts running as far as the second level constructed by the French firm Fives-Lille. These had a compensating mechanism to keep the floor level as the angle of ascent changed at the first level, and were driven by a similar hydraulic mechanism as the Otis lifts, although this was situated at the base of the tower. Hydraulic pressure was provided by pressurised accumulators located near this mechanism. At the same time the lift in the north pillar was removed and replaced by a staircase to the first level. The layout of both first and second levels was modified, as was the space available to visitors on the second level. The original lift in the south pillar was removed 13 years later. On 19 October 1901, Alberto Santos-Dumont, flying his No.6 airship, won a 100,000-franc prize offered by Henri Deutsch de la Meurthe for the first person to make a flight from St. Cloud to the Eiffel Tower and back in less than half an hour. In 1910, Father Theodor Wulf measured radiant energy at the top and bottom of the tower. He found more at the top than expected, incidentally discovering what are known today as cosmic rays. Two years later, on 4 February 1912, Austrian tailor Franz Reichelt died after jumping from the first level of the tower (a height of 57 m) to demonstrate his parachute design.
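Eiffel's falling-body experiments, mentioned above, concerned exactly the physics sketched below: free fall modified by air resistance. This minimal numerical integration assumes a 115 m drop and an illustrative mass and drag constant; none of these parameters are values from the text:

# Sketch: a body dropped with quadratic air drag, integrated in small steps.
G = 9.81          # gravitational acceleration, m/s^2
H = 115.0         # assumed drop height, m (illustrative)
M = 0.5           # assumed mass, kg (illustrative)
K = 0.002         # assumed drag constant, kg/m (F_drag = K * v^2)
DT = 0.001        # integration time step, s

t = v = fallen = 0.0
while fallen < H:
    a = G - (K / M) * v * v   # net acceleration; drag opposes the motion
    v += a * DT
    fallen += v * DT
    t += DT
print(f"Fall time {t:.2f} s, impact speed {v:.1f} m/s")
print(f"(vacuum: {(2 * H / G) ** 0.5:.2f} s, {(2 * G * H) ** 0.5:.1f} m/s)")

Comparing the two printed lines shows how drag lengthens the fall and lowers the impact speed, which is the kind of difference Eiffel set out to measure.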
In 1914, at the outbreak of World War I, a radio transmitter located in the tower jammed German radio communications, seriously hindering their advance on Paris and contributing to the Allied victory at the First Battle of the Marne. During World War I, the Eiffel Tower's wireless station played a crucial role in intercepting enemy communications from Berlin. In 1914, French forces successfully launched a counter-attack during the Battle of the Marne after gaining critical intelligence on the German Army's movements. In 1917, the station intercepted a coded message between Germany and Spain that referenced 'Operative H-21.' This information contributed to the arrest, conviction, and execution of Mata Hari, the famous spy accused of working for Germany. From 1925 to 1934, illuminated signs for Citroën adorned three of the tower's sides, making it the tallest advertising space in the world at the time. In April 1935, the tower was used to make experimental low-resolution television transmissions, using a shortwave transmitter of 200 watts power. On 17 November, an improved 180-line transmitter was installed. On two separate but related occasions in 1925, the con artist Victor Lustig "sold" the tower for scrap metal. A year later, in February 1926, pilot Leon Collet was killed trying to fly under the tower. His aircraft became entangled in an aerial belonging to a wireless station. A bust of Gustave Eiffel by Antoine Bourdelle was unveiled at the base of the north leg on 2 May 1929. In 1930, the tower lost the title of the world's tallest structure when the Chrysler Building in New York City was completed. In 1938, the decorative arcade around the first level was removed. Upon the German occupation of Paris in 1940, the lift cables were cut by the French. The tower was restricted to German visitors during the occupation and the lifts were not repaired until 1946. In 1940, German soldiers had to climb the tower to hoist a swastika-centered Reichskriegsflagge, but the flag was so large it blew away just a few hours later, and was replaced by a smaller one. When visiting Paris, Hitler chose to stay on the ground. When the Allies were nearing Paris in August 1944, Hitler ordered General Dietrich von Choltitz, the military governor of Paris, to demolish the tower along with the rest of the city. Von Choltitz disobeyed the order. On 25 August, before the Germans had been driven out of Paris, the German flag was replaced with a Tricolour by two men from the French Naval Museum, who narrowly beat three men led by Lucien Sarniguet, who had lowered the Tricolour on 13 June 1940 when Paris fell to the Germans. A fire started in the television transmitter on 3 January 1956, damaging the top of the tower. Repairs took a year, and in 1957, the present radio aerial was added to the top. In 1964, the Eiffel Tower was officially declared to be a historical monument by the Minister of Cultural Affairs, André Malraux. A year later, an additional lift system was installed in the north pillar. According to interviews, in 1967, Montreal Mayor Jean Drapeau negotiated a secret agreement with Charles de Gaulle for the tower to be dismantled and temporarily relocated to Montreal to serve as a landmark and tourist attraction during Expo 67. The plan was allegedly vetoed by the company operating the tower out of fear that the French government could refuse permission for the tower to be restored in its original location. 
In 1982, the original lifts between the second and third levels were replaced after 97 years in service. These had been closed to the public between November and March because the water in the hydraulic drive tended to freeze. The new cars operate in pairs, with one counterbalancing the other, and perform the journey in one stage, reducing the journey time from eight minutes to less than two minutes. At the same time, two new emergency staircases were installed, replacing the original spiral staircases. In 1983, the south pillar was fitted with an electrically driven Otis lift to serve the Jules Verne restaurant. The Fives-Lille lifts in the east and west legs, fitted in 1899, were extensively refurbished in 1986. The cars were replaced, and a computer system was installed to completely automate the lifts. The motive power was moved from the water hydraulic system to a new electrically driven oil-filled hydraulic system, and the original water hydraulics were retained solely as a counterbalance system. A service lift was added to the south pillar for moving small loads and maintenance personnel three years later. Robert Moriarty flew a Beechcraft Bonanza under the tower on 31 March 1984. In 1987, A. J. Hackett made one of his first bungee jumps from the top of the Eiffel Tower, using a special cord he had helped develop. Hackett was arrested by the police. On 27 October 1991, Thierry Devaux, along with mountain guide Hervé Calvayrac, performed a series of acrobatic figures while bungee jumping from the second floor of the tower. Facing the Champ de Mars, Devaux used an electric winch between figures to go back up to the second floor. When firemen arrived, he stopped after the sixth jump. For its "Countdown to the Year 2000" celebration on 31 December 1999, flashing lights and high-powered searchlights were installed on the tower. During the last three minutes of the year, the lights were turned on starting from the base of the tower and continuing to the top to welcome 2000 with a huge fireworks show. An exhibition above a cafeteria on the first floor commemorates this event. The searchlights on top of the tower made it a beacon in Paris's night sky, and 20,000 flashing bulbs gave the tower a sparkly appearance for five minutes every hour on the hour. The lights sparkled blue for several nights to herald the new millennium on 31 December 2000. The sparkly lighting continued for 18 months until July 2001. The sparkling lights were turned on again on 21 June 2003, and the display was planned to last for 10 years before they needed replacing. The tower received its 200,000,000th guest on 28 November 2002. The tower has operated at its maximum capacity of about 7 million visitors per year since 2003. In 2004, the Eiffel Tower began hosting a seasonal ice rink on the first level. A glass floor was installed on the first level during the 2014 refurbishment.
Design
Material
The puddle iron (wrought iron) of the Eiffel Tower weighs 7,300 tonnes, and the addition of lifts, shops and antennae have brought the total weight to approximately 10,100 tonnes. As a demonstration of the economy of design, if the 7,300 tonnes of metal in the structure were melted down, it would fill the square base, on each side, to a depth of only assuming the density of the metal to be 7.8 tonnes per cubic metre. Additionally, a cubic box surrounding the tower (324 m × 125 m × 125 m) would contain tonnes of air, weighing almost as much as the iron itself.
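Both of these often-quoted claims can be checked from the figures given in the text (iron mass, base dimensions, iron density); only the sea-level air density is a standard assumed value:

# Arithmetic check of the two claims above.
IRON_T = 7300.0            # tonnes of puddle iron (from text)
BASE_SIDE = 125.0          # m, square base side (from text)
RHO_IRON = 7.8             # t/m^3 (from text)
RHO_AIR = 1.225e-3         # t/m^3 at sea level (assumed standard value)

depth = IRON_T / (RHO_IRON * BASE_SIDE ** 2)
air_t = RHO_AIR * 324 * BASE_SIDE ** 2
print(f"Melted-down depth: {depth * 100:.1f} cm")    # ~6 cm
print(f"Air in the box: {air_t:,.0f} t vs {IRON_T:,.0f} t of iron")

The melted iron would indeed form a layer only about 6 cm deep, and the roughly 6,200 tonnes of air in the bounding box is comparable to the 7,300 tonnes of iron.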
Depending on the ambient temperature, the top of the tower may shift away from the sun by up to due to thermal expansion of the metal on the side facing the sun.
Wind and weather considerations
When it was built, Eiffel was accused of trying to create something artistic with no regard to the principles of engineering. However, Eiffel and his team were experienced bridge builders. In an interview with the newspaper Le Temps published on 14 February 1887, Eiffel defended the design. He used graphical methods to determine the strength of the tower and empirical evidence to account for the effects of wind, rather than a mathematical formula. Close examination of the tower reveals a basically exponential shape. All parts of the tower were overdesigned to ensure maximum resistance to wind forces. The top half was assumed to have no gaps in the latticework. Since its completion, various mathematical hypotheses have been put forward in an attempt to explain the success of the design. One, devised in 2004 after letters sent by Eiffel to the French Society of Civil Engineers in 1885 were translated into English, described the design in terms of a non-linear integral equation based on counteracting the wind pressure on any point of the tower with the tension between the construction elements at that point. The Eiffel Tower sways by up to in the wind.
Floors
Ground floor
The four columns of the tower each house access stairs and elevators to the first two floors, while at the south column only the elevator to the second floor restaurant is publicly accessible.
1st floor
The first floor is publicly accessible by elevator or stairs. When originally built, the first level contained three restaurants—one French, one Russian and one Flemish—and an "Anglo-American Bar". After the exposition closed, the Flemish restaurant was converted to a 250-seat theatre. Today there is the restaurant and other facilities.
2nd floor
The second floor is publicly accessible by elevator or stairs and has a restaurant called Le Jules Verne, a gourmet restaurant with its own lift going up from the south column to the second level. This restaurant has one star in the Michelin Red Guide. It was run by the multi-Michelin star chef Alain Ducasse from 2007 to 2017. As of May 2019, it is managed by three-star chef Frédéric Anton. It owes its name to the famous science-fiction writer Jules Verne.
3rd floor
The third floor is the top floor, publicly accessible by elevator. Originally there were laboratories for various experiments, and a small apartment reserved for Gustave Eiffel to entertain guests, which is now open to the public, complete with period decorations and lifelike mannequins of Eiffel and some of his notable guests. From 1937 until 1981, there was a restaurant near the top of the tower. It was removed due to structural considerations; engineers had determined it was too heavy and was causing the tower to sag. This restaurant was sold to an American restaurateur and transported to New York and then New Orleans. It was rebuilt on the edge of New Orleans' Garden District as a restaurant and later event hall. Today there is a champagne bar.
Lifts
The arrangement of the lifts has been changed several times during the tower's history. Given the elasticity of the cables and the time taken to align the cars with the landings, each lift, in normal service, takes an average of 8 minutes and 50 seconds to do the round trip, spending an average of 1 minute and 15 seconds at each level. The average journey time between levels is 1 minute.
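These timings can be roughly reconciled under the assumption that a round trip comprises four travel legs (ground to first to second level and back) with a dwell at each of four stops; exactly which stops are counted is not specified in the text:

# Rough consistency check of the lift timings quoted above (stop-counting
# is an assumption, not stated in the text).
TRAVEL_S = 60          # average travel time between levels, s (from text)
DWELL_S = 75           # average dwell per stop, s (from text)
LEGS = 4               # assumed: ground -> 1st -> 2nd -> 1st -> ground
STOPS = 4              # assumed: one dwell per stop on the round trip

round_trip = LEGS * TRAVEL_S + STOPS * DWELL_S
print(f"Modelled round trip: {round_trip // 60} min {round_trip % 60:02d} s")
# -> 9 min 00 s, close to the quoted average of 8 min 50 s.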
The original hydraulic mechanism is on public display in a small museum at the base of the east and west legs. Because the mechanism requires frequent lubrication and maintenance, public access is often restricted. The rope mechanism of the north tower can be seen as visitors exit the lift. Equipping the tower with adequate and safe passenger lifts was a major concern of the government commission overseeing the Exposition. Although some visitors could be expected to climb to the first level, or even the second, lifts had to be the main means of ascent. Constructing lifts to reach the first level was done by making the legs wide enough at the bottom and so nearly straight that they could contain a straight track. A contract was given to the French company Roux, Combaluzier & Lepape for two lifts to be fitted in the east and west legs. Roux, Combaluzier & Lepape used a pair of endless chains with rigid, articulated links to which the car was attached. Lead weights on some links of the upper or return sections of the chains counterbalanced most of the car's weight. The car was pushed up from below, not pulled up from above: to prevent the chain buckling, it was enclosed in a conduit. At the bottom of the run, the chains passed around diameter sprockets. Smaller sprockets at the top guided the chains. Installing lifts to the second level was more of a challenge because a straight track was impossible. No French company wanted to undertake the work. The European branch of Otis Brothers & Company submitted a proposal, but this was rejected: the fair's charter ruled out the use of any foreign material in the construction of the tower. The deadline for bids was extended, but still no French companies put themselves forward, and eventually the contract was given to Otis in July 1887. Otis were confident they would eventually be given the contract and had already started creating designs. The car was divided into two superimposed compartments, each holding 25 passengers, with the lift operator occupying an exterior platform on the first level. Motive power was provided by an inclined hydraulic ram long and in diameter in the tower leg with a stroke of : this moved a carriage carrying six sheaves. Five fixed sheaves were mounted higher up the leg, producing an arrangement similar to a block and tackle but acting in reverse, multiplying the stroke of the piston rather than the force generated. The hydraulic pressure in the driving cylinder was produced by a large open reservoir on the second level. After being exhausted from the cylinder, the water was pumped back up to the reservoir by two pumps in the machinery room at the base of the south leg. This reservoir also provided power to the lifts to the first level. The original lifts for the journey between the second and third levels were supplied by Léon Edoux. A pair of hydraulic rams were mounted on the second level, reaching nearly halfway up to the third level. One lift car was mounted on top of these rams: cables ran from the top of this car up to sheaves on the third level and back down to a second car. Each car travelled only half the distance between the second and third levels and passengers were required to change lifts halfway by means of a short gangway. The 10-ton cars each held 65 passengers.
Engraved names
Gustave Eiffel had the names of 72 French scientists, engineers and mathematicians engraved on the tower in recognition of their contributions.
Eiffel chose this "invocation of science" because of his concern over the artists' protest. At the beginning of the 20th century, the engravings were painted over, but they were restored in 1986–87 by the , a company operating the tower.
Aesthetics
The tower is painted in three shades: lighter at the top, getting progressively darker towards the bottom to complement the Parisian sky. It was originally reddish brown; this changed in 1968 to a bronze colour known as "Eiffel Tower Brown". In what is expected to be a temporary change, the tower was painted gold in commemoration of the 2024 Summer Olympics in Paris. Following the 2024 Summer Olympics held in Paris, Mayor Anne Hidalgo proposed keeping the Olympic rings on the tower permanently. The rings, which measure wide and high, were initially installed for the Games and were scheduled for removal after the Paralympics. Hidalgo's decision faced criticism from the Eiffel family and some residents concerned about altering the protected monument. The original 30-ton rings would be replaced with lighter versions for long-term display. The only non-structural elements are the four decorative grill-work arches, added in Sauvestre's sketches, which served to make the tower look more substantial and to make a more impressive entrance to the exposition. A pop-culture movie cliché is that the view from a Parisian window always includes the tower. In reality, since zoning restrictions limit the height of most buildings in Paris to seven storeys, only a small number of tall buildings have a clear view of the tower.
Maintenance
Maintenance of the tower includes applying 60 tons of paint every 7 years to prevent it from rusting. The tower has been completely repainted at least 19 times since it was built, with the most recent being in 2010. Lead paint was still being used as recently as 2001 when the practice was stopped out of concern for the environment.
Communications
The tower has been used for making radio transmissions since the beginning of the 20th century. Until the 1950s, sets of aerial wires ran from the cupola to anchors on the Avenue de Suffren and Champ de Mars. These were connected to longwave transmitters in small bunkers. In 1909, a permanent underground radio centre was built near the south pillar, which still exists today. On 20 November 1913, the Paris Observatory, using the Eiffel Tower as an aerial, exchanged wireless signals with the United States Naval Observatory, which used an aerial in Arlington County, Virginia. The object of the transmissions was to measure the difference in longitude between Paris and Washington, D.C. Today, radio and digital television signals are transmitted from the Eiffel Tower.
FM radio
Digital television
A television antenna was first installed on the tower in 1957, increasing its height by . Work carried out in 2000 added a further , giving the current height of . Analogue television signals from the Eiffel Tower ceased on 8 March 2011.
Dimensions
Height changes
The pinnacle height of the Eiffel Tower has changed multiple times over the years as described in the chart below.
Taller structures
The Eiffel Tower was the world's tallest structure when completed in 1889, a distinction it retained until 1929 when the Chrysler Building in New York City was topped out. The tower also lost its standing as the world's tallest tower to the Tokyo Tower in 1958 but retains its status as the tallest freestanding (non-guyed) structure in France.
Lattice towers taller than the Eiffel Tower
Structures in France taller than the Eiffel Tower
Tourism
Transport
The nearest Paris Métro station is Bir-Hakeim and the nearest RER station is Champ de Mars-Tour Eiffel. The tower itself is located at the intersection of the quai Branly and the Pont d'Iéna.
Popularity
More than 300 million people have visited the tower since it was completed in 1889. In 2015, there were 6.91 million visitors. The tower is the most-visited paid monument in the world. An average of 25,000 people ascend the tower every day (which can result in long queues).
Illumination copyright
The tower and its image have been in the public domain since 1993, 70 years after Eiffel's death. In June 1990, a French court ruled that a special lighting display on the tower in 1989 to mark the tower's 100th anniversary was an "original visual creation" protected by copyright. The Court of Cassation, France's judicial court of last resort, upheld the ruling in March 1992. The (SETE) now considers any illumination of the tower to be a separate work of art that falls under copyright. As a result, the SNTE alleges that it is illegal to publish contemporary photographs of the lit tower at night without permission in France and some other countries for commercial use. For this reason, it is rare to find images or videos of the lit tower at night on stock image sites, and media outlets rarely broadcast images or videos of it. The imposition of copyright has been controversial. The Director of Documentation for what was then called the (SNTE), Stéphane Dieu, commented in 2005: "It is really just a way to manage commercial use of the image, so that it isn't used in ways [of which] we don't approve". SNTE made over €1 million from copyright fees in 2002. However, it could also be used to restrict the publication of tourist photographs of the tower at night, as well as hindering non-profit and semi-commercial publication of images of the illuminated tower. The copyright claim itself has never been tested in courts to date, according to a 2014 article in the Art Law Journal, and there has never been an attempt to track down millions of people who have posted and shared their images of the illuminated tower on the Internet worldwide. However, the article adds that commercial uses of such images, like in a magazine, on a film poster, or on product packaging, may require prior permission. French doctrine and jurisprudence allows pictures incorporating a copyrighted work as long as their presence is incidental or accessory to the subject being represented, a reasoning akin to the de minimis rule. Therefore, SETE may be unable to claim copyright on photographs of Paris which happen to include the lit tower.
Replicas
As one of the most famous landmarks in the world, the Eiffel Tower has been the inspiration for the creation of many replicas and similar towers. An early example is Blackpool Tower in England. The mayor of Blackpool, Sir John Bickerstaffe, was so impressed on seeing the Eiffel Tower at the 1889 exposition that he commissioned a similar tower to be built in his town. It opened in 1894 and is tall. Tokyo Tower in Japan, built as a communications tower in 1958, was also inspired by the Eiffel Tower. The Petřín Lookout Tower in Prague is also well known.
There are various scale models of the tower in the United States, including a half-scale version at the Paris Las Vegas hotel in Nevada, one in Paris, Texas, built in 1993, and two 1:3 scale models at the Kings Island (Mason, Ohio) and Kings Dominion (Virginia) amusement parks, opened in 1972 and 1975 respectively. Two 1:3 scale models can be found in China, one in Durango, Mexico (donated by the local French community), and several across Europe. In 2011, the TV show Pricing the Priceless on the National Geographic Channel speculated that a full-size replica of the tower would cost approximately US$480 million to build. This would be more than ten times the cost of the original (nearly 8 million in 1890 francs; around US$40 million in 2018 dollars).
See also
List of tallest buildings and structures in the Paris region
List of tallest buildings and structures
List of tourist attractions in Paris
List of tallest towers
List of tallest freestanding structures
List of tallest freestanding steel structures
List of tallest structures built before the 20th century
List of transmission sites
List of the 72 names on the Eiffel Tower
Lattice tower
Eiffel Tower, 1909–1928 painting series by Robert Delaunay
Panorama of the Paris Exhibition No. 3 (1900), silent film depicting Paris and the Eiffel Tower
References
Notes
Bibliography
External links
List of radio services using the Eiffel Tower today
Eiffel Tower
[ "Engineering" ]
8,239
[ "Historic Civil Engineering Landmarks", "Architectural controversies", "Civil engineering", "Architecture" ]
9,236
https://en.wikipedia.org/wiki/Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation. The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment. In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow. All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today. Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science. Heredity Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype. The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. 
Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in their genotypes; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner. Sources of variation Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species. An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. One particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely. Mutation Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene. New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth. The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line. One example of the phenotypic effect of mutations is seen in wild boar piglets, which are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. Mutations in the melanocortin 1 receptor (MC1R) disrupt this pattern: the majority of pig breeds carry MC1R mutations that disrupt the wild-type colour, with different mutations causing dominant black colouring. Sex and recombination In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution. The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment.
Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial. Gene flow Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as a bacterium that acquires resistance genes can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea. Epigenetics Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis. Evolutionary forces From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias. Natural selection Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population.
It embodies three principles: (1) variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation); (2) different traits confer different rates of survival and reproduction (differential fitness); and (3) these traits can be passed from generation to generation (heritability of fitness). More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking. The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness. If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in that allele likely becoming rarer; it is "selected against." Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. Nevertheless, the reactivation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms. Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity.
This would, for example, cause organisms to eventually have a similar height. Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection. Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism is genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation. Genetic drift Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles. According to the neutral theory of molecular evolution, most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
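The sampling-error view of drift described above can be made concrete with a short simulation. The following is a minimal sketch, not part of the article: it implements the standard Wright-Fisher model for one neutral, biallelic locus in a diploid population, with the population sizes and starting frequency chosen arbitrarily for illustration.

```python
import random

def wright_fisher(n_individuals, p0, max_gens=100_000, seed=1):
    """Neutral Wright-Fisher drift at a single biallelic locus.

    Each generation, the 2N gene copies of the next generation are
    drawn binomially at the current allele frequency -- pure sampling
    error, with no selection acting on the alleles.
    """
    rng = random.Random(seed)
    two_n = 2 * n_individuals            # diploid population: 2N gene copies
    count = round(two_n * p0)            # initial copies of the tracked allele
    for gen in range(max_gens):
        if count in (0, two_n):          # allele lost or fixed: drift halts
            return gen, count / two_n
        p = count / two_n
        count = sum(rng.random() < p for _ in range(two_n))
    return max_gens, count / two_n       # still segregating (unlikely)

# Fixation or loss arrives much sooner in smaller populations.
for n in (10, 100, 1000):
    gens, freq = wright_fisher(n, p0=0.5)
    outcome = "fixed" if freq == 1.0 else "lost" if freq == 0.0 else f"at {freq:.2f}"
    print(f"N={n:4d}: allele {outcome} after {gens} generations")
```

Runs of this sketch reach fixation or loss within tens of generations for N = 10 but typically thousands for N = 1000, anticipating the point below that fixation is more rapid in smaller populations.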
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not the absolute number of individuals in a population but rather a measure known as the effective population size. The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population. It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research. Mutation bias Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution. Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature. For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size. However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation. Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates. Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation. Genetic hitchhiking Recombination allows alleles on the same strand of DNA to become separated.
However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium (a small worked example appears at the end of this passage). A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft, which arises because some neutral genes are genetically linked to others that are under selection, can be partially captured by an appropriate effective population size. Sexual selection A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits. Natural outcomes Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
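Returning to linkage disequilibrium as defined earlier in this passage: the standard measure is the coefficient D, the difference between the observed frequency of a two-locus haplotype and the frequency expected if the alleles associated at random. A small worked sketch, with haplotype counts invented purely for illustration:

```python
# Hypothetical counts of the four haplotypes at two linked loci (A/a and B/b).
counts = {"AB": 50, "Ab": 10, "aB": 10, "ab": 30}
total = sum(counts.values())

p_hap = counts["AB"] / total                 # observed AB haplotype frequency
p_a = (counts["AB"] + counts["Ab"]) / total  # frequency of allele A
p_b = (counts["AB"] + counts["aB"]) / total  # frequency of allele B

# D = 0 means linkage equilibrium (random association between the loci);
# D != 0 means the alleles co-occur more or less often than chance predicts.
D = p_hap - p_a * p_b
print(f"p(A)={p_a:.2f}  p(B)={p_b:.2f}  p(AB)={p_hap:.2f}  D={D:.2f}")
# Expected p(AB) = 0.6 * 0.6 = 0.36; observed 0.50; so D = 0.14.
```

Recombination erodes such non-random associations generation by generation, which is why only alleles that sit close together on a chromosome tend to hitchhike through a selective sweep.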
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time. Adaptation Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, for example the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky: (1) adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats; (2) adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats; and (3) an adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing. Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability). Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor.
However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology. During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes. However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes. An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes. Coevolution Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake. 
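The cycle of selection and response in such an arms race can be caricatured in a few lines of code. The sketch below is entirely hypothetical and not from the article: each trait is reduced to a single number, and the invented update rule simply assumes that whichever species currently lags experiences the stronger directional selection, so toxicity and resistance ratchet upward in alternation, as in the newt and garter snake example above.

```python
import random

rng = random.Random(42)
toxicity, resistance = 1.0, 1.0      # arbitrary starting trait values

for generation in range(1, 11):
    step = abs(rng.gauss(0.0, 0.3))  # mutation supplies heritable variation
    # The currently "losing" species is under the stronger selection,
    # so its mean trait value is the one that increases this generation.
    if toxicity <= resistance:
        toxicity += step             # resistant snakes favour more toxic newts
    else:
        resistance += step           # toxic newts favour more resistant snakes
    print(f"gen {generation:2d}: toxicity={toxicity:.2f}  resistance={resistance:.2f}")
```

Printed runs show the two trait values leapfrogging one another, a toy version of the reciprocal escalation described above.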
Cooperation Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme form of cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system. Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer. Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms. Speciation Speciation is the process where a species diverges into two or more descendant species. There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily to sexually reproducing organisms while others lend themselves better to asexual organisms. Despite the diversity of species concepts, they can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy: for example, genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species. Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules.
Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example. Speciation has been observed multiple times both under controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed. The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change. The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance. Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve. One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair.
However, it is more common in plants because plants often double their number of chromosomes to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms. Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils. Extinction Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid-21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently with only one-thousandth of 1% described. The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction.
The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors. Applications Concepts and models used in evolutionary biology, such as natural selection, have many applications. Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution. Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation. Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level. In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programs. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems. A minimal sketch of such an algorithm is given at the end of this passage. Evolutionary history of life Origin of life The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era, after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe."
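To make the evolutionary-algorithm idea from the Applications passage above concrete, here is the promised minimal sketch of a genetic algorithm. Everything in it is invented for illustration: the string-matching fitness function, population size, and mutation rate are arbitrary choices, not drawn from the article or from any particular library.

```python
import random

rng = random.Random(0)
TARGET = "EVOLUTION"                       # toy optimisation goal
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Number of positions matching the target (differential fitness)."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Each position is replaced by a random letter with probability `rate`."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    """Single-point recombination of two parent strings."""
    point = rng.randrange(1, len(TARGET))
    return a[:point] + b[point:]

# Random initial population (heritable variation).
population = ["".join(rng.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"target reached in generation {generation}")
        break
    parents = population[:20]              # truncation selection
    population = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                  for _ in range(100)]
else:
    print("best after 1000 generations:", max(population, key=fitness))
```

The loop mirrors the ingredients named at the start of the article: variation (mutation and recombination), differential fitness (the sort and the choice of parents), and heritability (offspring strings are built from parent strings).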
In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described. Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells. Common descent All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree. Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned. Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry. More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed. Evolution of life Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years.
Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants. The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single-celled form to one of many cells. Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis. About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes. History of evolutionary thought Classical antiquity The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things). Middle Ages In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be. A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".
Pre-Darwinian The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan. Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin. Darwinian revolution The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. 
Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.

Othniel C. Marsh, who held the first professorship of paleontology in the United States, was the first to provide solid fossil evidence to support Darwin's theory of evolution, by unearthing the ancestors of the modern horse. In 1877, Marsh delivered an influential speech before the annual meeting of the American Association for the Advancement of Science, providing a demonstrative argument for evolution. For the first time, Marsh traced the evolution of vertebrates from fish all the way through humans, listing a wealth of fossil examples of past life forms. The significance of this speech was immediately recognized by the scientific community, and it was printed in its entirety in several scientific journals. In 1880, Marsh caught the attention of the scientific world with the publication of Odontornithes: a Monograph on Extinct Birds of North America, which included his discoveries of birds with teeth. These skeletons helped bridge the gap between dinosaurs and birds, and provided invaluable support for Darwin's theory of evolution. Darwin wrote to Marsh saying, "Your work on these old birds & on the many fossil animals of N. America has afforded the best support to the theory of evolution, which has appeared within the last 20 years", that is, since the publication of On the Origin of Species. (Cianfaglione, Paul. "O.C. Marsh Odontornithes Monograph Still Relevant Today", Avian Musings: "going beyond the field mark", 20 Jul 2016.)

Pangenesis and heredity

The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells, which give rise to gametes (such as sperm and egg cells), and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and, when expressed, could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between those who allied with de Vries' mutationism and the biometricians, who defended Darwinian gradual evolution. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution onto a robust statistical footing.
The apparent conflict between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled.

The 'modern synthesis'

In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed across species in populations and through fossil transitions in palaeontology.

Further syntheses

Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations. The publication of the structure of DNA by James Watson and Francis Crick, with the contribution of Rosalind Franklin, in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky wrote that "nothing in biology makes sense except in the light of evolution", because evolution has brought the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet. One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and for evolvability.

Social and cultural responses

In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists. While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution: that humans share common ancestry with apes, and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education.
While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists. The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.

See also

Chronospecies
Evolution
[ "Biology" ]
12,934
[ "Evolutionary biology", "Biology theories" ]
9,256
https://en.wikipedia.org/wiki/Enigma%20machine
The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.

The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plaintext is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.

The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to decrypt a message.

Although Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.

History

The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II. Several Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depended on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need.

Breaking Enigma

Hans-Thilo Schmidt was a German who spied for the French, obtaining access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to Poland. Around December 1932, Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine.
Rejewski used the French-supplied material and the message traffic that took place in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.

Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when the Germans added two more rotors that year, ten times as many bomby would have been needed to read the traffic.

On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered). In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.

Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote: "Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked.

During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort. Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.

The Abwehr used different versions of Enigma machines.
In November 1942, during Operation Torch, a machine was captured which had no plugboard and whose three rotors had been changed to rotate 11, 15, and 19 times rather than once every 26 letters, with a plate on the left acting as a fourth rotor. The Abwehr code had been broken on 8 December 1941 by Dilly Knox. Agents sent messages to the Abwehr in a simple code, which was then sent on using an Enigma machine. The simple codes were broken and helped break the daily Enigma cipher. This breaking of the code enabled the Double-Cross System to operate. From October 1944, the German Abwehr used the Schlüsselgerät 41 in limited quantities.

Design

Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press; and a series of lamps, one for each letter. These design features are the reason that the Enigma machine was originally referred to as the rotor-based cipher machine during its intellectual inception in 1915.

Electrical pathway

An electrical pathway is a route for current to travel; by varying these pathways, the Enigma machine scrambled messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on.

In detail (the numbered labels come from the article's original wiring diagram): current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.

The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. Each key depression causes rotation of at least the right-hand rotor and so changes the electrical pathway: current passes into the set of rotors, into and back out of the reflector, and out through the rotors again, with each rotor's internal wiring hard-wired from one side to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C.
This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route; eventually other rotors step with a key press as well.

Rotors

The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm in diameter made from Ebonite or Bakelite, with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, and the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet, typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant.

By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.

Each rotor can be set to one of 26 starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session; in modern terms it was a part of the initialization vector.

Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring. The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single turnover notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks. The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: at first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover.
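The rotor-as-permutation idea above can be illustrated with a minimal Python sketch (an illustration added here, not part of the original article). A rotor is modelled as a 26-letter wiring string; its rotational position shifts the mapping. The wiring shown is the commonly published wiring of rotor I, and the ring setting is ignored for simplicity.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
WIRING_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # commonly published rotor I wiring

def rotor_forward(letter, position):
    """Pass a letter through the rotor (right to left) at a given position."""
    # The contact that receives the signal is offset by the rotor's position.
    entry = (ALPHABET.index(letter) + position) % 26
    exit_contact = ALPHABET.index(WIRING_I[entry])
    # The signal leaves the rotor offset back by the same amount.
    return ALPHABET[(exit_contact - position) % 26]

for pos in range(3):
    print(pos, rotor_forward("A", pos))  # same input, different output as the rotor turns

At position 0 this is a fixed substitution (A passes through to E with this wiring); each step of the rotor yields a different substitution alphabet, which is what the stepping mechanism described below exploits.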
The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 positions made the machine perform identically to the three-rotor machine.

Stepping

To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.

Turnover

The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth, and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor, and similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.

The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring, which could be adjusted in relation to the core containing the interconnections; the point at which each ring caused the next wheel to move differed from rotor to rotor.

The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion.

With three wheels and only single notches in the first and second wheels, the machine had a period of 26 × 25 × 26 = 16,900 (not 26 × 26 × 26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.
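The 16,900-step period quoted above can be checked with a short simulation. The Python sketch below (an illustration, not code from any historical source) implements the pawl-and-ratchet logic for three single-notch rotors, including the double-step of the middle rotor; the notch positions chosen are arbitrary, since the period does not depend on where the notches sit.

def step(left, middle, right, notch_m=4, notch_r=16):
    # If the middle rotor sits at its notch, the left pawl engages:
    # the left rotor steps and the middle rotor steps again (double-step).
    if middle == notch_m:
        left = (left + 1) % 26
        middle = (middle + 1) % 26
    # Otherwise the middle rotor steps only when the right rotor is at its notch.
    elif right == notch_r:
        middle = (middle + 1) % 26
    # The right-hand rotor steps on every key press.
    return left, middle, (right + 1) % 26

state = (0, 0, 0)
for _ in range(20000):           # run long enough to be inside the cycle
    state = step(*state)
start, count = state, 0
while True:
    state = step(*state)
    count += 1
    if state == start:
        break
print(count)                     # 16900, i.e. 26 * 25 * 26

The middle rotor contributes only 25 positions to the period because the double-step forces it through its notch position and the following position on consecutive key presses.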
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.

A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel), which implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches differed for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D, it also allowed the internal wiring to be reconfigured.

Entry wheel

The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification.

Reflector

With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.

In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels. In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C, was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nicknamed Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.

Plugboard

The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator. It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had about 150 trillion possible settings (see below).
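That 150 trillion figure, and the full settings count given under Mathematical analysis below, follow from elementary counting. The short Python check below (an illustration added here, assuming the standard three-rotor Army/Air Force configuration with ten plugboard pairs) reproduces both numbers:

from math import factorial, perm

rotor_orders = perm(5, 3)       # ordered choice of 3 rotors from 5: 5 * 4 * 3 = 60
rotor_positions = 26 ** 3       # 17,576 combined rotor positions
# 10 plugboard pairs: choose which 6 letters stay unplugged, then pair the remaining 20.
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2**10)

print(plugboard)                                   # 150738274937250, about 150 trillion
print(rotor_orders * rotor_positions * plugboard)  # 158962555217826360000, about 2**67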
Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it. A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.

Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.

Accessories

Other features made various Enigma machines more secure or more convenient.

Schreibmax

Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.

Fernlesegerät

Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. The remote panel could also be connected afterwards, but that required, as with the Schreibmax, that the lamp cover and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.

Uhr

In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs.

Mathematical analysis

The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let $P$ denote the plugboard transformation, $U$ denote that of the reflector (so that $U = U^{-1}$), and $L$, $M$, $R$ denote those of the left, middle and right rotors respectively. Then the encryption $E$ can be expressed as

$$E = PRMLUL^{-1}M^{-1}R^{-1}P^{-1}.$$

After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor is rotated $i$ positions, the transformation becomes

$$E = P\,(\rho^{i} R \rho^{-i})\,MLUL^{-1}M^{-1}\,(\rho^{i} R \rho^{-i})^{-1}\,P^{-1},$$

where $\rho$ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as $j$ and $k$ rotations of $M$ and $L$.
The encryption transformation can then be described as

$$E = P\,(\rho^{i} R \rho^{-i})(\rho^{j} M \rho^{-j})(\rho^{k} L \rho^{-k})\,U\,(\rho^{k} L \rho^{-k})^{-1}(\rho^{j} M \rho^{-j})^{-1}(\rho^{i} R \rho^{-i})^{-1}\,P^{-1}.$$

Combining three rotors from a set of five, each of the three rotors settable to any of 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion, or about 67 bits):

Choice of 3 rotors from a set of 5, in order: 5 × 4 × 3 = 60
26 positions for each of the 3 rotors: 26 × 26 × 26 = 17,576
Plugboard settings: 26! / (6! × 10! × 2^10) = 150,738,274,937,250
Product of the above: 158,962,555,217,826,360,000

Operation

Basic operation

A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit, indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors, so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right-hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and, as long as all the settings of the deciphering machine were identical to those of the enciphering machine, for every key press the reverse substitution would occur and the plaintext message would emerge.

Details

In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services, and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.

An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine:

Wheel order (Walzenlage) – the choice of rotors and the order in which they are fitted.
Ring settings (Ringstellung) – the position of each alphabet ring relative to its rotor wiring.
Plug connections (Steckerverbindungen) – the pairs of letters in the plugboard that are connected together.
In very late versions, the wiring of the reconfigurable reflector.
Starting position of the rotors (Grundstellung) – chosen by the operator, which should be different for each message.

For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical.
Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 were as follows:

Wheel order: IV, II, V
Ring settings: 15, 23, 26
Plugboard connections: EJ OY IV AQ KW FX MT PS LU BD
Reconfigurable reflector wiring: IU AS DV GL FT OX EZ CH MR KN BQ PW
Indicator groups: lsa zbw vcj rxn

Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack.

Indicator

Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth) would enable an attack using a statistical procedure such as Friedman's Index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weakness and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible.

One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message setting, EIN in this example, and then type the plaintext of the message.

At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message.

This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw.
The message setting was encoded twice, resulting in a relation between the first and fourth, the second and fifth, and the third and sixth characters. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique".

During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double-encoded message settings.

This procedure was used by the Wehrmacht and Luftwaffe only. The Kriegsmarine procedures for sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, and date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key.

Additional details

The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as a full stop. Some punctuation marks were rendered differently in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ. The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA. The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters and counted the letters, whereas the Kriegsmarine used four-character groups and counted those groups. Frequently used names or words were varied as much as possible; words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters; longer messages were divided into several parts, each using a different message key.

Example enciphering process

The character substitutions performed by the Enigma machine as a whole can be expressed as a string of letters, with each position occupied by the character that will replace the character at the corresponding position in the alphabet.
For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as

LUSHQOXDMZNAIKFREPCYBWVGTJ

and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character, as in

D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ

Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. For example, the process of enciphering the first sentence of the main body of the famous "Dönitz message" to

RBBF PMHP HGCZ XTDY GAHG UFXG EWKB LKGJ

can be represented as

0001 F > KGWNT(R)BLQPAHYDVJIFXEZOCSMU CDTK 25 15 16 26
0002 O > UORYTQSLWXZHNM(B)VFCGEAPIJDK CDTL 25 15 16 01
0003 L > HLNRSKJAMGF(B)ICUQPDEYOZXWTV CDTM 25 15 16 02
0004 G > KPTXIG(F)MESAUHYQBOVJCLRZDNW CDUN 25 15 17 03
0005 E > XDYB(P)WOSMUZRIQGENLHVJTFACK CDUO 25 15 17 04
0006 N > DLIAJUOVCEXBN(M)GQPWZYFHRKTS CDUP 25 15 17 05
0007 D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ CDUQ 25 15 17 06
0008 E > JKGO(P)TCIHABRNMDEYLZFXWVUQS CDUR 25 15 17 07
0009 S > GCBUZRASYXVMLPQNOF(H)WDKTJIE CDUS 25 15 17 08
0010 I > XPJUOWIY(G)CVRTQEBNLZMDKFAHS CDUT 25 15 17 09
0011 S > DISAUYOMBPNTHKGJRQ(C)LEZXWFV CDUU 25 15 17 10
0012 T > FJLVQAKXNBGCPIRMEOY(Z)WDUHST CDUV 25 15 17 11
0013 S > KTJUQONPZCAMLGFHEW(X)BDYRSVI CDUW 25 15 17 12
0014 O > ZQXUVGFNWRLKPH(T)MBJYODEICSA CDUX 25 15 17 13
0015 F > XJWFR(D)ZSQBLKTVPOIEHMYNCAUG CDUY 25 15 17 14
0016 O > FSKTJARXPECNUL(Y)IZGBDMWVHOQ CDUZ 25 15 17 15
0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS CDVA 25 15 18 16
0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM CDVB 25 15 18 17
0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX CDVC 25 15 18 18
0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT CDVD 25 15 18 19
0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH CDVE 25 15 18 20
0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR CDVF 25 15 18 21
0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK CDVG 25 15 18 22
0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK CDVH 25 15 18 23
0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH CDVI 25 15 18 24
0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W) CDVJ 25 15 18 25
0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD CDVK 25 15 18 26
0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA CDVL 25 15 18 01
0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH CDVM 25 15 18 02
0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI CDWN 25 15 19 03
0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC CDWO 25 15 19 04
0032 N > PDSBTIUQFNOVW(J)KAHZCEGLMYXR CDWP 25 15 19 05

where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.

The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component.
For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character:

G > ABCDEF(G)HIJKLMNOPQRSTUVWXYZ
  P EFMQAB(G)UINKXCJORDPZTHWVLYS         AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
  1 OFRJVM(A)ZHQNBXPYKCULGSWETDI  N  03  VIII
  2 (N)UKCHVSMDGTZQFYEWPIALOXRJB  U  17  VI
  3 XJMIYVCARQOWH(L)NDSUFKGBEPZT  D  15  V
  4 QUNGALXEPKZ(Y)RDSOFTVCMBIHWJ  C  25  β
  R RDOBJNTKVEHMLFCWZAXGYIPS(U)Q         c
  4 EVTNHQDXWZJFUCPIAMOR(B)SYGLK         β
  3 H(V)GPWSUMDBTNCOKXJIQZRFLAEY         V
  2 TZDIPNJESYCUHAVRMXGKB(F)QWOL         VI
  1 GLQYW(B)TIZDPSFKANJCUXREVMOH         VIII
  P E(F)MQABGUINKXCJORDPZTHWVLYS         AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
F < KPTXIG(F)MESAUHYQBOVJCLRZDNW

Here the enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW, which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F. This model has four rotors (lines 1 through 4), and the reflector (line R) also permutes (garbles) letters.

Models

The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines. An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.

Commercial Enigma

On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.

Enigma Handelsmaschine (1923)

Chiffriermaschinen AG began advertising a rotor machine, the Enigma Handelsmaschine, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter, and measured 65×45×38 cm.

Schreibende Enigma (1924)

This was also a model with a typewriter. There were a number of problems associated with the printer, and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering.

Glühlampenmaschine, Enigma A (1924)

The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version. The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models: before the next key press, the operator had to press a button to advance the right rotor one step.

Enigma B (1924)

Enigma model B was introduced late in 1924, and was of a similar construction.
While bearing the Enigma name, both models A and B were quite unlike later versions: they differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine, since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost-effective; the machine was about an eighth the price of its predecessor.

Enigma C (1926)

Model C was the third model of the so-called "glowlamp Enigmas" (after A and B) and again lacked a typewriter.

Enigma D (1927)

The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, the United Kingdom, Japan, Italy, Spain, the United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. The Enigma D also pioneered the use of the "QWERTZ" keyboard layout, which became standard in German computing and is very similar to the American QWERTY layout.

"Navy Cipher D"

Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.

Enigma H (1929)

There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.

Enigma K

The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use, which was very similar to the commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan.

Military Enigma

The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems, including the Geheimschreiber.

Funkschlüssel C

The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926. The keyboard and lampboard contained 29 letters (A–Z, Ä, Ö and Ü) that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five, and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.

Enigma G (1928–1930)

By 15 July 1928, the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, the Enigma G. The Abwehr used the Enigma G. This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma.
Wehrmacht Enigma I (1930–1938)

The Enigma G was modified to the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or "Services", Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II. The major difference between Enigma I (the German Army version from 1930) and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength. Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured 28 cm × 34 cm × 15 cm and weighed around 12 kg. In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.

M3 (1934)

By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel M or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.

Two extra rotors (1938)

In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939, to allow a choice of three rotors from a set of eight.

M4 (1942)

A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.

Surviving machines

The effort to break the Enigma was not disclosed until 1973. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts. The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. The Deutsches Spionagemuseum in Berlin also showcases two military variants. Enigma machines are also exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, the Artillery, Engineers and Signals Museum in Hämeenlinna, Finland, the Technical University of Denmark in Lyngby, Denmark, Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.

In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages.
Two machines that were acquired after the capture of U-505 during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors. The machine is fully restored, and CMoA has the original paperwork for its purchase by the German Army on 7 March 1936. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England. In Canada, a Swiss Army issue Enigma-K is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario. Occasionally, Enigma machines are sold at auction; prices have in recent years ranged from US$40,000 up to US$547,500 (paid in 2017). Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues. A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors. In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months. In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War, because, although the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. 
Two have been given to Britain's GCHQ. The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia. On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea), which is believed to be from a scuttled U-boat. This Enigma machine will be restored by, and become the property of, the Archaeology Museum of Schleswig-Holstein. An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023. Derivatives The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor cipher, which the Germans believed to be unsolvable. Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents. In the United States, cryptologist William Friedman designed the M-325 machine, starting in 1936, which is logically similar. Machines like the SIGABA, NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform. A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts. Simulators See also Alastair Denniston Arlington Hall Arne Beurling Beaumanor Hall, a stately home used during the Second World War for military intelligence Cryptanalysis of the Enigma Erhard Maertens—investigated Enigma security Erich Fellgiebel ECM Mark II—cipher machine used by the Americans in the Second World War Fritz Thiele Gisbert Hasenjaeger—responsible for Enigma security United States Naval Computing Machine Laboratory Typex—cipher machine used by the British in the Second World War, based on the principles of the commercial Enigma machine Explanatory notes References Citations General and cited references Further reading Heath, Nick, Hacking the Nazis: The secret story of the women who broke Hitler's codes TechRepublic, 27 March 2015 Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part I", Cryptologia 25(2), April 2001, pp. 101–141. Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part II", Cryptologia 25(3), July 2001, pp. 177–212. Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part III", Cryptologia 25(4), October 2001, pp. 296–310. Perera, Tom. The Story of the ENIGMA: History, Technology and Deciphering, 2nd Edition, CD-ROM, 2004, Artifax Books, sample pages Ratcliffe, Rebecca. "Searching for Security: The German Investigations into Enigma's Security", Intelligence and National Security 14 (1999), Issue 1 (Special Issue), pp. 146–167. Rejewski, Marian. "How Polish Mathematicians Deciphered the Enigma", Annals of the History of Computing 3, 1981. This article is regarded by Andrew Hodges, Alan Turing's biographer, as "the definitive account" (see Hodges' Alan Turing: The Enigma, Walker and Company, 2000 paperback edition, p. 548, footnote 4.5). Ulbricht, Heinz. Enigma Uhr, Cryptologia, 23(3), April 1999, pp. 
194–205. Untold Story of Enigma Code-Breaker — The Ministry of Defence (U.K.) External links Gordon Corera, Poland's overlooked Enigma codebreakers, BBC News Magazine, 4 July 2014 Long-running list of places with Enigma machines on display Bletchley Park National Code Centre Home of the British codebreakers during the Second World War Enigma machines on the Crypto Museum Web site Pictures of a four-rotor naval enigma, including Flash (SWF) views of the machine Enigma Pictures and Demonstration by NSA Employee at RSA Kenngruppenheft Process of building an Enigma M4 replica Breaking German Navy Ciphers Broken stream ciphers Cryptographic hardware Encryption devices Military communications of Germany Military equipment introduced in the 1920s Products introduced in 1918 Rotor machines Signals intelligence of World War II World War II military equipment of Germany
Enigma machine
[ "Physics", "Technology" ]
12,736
[ "Physical systems", "Machines", "Rotor machines" ]
9,257
https://en.wikipedia.org/wiki/Enzyme
Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties. Enzymes are known to catalyze more than 5,000 biochemical reaction types. Other biocatalysts are catalytic RNA molecules, also called ribozymes. They are sometimes described as a type of enzyme rather than being like an enzyme, but even in the decades since ribozymes' discovery in 1980–1982, the word enzyme alone often means the protein type specifically (as is used in this article). An enzyme's specificity comes from its unique three-dimensional structure. Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties. Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew. Etymology and history By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified. French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term enzyme, which comes from the Greek ἔνζυμον ('in leaven'), to describe this process. 
The word enzyme was used later to refer to nonliving substances such as pepsin, and the word ferment was used to refer to chemical activity produced by living organisms. Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix -ase is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or with the type of reaction (e.g., DNA polymerase forms DNA polymers). The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins per se were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry. The discovery that enzymes could be crystallized eventually allowed their structures to be solved by X-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail. Classification and nomenclature Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity. Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in -ase. Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes. The International Union of Biochemistry and Molecular Biology has developed a nomenclature for enzymes, the EC numbers (for "Enzyme Commission"). Each enzyme is described by "EC" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). That is, the first number broadly classifies the enzyme based on its mechanism while the other digits add more and more specificity. The top-level classification is: EC 1, Oxidoreductases: catalyze oxidation/reduction reactions EC 2, Transferases: transfer a functional group (e.g. a methyl or phosphate group) EC 3, Hydrolases: catalyze the hydrolysis of various bonds EC 4, Lyases: cleave various bonds by means other than hydrolysis and oxidation EC 5, Isomerases: catalyze isomerization changes within a single molecule EC 6, Ligases: join two molecules with covalent bonds. 
EC 7, Translocases: catalyze the movement of ions or molecules across membranes, or their separation within membranes. These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1). Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. These families have been documented in dozens of different protein and protein family databases such as Pfam. Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called non-homologous isofunctional enzymes. Horizontal gene transfer may spread these genes to unrelated species, especially bacteria where they can replace endogenous genes of the same function, leading to non-homologous gene displacement. Structure Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate. Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site. In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity. A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome, which is a complex of protein and catalytic RNA components. Mechanism Substrate binding Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to which substrates they bind and the chemical reactions they catalyse. 
Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific. Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes. Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function. "Lock and key" model To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve. Induced fit model In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as it binds. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined. Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism. Catalysis Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, the Gibbs free energy of activation): By stabilizing the transition state: Creating an environment with a charge distribution complementary to that of the transition state to lower its energy By providing an alternative reaction pathway: Temporarily reacting with the substrate, forming a covalent intermediate to provide a lower energy transition state By destabilizing the substrate ground state: Distorting bound substrate(s) into their transition state form to reduce the energy required to reach the transition state By orienting the substrates into a productive arrangement to reduce the reaction entropy change (the contribution of this mechanism to catalysis is relatively small) Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilize charge build-up on the transition states using an oxyanion hole, and complete hydrolysis using an oriented water substrate. 
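The connection between barrier height and rate can be made quantitative. In transition-state theory the rate constant scales as exp(−ΔG‡/RT), so a modest reduction in activation energy yields an enormous acceleration. A back-of-the-envelope sketch (the barrier heights below are illustrative numbers, not measurements for any particular enzyme):

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 310.0  # approximately body temperature, K

def rate_ratio(dg_uncat, dg_cat):
    """Fold-acceleration when the activation barrier drops from
    dg_uncat to dg_cat (both in J/mol), assuming the transition-
    state-theory scaling k ~ exp(-dG'/RT)."""
    return math.exp((dg_uncat - dg_cat) / (R * T))

# Illustrative barriers: 100 kJ/mol uncatalyzed vs 60 kJ/mol catalyzed.
print(f"{rate_ratio(100e3, 60e3):.1e}")  # ~5.5e6: a 40 kJ/mol drop
                                         # buys roughly a million-fold speedup
```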
Dynamics Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory. Substrate presentation Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Or within the membrane, an enzyme can be sequestered into lipid rafts away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane. Allosteric modulation Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway. Cofactors Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase). An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions. Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme (or haloenzyme). The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity. Coenzymes Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. 
Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body de novo and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include: the hydride ion (H−), carried by NAD or NADP+ the phosphate group, carried by adenosine triphosphate the acetyl group, carried by coenzyme A formyl, methenyl or methyl groups, carried by folic acid and the methyl group, carried by S-adenosylmethionine Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH. Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and S-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day. Thermodynamics As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes its reaction in either direction depending on the concentration of its reactants: CO2 + H2O ⇌ H2CO3. The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally, the enzyme-product complex (EP) dissociates to release the products. Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one so that the combined energy of the products is lower than that of the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions. Kinetics Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today. Enzyme rates depend on solution conditions and substrate concentration. 
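This dependence on substrate concentration is captured by the Michaelis–Menten rate law, v = Vmax[S]/(Km + [S]), whose constants Vmax and Km are defined in the following paragraphs. A minimal numerical sketch, with arbitrary illustrative parameter values:

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten initial rate v = Vmax*[S] / (Km + [S])."""
    return vmax * s / (km + s)

VMAX, KM = 100.0, 2.0  # arbitrary units; Km = substrate conc. at half Vmax

for s in [0.5, 2.0, 8.0, 32.0, 128.0]:
    v = mm_rate(s, VMAX, KM)
    print(f"[S] = {s:6.1f}   v = {v:5.1f}   ({v / VMAX:.0%} of Vmax)")
# At [S] = Km the rate is exactly half of Vmax; at high [S] the rate
# saturates toward Vmax as nearly all enzyme is tied up in the ES complex.
```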
To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen; plotting rate against substrate concentration gives a saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme. Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic Km for a given substrate. Another useful constant is kcat, also called the turnover number, which is the number of substrate molecules handled by one active site per second. The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10⁸ to 10⁹ M⁻¹ s⁻¹. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called catalytically perfect or kinetically perfect. Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10⁵ M⁻¹ s⁻¹ and 10 s⁻¹, respectively. Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects. Inhibition Enzyme reaction rates can be decreased by various types of enzyme inhibitors. Types of inhibition Competitive A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate; the structure of this drug closely resembles that of dihydrofolate. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site. Non-competitive A non-competitive inhibitor binds to a site other than where the substrate binds. 
The substrate still binds with its usual affinity and hence Km remains the same. However the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration. Uncompetitive An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare. Mixed A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation. Irreversible An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner. Functions of inhibitors In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is a sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism. Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration. Factors affecting enzyme activity As enzymes are made up of proteins, their actions are sensitive to changes in many physicochemical factors such as pH, temperature and substrate concentration; pH optima, for instance, vary considerably from enzyme to enzyme. Biological function Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase. An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. 
Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber. Metabolism Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation, with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme. Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would neither progress through the same steps, nor could it be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions. Control of activity There are five main ways that enzyme activity is controlled in the cell. Regulation Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called the committed step), thus regulating the amount of end product made by the pathways. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. A negative feedback mechanism can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cells. This helps with effective allocation of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms. Post-translational modification Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the gut, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it enters the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme. Quantity Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. 
For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression. Subcellular distribution Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments. Organ specialization In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production. Involvement in disease Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase. One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired. Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance. Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. 
This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light. Evolution Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities as their sequences diverge. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies, although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase), which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not, hence this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in their substrate binding pockets. This is frequently seen in the main enzyme classes such as kinases. Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below). Industrial applications Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or in vitro evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature. See also Industrial enzymes List of enzymes Molecular machine Enzyme databases BRENDA ExPASy IntEnz KEGG MetaCyc References Further reading General: A biochemistry textbook available free online through NCBI Bookshelf. Etymology and history: A history of early enzymology. Enzyme structure and mechanism Kinetics and inhibition External links Biomolecules Catalysis Metabolism Process chemicals
Enzyme
[ "Chemistry", "Biology" ]
7,692
[ "Catalysis", "Natural products", "Biochemistry", "Organic compounds", "Cellular processes", "Biomolecules", "Molecular biology", "Structural biology", "Chemical kinetics", "Metabolism", "Process chemicals" ]
9,259
https://en.wikipedia.org/wiki/Equivalence%20relation
In mathematics, an equivalence relation is a binary relation that is reflexive, symmetric and transitive. The equipollence relation between line segments in geometry is a common example of an equivalence relation. A simpler example is equality. Any number a is equal to itself (reflexive). If a = b, then b = a (symmetric). If a = b and b = c, then a = c (transitive). Each equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class. Notation Various notations are used in the literature to denote that two elements a and b of a set are equivalent with respect to an equivalence relation R; the most common are "a ∼ b" and "a ≡ b", which are used when R is implicit, and variations of "a ∼R b", "a ≡R b", or "aRb" to specify R explicitly. Non-equivalence may be written "a ≁ b" or "a ≢ b". Definition A binary relation ∼ on a set X is said to be an equivalence relation, if and only if it is reflexive, symmetric and transitive. That is, for all a, b and c in X: a ∼ a (reflexivity). a ∼ b if and only if b ∼ a (symmetry). If a ∼ b and b ∼ c then a ∼ c (transitivity). X together with the relation ∼ is called a setoid. The equivalence class of a under ∼, denoted [a], is defined as [a] = {x ∈ X : x ∼ a}. Alternative definition using relational algebra In relational algebra, if R ⊆ X × Y and S ⊆ Y × Z are relations, then the composite relation SR ⊆ X × Z is defined so that x SR z if and only if there is a y such that x R y and y S z. This definition is a generalisation of the definition of functional composition. The defining properties of an equivalence relation R on a set X can then be reformulated as follows: id ⊆ R (reflexivity). (Here, id denotes the identity function on X.) R = R⁻¹ (symmetry). RR ⊆ R (transitivity). Examples Simple example On the set X = {a, b, c}, the relation R = {(a, a), (b, b), (c, c), (b, c), (c, b)} is an equivalence relation. The following sets are equivalence classes of this relation: [a] = {a} and [b] = [c] = {b, c}. The set of all equivalence classes for R is {{a}, {b, c}}. This set is a partition of the set X. It is also called the quotient set of X by R. Equivalence relations The following relations are all equivalence relations: "Is equal to" on the set of numbers. For example, 1/2 is equal to 4/8. "Has the same birthday as" on the set of all people. "Is similar to" on the set of all triangles. "Is congruent to" on the set of all triangles. Given a natural number n, "is congruent to, modulo n" on the integers. Given a function f : X → Y, "has the same image under f as" on the elements of f's domain X. For example, 0 and π have the same image under sin, viz. 0. "Has the same absolute value as" on the set of real numbers "Has the same cosine as" on the set of all angles. Relations that are not equivalences The relation "≥" between real numbers is reflexive and transitive, but not symmetric. For example, 7 ≥ 5 but not 5 ≥ 7. The relation "has a common factor greater than 1 with" between natural numbers greater than 1, is reflexive and symmetric, but not transitive. For example, the natural numbers 2 and 6 have a common factor greater than 1, and 6 and 3 have a common factor greater than 1, but 2 and 3 do not have a common factor greater than 1. The empty relation R (defined so that aRb is never true) on a set X is vacuously symmetric and transitive; however, it is not reflexive (unless X itself is empty). The relation "is approximately equal to" between real numbers, even if more precisely defined, is not an equivalence relation, because although reflexive and symmetric, it is not transitive, since multiple small changes can accumulate to become a big change. 
However, if the approximation is defined asymptotically, for example by saying that two functions f and g are approximately equal near some point if the limit of f − g is 0 at that point, then this defines an equivalence relation. Connections to other relations A partial order is a relation that is reflexive, antisymmetric, and transitive. Equality is both an equivalence relation and a partial order. Equality is also the only relation on a set that is reflexive, symmetric and antisymmetric. In algebraic expressions, equal variables may be substituted for one another, a facility that is not available for equivalence related variables. The equivalence classes of an equivalence relation can substitute for one another, but not individuals within a class. A strict partial order is irreflexive, transitive, and asymmetric. A partial equivalence relation is transitive and symmetric. Such a relation is reflexive if and only if it is total, that is, if for all a there exists some b such that a ∼ b. Therefore, an equivalence relation may be alternatively defined as a symmetric, transitive, and total relation. A ternary equivalence relation is a ternary analogue to the usual (binary) equivalence relation. A reflexive and symmetric relation is a dependency relation (if finite), and a tolerance relation if infinite. A preorder is reflexive and transitive. A congruence relation is an equivalence relation whose domain is also the underlying set for an algebraic structure, and which respects the additional structure. In general, congruence relations play the role of kernels of homomorphisms, and the quotient of a structure by a congruence relation can be formed. In many important cases, congruence relations have an alternative representation as substructures of the structure on which they are defined (e.g., the congruence relations on groups correspond to the normal subgroups). Any equivalence relation is the negation of an apartness relation, though the converse statement only holds in classical mathematics (as opposed to constructive mathematics), since it is equivalent to the law of excluded middle. Each relation that is both reflexive and left (or right) Euclidean is also an equivalence relation. Well-definedness under an equivalence relation If ∼ is an equivalence relation on X and P(x) is a property of elements of X such that whenever x ∼ y, P(x) is true if P(y) is true, then the property P is said to be well-defined or a class invariant under the relation ∼. A frequent particular case occurs when f is a function from X to another set Y; if x1 ∼ x2 implies f(x1) = f(x2), then f is said to be a morphism for ∼, a class invariant under ∼, or simply invariant under ∼. This occurs, e.g. in the character theory of finite groups. The latter case with the function f can be expressed by a commutative triangle. See also invariant. Some authors use "compatible with ∼" or just "respects ∼" instead of "invariant under ∼". More generally, a function may map equivalent arguments (under an equivalence relation ∼A) to equivalent values (under an equivalence relation ∼B). Such a function is known as a morphism from ∼A to ∼B. Related important definitions Let a, b ∈ X, and let ∼ be an equivalence relation on X. Some key definitions and terminology follow: Equivalence class A subset Y of X such that a ∼ b holds for all a and b in Y, and never for a in Y and b outside Y, is called an equivalence class of X by ∼. Let [a] = {x ∈ X : x ∼ a} denote the equivalence class to which a belongs. All elements of X equivalent to each other are also elements of the same equivalence class. 
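These definitions translate directly into code. The sketch below (the sets and functions are arbitrary illustrative choices) partitions a finite set into the equivalence classes of the kernel of a function — the relation x ∼ y if and only if f(x) = f(y) — and checks the three defining properties for an arbitrary finite relation:

```python
from collections import defaultdict

def quotient_set(X, f):
    """Partition X into the equivalence classes of the relation
    x ~ y  iff  f(x) == f(y)  (the equivalence kernel of f)."""
    classes = defaultdict(set)
    for x in X:
        classes[f(x)].add(x)
    return list(classes.values())

X = range(-4, 5)
print(quotient_set(X, abs))               # "has the same absolute value as"
print(quotient_set(X, lambda x: x % 3))   # congruence modulo 3

def is_equivalence(X, rel):
    """Check reflexivity, symmetry and transitivity of rel on a finite X."""
    return (all(rel(a, a) for a in X)
            and all(rel(b, a) for a in X for b in X if rel(a, b))
            and all(rel(a, c) for a in X for b in X for c in X
                    if rel(a, b) and rel(b, c)))

assert is_equivalence(X, lambda a, b: abs(a) == abs(b))
assert not is_equivalence(X, lambda a, b: a >= b)  # fails symmetry
```

Grouping elements by the value of f(x) is exactly the quotient set construction defined next: the classes are the preimages of the distinct values of f.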
Quotient set The set of all equivalence classes of X by ∼, denoted X/∼ = {[x] : x ∈ X}, is the quotient set of X by ∼. If X is a topological space, there is a natural way of transforming X/∼ into a topological space; see Quotient space for the details. Projection The projection of ∼ is the function π : X → X/∼ defined by π(x) = [x], which maps elements of X into their respective equivalence classes by ∼. Theorem on projections: Let the function f : X → B be such that if a ∼ b then f(a) = f(b). Then there is a unique function g : X/∼ → B such that f = g ∘ π. If f is a surjection and a ∼ b if and only if f(a) = f(b), then g is a bijection. Equivalence kernel The equivalence kernel of a function f is the equivalence relation ~ defined by x ∼ y if and only if f(x) = f(y). The equivalence kernel of an injection is the identity relation. Partition A partition of X is a set P of nonempty subsets of X, such that every element of X is an element of a single element of P. Each element of P is a cell of the partition. Moreover, the elements of P are pairwise disjoint and their union is X. Counting partitions Let X be a finite set with n elements. Since every equivalence relation over X corresponds to a partition of X, and vice versa, the number of equivalence relations on X equals the number of distinct partitions of X, which is the nth Bell number Bn: Bn = (1/e) Σ (k^n / k!), summing over all integers k ≥ 0 (Dobinski's formula). Fundamental theorem of equivalence relations A key result links equivalence relations and partitions: An equivalence relation ~ on a set X partitions X. Conversely, corresponding to any partition of X, there exists an equivalence relation ~ on X. In both cases, the cells of the partition of X are the equivalence classes of X by ~. Since each element of X belongs to a unique cell of any partition of X, and since each cell of the partition is identical to an equivalence class of X by ~, each element of X belongs to a unique equivalence class of X by ~. Thus there is a natural bijection between the set of all equivalence relations on X and the set of all partitions of X. Comparing equivalence relations If ∼ and ≈ are two equivalence relations on the same set S, and a ∼ b implies a ≈ b for all a, b ∈ S, then ≈ is said to be a coarser relation than ∼, and ∼ is a finer relation than ≈. Equivalently, ∼ is finer than ≈ if every equivalence class of ∼ is a subset of an equivalence class of ≈, and thus every equivalence class of ≈ is a union of equivalence classes of ∼. ∼ is finer than ≈ if the partition created by ∼ is a refinement of the partition created by ≈. The equality equivalence relation is the finest equivalence relation on any set, while the universal relation, which relates all pairs of elements, is the coarsest. The relation "∼ is finer than ≈" on the collection of all equivalence relations on a fixed set is itself a partial order relation, which makes the collection a geometric lattice. Generating equivalence relations Given any set X, an equivalence relation over the set of all functions X → X can be obtained as follows. Two functions are deemed equivalent when their respective sets of fixpoints have the same cardinality, corresponding to cycles of length one in a permutation. An equivalence relation ∼ on X is the equivalence kernel of its surjective projection π : X → X/∼. Conversely, any surjection between sets determines a partition on its domain, the set of preimages of singletons in the codomain. Thus an equivalence relation over X, a partition of X, and a projection whose domain is X are three equivalent ways of specifying the same thing. The intersection of any collection of equivalence relations over X (binary relations viewed as a subset of X × X) is also an equivalence relation. 
This yields a convenient way of generating an equivalence relation: given any binary relation R on X, the equivalence relation generated by R is the intersection of all equivalence relations containing R (also known as the smallest equivalence relation containing R). Concretely, R generates the equivalence relation a ∼ b if there exists a natural number n and elements x0, ..., xn ∈ X such that a = x0, b = xn, and xi−1 R xi or xi R xi−1, for i = 1, ..., n. The equivalence relation generated in this manner can be trivial. For instance, the equivalence relation generated by any total order on X has exactly one equivalence class, X itself. Equivalence relations can construct new spaces by "gluing things together." Let X be the unit Cartesian square [0, 1] × [0, 1], and let ~ be the equivalence relation on X defined by (a, 0) ~ (a, 1) for all a ∈ [0, 1] and (0, b) ~ (1, b) for all b ∈ [0, 1]. Then the quotient space X/~ can be naturally identified (homeomorphism) with a torus: take a square piece of paper, bend and glue together the upper and lower edge to form a cylinder, then bend the resulting cylinder so as to glue together its two open ends, resulting in a torus. Algebraic structure Much of mathematics is grounded in the study of equivalences, and order relations. Lattice theory captures the mathematical structure of order relations. Even though equivalence relations are as ubiquitous in mathematics as order relations, the algebraic structure of equivalences is not as well known as that of orders. The former structure draws primarily on group theory and, to a lesser extent, on the theory of lattices, categories, and groupoids. Group theory Just as order relations are grounded in ordered sets, sets closed under pairwise supremum and infimum, equivalence relations are grounded in partitioned sets, which are sets closed under bijections that preserve partition structure. Since all such bijections map an equivalence class onto itself, such bijections are also known as permutations. Hence permutation groups (also known as transformation groups) and the related notion of orbit shed light on the mathematical structure of equivalence relations. Let '~' denote an equivalence relation over some nonempty set A, called the universe or underlying set. Let G denote the set of bijective functions over A that preserve the partition structure of A, meaning that for all x ∈ A and g ∈ G, g(x) ∈ [x]. Then the following three connected theorems hold: ~ partitions A into equivalence classes. (This is the fundamental theorem of equivalence relations, mentioned above); Given a partition of A, G is a transformation group under composition, whose orbits are the cells of the partition; Given a transformation group G over A, there exists an equivalence relation ~ over A, whose equivalence classes are the orbits of G. In sum, given an equivalence relation ~ over A, there exists a transformation group G over A whose orbits are the equivalence classes of A under ~. This transformation group characterisation of equivalence relations differs fundamentally from the way lattices characterize order relations. The arguments of the lattice theory operations meet and join are elements of some universe A. Meanwhile, the arguments of the transformation group operations composition and inverse are elements of a set of bijections, A → A. Moving to groups in general, let H be a subgroup of some group G. Let ~ be an equivalence relation on G, such that a ~ b if and only if ab⁻¹ ∈ H. The equivalence classes of ~—also called the orbits of the action of H on G—are the right cosets of H in G. Interchanging a and b yields the left cosets. Related thinking can be found in Rosen (2008: chpt. 10). Categories and groupoids Let G be a set and let "~" denote an equivalence relation over G. 
Then we can form a groupoid representing this equivalence relation as follows. The objects are the elements of G, and for any two elements x and y of G, there exists a unique morphism from x to y if and only if x ~ y. The advantages of regarding an equivalence relation as a special case of a groupoid include: Whereas the notion of "free equivalence relation" does not exist, that of a free groupoid on a directed graph does. Thus it is meaningful to speak of a "presentation of an equivalence relation," i.e., a presentation of the corresponding groupoid; Bundles of groups, group actions, sets, and equivalence relations can be regarded as special cases of the notion of groupoid, a point of view that suggests a number of analogies; In many contexts "quotienting," and hence the appropriate equivalence relations often called congruences, are important. This leads to the notion of an internal groupoid in a category. Lattices The equivalence relations on any set X, when ordered by set inclusion, form a complete lattice, called Con X by convention. The canonical map ker : X^X → Con X relates the monoid X^X of all functions on X and Con X. ker is surjective but not injective. Less formally, the equivalence relation ker on X takes each function f : X → X to its kernel ker f. Likewise, ker(ker) is an equivalence relation on X^X. Equivalence relations and mathematical logic Equivalence relations are a ready source of examples or counterexamples. For example, an equivalence relation with exactly two infinite equivalence classes is an easy example of a theory which is ω-categorical, but not categorical for any larger cardinal number. An implication of model theory is that the properties defining a relation can be proved independent of each other (and hence necessary parts of the definition) if and only if, for each property, examples can be found of relations not satisfying the given property while satisfying all the other properties. Hence the three defining properties of equivalence relations can be proved mutually independent by the following three examples: Reflexive and transitive: The relation ≤ on N. Or any preorder; Symmetric and transitive: The relation R on N, defined as aRb ↔ ab ≠ 0. Or any partial equivalence relation; Reflexive and symmetric: The relation R on Z, defined as aRb ↔ "a − b is divisible by at least one of 2 or 3." Or any dependency relation. Properties definable in first-order logic that an equivalence relation may or may not possess include: The number of equivalence classes is finite or infinite; The number of equivalence classes equals the (finite) natural number n; All equivalence classes have infinite cardinality; The number of elements in each equivalence class is the natural number n. See also Notes References Brown, Ronald, 2006. Topology and Groupoids. Booksurge LLC. Castellani, E., 2003, "Symmetry and equivalence" in Brading, Katherine, and E. Castellani, eds., Symmetries in Physics: Philosophical Reflections. Cambridge Univ. Press: 422–433. Robert Dilworth and Crawley, Peter, 1973. Algebraic Theory of Lattices. Prentice Hall. Chpt. 12 discusses how equivalence relations arise in lattice theory. Higgins, P.J., 1971. Categories and groupoids. Van Nostrand. Downloadable since 2005 as a TAC Reprint. John Randolph Lucas, 1973. A Treatise on Time and Space. London: Methuen. Section 31. Rosen, Joseph (2008) Symmetry Rules: How Science and Nature are Founded on Symmetry. Springer-Verlag. Mostly chapters 9, 10.
Raymond Wilder (1965) Introduction to the Foundations of Mathematics, 2nd edition, Chapter 2-8: Axioms defining equivalence, pp. 48–50, John Wiley & Sons. External links Bogomolny, A., "Equivalence Relationship" cut-the-knot. Accessed 1 September 2009 Equivalence relation at PlanetMath Equivalence (mathematics) Reflexive relations Symmetric relations Transitive relations
Equivalence relation
[ "Physics" ]
3,674
[ "Symmetric relations", "Symmetry" ]
9,260
https://en.wikipedia.org/wiki/Equivalence%20class
In mathematics, when the elements of some set have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set into equivalence classes. These equivalence classes are constructed so that elements a and b belong to the same equivalence class if, and only if, they are equivalent. Formally, given a set S and an equivalence relation ~ on S, the equivalence class of an element a in S is denoted [a] or, equivalently, [a]~ to emphasize its equivalence relation ~. The definition of equivalence relations implies that the equivalence classes form a partition of S, meaning that every element of the set belongs to exactly one equivalence class. The set of the equivalence classes is sometimes called the quotient set or the quotient space of S by ~, and is denoted by S/~. When the set S has some structure (such as a group operation or a topology) and the equivalence relation ~ is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories. Definition and notation An equivalence relation on a set S is a binary relation ~ on S satisfying the three properties: a ~ a for all a in S (reflexivity), a ~ b implies b ~ a for all a, b in S (symmetry), if a ~ b and b ~ c then a ~ c for all a, b, c in S (transitivity). The equivalence class of an element a is defined as [a] = {x in S : x ~ a}. The word "class" in the term "equivalence class" may generally be considered as a synonym of "set", although some equivalence classes are not sets but proper classes. For example, "being isomorphic" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets. The set of all equivalence classes in S with respect to an equivalence relation R is denoted as S/R and is called S modulo R (or the quotient set of S by R). The surjective map x ↦ [x] from S onto S/R, which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection. Every element of an equivalence class characterizes the class, and may be used to represent it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from S/R to S. Since its composition with the canonical surjection is the identity of S/R, such an injection is called a section, when using the terminology of category theory. Sometimes, there is a section that is more "natural" than the other ones. In this case, the representatives are called canonical representatives. For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, for which two integers a and b are equivalent—in this case, one says congruent—if m divides a − b; this is denoted a ≡ b (mod m). Each class contains a unique non-negative integer smaller than m, and these integers are the canonical representatives. The use of representatives for representing classes makes it possible to avoid considering classes explicitly as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted a mod m, and produces the remainder of the Euclidean division of a by m. Properties Every element x of S is a member of the equivalence class [x]. Every two equivalence classes [x] and [y] are either equal or disjoint. Therefore, the set of all equivalence classes of S forms a partition of S: every element of S belongs to one and only one equivalence class.
Conversely, every partition of S comes from an equivalence relation in this way, according to which x ~ y if and only if x and y belong to the same set of the partition. It follows from the properties in the previous section that if ~ is an equivalence relation on a set S, and x and y are two elements of S, the following statements are equivalent: x ~ y; [x] = [y]; [x] ∩ [y] ≠ ∅. Examples Let X be the set of all rectangles in a plane, and ~ the equivalence relation "has the same area as"; then for each positive real number A there will be an equivalence class of all the rectangles that have area A. Consider the modulo 2 equivalence relation on the set of integers, such that x ~ y if and only if their difference x − y is an even number. This relation gives rise to exactly two equivalence classes: one class consists of all even numbers, and the other class consists of all odd numbers. Using square brackets around one member of the class to denote an equivalence class under this relation, the classes of any two odd numbers (or of any two even numbers) represent the same element of the quotient set. Let X be the set of ordered pairs of integers (a, b) with b non-zero, and define an equivalence relation on X such that (a, b) ~ (c, d) if and only if ad = bc; then the equivalence class of the pair (a, b) can be identified with the rational number a/b, and this equivalence relation and its equivalence classes can be used to give a formal definition of the set of rational numbers. The same construction can be generalized to the field of fractions of any integral domain. If X consists of all the lines in, say, the Euclidean plane, and L ~ M means that L and M are parallel lines, then the set of lines that are parallel to each other form an equivalence class, as long as a line is considered parallel to itself. In this situation, each equivalence class determines a point at infinity. Graphical representation An undirected graph may be associated to any symmetric relation on a set X, where the vertices are the elements of X, and two vertices s and t are joined if and only if s ~ t. Among these graphs are the graphs of equivalence relations. These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques. Invariants If ~ is an equivalence relation on X, and P(x) is a property of elements of X such that whenever x ~ y, P(x) is true if P(y) is true, then the property P is said to be an invariant of ~, or well-defined under the relation. A frequent particular case occurs when f is a function from X to another set Y; if f(x1) = f(x2) whenever x1 ~ x2, then f is said to be class invariant under ~, or simply invariant under ~. This occurs, for example, in the character theory of finite groups. Some authors use "compatible with ~" or just "respects ~" instead of "invariant under ~". Any function f : X → Y is class invariant under the equivalence relation ~ according to which x1 ~ x2 if and only if f(x1) = f(x2). The equivalence class of x is the set of all elements in X which get mapped to f(x); that is, the class [x] is the inverse image of f(x). This equivalence relation is known as the kernel of f. More generally, a function may map equivalent arguments (under an equivalence relation ~X on X) to equivalent values (under an equivalence relation ~Y on Y). Such a function is a morphism of sets equipped with an equivalence relation. Quotient space in topology In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes. In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra.
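To make the congruence example above concrete, the following minimal sketch partitions a sample of integers into their classes modulo 4 and reads off the canonical representatives; Python and the function name equivalence_classes are assumptions made purely for illustration.

```python
# Sketch: partition a finite collection of elements by an equivalence
# relation given as a two-argument predicate.

def equivalence_classes(elements, equivalent):
    """Group `elements` into classes of the equivalence `equivalent`."""
    classes = []
    for x in elements:
        for cls in classes:
            if equivalent(x, cls[0]):   # compare with one representative
                cls.append(x)
                break
        else:
            classes.append([x])         # x starts a new class
    return classes

# Congruence modulo 4: a ~ b iff 4 divides a - b.
mod4 = equivalence_classes(list(range(-8, 9)),
                           lambda a, b: (a - b) % 4 == 0)
for cls in mod4:
    # The remainder is the canonical representative of each class.
    print(sorted(cls), "-> canonical representative:", cls[0] % 4)
```

The `else` clause on the inner `for` loop runs only when no existing class matched, which is exactly when x must start a new class.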
In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action. The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation. A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously. Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above. See also Equivalence partitioning, a method for devising test sets in software testing based on dividing the possible program inputs into equivalence classes according to the behavior of the program on those inputs Homogeneous space, the quotient space of Lie groups Notes References Further reading External links Algebra Binary relations Equivalence (mathematics) Set theory
Equivalence class
[ "Mathematics" ]
1,735
[ "Set theory", "Mathematical logic", "Binary relations", "Mathematical relations", "Algebra" ]
9,263
https://en.wikipedia.org/wiki/Ether
In organic chemistry, ethers are a class of compounds that contain an ether group—a single oxygen atom bonded to two separate carbon atoms, each part of an organyl group (e.g., alkyl or aryl). They have the general formula R–O–R′, where R and R′ represent the organyl groups. Ethers can be further classified into two varieties: if the organyl groups are the same on both sides of the oxygen atom, then it is a simple or symmetrical ether, whereas if they are different, the ethers are called mixed or unsymmetrical ethers. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as "ether" (CH3–CH2–O–CH2–CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin. Structure and bonding Ethers feature bent C–O–C linkages. In dimethyl ether, the bond angle is 111° and C–O distances are 141 pm. The barrier to rotation about the C–O bonds is low. The bonding of oxygen in ethers, alcohols, and water is similar. In the language of valence bond theory, the hybridization at oxygen is sp3. Oxygen is more electronegative than carbon, thus the alpha hydrogens of ethers are more acidic than those of simple hydrocarbons. They are far less acidic than alpha hydrogens of carbonyl groups (such as in ketones or aldehydes), however. Ethers can be symmetrical of the type ROR or unsymmetrical of the type ROR'. Examples of the former are dimethyl ether, diethyl ether, dipropyl ether etc. Illustrative unsymmetrical ethers are anisole (methoxybenzene) and dimethoxyethane. Vinyl- and acetylenic ethers Vinyl- and acetylenic ethers are far less common than alkyl or aryl ethers. Vinyl ethers, often called enol ethers, are important intermediates in organic synthesis. Acetylenic ethers are especially rare. Di-tert-butoxyacetylene is the most common example of this rare class of compounds. Nomenclature In the IUPAC nomenclature system, ethers are named using the general formula "alkoxyalkane", for example CH3–CH2–O–CH3 is methoxyethane. If the ether is part of a more-complex molecule, it is described as an alkoxy substituent, so –OCH3 would be considered a "methoxy-" group. The simpler alkyl radical is written in front, so CH3–O–CH2CH3 would be given as methoxy(CH3O)ethane(CH2CH3). Trivial name IUPAC rules are often not followed for simple ethers. The trivial names for simple ethers (i.e., those with none or few other functional groups) are a composite of the two substituents followed by "ether". For example, ethyl methyl ether (CH3OC2H5), diphenyl ether (C6H5OC6H5). As for other organic compounds, very common ethers acquired names before rules for nomenclature were formalized. Diethyl ether is simply called ether, but was once called sweet oil of vitriol. Methyl phenyl ether is anisole, because it was originally found in aniseed. The aromatic ethers include furans. Acetals (α-alkoxy ethers R–CH(–OR)–O–R) are another class of ethers with characteristic properties. Polyethers Polyethers are generally polymers containing ether linkages in their main chain. The term polyol generally refers to polyether polyols with one or more functional end-groups such as a hydroxyl group. The term "oxide" or other terms are used for high-molar-mass polymers when end-groups no longer affect polymer properties. Crown ethers are cyclic polyethers. Some toxins produced by dinoflagellates such as brevetoxin and ciguatoxin are extremely large and are known as cyclic or ladder polyethers.
The phenyl ether polymers are a class of aromatic polyethers containing aromatic cycles in their main chain: polyphenyl ether (PPE) and poly(p-phenylene oxide) (PPO). Related compounds Many classes of compounds with C–O–C linkages are not considered ethers: Esters (R–C(=O)–O–R′), hemiacetals (R–CH(–OH)–O–R′), carboxylic acid anhydrides (RC(=O)–O–C(=O)R′). There are compounds which, instead of C in the linkage, contain heavier group 14 chemical elements (e.g., Si, Ge, Sn, Pb). Such compounds are considered ethers as well. Examples of such ethers are silyl enol ethers (containing the Si–O–C linkage), disiloxane (the other name of this compound is disilyl ether, containing the Si–O–Si linkage) and stannoxanes (containing the Sn–O–Sn linkage). Physical properties Ethers have boiling points similar to those of the analogous alkanes. Simple ethers are generally colorless. Reactions The C–O bonds that comprise simple ethers are strong. They are unreactive toward all but the strongest bases. Although generally of low chemical reactivity, they are more reactive than alkanes. Specialized ethers such as epoxides, ketals, and acetals are unrepresentative classes of ethers and are discussed in separate articles. Important reactions are listed below. Cleavage Although ethers resist hydrolysis, they are cleaved by hydrobromic acid and hydroiodic acid. Hydrogen chloride cleaves ethers only slowly. Methyl ethers typically afford methyl halides: ROCH3 + HBr → CH3Br + ROH These reactions proceed via onium intermediates, i.e. [RO(H)CH3]+Br−. Some ethers undergo rapid cleavage with boron tribromide (even aluminium chloride is used in some cases) to give the alkyl bromide. Depending on the substituents, some ethers can be cleaved with a variety of reagents, e.g. strong base. Despite these difficulties, the chemical paper pulping processes are based on cleavage of ether bonds in the lignin. Peroxide formation When stored in the presence of air or oxygen, ethers tend to form explosive peroxides, such as diethyl ether hydroperoxide. The reaction is accelerated by light, metal catalysts, and aldehydes. In addition to avoiding storage conditions likely to form peroxides, it is recommended, when an ether is used as a solvent, not to distill it to dryness, as any peroxides that may have formed, being less volatile than the original ether, will become concentrated in the last few drops of liquid. The presence of peroxide in old samples of ethers may be detected by shaking them with a freshly prepared solution of ferrous sulfate followed by addition of KSCN. The appearance of a blood-red color indicates the presence of peroxides. The dangerous properties of ether peroxides are the reason that diethyl ether and other peroxide-forming ethers like tetrahydrofuran (THF) or ethylene glycol dimethyl ether (1,2-dimethoxyethane) are avoided in industrial processes. Lewis bases Ethers serve as Lewis bases. For instance, diethyl ether forms a complex with boron trifluoride, i.e. boron trifluoride diethyl etherate (BF3·OEt2). Ethers also coordinate to the Mg center in Grignard reagents. Tetrahydrofuran is more basic than acyclic ethers. It forms complexes with many Lewis acids. Alpha-halogenation This reactivity is similar to the tendency of ethers with alpha hydrogen atoms to form peroxides. Reaction with chlorine produces alpha-chloroethers. Synthesis Dehydration of alcohols The dehydration of alcohols affords ethers: 2 R–OH → R–O–R + H2O at high temperature This direct nucleophilic substitution reaction requires elevated temperatures (about 125 °C).
The reaction is catalyzed by acids, usually sulfuric acid. The method is effective for generating symmetrical ethers, but not unsymmetrical ethers, since either OH can be protonated, which would give a mixture of products. Diethyl ether is produced from ethanol by this method. Cyclic ethers are readily generated by this approach. Elimination reactions compete with dehydration of the alcohol: R–CH2–CH2(OH) → R–CH=CH2 + H2O The dehydration route often requires conditions incompatible with delicate molecules. Several milder methods exist to produce ethers. Electrophilic addition of alcohols to alkenes Alcohols add to electrophilically activated alkenes. The method is atom-economical: R2C=CR2 + R–OH → R2CH–C(–O–R)–R2 Acid catalysis is required for this reaction. Commercially important ethers prepared in this way are derived from isobutene or isoamylene, which protonate to give relatively stable carbocations. Using ethanol and methanol with these two alkenes, four fuel-grade ethers are produced: methyl tert-butyl ether (MTBE), methyl tert-amyl ether (TAME), ethyl tert-butyl ether (ETBE), and ethyl tert-amyl ether (TAEE). Solid acid catalysts are typically used to promote this reaction. Epoxides Epoxides are typically prepared by oxidation of alkenes. The most important epoxide in terms of industrial scale is ethylene oxide, which is produced by oxidation of ethylene with oxygen. Other epoxides are produced by one of two routes: By the oxidation of alkenes with a peroxyacid such as m-CPBA. By the base-mediated intramolecular nucleophilic substitution of a halohydrin. Many ethers, such as ethoxylates and crown ethers, are produced from epoxides. Williamson and Ullmann ether syntheses Nucleophilic displacement of alkyl halides by alkoxides R–ONa + R′–X → R–O–R′ + NaX This reaction, the Williamson ether synthesis, involves treatment of a parent alcohol with a strong base to form the alkoxide, followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R–X). Although popular in textbooks, the method is usually impractical on scale because it cogenerates significant waste. Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method usually does not work well for aryl halides (e.g. bromobenzene, see Ullmann condensation below). Likewise, this method only gives the best yields for primary halides. Secondary and tertiary halides are prone to undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction due to steric hindrance from the large alkyl groups. In a related reaction, alkyl halides undergo nucleophilic displacement by phenoxides. An aryl halide cannot be used as the R–X component to react with the alcohol; however, phenols can be used to replace the alcohol while maintaining the alkyl halide. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion will then substitute the –X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism. C6H5OH + OH− → C6H5–O− + H2O C6H5–O− + R–X → C6H5OR + X− The Ullmann condensation is similar to the Williamson method except that the substrate is an aryl halide. Such reactions generally require a catalyst, such as copper. Important ethers See also Ester Ether lipid Ether addiction Ether (song) History of general anesthesia Inhalant Chemical paper pulping processes: Kraft process (and Soda pulping), Organosolv pulping process and the Sulfite process References Functional groups Impression material
Ether
[ "Chemistry" ]
2,697
[ "Organic compounds", "Functional groups", "Ethers" ]
9,264
https://en.wikipedia.org/wiki/Ecliptic
The ecliptic or ecliptic plane is the orbital plane of Earth around the Sun. It was a central concept in a number of ancient sciences, providing the framework for key measurements in astronomy, astrology and calendar-making. From the perspective of an observer on Earth, the Sun's movement around the celestial sphere over the course of a year traces out a path along the ecliptic against the background of stars – specifically the Zodiac constellations. The planets of the solar system can also be seen along the ecliptic, because their orbital planes are very close to Earth's. The moon's orbital plane is also similar to Earth's; the ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it. The ecliptic is an important reference plane and is the basis of the ecliptic coordinate system. Ancient scientists were able to calculate Earth's axial tilt by comparing the ecliptic plane to that of the equator. Sun's apparent motion The ecliptic is the apparent path of the Sun throughout the course of a year. Because Earth takes one year to orbit the Sun, the apparent position of the Sun takes one year to make a complete circuit of the ecliptic. With slightly more than 365 days in one year, the Sun moves a little less than 1° eastward every day. This small difference in the Sun's position against the stars causes any particular spot on Earth's surface to catch up with (and stand directly north or south of) the Sun about four minutes later each day than it would if Earth did not orbit; a day on Earth is therefore 24 hours long rather than the approximately 23-hour 56-minute sidereal day. Again, this is a simplification, based on a hypothetical Earth that orbits at uniform speed around the Sun. The actual speed with which Earth orbits the Sun varies slightly during the year, so the speed with which the Sun seems to move along the ecliptic also varies. For example, the Sun is north of the celestial equator for about 185 days of each year, and south of it for about 180 days. The variation of orbital speed accounts for part of the equation of time. Because of the movement of Earth around the Earth–Moon center of mass, the apparent path of the Sun wobbles slightly, with a period of about one month. Because of further perturbations by the other planets of the Solar System, the Earth–Moon barycenter wobbles slightly around a mean position in a complex fashion. Relationship to the celestial equator Because Earth's rotational axis is not perpendicular to its orbital plane, Earth's equatorial plane is not coplanar with the ecliptic plane, but is inclined to it by an angle of about 23.4°, which is known as the obliquity of the ecliptic. If the equator is projected outward to the celestial sphere, forming the celestial equator, it crosses the ecliptic at two points known as the equinoxes. The Sun, in its apparent motion along the ecliptic, crosses the celestial equator at these points, one from south to north, the other from north to south. The crossing from south to north is known as the March equinox, also known as the first point of Aries and the ascending node of the ecliptic on the celestial equator. The crossing from north to south is the September equinox or descending node. The orientation of Earth's axis and equator are not fixed in space, but rotate about the poles of the ecliptic with a period of about 26,000 years, a process known as lunisolar precession, as it is due mostly to the gravitational effect of the Moon and Sun on Earth's equatorial bulge. 
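Both rates quoted above (the Sun's roughly one-degree daily eastward motion, with its four-minute effect on the length of the day, and the 26,000-year precession cycle) can be checked with simple arithmetic. A quick sketch, assuming Python purely for illustration:

```python
# Back-of-envelope checks of two figures quoted above.

days_per_year = 365.25

# The Sun appears to move a little less than 1 degree/day eastward:
sun_deg_per_day = 360.0 / days_per_year              # ~0.986 deg/day

# Earth must rotate that extra angle each day to face the Sun again:
lag_minutes = sun_deg_per_day / 360.0 * 24 * 60      # ~3.9 min

# One full 360-degree precession cycle in ~26,000 years, in arcsec/year:
precession = 360.0 * 3600 / 26_000                   # ~49.8 arcsec/yr

print(f"daily solar motion:       {sun_deg_per_day:.3f} deg")
print(f"solar minus sidereal day: {lag_minutes:.2f} min")
print(f"precession rate:          {precession:.1f} arcsec/yr")
```

The last figure agrees with the roughly 50 arcseconds per year of general precession discussed next.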
Likewise, the ecliptic itself is not fixed. The gravitational perturbations of the other bodies of the Solar System cause a much smaller motion of the plane of Earth's orbit, and hence of the ecliptic, known as planetary precession. The combined action of these two motions is called general precession, and changes the position of the equinoxes by about 50 arc seconds (about 0.014°) per year. Once again, this is a simplification. Periodic motions of the Moon and apparent periodic motions of the Sun (actually of Earth in its orbit) cause short-term small-amplitude periodic oscillations of Earth's axis, and hence the celestial equator, known as nutation. This adds a periodic component to the position of the equinoxes; the positions of the celestial equator and (March) equinox with fully updated precession and nutation are called the true equator and equinox; the positions without nutation are the mean equator and equinox. Obliquity of the ecliptic Obliquity of the ecliptic is the term used by astronomers for the inclination of Earth's equator with respect to the ecliptic, or of Earth's rotation axis to a perpendicular to the ecliptic. It is about 23.4° and is currently decreasing 0.013 degrees (47 arcseconds) per hundred years because of planetary perturbations. The angular value of the obliquity is found by observation of the motions of Earth and other planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived. Until 1983 the obliquity for any date was calculated from work of Newcomb, who analyzed positions of the planets until about 1895: ε = 23° 27′ 08.26″ − 46.845″ T − 0.0059″ T² + 0.00181″ T³, where ε is the obliquity and T is tropical centuries from B1900.0 to the date in question. From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated: ε = 23° 26′ 21.448″ − 46.8150″ T − 0.00059″ T² + 0.001813″ T³, where hereafter T is Julian centuries from J2000.0. JPL's fundamental ephemerides have been continually updated. The Astronomical Almanac for 2010 specifies: ε = 23° 26′ 21.406″ − 46.836769″ T − 0.0001831″ T² + 0.00200340″ T³ − 5.76×10⁻⁷″ T⁴ − 4.34×10⁻⁸″ T⁵. These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. J. Laskar computed an expression to order T¹⁰ good to 0.04″/1000 years over 10,000 years. All of these expressions are for the mean obliquity, that is, without the nutation of the equator included. The true or instantaneous obliquity includes the nutation. Plane of the Solar System Most of the major bodies of the Solar System orbit the Sun in nearly the same plane. This is likely due to the way in which the Solar System formed from a protoplanetary disk. Probably the closest current representation of the disk is known as the invariable plane of the Solar System. Earth's orbit, and hence, the ecliptic, is inclined a little more than 1° to the invariable plane, Jupiter's orbit is within a little more than ½° of it, and the other major planets are all within about 6°. Because of this, most Solar System bodies appear very close to the ecliptic in the sky. The invariable plane is defined by the angular momentum of the entire Solar System, essentially the vector sum of all of the orbital and rotational angular momenta of all the bodies of the system; more than 60% of the total comes from the orbit of Jupiter.
That sum requires precise knowledge of every object in the system, making it a somewhat uncertain value. Because of the uncertainty regarding the exact location of the invariable plane, and because the ecliptic is well defined by the apparent motion of the Sun, the ecliptic is used as the reference plane of the Solar System both for precision and convenience. The only drawback of using the ecliptic instead of the invariable plane is that over geologic time scales, it will move against fixed reference points in the sky's distant background. Celestial reference plane The ecliptic forms one of the two fundamental planes used as reference for positions on the celestial sphere, the other being the celestial equator. Perpendicular to the ecliptic are the ecliptic poles, the north ecliptic pole being the pole north of the equator. Of the two fundamental planes, the ecliptic is closer to unmoving against the background stars, its motion due to planetary precession being roughly 1/100 that of the celestial equator. Spherical coordinates, known as ecliptic longitude and latitude or celestial longitude and latitude, are used to specify positions of bodies on the celestial sphere with respect to the ecliptic. Longitude is measured positively eastward 0° to 360° along the ecliptic from the March equinox, the same direction in which the Sun appears to move. Latitude is measured perpendicular to the ecliptic, to +90° northward or −90° southward to the poles of the ecliptic, the ecliptic itself being 0° latitude. For a complete spherical position, a distance parameter is also necessary. Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near Earth, Earth radii or kilometers are used. A corresponding right-handed rectangular coordinate system is also used occasionally; the x-axis is directed toward the March equinox, the y-axis 90° to the east, and the z-axis toward the north ecliptic pole; the astronomical unit is the unit of measure. Symbols for ecliptic coordinates are somewhat standardized; see the table. Ecliptic coordinates are convenient for specifying positions of Solar System objects, as most of the planets' orbits have small inclinations to the ecliptic, and therefore always appear relatively close to it on the sky. Because Earth's orbit, and hence the ecliptic, moves very little, it is a relatively fixed reference with respect to the stars. Because of the precessional motion of the equinox, the ecliptic coordinates of objects on the celestial sphere are continuously changing. Specifying a position in ecliptic coordinates requires specifying a particular equinox, that is, the equinox of a particular date, known as an epoch; the coordinates are referred to the direction of the equinox at that date. For instance, the Astronomical Almanac lists the heliocentric position of Mars at 0h Terrestrial Time, 4 January 2010 as: longitude 118°09′15.8″, latitude +1°43′16.7″, true heliocentric distance 1.6302454 AU, mean equinox and ecliptic of date. This specifies the mean equinox of 4 January 2010 0h TT as above, without the addition of nutation. Eclipses Because the orbit of the Moon is inclined only about 5.145° to the ecliptic and the Sun is always very near the ecliptic, eclipses always occur on or near it. 
Because of the inclination of the Moon's orbit, eclipses do not occur at every conjunction and opposition of the Sun and Moon, but only when the Moon is near an ascending or descending node at the same time it is at conjunction (new) or opposition (full). The ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it. Equinoxes and solstices The exact instants of equinoxes and solstices are the times when the apparent ecliptic longitude (including the effects of aberration and nutation) of the Sun is 0°, 90°, 180°, and 270°. Because of perturbations of Earth's orbit and anomalies of the calendar, the dates of these are not fixed. In the constellations The ecliptic currently passes through the following thirteen constellations: Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpius, Ophiuchus, Sagittarius, Capricornus, Aquarius, and Pisces. There are twelve constellations that are not on the ecliptic, but are close enough that the Moon and planets can occasionally appear in them: Cetus, Pegasus, Aquila, Scutum, Serpens, Hydra, Corvus, Crater, Sextans, Canis Minor, Auriga, and Orion. Astrology The ecliptic forms the center of the zodiac, a celestial belt about 20° wide in latitude through which the Sun, Moon, and planets always appear to move. Traditionally, this region is divided into 12 signs of 30° longitude, each of which approximates the Sun's motion in one month. In ancient times, the signs corresponded roughly to 12 of the constellations that straddle the ecliptic. These signs are sometimes still used in modern terminology. The "First Point of Aries" was named when the March equinox Sun was actually in the constellation Aries; it has since moved into Pisces because of precession of the equinoxes. See also Formation and evolution of the Solar System Invariable plane Protoplanetary disk Celestial coordinate system Notes and references External links The Ecliptic: the Sun's Annual Path on the Celestial Sphere Durham University Department of Physics Seasons and Ecliptic Simulator University of Nebraska-Lincoln MEASURING THE SKY A Quick Guide to the Celestial Sphere James B. Kaler, University of Illinois Earth's Seasons U.S. Naval Observatory The Basics - the Ecliptic, the Equator, and Coordinate Systems AstrologyClub.Org ; comparison of the definitions of LeVerrier, Newcomb, and Standish. Astronomical coordinate systems Dynamics of the Solar System Technical factors of astrology Planes (geometry)
Ecliptic
[ "Astronomy", "Mathematics" ]
2,812
[ "Dynamics of the Solar System", "Mathematical objects", "Infinity", "Astronomical coordinate systems", "Coordinate systems", "Planes (geometry)", "Solar System" ]
9,277
https://en.wikipedia.org/wiki/Ellipse
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from 0 (the limiting case of a circle) to 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola). An ellipse has a simple algebraic solution for its area, but for its perimeter (also known as circumference), integration is required to obtain an exact solution. Analytically, the equation of a standard ellipse centered at the origin with width 2a and height 2b is: x²/a² + y²/b² = 1. Assuming a ≥ b, the foci are (±c, 0) for c = √(a² − b²). The standard parametric equation is: (x, y) = (a cos t, b sin t) for 0 ≤ t ≤ 2π. Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse. An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity: e = c/a = √(1 − b²/a²). Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics. The name, ἔλλειψις (élleipsis, "omission"), was given by Apollonius of Perga in his Conics. Definition as locus of points An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane: given two fixed points F1, F2, called the foci, and a distance 2a greater than the distance between the foci, the ellipse is the set of points P such that the sum of the distances |PF1| + |PF2| equals 2a. The midpoint of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c/a is the eccentricity. The case F1 = F2 yields a circle and is included as a special type of ellipse. The equation |PF2| = 2a − |PF1| can be viewed in a different way (see figure): the circle with center F2 and radius 2a is called the circular directrix (related to focus F2) of the ellipse. This property should not be confused with the definition of an ellipse using a directrix line below. Using Dandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone.
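The sum-of-distances property can be verified numerically from the standard parametric form given above. A minimal sketch, assuming Python; the sample semi-axes a = 5 and b = 3 are arbitrary:

```python
import math

# Check that points (a cos t, b sin t) on the ellipse x^2/a^2 + y^2/b^2 = 1
# have focal distances summing to the constant 2a.

a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)           # linear eccentricity (4.0 here)
f1, f2 = (c, 0.0), (-c, 0.0)           # the two foci

for k in range(8):
    t = 2 * math.pi * k / 8
    p = (a * math.cos(t), b * math.sin(t))
    total = math.dist(p, f1) + math.dist(p, f2)
    assert abs(total - 2 * a) < 1e-12  # constant, up to rounding
print("sum of focal distances = 2a =", 2 * a)
```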
In Cartesian coordinates Standard equation The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, the x-axis is the major axis, and: For an arbitrary point the distance to the focus is and to the other focus . Hence the point is on the ellipse whenever: Removing the radicals by suitable squarings and using (see diagram) produces the standard equation of the ellipse: or, solved for y: The width and height parameters are called the semi-major and semi-minor axes. The top and bottom points are the co-vertices. The distances from a point on the ellipse to the left and right foci are and . It follows from the equation that the ellipse is symmetric with respect to the coordinate axes and hence with respect to the origin. Parameters Principal axes Throughout this article, the semi-major and semi-minor axes are denoted and , respectively, i.e. In principle, the canonical ellipse equation may have (and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable names and and the parameter names and Linear eccentricity This is the distance from the center to a focus: . Eccentricity The eccentricity can be expressed as: assuming An ellipse with equal axes () has zero eccentricity, and is a circle. Semi-latus rectum The length of the chord through one focus, perpendicular to the major axis, is called the latus rectum. One half of it is the semi-latus rectum . A calculation shows: The semi-latus rectum is equal to the radius of curvature at the vertices (see section curvature). Tangent An arbitrary line intersects an ellipse at 0, 1, or 2 points, respectively called an exterior line, tangent and secant. Through any point of an ellipse there is a unique tangent. The tangent at a point of the ellipse has the coordinate equation: A vector parametric equation of the tangent is: Proof: Let be a point on an ellipse and be the equation of any line containing . Inserting the line's equation into the ellipse equation and respecting yields: There are then cases: Then line and the ellipse have only point in common, and is a tangent. The tangent direction has perpendicular vector , so the tangent line has equation for some . Because is on the tangent and the ellipse, one obtains . Then line has a second point in common with the ellipse, and is a secant. Using (1) one finds that is a tangent vector at point , which proves the vector equation. If and are two points of the ellipse such that , then the points lie on two conjugate diameters (see below). (If , the ellipse is a circle and "conjugate" means "orthogonal".) Shifted ellipse If the standard ellipse is shifted to have center , its equation is The axes are still parallel to the x- and y-axes. General ellipse In analytic geometry, the ellipse is defined as a quadric: the set of points of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation provided To distinguish the degenerate cases from the non-degenerate case, let ∆ be the determinant Then the ellipse is a non-degenerate real ellipse if and only if C∆ < 0. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse. 
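The classification just stated can be applied mechanically. In the sketch below (Python assumed), ∆ is computed as the determinant of the usual 3 × 3 matrix of conic coefficients (the matrix itself did not survive extraction above, so its layout is our assumption), and the stated sign tests are then applied:

```python
# Classify A x^2 + B xy + C y^2 + D x + E y + F = 0 among the ellipse cases.

def classify_conic(A, B, C, D, E, F):
    # Determinant of [[A, B/2, D/2], [B/2, C, E/2], [D/2, E/2, F]]:
    delta = (A * (C * F - (E / 2) ** 2)
             - (B / 2) * (B / 2 * F - E / 2 * D / 2)
             + (D / 2) * (B / 2 * E / 2 - C * D / 2))
    if B * B - 4 * A * C >= 0:
        return "not an ellipse"
    if delta == 0:
        return "point ellipse"
    return "real ellipse" if C * delta < 0 else "imaginary ellipse"

print(classify_conic(9, 0, 25, 0, 0, -225))  # 9x^2 + 25y^2 = 225: real
print(classify_conic(1, 0, 1, 0, 0, 1))      # x^2 + y^2 + 1 = 0: imaginary
print(classify_conic(1, 0, 1, 0, 0, 0))      # x^2 + y^2 = 0: point ellipse
```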
The general equation's coefficients can be obtained from known semi-major axis , semi-minor axis , center coordinates , and rotation angle (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae: These expressions can be derived from the canonical equation by a Euclidean transformation of the coordinates : Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations: where is the 2-argument arctangent function. Parametric representation Standard parametric representation Using trigonometric functions, a parametric representation of the standard ellipse is: The parameter t (called the eccentric anomaly in astronomy) is not the angle of with the x-axis, but has a geometric meaning due to Philippe de La Hire (see below). Rational representation With the substitution and trigonometric formulae one obtains and the rational parametric equation of an ellipse which covers any point of the ellipse except the left vertex . For this formula represents the right upper quarter of the ellipse moving counter-clockwise with increasing The left vertex is the limit Alternately, if the parameter is considered to be a point on the real projective line , then the corresponding rational parametrization is Then Rational representations of conic sections are commonly used in computer-aided design (see Bézier curve). Tangent slope as parameter A parametric representation, which uses the slope of the tangent at a point of the ellipse can be obtained from the derivative of the standard representation : With help of trigonometric formulae one obtains: Replacing and of the standard representation yields: Here is the slope of the tangent at the corresponding ellipse point, is the upper and the lower half of the ellipse. The vertices, having vertical tangents, are not covered by the representation. The equation of the tangent at point has the form . The still unknown can be determined by inserting the coordinates of the corresponding ellipse point : This description of the tangents of an ellipse is an essential tool for the determination of the orthoptic of an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae. General ellipse Another definition of an ellipse uses affine transformations: Any ellipse is an affine image of the unit circle with equation . Parametric representation An affine transformation of the Euclidean plane has the form , where is a regular matrix (with non-zero determinant) and is an arbitrary vector. If are the column vectors of the matrix , the unit circle , , is mapped onto the ellipse: Here is the center and are the directions of two conjugate diameters, in general not perpendicular. Vertices The four vertices of the ellipse are , for a parameter defined by: (If , then .) This is derived as follows. 
The tangent vector at point is: At a vertex parameter , the tangent is perpendicular to the major/minor axes, so: Expanding and applying the identities gives the equation for Area From Apollonios theorem (see below) one obtains: The area of an ellipse is Semiaxes With the abbreviations the statements of Apollonios's theorem can be written as: Solving this nonlinear system for yields the semiaxes: Implicit representation Solving the parametric representation for by Cramer's rule and using , one obtains the implicit representation Conversely: If the equation with of an ellipse centered at the origin is given, then the two vectors point to two conjugate points and the tools developed above are applicable. Example: For the ellipse with equation the vectors are Rotated standard ellipse For one obtains a parametric representation of the standard ellipse rotated by angle : Ellipse in space The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allows to be vectors in space. Polar forms Polar form relative to center In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate measured from the major axis, the ellipse's equation is where is the eccentricity, not Euler's number. Polar form relative to focus If instead we use polar coordinates with the origin at one focus, with the angular coordinate still measured from the major axis, the ellipse's equation is where the sign in the denominator is negative if the reference direction points towards the center (as illustrated on the right), and positive if that direction points away from the center. The angle is called the true anomaly of the point. The numerator is the semi-latus rectum. Eccentricity and the directrix property Each of the two lines parallel to the minor axis, and at a distance of from it, is called a directrix of the ellipse (see diagram). For an arbitrary point of the ellipse, the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: The proof for the pair follows from the fact that and satisfy the equation The second case is proven analogously. The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola): For any point (focus), any line (directrix) not through , and any real number with the ellipse is the locus of points for which the quotient of the distances to the point and to the line is that is: The extension to , which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be the line at infinity in the projective plane. (The choice yields a parabola, and if , a hyperbola.) Proof Let , and assume is a point on the curve. The directrix has equation . With , the relation produces the equations and The substitution yields This is the equation of an ellipse (), or a parabola (), or a hyperbola (). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). If , introduce new parameters so that , and then the equation above becomes which is the equation of an ellipse with center , the x-axis as major axis, and the major/minor semi axis . Construction of a directrix Because of point of directrix (see diagram) and focus are inverse with respect to the circle inversion at circle (in diagram green). Hence can be constructed as shown in the diagram. 
Directrix is the perpendicular to the main axis at point . General ellipse If the focus is and the directrix , one obtains the equation (The right side of the equation uses the Hesse normal form of a line to calculate the distance .) Focus-to-focus reflection property An ellipse possesses the following property: The normal at a point bisects the angle between the lines . Proof Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram). Let be the point on the line with distance to the focus , where is the semi-major axis of the ellipse. Let line be the external angle bisector of the lines and Take any other point on By the triangle inequality and the angle bisector theorem, therefore must be outside the ellipse. As this is true for every choice of only intersects the ellipse at the single point so must be the tangent line. Application The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (see whispering gallery). Additionally, because of the focus-to-focus reflection property of ellipses, if the rays are allowed to continue propagating, reflected rays will eventually align closely with the major axis. Conjugate diameters Definition of conjugate diameters A circle has the following property: The midpoints of parallel chords lie on a diameter. An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.) Definition Two diameters of an ellipse are conjugate if the midpoints of chords parallel to lie on From the diagram one finds: Two diameters of an ellipse are conjugate whenever the tangents at and are parallel to . Conjugate diameters in an ellipse generalize orthogonal diameters in a circle. In the parametric equation for a general ellipse given above, any pair of points belong to a diameter, and the pair belong to its conjugate diameter. For the common parametric representation of the ellipse with equation one gets: The points (signs: (+,+) or (−,−) ) (signs: (−,+) or (+,−) ) are conjugate and In case of a circle the last equation collapses to Theorem of Apollonios on conjugate diameters For an ellipse with semi-axes the following is true: Let and be halves of two conjugate diameters (see diagram) then . The triangle with sides (see diagram) has the constant area , which can be expressed by , too. is the altitude of point and the angle between the half diameters. Hence the area of the ellipse (see section metric properties) can be written as . The parallelogram of tangents adjacent to the given conjugate diameters has the Proof Let the ellipse be in the canonical form with parametric equation The two points are on conjugate diameters (see previous section). From trigonometric formulae one obtains and The area of the triangle generated by is and from the diagram it can be seen that the area of the parallelogram is 8 times that of . Hence Orthogonal tangents For the ellipse the intersection points of orthogonal tangents lie on the circle . This circle is called orthoptic or director circle of the ellipse (not to be confused with the circular directrix defined above). Drawing ellipses Ellipses appear in descriptive geometry as images (parallel or central projection) of circles. There exist various tools to draw an ellipse. 
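One such tool, and the simplest today, is numerical: sample the standard parametric representation and plot it. A minimal sketch, assuming Python with matplotlib:

```python
import math
import matplotlib.pyplot as plt

a, b = 5.0, 3.0   # arbitrary sample semi-axes

# Sample (a cos t, b sin t) densely over one full period.
ts = [2 * math.pi * k / 400 for k in range(401)]
xs = [a * math.cos(t) for t in ts]
ys = [b * math.sin(t) for t in ts]

plt.plot(xs, ys)
plt.gca().set_aspect("equal")  # equal scales, or the shape is distorted
plt.title("Ellipse from (a cos t, b sin t)")
plt.show()
```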
Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principle was known to the 5th century mathematician Proclus, and the tool now known as an elliptical trammel was invented by Leonardo da Vinci. If there is no ellipsograph available, one can draw an ellipse using an approximation by the four osculating circles at the vertices. For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help of Rytz's construction the axes and semi-axes can be retrieved. de La Hire's point construction The following construction of single points of an ellipse is due to de La Hire. It is based on the standard parametric representation of an ellipse: Draw the two circles centered at the center of the ellipse with radii and the axes of the ellipse. Draw a line through the center, which intersects the two circles at point and , respectively. Draw a line through that is parallel to the minor axis and a line through that is parallel to the major axis. These lines meet at an ellipse point (see diagram). Repeat steps (2) and (3) with different lines through the center. Pins-and-string method The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is . The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called the gardener's ellipse. The Byzantine architect Anthemius of Tralles () described how this method could be used to construct an elliptical reflector, and it was elaborated in a now-lost 9th-century treatise by Al-Ḥasan ibn Mūsā. A similar method for drawing confocal ellipses with a closed string is due to the Irish bishop Charles Graves. Paper strip methods The two following methods rely on the parametric representation (see , above): This representation can be modeled technically by two simple methods. In both cases center, the axes and semi axes have to be known. Method 1 The first method starts with a strip of paper of length . The point, where the semi axes meet is marked by . If the strip slides with both ends on the axes of the desired ellipse, then point traces the ellipse. For the proof one shows that point has the parametric representation , where parameter is the angle of the slope of the paper strip. A technical realization of the motion of the paper strip can be achieved by a Tusi couple (see animation). The device is able to draw any ellipse with a fixed sum , which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method. A variation of the paper strip method 1 uses the observation that the midpoint of the paper strip is moving on the circle with center (of the ellipse) and radius . Hence, the paperstrip can be cut at point into halves, connected again by a joint at and the sliding end fixed at the center (see diagram). 
After this operation the movement of the unchanged half of the paperstrip is unchanged. This variation requires only one sliding shoe. Method 2 The second method starts with a strip of paper of length . One marks the point, which divides the strip into two substrips of length and . The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved. For the proof, one recognizes that the tracing point can be described parametrically by , where parameter is the angle of slope of the paper strip. This method is the base for several ellipsographs (see section below). Similar to the variation of the paper strip method 1 a variation of the paper strip method 2 can be established (see diagram) by cutting the part between the axes into halves. Most ellipsograph drafting instruments are based on the second paperstrip method. Approximation by osculating circles From Metric properties below, one obtains: The radius of curvature at the vertices is: The radius of curvature at the co-vertices is: The diagram shows an easy way to find the centers of curvature at vertex and co-vertex , respectively: mark the auxiliary point and draw the line segment draw the line through , which is perpendicular to the line the intersection points of this line with the axes are the centers of the osculating circles. (proof: simple calculation.) The centers for the remaining vertices are found by symmetry. With help of a French curve one draws a curve, which has smooth contact to the osculating circles. Steiner generation The following method to construct single points of an ellipse relies on the Steiner generation of a conic section: Given two pencils of lines at two points (all lines containing and , respectively) and a projective but not perspective mapping of onto , then the intersection points of corresponding lines form a non-degenerate projective conic section. For the generation of points of the ellipse one uses the pencils at the vertices . Let be an upper co-vertex of the ellipse and . is the center of the rectangle . The side of the rectangle is divided into n equal spaced line segments and this division is projected parallel with the diagonal as direction onto the line segment and assign the division as shown in the diagram. The parallel projection together with the reverse of the orientation is part of the projective mapping between the pencils at and needed. The intersection points of any two related lines and are points of the uniquely defined ellipse. With help of the points the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse. Steiner generation can also be defined for hyperbolas and parabolas. It is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle. As hypotrochoid The ellipse is a special case of the hypotrochoid when , as shown in the adjacent image. The special case of a moving circle with radius inside a circle with radius is called a Tusi couple. Inscribed angles and three-point form Circles A circle with equation is uniquely determined by three points not on a line. A simple way to determine the parameters uses the inscribed angle theorem for circles: For four points (see diagram) the following statement is true: The four points are on a circle if and only if the angles at and are equal. 
Usually one measures inscribed angles by a degree or radian θ, but here the following measurement is more convenient: In order to measure the angle between two lines with equations one uses the quotient: Inscribed angle theorem for circles For four points no three of them on a line, we have the following (see diagram): The four points are on a circle, if and only if the angles at and are equal. In terms of the angle measurement above, this means: At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord. Three-point form of circle equation As a consequence, one obtains an equation for the circle determined by three non-collinear points : For example, for the three-point equation is: , which can be rearranged to Using vectors, dot products and determinants this formula can be arranged more clearly, letting : The center of the circle satisfies: The radius is the distance between any of the three points and the center. Ellipses This section considers the family of ellipses defined by equations with a fixed eccentricity . It is convenient to use the parameter: and to write the ellipse equation as: where q is fixed and vary over the real numbers. (Such ellipses have their axes parallel to the coordinate axes: if , the major axis is parallel to the x-axis; if , it is parallel to the y-axis.) Like a circle, such an ellipse is determined by three points not on a line. For this family of ellipses, one introduces the following q-analog angle measure, which is not a function of the usual angle measure θ: In order to measure an angle between two lines with equations one uses the quotient: Inscribed angle theorem for ellipses Given four points , no three of them on a line (see diagram). The four points are on an ellipse with equation if and only if the angles at and are equal in the sense of the measurement above—that is, if At first the measure is available only for chords which are not parallel to the y-axis, but the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of the proof in which the points are assumed to lie on an ellipse, one can assume that the center of the ellipse is the origin. Three-point form of ellipse equation As a consequence, one obtains an equation for the ellipse determined by three non-collinear points : For example, for and one obtains the three-point form and after conversion Analogously to the circle case, the equation can be written more clearly using vectors: where is the modified dot product Pole-polar relation Any ellipse can be described in a suitable coordinate system by an equation x²/a² + y²/b² = 1. The equation of the tangent at a point P₁ = (x₁, y₁) of the ellipse is x₁x/a² + y₁y/b² = 1. If one allows point P₁ = (x₁, y₁) to be an arbitrary point different from the origin, then point (x₁, y₁) is mapped onto the line x₁x/a² + y₁y/b² = 1, not through the center of the ellipse. This relation between points and lines is a bijection. The inverse function maps the line y = mx + d (with d ≠ 0) onto the point (−a²m/d, b²/d) and the line x = c onto the point (a²/c, 0). Such a relation between points and lines generated by a conic is called a pole-polar relation or polarity. The pole is the point; the polar is the line.
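As a small numerical illustration of this correspondence (a sketch added here for concreteness, not part of the original article; the semi-axes and sample poles are arbitrary choices), the following Python snippet computes the polar x·x₀/a² + y·y₀/b² = 1 of a pole (x₀, y₀) and checks two of the properties discussed next: for a pole on the ellipse the polar is the supporting tangent, and for an exterior pole the polar meets the ellipse exactly at the two points whose tangents pass through the pole.

```python
import numpy as np

# Ellipse x^2/a^2 + y^2/b^2 = 1; illustrative semi-axes (not from the text).
a, b = 5.0, 3.0

def polar_line(x0, y0):
    # Coefficients (u, v) of the polar u*x + v*y = 1 of the pole (x0, y0).
    return x0 / a**2, y0 / b**2

# 1) Pole on the ellipse: its polar is the tangent at that point.
t = 0.7                                      # arbitrary parameter
x0, y0 = a * np.cos(t), b * np.sin(t)
u, v = polar_line(x0, y0)
s = np.linspace(0.0, 2.0 * np.pi, 100_001)
vals = u * a * np.cos(s) + v * b * np.sin(s)
assert abs(u * x0 + v * y0 - 1.0) < 1e-12    # the pole lies on its own polar
assert vals.max() <= 1.0 + 1e-9              # the ellipse never crosses the polar

# 2) Pole outside the ellipse: the polar meets it at the two tangency points.
x0, y0 = 8.0, 2.0                            # arbitrary exterior point
u, v = polar_line(x0, y0)
R = np.hypot(u * a, v * b)                   # u*a*cos(s) + v*b*sin(s) = R*cos(s - phi)
phi = np.arctan2(v * b, u * a)
for s_hit in (phi + np.arccos(1.0 / R), phi - np.arccos(1.0 / R)):
    x1, y1 = a * np.cos(s_hit), b * np.sin(s_hit)
    # Tangent at (x1, y1) is (x1/a^2) x + (y1/b^2) y = 1; it passes through the pole:
    assert abs(x1 * x0 / a**2 + y1 * y0 / b**2 - 1.0) < 1e-9
```

The reciprocity used in the last assertion—P lies on the polar of Q exactly when Q lies on the polar of P—is what makes the relation a polarity.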
By calculation one can confirm the following properties of the pole-polar relation of the ellipse: For a point (pole) on the ellipse, the polar is the tangent at this point (see diagram). For a pole outside the ellipse, the intersection points of its polar with the ellipse are the tangency points of the two tangents passing through it (see diagram). For a point within the ellipse, the polar has no point in common with the ellipse (see diagram). The intersection point of two polars is the pole of the line through their poles. The foci and , respectively, and the directrices and , respectively, belong to pairs of pole and polar. Because they are even polar pairs with respect to the circle , the directrices can be constructed by compass and straightedge (see Inversive geometry). Pole-polar relations exist for hyperbolas and parabolas as well. Metric properties All metric properties given below refer to an ellipse with equation x²/a² + y²/b² = 1, except for the section on the area enclosed by a tilted ellipse, where the generalized form of Eq.() will be given. Area The area enclosed by an ellipse is A = πab, where a and b are the lengths of the semi-major and semi-minor axes, respectively. The area formula is intuitive: start with a circle of radius b (so its area is πb²) and stretch it by a factor a/b to make an ellipse. This scales the area by the same factor: πb²(a/b) = πab. However, using the same approach for the circumference would be fallacious – compare the integrals and . It is also easy to rigorously prove the area formula using integration as follows. Equation () can be rewritten as y = b√(1 − x²/a²). For −a ≤ x ≤ a, this curve is the top half of the ellipse. So twice the integral of y over the interval [−a, a] will be the area of the ellipse: A = 2∫ from −a to a of b√(1 − x²/a²) dx = (2b/a)∫ from −a to a of √(a² − x²) dx. The second integral is the area of a half disc of radius a, that is, πa²/2, so A = (2b/a)(πa²/2) = πab. An ellipse defined implicitly by Ax² + Bxy + Cy² = 1 has area 2π/√(4AC − B²). The area can also be expressed in terms of eccentricity and the length of the semi-major axis as πa²√(1 − e²) (obtained by solving for flattening, then computing the semi-minor axis b = a√(1 − e²)). So far we have dealt with erect ellipses, whose major and minor axes are parallel to the x and y axes. However, some applications require tilted ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its emittance. In this case a simple formula still applies, namely where , are intercepts and , are maximum values. It follows directly from Apollonius's theorem. Circumference The circumference of an ellipse is C = 4a E(e), where again a is the length of the semi-major axis, e is the eccentricity, and the function E is the complete elliptic integral of the second kind, which is in general not an elementary function. The circumference of the ellipse may be evaluated in terms of E(e) using Gauss's arithmetic-geometric mean; this is a quadratically converging iterative method (see here for details). The exact infinite series is: C = 2πa[1 − (1/2)²e² − ((1·3)/(2·4))²(e⁴/3) − ((1·3·5)/(2·4·6))²(e⁶/5) − ⋯] = 2πa Σ (n from 0 to ∞) of −((2n−1)!!/(2n)!!)² e²ⁿ/(2n−1), where n!! is the double factorial (extended to negative odd integers in the usual way, giving (−1)!! = 1 and (−3)!! = −1). This series converges, but by expanding in terms of h = (a − b)²/(a + b)², James Ivory, Bessel and Kummer derived a series that converges much more rapidly. It is most concisely written in terms of the binomial coefficient (1/2 choose n): C = π(a + b) Σ (n from 0 to ∞) of (1/2 choose n)² hⁿ = π(a + b)(1 + h/4 + h²/64 + h³/256 + 25h⁴/16384 + ⋯). The coefficients are slightly smaller, but h is also numerically much smaller than e except at e = 0 and e = 1. For eccentricities less than 0.5 the error reaches the limits of double-precision floating-point after only a few terms.
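To make the convergence behaviour concrete, here is a short Python sketch (an illustration with arbitrarily chosen semi-axes, added to this text rather than taken from the source) that evaluates the circumference both by numerically integrating C = 4a E(e) and by summing the rapidly converging series in h:

```python
import numpy as np

def circumference_quadrature(a, b, n=100_000):
    # C = 4a E(e): integrate sqrt(1 - e^2 sin^2 t) over [0, pi/2]
    # with a plain trapezoid rule.
    e2 = 1.0 - (b / a) ** 2
    t = np.linspace(0.0, np.pi / 2, n + 1)
    f = np.sqrt(1.0 - e2 * np.sin(t) ** 2)
    dt = (np.pi / 2) / n
    return 4.0 * a * dt * (f[:-1] + f[1:]).sum() / 2.0

def circumference_h_series(a, b, terms=8):
    # Gauss-Kummer series: C = pi (a+b) sum_n binom(1/2, n)^2 h^n,
    # with h = ((a-b)/(a+b))^2; expanded: 1 + h/4 + h^2/64 + h^3/256 + ...
    h = ((a - b) / (a + b)) ** 2
    total, coeff = 0.0, 1.0            # coeff holds binom(1/2, n)
    for n in range(terms):
        total += coeff ** 2 * h ** n
        coeff *= (0.5 - n) / (n + 1)   # binomial-coefficient recurrence
    return np.pi * (a + b) * total

a, b = 5.0, 3.0                        # illustrative semi-axes
print(circumference_quadrature(a, b))  # approx. 25.527 for this a, b
print(circumference_h_series(a, b))    # agrees closely after only 8 terms
```

For these semi-axes, h = 1/16, so each successive term of the series contributes roughly a factor of h less, which is the rapid convergence described above.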
Srinivasa Ramanujan gave two close approximations for the circumference in §16 of "Modular Equations and Approximations to "; they are and where takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order and respectively. This is because the second formula's infinite series expansion matches Ivory's formula up to the term. Arc length More generally, the arc length of a portion of the circumference, as a function of the angle subtended (or of any two points on the upper half of the ellipse), is given by an incomplete elliptic integral. The upper half of an ellipse is parameterized by Then the arc length from to is: This is equivalent to where is the incomplete elliptic integral of the second kind with parameter Some lower and upper bounds on the circumference of the canonical ellipse with are Here the upper bound is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and the minor axes. Given an ellipse whose axes are drawn, we can construct the endpoints of a particular elliptic arc whose length is one eighth of the ellipse's circumference using only straightedge and compass in a finite number of steps; for some specific shapes of ellipses, such as when the axes have a length ratio of , it is additionally possible to construct the endpoints of a particular arc whose length is one twelfth of the circumference. (The vertices and co-vertices are already endpoints of arcs whose length is one half or one quarter of the ellipse's circumference.) However, the general theory of straightedge-and-compass elliptic division appears to be unknown, unlike in the case of the circle and the lemniscate. The division in special cases has been investigated by Legendre in his classical treatise. Curvature The curvature is given by: and the radius of curvature, ρ = 1/κ, at point : The radius of curvature of an ellipse, as a function of angle from the center, is: where e is the eccentricity. Radius of curvature at the two vertices and the centers of curvature: Radius of curvature at the two co-vertices and the centers of curvature: The locus of all the centers of curvature is called an evolute. In the case of an ellipse, the evolute is an astroid. In triangle geometry Ellipses appear in triangle geometry as Steiner ellipse: ellipse through the vertices of the triangle with center at the centroid, inellipses: ellipses which touch the sides of a triangle. Special cases are the Steiner inellipse and the Mandart inellipse. As plane sections of quadrics Ellipses appear as plane sections of the following quadrics: Ellipsoid Elliptic cone Elliptic cylinder Hyperboloid of one sheet Hyperboloid of two sheets Applications Physics Elliptical reflectors and acoustics If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci. Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. 
(In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated about its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners. Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; at an exhibit on sound at the Museum of Science and Industry in Chicago; in front of the University of Illinois at Urbana–Champaign Foellinger Auditorium; and also at a side chamber of the Palace of Charles V, in the Alhambra. Planetary orbits In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation. More generally, in the gravitational two-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus. Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.) For elliptical orbits, useful relations involving the eccentricity e are: e = (rₐ − rₚ)/(rₐ + rₚ), or equivalently rₐ = a(1 + e) and rₚ = a(1 − e), where rₐ is the radius at apoapsis, i.e., the farthest distance of the orbit to the barycenter of the system, which is a focus of the ellipse; rₚ is the radius at periapsis, the closest distance; and a is the length of the semi-major axis. Also, in terms of rₐ and rₚ, the semi-major axis a is their arithmetic mean, the semi-minor axis b is their geometric mean, and the semi-latus rectum ℓ is their harmonic mean. In other words, a = (rₐ + rₚ)/2, b = √(rₐrₚ), and ℓ = 2rₐrₚ/(rₐ + rₚ). Harmonic oscillators The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor.
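The following Python sketch (added for illustration; the frequency and initial conditions are arbitrary) writes down the closed-form solution of the two-dimensional isotropic oscillator and verifies that the whole trajectory lies on one fixed conic—an ellipse—centred on the attractor at the origin:

```python
import numpy as np

# 2-D isotropic harmonic oscillator x'' = -w^2 x, y'' = -w^2 y,
# solved in closed form for arbitrary (illustrative) initial conditions.
omega = 1.3
x0, y0, vx0, vy0 = 2.0, -1.0, 0.5, 1.7

t = np.linspace(0.0, 20.0, 2001)
x = x0 * np.cos(omega * t) + (vx0 / omega) * np.sin(omega * t)
y = y0 * np.cos(omega * t) + (vy0 / omega) * np.sin(omega * t)

# Fit a central conic A x^2 + B x y + C y^2 = 1 through three sample points...
M = np.array([[x[i] ** 2, x[i] * y[i], y[i] ** 2] for i in (0, 300, 700)])
A, B, C = np.linalg.solve(M, np.ones(3))

# ...and check that every point of the trajectory lies on that same conic,
# and that the conic is an ellipse (negative discriminant), centred at the origin.
assert np.allclose(A * x ** 2 + B * x * y + C * y ** 2, 1.0)
assert B ** 2 - 4 * A * C < 0
```

Because the quadratic form has no linear terms, the fitted ellipse is necessarily centred on the attractor, in contrast to a Keplerian orbit, whose focus sits at the centre of attraction.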
Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion. Phase visualization In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the Lissajous figure display is an ellipse, rather than a straight line, the two signals are out of phase. Elliptical gears Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage. Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears. An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base. Optics In a material that is optically anisotropic (birefringent), the refractive index depends on the direction of the light. The dependency can be described by an index ellipsoid. (If the material is optically isotropic, this ellipsoid is a sphere.) In lamp-pumped solid-state lasers, elliptical cylinder-shaped reflectors have been used to direct light from the pump lamp (coaxial with one ellipse focal axis) to the active medium rod (coaxial with the second focal axis). In laser-plasma produced EUV light sources used in microchip lithography, EUV light is generated by plasma positioned in the primary focus of an ellipsoid mirror and is collected in the secondary focus at the input of the lithography machine. Statistics and finance In statistics, a bivariate random vector is jointly elliptically distributed if its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in the financial field because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return. Computer graphics Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967. Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken. In 1970 Danny Cohen presented at the "Computer Graphics 1970" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B.
Smith published similar algorithms for all conic sections and proved them to have good properties. These algorithms need only a few multiplications and additions to calculate each vector. It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation. Drawing with Bézier paths Composite Bézier curves may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bézier curves behave appropriately under such transformations. Optimization theory It is sometimes useful to find the minimum bounding ellipse on a set of points. The ellipsoid method is quite useful for solving this problem. See also Cartesian oval, a generalization of the ellipse Circumconic and inconic Distance of closest approach of ellipses Ellipse fitting Elliptic coordinates, an orthogonal coordinate system based on families of ellipses and hyperbolae Elliptic partial differential equation Elliptical distribution, in statistics Elliptical dome Geodesics on an ellipsoid Great ellipse Kepler's laws of planetary motion n-ellipse, a generalization of the ellipse for n foci Oval Perimeter of an ellipse Spheroid, the ellipsoid obtained by rotating an ellipse about its major or minor axis Stadium (geometry), a two-dimensional geometric shape constructed of a rectangle with semicircles at a pair of opposite sides Steiner circumellipse, the unique ellipse circumscribing a triangle and sharing its centroid Superellipse, a generalization of an ellipse that can look more rectangular or more "pointy" True, eccentric, and mean anomaly Notes References External links Apollonius' Derivation of the Ellipse at Convergence The Shape and History of The Ellipse in Washington, D.C. by Clark Kimberling Ellipse circumference calculator Collection of animated ellipse demonstrations Trammel according Frans van Schooten by Matt Parker Conic sections Plane curves Elementary shapes Algebraic curves
Ellipse
[ "Mathematics" ]
9,392
[ "Planes (geometry)", "Euclidean plane geometry", "Plane curves" ]
9,279
https://en.wikipedia.org/wiki/Elephant
Elephants are the largest living land animals. Three living species are currently recognised: the African bush elephant (Loxodonta africana), the African forest elephant (L. cyclotis), and the Asian elephant (Elephas maximus). They are the only surviving members of the family Elephantidae and the order Proboscidea; extinct relatives include mammoths and mastodons. Distinctive features of elephants include a long proboscis called a trunk, tusks, large ear flaps, pillar-like legs, and tough but sensitive grey skin. The trunk is prehensile, bringing food and water to the mouth and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears and convex or level backs. Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia and are found in different habitats, including savannahs, forests, deserts, and marshes. They are herbivorous, and they stay near water when it is accessible. They are considered to be keystone species, due to their impact on their environments. Elephants have a fission–fusion society, in which multiple family groups come together to socialise. Females (cows) tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The leader of a female group, usually the oldest cow, is known as the matriarch. Males (bulls) leave their family groups when they reach puberty and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate. They enter a state of increased testosterone and aggression known as musth, which helps them gain dominance over other males as well as reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell, and sound; elephants use infrasound and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness, and possibly show concern for dying and dead individuals of their kind. African bush elephants and Asian elephants are listed as endangered and African forest elephants as critically endangered on the IUCN Red Lists. One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or employed for entertainment in circuses. Elephants have an iconic status in human culture and have been widely featured in art, folklore, religion, literature, and popular culture. Etymology The word elephant is derived from the Latin word (genitive ) , which is the Latinised form of the ancient Greek () (genitive ()), probably from a non-Indo-European language, likely Phoenician. It is attested in Mycenaean Greek as (genitive ) in Linear B syllabic script. As in Mycenaean Greek, Homer used the Greek word to mean ivory, but after the time of Herodotus, it also referred to the animal. 
The word elephant appears in Middle English as () and was borrowed from Old French (12th century). Taxonomy and evolution Elephants belong to the family Elephantidae, the sole remaining family within the order Proboscidea. Their closest extant relatives are the sirenians (dugongs and manatees) and the hyraxes, with which they share the clade Paenungulata within the superorder Afrotheria. Elephants and sirenians are further grouped in the clade Tethytheria. Three species of living elephants are recognised: the African bush elephant (Loxodonta africana), forest elephant (Loxodonta cyclotis), and Asian elephant (Elephas maximus). African elephants were traditionally considered a single species, Loxodonta africana, but molecular studies have affirmed their status as separate species. Mammoths (Mammuthus) are nested within living elephants as they are more closely related to Asian elephants than to African elephants. Another extinct genus of elephant, Palaeoloxodon, is also recognised, which appears to have close affinities with African elephants and to have hybridised with African forest elephants. Evolution Over 180 extinct members of order Proboscidea have been described. The earliest proboscideans, the African Eritherium and Phosphatherium, are known from the late Paleocene. The Eocene included Numidotherium, Moeritherium, and Barytherium from Africa. These animals were relatively small, and some, like Moeritherium and Barytherium, were probably amphibious. Later on, genera such as Phiomia and Palaeomastodon arose; the latter likely inhabited more forested areas. Proboscidean diversification changed little during the Oligocene. One notable species of this epoch was Eritreum melakeghebrekristosi of the Horn of Africa, which may have been an ancestor to several later species. A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia, during the Early Miocene, around 18–19 million years ago, allowing proboscideans to disperse from their African homeland across Eurasia and later, around 16–15 million years ago, into North America across the Bering Land Bridge. Proboscidean groups prominent during the Miocene include the deinotheres, along with the more advanced elephantimorphs, including mammutids (mastodons), gomphotheres, amebelodontids (which include the "shovel tuskers" like Platybelodon), choerolophodontids and stegodontids. Around 10 million years ago, the earliest members of the family Elephantidae emerged in Africa, having originated from gomphotheres. Elephantids are distinguished from earlier proboscideans by a major shift in the molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher-crowned (hypsodont) and more efficient in consuming grass. The Late Miocene saw major climatic changes, which resulted in the decline and extinction of many proboscidean groups. The earliest members of the modern genera of Elephantidae appeared during the latest Miocene–early Pliocene around 5 million years ago. The elephantid genera Elephas (which includes the living Asian elephant) and Mammuthus (mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago. Over the course of the Early Pleistocene, all non-elephantid proboscidean genera outside of the Americas became extinct, with the exception of Stegodon, with gomphotheres dispersing into South America as part of the Great American interchange, and mammoths migrating into North America around 1.5 million years ago. 
At the end of the Early Pleistocene, around 800,000 years ago, the elephantid genus Palaeoloxodon dispersed outside of Africa, becoming widely distributed in Eurasia. Proboscideans were represented by around 23 species at the beginning of the Late Pleistocene. Proboscideans underwent a dramatic decline during the Late Pleistocene as part of the Late Pleistocene extinctions of most large mammals globally, with all remaining non-elephantid proboscideans (including Stegodon, mastodons, and the American gomphotheres Cuvieronius and Notiomastodon) and Palaeoloxodon becoming extinct. Mammoths survived only in relict populations on islands around the Bering Strait into the Holocene; their last population, on Wrangel Island, persisted until around 4,000 years ago. Over the course of their evolution, proboscideans grew in size. With that came longer limbs and wider feet with a more digitigrade stance, along with a larger head and shorter neck. The trunk evolved and grew longer to provide reach. The number of premolars, incisors, and canines decreased, and the cheek teeth (molars and premolars) became longer and more specialised. The incisors developed into tusks of different shapes and sizes. Several species of proboscideans became isolated on islands and experienced insular dwarfism, some dramatically reducing in body size, such as the dwarf elephant species Palaeoloxodon falconeri. Living species Anatomy Elephants are the largest living terrestrial animals. Some species of the extinct elephant genus Palaeoloxodon considerably exceeded modern elephants in size, making them among the largest land mammals ever. The skeleton is made up of 326–351 bones. The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs. The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull a honeycomb-like appearance. By contrast, the lower jaw is dense. The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head. The skull is built to withstand great stress, particularly when fighting or using the tusks. The brain is surrounded by arches in the skull, which serve as protection. Because of the size of the head, the neck is relatively short to provide better support. Elephants are homeotherms and maintain their average body temperature at ~36 °C (97 °F), with a minimum of 35.2 °C (95.4 °F) during the cool season, and a maximum of 38.0 °C (100.4 °F) during the hot dry season. Ears and eyes Elephant ear flaps, or pinnae, are thick in the middle with a thinner tip and supported by a thicker base. They contain numerous blood vessels called capillaries. Warm blood flows into the capillaries, releasing excess heat into the environment. This effect is increased by flapping the ears back and forth. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates and have the largest ear flaps. The ossicles are adapted for hearing low frequencies, being most sensitive at 1 kHz. Lacking a lacrimal apparatus (tear duct), the eye relies on the harderian gland in the orbit to keep it moist. A durable nictitating membrane shields the globe. The animal's field of vision is compromised by the location and limited mobility of the eyes. 
Elephants are dichromats and they can see well in dim light but not in bright light. Trunk The elongated and prehensile trunk, or proboscis, consists of both the nose and upper lip, which fuse in early fetal development. This versatile appendage contains up to 150,000 separate muscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided into dorsal, ventral, and lateral muscles, while the latter are divided into transverse and radiating muscles. The muscles of the trunk connect to a bony opening in the skull. The nasal septum consists of small elastic muscles between the nostrils, which are divided by cartilage at the base. A unique proboscis nerve – a combination of the maxillary and facial nerves – lines each side of the appendage. As a muscular hydrostat, the trunk moves through finely controlled muscle contractions, working both with and against each other. Using three basic movements: bending, twisting, and longitudinal stretching or retracting, the trunk has near unlimited flexibility. Objects grasped by the end of the trunk can be moved to the mouth by curving the appendage inward. The trunk can also bend at different points by creating stiffened "pseudo-joints". The tip can be moved in a way similar to the human hand. The skin is more elastic on the dorsal side of the elephant trunk than underneath; allowing the animal to stretch and coil while maintaining a strong grasp. The flexibility of the trunk is aided by the numerous wrinkles in the skin. The African elephants have two finger-like extensions at the tip of the trunk that allow them to pluck small food. The Asian elephant has only one and relies more on wrapping around a food item. Asian elephant trunks have better motor coordination. The trunk's extreme flexibility allows it to forage and wrestle other elephants with it. It is powerful enough to lift up to , but it also has the precision to crack a peanut shell without breaking the seed. With its trunk, an elephant can reach items up to high and dig for water in the mud or sand below. It also uses it to clean itself. Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right. Elephant trunks are capable of powerful siphoning. They can expand their nostrils by 30%, leading to a 64% greater nasal volume, and can breathe in almost 30 times faster than a human sneeze, at over . They suck up water, which is squirted into the mouth or over the body. The trunk of an adult Asian elephant is capable of retaining of water. They will also sprinkle dust or grass on themselves. When underwater, the elephant uses its trunk as a snorkel. The trunk also acts as a sense organ. Its sense of smell may be four times greater than a bloodhound's nose. The infraorbital nerve, which makes the trunk sensitive to touch, is thicker than both the optic and auditory nerves. Whiskers grow all along the trunk, and are particularly packed at the tip, where they contribute to its tactile sensitivity. Unlike those of many mammals, such as cats and rats, elephant whiskers do not move independently ("whisk") to sense the environment; the trunk itself must move to bring the whiskers into contact with nearby objects. Whiskers grow in rows along each side on the ventral surface of the trunk, which is thought to be essential in helping elephants balance objects there, whereas they are more evenly arranged on the dorsal surface. 
The number and patterns of whiskers are distinctly different between species. Damaging the trunk would be detrimental to an elephant's survival, although in rare cases, individuals have survived with shortened ones. One trunkless elephant has been observed to graze using its lips with its hind legs in the air and balancing on its front knees. Floppy trunk syndrome is a condition of trunk paralysis recorded in African bush elephants and involves the degeneration of the peripheral nerves and muscles. The disorder has been linked to lead poisoning. Teeth Elephants usually have 26 teeth: the incisors, known as the tusks; 12 deciduous premolars; and 12 molars. Unlike in most mammals, teeth are not replaced by new ones emerging from the jaws vertically. Instead, new teeth start at the back of the mouth and push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. This is followed by four more tooth replacements at the ages of four to six, 9–15, 18–28, and finally in their early 40s. The final (usually sixth) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are more diamond-shaped in African elephants. Tusks The tusks of an elephant are modified second incisors in the upper jaw. They replace deciduous milk teeth at 6–12 months of age and continue to grow throughout the animal's life. As the tusk develops, it is topped with smooth, cone-shaped enamel that eventually wears away. The dentine is known as ivory and has a cross-section of intersecting lines, known as "engine turning", which create diamond-shaped patterns. Being living tissue, tusks are fairly soft and about as dense as the mineral calcite. The tusk protrudes from a socket in the skull, and most of it is external. At least one-third of the tusk contains the pulp, and some have nerves that stretch even further. Thus, it would be difficult to remove it without harming the animal. When removed, ivory will dry up and crack if not kept cool and wet. Tusks function in digging, debarking, marking, moving objects, and fighting. Elephants are usually right- or left-tusked, similar to humans, who are typically right- or left-handed. The dominant, or "master" tusk, is typically more worn down, as it is shorter and blunter. For African elephants, tusks are present in both males and females and are around the same length in both sexes, reaching up to , but those of males tend to be more massive. In the Asian species, only the males have large tusks. Female Asian elephants have very small tusks, or none at all. Tuskless males exist and are particularly common among Sri Lankan elephants. Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was long and weighed . Hunting for elephant ivory in Africa and Asia has resulted in an effective selection pressure for shorter tusks and tusklessness. Skin An elephant's skin is generally very tough, at thick on the back and parts of the head. The skin around the mouth, anus, and inside of the ear is considerably thinner. Elephants are typically grey, but African elephants look brown or reddish after rolling in coloured mud. Asian elephants have some patches of depigmentation, particularly on the head. Calves have brownish or reddish hair, with the head and back being particularly hairy. As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the tip of the tail and parts of the head and genitals. 
Normally, the skin of an Asian elephant is covered with more hair than its African counterpart. Their hair is thought to help them lose heat in their hot environments. Although tough, an elephant's skin is very sensitive and requires mud baths to maintain moisture and protection from burning and insect bites. After bathing, the elephant will usually use its trunk to blow dust onto its body, which dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their low surface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs to expose their soles to the air. Elephants only have sweat glands between the toes, but the skin allows water to disperse and evaporate, cooling the animal. In addition, cracks in the skin may reduce dehydration and allow for increased thermal regulation in the long term. Legs, locomotion, and posture To support the animal's weight, an elephant's limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs have cancellous bones in place of medullary cavities. This strengthens the bones while still allowing haematopoiesis (blood cell creation). Both the front and hind limbs can support an elephant's weight, although 60% is borne by the front. The position of the limbs and leg bones allows an elephant to stand still for extended periods of time without tiring. Elephants are incapable of turning their manus as the ulna and radius of the front legs are secured in pronation. Elephants may also lack the pronator quadratus and pronator teres muscles or have very small ones. The circular feet of an elephant have soft tissues, or "cushion pads" beneath the manus or pes, which allow them to bear the animal's great mass. They appear to have a sesamoid, an extra "toe" similar in placement to a giant panda's extra "thumb", that also helps in weight distribution. As many as five toenails can be found on both the front and hind feet. Elephants can move both forward and backward, but are incapable of trotting, jumping, or galloping. They can move on land only by walking or ambling: a faster gait similar to running. In walking, the legs act as pendulums, with the hips and shoulders moving up and down while the foot is planted on the ground. The fast gait does not meet all the criteria of running, since there is no point where all the feet are off the ground, although the elephant uses its legs much like other running animals, and can move faster by quickening its stride. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of . At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals. The cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving. Elephants are capable swimmers: they can swim for up to six hours while completely waterborne, moving at and traversing up to continuously. Internal systems The brain of an elephant weighs compared to for a human brain. It is the largest of all terrestrial mammals. While the elephant brain is larger overall, it is proportionally smaller than the human brain. At birth, an elephant's brain already weighs 30–40% of its adult weight. 
The cerebrum and cerebellum are well developed, and the temporal lobes are so large that they bulge out laterally. Their temporal lobes are proportionally larger than those of other animals, including humans. The throat of an elephant appears to contain a pouch where it can store water for later use. The larynx of the elephant is the largest known among mammals. The vocal folds are anchored close to the epiglottis base. When comparing an elephant's vocal folds to those of a human, an elephant's are proportionally longer, thicker, with a greater cross-sectional area. In addition, they are located further up the vocal tract with an acute slope. The heart of an elephant weighs . Its apex has two pointed ends, an unusual trait among mammals. In addition, the ventricles of the heart split towards the top, a trait also found in sirenians. When upright, the elephant's heart beats around 28 beats per minute and actually speeds up to 35 beats when it lies down. The blood vessels are thick and wide and can hold up under high blood pressure. The lungs are attached to the diaphragm, and breathing relies less on the expanding of the ribcage. Connective tissue exists in place of the pleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air. Elephants breathe mostly with the trunk but also with the mouth. They have a hindgut fermentation system, and their large and small intestines together reach in length. Less than half of an elephant's food intake gets digested, despite the process lasting a day. An elephant's bladder can store up to 18 litres of urine and its kidneys can produce more than 50 litres of urine per day. Sex characteristics A male elephant's testes, like other Afrotheria, are internally located near the kidneys. The penis can be as long as with a wide base. It curves to an 'S' when fully erect and has an orifice shaped like a Y. The female's clitoris may be . The vulva is found lower than in other herbivores, between the hind legs instead of under the tail. Determining pregnancy status can be difficult due to the animal's large belly. The female's mammary glands occupy the space between the front legs, which puts the suckling calf within reach of the female's trunk. Elephants have a unique organ, the temporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when in musth. Females have also been observed with these secretions. Behaviour and ecology Elephants are herbivorous and will eat leaves, twigs, fruit, bark, grass, and roots. African elephants mostly browse, while Asian elephants mainly graze. They can eat as much as of food and drink of water in a day. Elephants tend to stay near water sources. They have morning, afternoon, and nighttime feeding sessions. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down. Elephants average 3–4 hours of sleep per day. Both males and family groups typically move no more than a day, but distances as far as have been recorded in the Etosha region of Namibia. Elephants go on seasonal migrations in response to changes in environmental conditions. In northern Botswana, they travel to the Chobe River after the local waterholes dry up in late August. Because of their large size, elephants have a huge impact on their environments and are considered keystone species. 
Their habit of uprooting trees and undergrowth can transform savannah into grasslands; smaller herbivores can access trees mowed down by elephants. When they dig for water during droughts, they create waterholes that can be used by other animals. When they use waterholes, they end up making them bigger. At Mount Elgon, elephants dig through caves and pave the way for ungulates, hyraxes, bats, birds, and insects. Elephants are important seed dispersers; African forest elephants consume and deposit many seeds over great distances, with either no effect or a positive effect on germination. In Asian forests, large seeds require giant herbivores like elephants and rhinoceros for transport and dispersal. This ecological niche cannot be filled by the smaller Malayan tapir. Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such as dung beetles and monkeys. Elephants can have a negative impact on ecosystems. At Murchison Falls National Park in Uganda, elephant numbers have threatened several species of small birds that depend on woodlands. Their weight causes the soil to compress, leading to runoff and erosion. Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. Some aggressive interactions between elephants and rhinoceros have been recorded. The size of adult elephants makes them nearly invulnerable to predators. Calves may be preyed on by lions, spotted hyenas, and wild dogs in Africa and tigers in Asia. The lions of Savuti, Botswana, have adapted to hunting elephants, targeting calves, juveniles or even sub-adults. There are rare reports of adult Asian elephants falling prey to tigers. Elephants tend to have high numbers of parasites, particularly nematodes, compared to many other mammals. This may be due to elephants being less vulnerable to predation; in other mammal species, individuals weakened by significant parasite loads are easily killed off by predators, removing them from the population. Social organisation Elephants are generally gregarious animals. African bush elephants in particular have a complex, stratified social structure. Female elephants spend their entire lives in tight-knit matrilineal family groups. They are led by the matriarch, who is often the eldest female. She remains leader of the group until death or if she no longer has the energy for the role; a study on zoo elephants found that the death of the matriarch led to greater stress in the surviving elephants. When her tenure is over, the matriarch's eldest daughter takes her place instead of her sister (if present). One study found that younger matriarchs take potential threats less seriously. Large family groups may split if they cannot be supported by local resources. At Amboseli National Park, Kenya, female groups may consist of around ten members, including four adults and their dependent offspring. Here, a cow's life involves interaction with those outside her group. Two separate families may associate and bond with each other, forming what are known as bond groups. During the dry season, elephant families may aggregate into clans. These may number around nine groups, in which clans do not form strong bonds but defend their dry-season ranges against other clans. The Amboseli elephant population is further divided into the "central" and "peripheral" subpopulations. Female Asian elephants tend to have more fluid social associations. 
In Sri Lanka, there appear to be stable family units or "herds" and larger, looser "groups". They have been observed to have "nursing units" and "juvenile-care units". In southern India, elephant populations may contain family groups, bond groups, and possibly clans. Family groups tend to be small, with only one or two adult females and their offspring. A group containing more than two cows and their offspring is known as a "joint family". Malay elephant populations have even smaller family units and do not reach levels above a bond group. Groups of African forest elephants typically consist of one cow with one to three offspring. These groups appear to interact with each other, especially at forest clearings. Adult males live separate lives. As he matures, a bull associates more with outside males or even other families. At Amboseli, young males may be away from their families 80% of the time by 14–15 years of age. When males permanently leave, they either live alone or with other males. The former is typical of bulls in dense forests. A dominance hierarchy exists among males, whether they are social or solitary. Dominance depends on age, size, and sexual condition. Male elephants can be quite sociable when not competing for mates and form vast and fluid social networks. Older bulls act as the leaders of these groups. The presence of older males appears to subdue the aggression and "deviant" behaviour of younger ones. The largest all-male groups can reach close to 150 individuals. Adult males and females come together to breed. Bulls will accompany family groups if a cow is in oestrous. Sexual behaviour Musth Adult males enter a state of increased testosterone known as musth. In a population in southern India, males first enter musth at 15 years old, but it is not very intense until they are older than 25. At Amboseli, no bulls under 24 were found to be in musth, while half of those aged 25–35 and all those over 35 were. In some areas, there may be seasonal influences on the timing of musths. The main characteristic of a bull's musth is a fluid discharged from the temporal gland that runs down the side of his face. Behaviours associated with musth include walking with a high and swinging head, nonsynchronous ear flapping, picking at the ground with the tusks, marking, rumbling, and urinating in the sheath. The length of this varies between males of different ages and conditions, lasting from days to months. Males become extremely aggressive during musth. Size is the determining factor in agonistic encounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases, and minor sparring. Rarely do they full-on fight. There is at least one documented case of infanticide among Asian elephants at Dong Yai Wildlife Sanctuary, with the researchers describing it as most likely normal behaviour among aggressive musth elephants. Mating Elephants are polygynous breeders, and most copulations occur during rainfall. An oestrous cow uses pheromones in her urine and vaginal secretions to signal her readiness to mate. 
A bull will follow a potential mate and assess her condition with the flehmen response, which requires him to collect a chemical sample with his trunk and taste it with the vomeronasal organ at the roof of the mouth. The oestrous cycle of a cow lasts 14–16 weeks, with the follicular phase lasting 4–6 weeks and the luteal phase lasting 8–10 weeks. While most mammals have one surge of luteinizing hormone during the follicular phase, elephants have two. The first (or anovulatory) surge, appears to change the female's scent, signaling to males that she is in heat, but ovulation does not occur until the second (or ovulatory) surge. Cows over 45–50 years of age are less fertile. Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males. Most mate-guarding is done by musth males, and females seek them out, particularly older ones. Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths. For young females, the approach of an older bull can be intimidating, so her relatives stay nearby for comfort. During copulation, the male rests his trunk on the female. The penis is mobile enough to move without the pelvis. Before mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involve pelvic thrusting or an ejaculatory pause. Homosexual behaviour has been observed in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting, and "championships" may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity, where they engage in mutual masturbation with their trunks. Birth and development Gestation in elephants typically lasts between one and a half and two years and the female will not give birth again for at least four years. The relatively long pregnancy is supported by several corpus luteums and gives the foetus more time to develop, particularly the brain and trunk. Births tend to take place during the wet season. Typically, only a single young is born, but twins sometimes occur. Calves are born roughly tall and with a weight of around . They are precocial and quickly stand and walk to follow their mother and family herd. A newborn calf will attract the attention of all the herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother limits access to her young. Alloparenting – where a calf is cared for by someone other than its mother – takes place in some family groups. Allomothers are typically aged two to twelve years. For the first few days, the newborn is unsteady on its feet and needs its mother's help. It relies on touch, smell, and hearing, as its eyesight is less developed. With little coordination in its trunk, it can only flop it around which may cause it to trip. When it reaches its second week, the calf can walk with more balance and has more control over its trunk. After its first month, the trunk can grab and hold objects but still lacks sucking abilities, and the calf must bend down to drink. It continues to stay near its mother as it is still reliant on her. For its first three months, a calf relies entirely on its mother's milk, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, there is progress in lip and leg movements. 
By nine months, mouth, trunk, and foot coordination are mastered. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year. After a year, a calf is fully capable of grooming, drinking, and feeding itself. It still needs its mother's milk and protection until it is at least two years old. Suckling after two years may improve growth, health, and fertility. Play behaviour in calves differs between the sexes; females run or chase each other while males play-fight. The former are sexually mature by the age of nine years while the latter become mature around 14–15 years. Adulthood starts at about 18 years of age in both sexes. Elephants have long lifespans, reaching 60–70 years of age. Lin Wang, a captive male Asian elephant, lived for 86 years. Communication Elephants communicate in various ways. Individuals greet one another by touching each other on the mouth, temporal glands, and genitals. This allows them to pick up chemical cues. Older elephants use trunk-slaps, kicks, and shoves to control younger ones. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side or with their tails if the calf is behind them. A calf will press against its mother's front legs to signal it wants to rest and will touch her breast or leg when it wants to suckle. Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as tossing around dust and vegetation. They are usually bluffing when performing these actions. Excited elephants also raise their heads and spread their ears but additionally may raise their trunks. Submissive elephants will lower their heads and trunks, as well as flatten their ears against their necks, while those that are ready to fight will bend their ears in a V shape. Elephants produce several vocalisations—some of which pass though the trunk—for both short and long range communication. This includes trumpeting, bellowing, roaring, growling, barking, snorting, and rumbling. Elephants can produce infrasonic rumbles. For Asian elephants, these calls have a frequency of 14–24 Hz, with sound pressure levels of 85–90 dB and last 10–15 seconds. For African elephants, calls range from 15 to 35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, possibly over . Elephants are known to communicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. An individual foot stomping or mock charging can create seismic signals that can be heard at travel distances of up to . Seismic waveforms produced by rumbles travel . Intelligence and cognition Elephants are among the most intelligent animals. They exhibit mirror self-recognition, an indication of self-awareness and cognition that has also been demonstrated in some apes and dolphins. One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later. Elephants are among the species known to use tools. An Asian elephant has been observed fine-tuning branches for use as flyswatters. 
Tool modification by these animals is not as advanced as that of chimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly have cognitive maps which give them long-lasting memories of their environment on a wide scale. Individuals may be able to remember where their family members are located. Scientists debate the extent to which elephants feel emotion. They are attracted to the bones of their own kind, regardless of whether they are related. As with chimpanzees and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing "concern"; however, the Oxford Companion to Animal Behaviour (1987) said that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion". Conservation Status African bush elephants were listed as Endangered by the International Union for Conservation of Nature (IUCN) in 2021, and African forest elephants were listed as Critically Endangered in the same year. In 1979, Africa had an estimated population of at least 1.3 million elephants, possibly as high as 3.0 million. A decade later, the population was estimated to be 609,000, with 277,000 in Central Africa, 110,000 in Eastern Africa, 204,000 in Southern Africa, and 19,000 in Western Africa. The population of rainforest elephants was lower than anticipated, at around 214,000 individuals. Between 1977 and 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers hastened, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa were varied, with unconfirmed losses in Zambia, Mozambique and Angola, while populations grew in Botswana and Zimbabwe and were stable in South Africa. The IUCN estimated the total population in Africa at around 415,000 individuals for both species combined as of 2016. African elephants receive at least some legal protection in every country where they are found. Successful conservation efforts in certain areas have led to high population densities, while failures have led to declines as high as 70% or more over the course of ten years. As of 2008, local numbers were controlled by contraception or translocation. Large-scale cullings stopped in the late 1980s and early 1990s. In 1989, the African elephant was listed under Appendix I by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia, and Zimbabwe in 1997 and South Africa in 2000. In some countries, sport hunting of the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies. In 2020, the IUCN listed the Asian elephant as endangered due to the population declining by half over "the last three generations". Asian elephants once ranged from Western Asia to East Asia and south to Sumatra and Java. The species is now extinct in these areas, and the current range of Asian elephants is highly fragmented. The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. Around 60% of the population is in India. 
Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in the Western Ghats may have stabilised. Threats The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence. Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use was comparable to that of gold. The ivory trade contributed to the fall of the African elephant population in the late 20th century. This prompted international bans on ivory imports, starting with the United States in June 1989, and followed by bans in other North American countries, western European countries, and Japan. Around the same time, Kenya destroyed all its ivory stocks. Ivory was banned internationally by CITES in 1990. Following the bans, unemployment rose in India and China, where the ivory industry was important economically. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not as badly affected. Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local populations were healthy, but only if their supplies were from culled individuals or those that died of natural causes. The ban allowed the elephant to recover in parts of Africa. In February 2012, 650 elephants in Bouba Njida National Park, Cameroon, were slaughtered by Chadian raiders. This has been called "one of the worst concentrated killings" since the ivory ban. Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such as Periyar National Park in India. China was the biggest market for poached ivory but announced it would phase out the legal domestic manufacture and sale of ivory products in May 2015, and in September 2015, China and the United States said "they would enact a nearly complete ban on the import and export of ivory" because of the threat the trade poses to the species' survival. Other threats to elephants include habitat destruction and fragmentation. The Asian elephant lives in areas with some of the highest human populations and may be confined to small islands of forest among human-dominated landscapes. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation. One proposed solution is the protection of wildlife corridors, which give populations greater interconnectivity and space. Chili pepper products as well as guarding with defense tools have been found to be effective in preventing crop-raiding by elephants. Less effective tactics include beehive and electric fences. Human relations Working animal Elephants have been working animals since at least the Indus Valley civilization over 4,000 years ago and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia in 2000. These animals are typically captured from the wild when they are 10–20 years old, the age range in which they are both more trainable and able to work for more years. They were traditionally captured with traps and lassos, but since 1950, tranquillisers have been used. Individuals of the Asian species have often been trained as working animals. Asian elephants are used to carry and pull both objects and people in and out of areas as well as lead people in religious celebrations. 
They are valued over mechanised tools as they can perform the same tasks but in more difficult terrain, with strength, memory, and delicacy. Elephants can learn over 30 commands. Musth bulls are difficult and dangerous to work with and so are chained up until their condition passes. In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected under The Prevention of Cruelty to Animals Act of 1960. In both Myanmar and Thailand, deforestation and other economic factors have resulted in sizable populations of unemployed elephants, resulting in health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live. The practice of working elephants has also been attempted in Africa. The taming of African elephants in the Belgian Congo began by decree of Leopold II of Belgium during the 19th century and continues to the present with the Api Elephant Domestication Centre. Warfare Historically, elephants were considered formidable instruments of war. They were described in Sanskrit texts as far back as 1500 BC. From South Asia, the use of elephants in warfare spread west to Persia and east to Southeast Asia. The Persians used them during the Achaemenid Empire (between the 6th and 4th centuries BC), while Southeast Asian states first used war elephants possibly as early as the 5th century BC and continued to the 20th century. War elephants were also employed in the Mediterranean and North Africa throughout the classical period, beginning with the reign of Ptolemy II in Egypt. The Carthaginian general Hannibal famously took African elephants across the Alps during his war with the Romans and reached the Po Valley in 218 BC with all of them alive, but they died of disease and combat a year later. An elephant's head and sides were equipped with armour, the trunk may have had a sword tied to it, and tusks were sometimes covered with sharpened iron or brass. Trained elephants would attack both humans and horses with their tusks. They might have grasped an enemy soldier with the trunk and tossed him to their mahout, or pinned the soldier to the ground and speared him. Some shortcomings of war elephants included their great visibility, which made them easy to target, and limited maneuverability compared to horses. Alexander the Great achieved victory over armies with war elephants by having his soldiers injure the trunks and legs of the animals, which caused them to panic and become uncontrollable. Zoos and circuses Elephants have traditionally been a major part of zoos and circuses around the world. In circuses, they are trained to perform tricks. The most famous circus elephant was probably Jumbo (1861 – 15 September 1885), who was a major attraction in the Barnum & Bailey Circus. These animals do not reproduce well in captivity due to the difficulty of handling musth bulls and limited understanding of female oestrous cycles. Asian elephants were always more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, imports of the species almost stopped by the end of the 1980s. Subsequently, the US received many captive African elephants from Zimbabwe, which had an overabundance of the animals. Keeping elephants in zoos has met with some controversy. Proponents of zoos argue that they allow easy access to the animals and provide funds and knowledge for preserving their natural habitats, as well as safekeeping for the species. 
Opponents claim that animals in zoos are under physical and mental stress. Elephants have been recorded displaying stereotypical behaviours in the form of wobbling the body or head and pacing the same route both forwards and backwards. This has been observed in 54% of individuals in UK zoos. Elephants in European zoos appear to have shorter lifespans than their wild counterparts, at only 17 years, although other studies suggest that zoo elephants live just as long. The use of elephants in circuses has also been controversial; the Humane Society of the United States has accused circuses of mistreating and distressing their animals. In testimony to a US federal court in 2009, Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind their ears, under their chins, and on their legs with metal-tipped prods, called bull hooks or ankuses. Feld stated that these practices are necessary to protect circus workers and acknowledged that an elephant trainer was rebuked for using an electric prod on an elephant. Despite this, he denied that any of these practices hurt the animals. Some trainers have tried to train elephants without the use of physical punishment. Ralph Helfer is known to have relied on positive reinforcement when training his animals. The Barnum & Bailey Circus retired its touring elephants in May 2016. Attacks Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans. In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s. Because of the timing, these attacks have been interpreted as vindictive. In parts of India, male elephants have entered villages at night, destroying homes and killing people. From 2000 to 2004, 300 people died in Jharkhand, and in Assam, 239 people were reportedly killed between 2001 and 2006. Throughout the country, 1,500 people were killed by elephants between 2019 and 2022, which led to 300 elephants being killed in retaliation. Local people have reported that some elephants were drunk during the attacks, though officials have disputed this. Purportedly drunk elephants attacked an Indian village in December 2002, killing six people, which led to the retaliatory slaughter of about 200 elephants by locals. Cultural significance Elephants have a universal presence in global culture. They have been represented in art since Paleolithic times. Africa, in particular, contains many examples of elephant rock art, especially in the Sahara and southern Africa. In Asia, the animals are depicted as motifs in Hindu and Buddhist shrines and temples. Elephants were often difficult to portray by people with no first-hand experience of them. The ancient Romans, who kept the animals in captivity, depicted elephants more accurately than medieval Europeans, who portrayed them more like fantasy creatures, with horse, bovine, and boar-like traits, and trumpet-like trunks. As Europeans gained more access to captive elephants during the 15th century, depictions of them became more accurate, including one made by Leonardo da Vinci. Elephants have been the subject of religious beliefs. The Mbuti people of central Africa believe that the souls of their dead ancestors reside in elephants. Similar ideas existed among other African societies, who believed that their chiefs would be reincarnated as elephants. During the 10th century AD, the people of Igbo-Ukwu, in modern-day Nigeria, placed elephant tusks underneath their dead leader's feet in the grave. 
The animals' importance is only totemic in Africa but is much more significant in Asia. In Sumatra, elephants have been associated with lightning. Likewise, in Hinduism, they are linked with thunderstorms, as Airavata, the father of all elephants, represents both lightning and rainbows. One of the most important Hindu deities, the elephant-headed Ganesha, is ranked equal with the supreme gods Shiva, Vishnu, and Brahma in some traditions. Ganesha is associated with writers and merchants, and it is believed that he can give people success as well as grant them their desires, but can also take these things away. In Buddhism, Buddha is said to have taken the form of a white elephant when he entered his mother's womb to be reincarnated as a human. In Western popular culture, elephants symbolise the exotic, especially since – as with the giraffe, hippopotamus, and rhinoceros – there are no similar animals familiar to Western audiences. As characters, elephants are most common in children's stories, where they are portrayed positively. They are typically surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to or finding a family, such as "The Elephant's Child" from Rudyard Kipling's Just So Stories, Disney's Dumbo, and Kathryn and Byron Jackson's The Saggy Baggy Elephant. Other elephant heroes given human qualities include Jean de Brunhoff's Babar, David McKee's Elmer, and Dr. Seuss's Horton. Several cultural references emphasise the elephant's size and strangeness. For instance, a "white elephant" is a byword for something that is strange, unwanted, and without value. The expression "elephant in the room" refers to something that is being ignored but ultimately must be addressed. The story of the blind men and an elephant involves blind men touching different parts of an elephant and trying to figure out what it is. See also Animal track Desert elephant Elephants' graveyard List of individual elephants Motty, captive hybrid of an Asian and African elephant National Elephant Day (Thailand) World Elephant Day References Bibliography Further reading Saxe, John Godfrey (1872). "The Blindmen and the Elephant" at Wikisource. The Poems of John Godfrey Saxe. External links International Elephant Foundation
Elephant
[ "Biology" ]
11,363
[ "Phylogenetics", "Paraphyletic groups" ]
9,281
https://en.wikipedia.org/wiki/Evolutionary%20linguistics
Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language. Evolutionary linguists consider linguistics a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language. Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics. A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill gaps in knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals. For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct; or that it depends on a single mutation which has caused a language organ to appear in the human brain. This is hypothesized to result in a crystalline grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing. Others, yet, liken languages to living organisms. Languages are considered analogous to a parasite or to populations of mind-viruses. There is so far little scientific evidence for any of these claims, and some of them have been labelled as pseudoscience. History 1863–1945: social Darwinism Although pre-Darwinian theorists had compared languages to living organisms as a metaphor, the comparison was first taken literally in 1863 by the historical linguist August Schleicher, who was inspired by Charles Darwin's On the Origin of Species. At the time there was not enough evidence to prove that Darwin's theory of natural selection was correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution of species. A review of Schleicher's book Darwinism as Tested by the Science of Language appeared in the first issue of the journal Nature in 1870. Darwin reiterated Schleicher's proposition in his 1871 book The Descent of Man, claiming that languages are comparable to species, and that language change occurs through natural selection as words 'struggle for life'. Darwin believed that languages had evolved from animal mating calls. Darwinists considered the concept of language creation unscientific. August Schleicher and his friend Ernst Haeckel were keen gardeners and regarded the study of cultures as a type of botany, with different species competing for the same living space. Similar ideas were later advocated by politicians who wanted to appeal to working-class voters, not least by the National Socialists, who subsequently included the concept of the struggle for living space in their agenda. Highly influential until the end of World War II, social Darwinism was eventually banished from the human sciences, leading to a strict separation of natural and sociocultural studies. This gave rise to the dominance of structural linguistics in Europe. 
There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in the human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centrepoint of humanistic thinking. From 1959 onwards: genetic determinism In the United States, structuralism was however fended off by the advocates of behavioural psychology, a linguistics framework nicknamed 'American structuralism'. It was eventually replaced by the approach of Noam Chomsky, who published a modification of Louis Hjelmslev's formal structuralist theory, claiming that syntactic structures are innate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power following Spring 1968 at MIT. Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted the post-structuralists in the Science Wars of the late 1990s. The turn of the century saw a new academic funding policy in which interdisciplinary research became favoured, effectively directing research funds to the biological humanities. The decline of structuralism was evident by 2015, with the Sorbonne having lost its former spirit. Chomsky eventually claimed that syntactic structures are caused by a random mutation in the human genome, proposing a similar explanation for other human faculties such as ethics. Steven Pinker, by contrast, argued in 1990 that they are the outcome of evolutionary adaptations. From 1976 onwards: Neo-Darwinism At the same time as the Chomskyan paradigm of biological determinism was defeating humanism, it was losing its own clout within sociobiology. Likewise, it was reported in 2015 that generative grammar was under fire in applied linguistics and in the process of being replaced with usage-based linguistics, a derivative of Richard Dawkins's memetics which treats linguistic units as replicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestseller The Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky's Universal Grammar, grouped under different brands, including a framework called Cognitive Linguistics (with capitalised initials) and 'functional' (adaptational) linguistics (not to be confused with functional linguistics), to confront both Chomsky and the humanists. The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics and linguistic typology, while the generative approach has maintained its position in general linguistics, especially syntax, and in computational linguistics. View of linguistics Evolutionary linguistics is part of a wider framework of Universal Darwinism. In this view, linguistics is seen as an ecological environment for research traditions struggling for the same resources. According to David Hull, these traditions correspond to species in biology. Relationships between research traditions can be symbiotic, competitive or parasitic. An adaptation of Hull's theory in linguistics is proposed by William Croft. 
He argues that the Darwinian method is more advantageous than linguistic models based on physics, structuralist sociology, or hermeneutics. Approaches Evolutionary linguistics is often divided into functionalism and formalism, concepts which are not to be confused with functionalism and formalism in their humanistic senses. Functional evolutionary linguistics considers languages as adaptations to the human mind. The formalist view regards them as crystallised or non-adaptational. Functionalism (adaptationism) The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated. It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning which is a 'metaphorical' version of image-based reasoning. Language is not considered a separate area of cognition, but as coinciding with general cognitive capacities, such as perception, attention, motor skills, and spatial and visual processing. It is argued to function according to the same principles as these. It is thought that the brain links action schemes to form–meaning pairs, which are called constructions. Cognitive linguistic approaches to syntax are called cognitive grammar and construction grammar. Also deriving from memetics and other cultural replicator theories, these can study the natural or social selection and adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language as a population of linguistic units. The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been given. What correspond to replicators or mind-viruses in memetics are called linguemes in Croft's theory of Utterance Selection (TUS), and likewise linguemes or constructions in construction grammar and usage-based linguistics; and metaphors, frames or schemas in cognitive and construction grammar. The reference to memetics has been largely replaced with that of a complex adaptive system. In current linguistics, this term covers a wide range of evolutionary notions while maintaining the Neo-Darwinian concepts of replication and replicator population. Functional evolutionary linguistics is not to be confused with functional humanistic linguistics. Formalism (structuralism) Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th-century advances in crystallography, Schleicher argued that different types of languages are like plants, animals and crystals. The idea of linguistic structures as frozen drops was revived in tagmemics, an approach to linguistics with the goal of uncovering divine symmetries underlying all languages, as if caused by the Creation. In modern biolinguistics, the X-bar tree is argued to be like natural systems such as ferromagnetic droplets and botanic forms. Generative grammar considers syntactic structures to be similar to snowflakes. It is hypothesised that such patterns are caused by a mutation in humans. The formal–structural evolutionary aspect of linguistics is not to be confused with structural linguistics. Evidence There was some hope of a breakthrough with the discovery of the FOXP2 gene. There is little support, however, for the idea that FOXP2 is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech. 
The idea that people have a language instinct is disputed. Memetics is sometimes discredited as pseudoscience, and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience. All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and that brain structures are shaped by genes. Criticism Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics. Ferdinand de Saussure commented critically on 19th-century evolutionary linguistics. Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics a positive development. Esa Itkonen nonetheless deems the revival of Darwinism a hopeless enterprise. Itkonen also points out that the principles of natural selection are not applicable, because language innovation and acceptance have the same source, namely the speech community. In biological evolution, mutation and selection have different sources. This makes it possible for people to change their languages, but not their genotype. See also Biolinguistics Evolutionary psychology of language FOXP2 Origin of language Historical linguistics Phylogenetic tree Universal Darwinism References Further reading External links Agent-Based Models of Language Evolution ARTI Artificial Intelligence Laboratory, Vrije Universiteit Brussel Cognitive Neuroscience Laboratory Computerized comparative linguistics Fluid Construction Grammar Language Evolution and Computation Bibliography Language Evolution and Computation Research Unit, University of Edinburgh
Evolutionary linguistics
[ "Biology" ]
2,236
[ "Behavioural sciences", "Behavior", "Sociobiology" ]
9,284
https://en.wikipedia.org/wiki/Equation
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign (=). The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation. Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables. The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length. Description An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides. The most common type of equation is a polynomial equation (also commonly called an algebraic equation), in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms. For example, the equation Ax² + Bx + C − y = 0 has left-hand side Ax² + Bx + C − y, which has four terms, and right-hand side 0, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables). An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount must be removed from the other pan to keep the scale in balance. More generally, an equation remains balanced if the same operation is performed on each side. Properties Two equations or two systems of equations are equivalent if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to: Adding or subtracting the same quantity on both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero. Multiplying or dividing both sides of an equation by a non-zero quantity. Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum. For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity. If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. 
For example, the equation x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (which means applying the function f(s) = s² to both sides of the equation) changes the equation to x² = 1, which not only has the previous solution but also introduces the extraneous solution x = −1. Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation. The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination. Examples Analogous illustration An equation is analogous to a weighing scale, balance, or seesaw. Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation). In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same. Parameters and unknowns Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters. An example of an equation involving x and y as unknowns and the parameter R is x² + y² = R². When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle. Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax² + bx + c = 0. The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions. A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system x + y = 0, 2x + 3y = 1 has the unique solution x = −1, y = 1. Identities An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable. In algebra, an example of an identity is the difference of two squares: x² − y² = (x + y)(x − y), which is true for all x and y. Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are: sin²(θ) + cos²(θ) = 1 and sin(2θ) = 2 sin(θ) cos(θ), which are both true for all values of θ. 
For example, to solve for the value of θ that satisfies the equation 3 sin(θ) cos(θ) = 1, where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give (3/2) sin(2θ) = 1, yielding the following solution for θ: θ = (1/2) arcsin(2/3) ≈ 20.9°. Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number. Algebra Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations, where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if solutions exist, to count their number. Polynomial equations In general, an algebraic equation or polynomial equation is an equation of the form P = 0, or P = Q, where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.). For example, x² + x − 1 = 0 is a univariate algebraic (polynomial) equation with integer coefficients, and y⁴ + xy/2 = x³/3 − xy² + y² − 1/7 is a multivariate polynomial equation over the rational numbers. Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., they can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates. A large amount of research has been devoted to computing efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). Systems of linear equations A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example, 3x + 2y − z = 1, 2x − 2y + 4z = −2, −x + y/2 − z = 0 is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by x = 1, y = −2, z = −2, since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually. In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. 
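To make the three-equation example above concrete, the following short script solves it numerically. This is an illustrative sketch, not part of the original article; it assumes Python with the NumPy library available.

import numpy as np

# Coefficient matrix and right-hand side of the system
# 3x + 2y - z = 1, 2x - 2y + 4z = -2, -x + y/2 - z = 0.
A = np.array([[3.0, 2.0, -1.0],
              [2.0, -2.0, 4.0],
              [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

# det(A) is non-zero (-3), so the system has exactly one solution.
solution = np.linalg.solve(A, b)
print(solution)  # [ 1. -2. -2.], i.e. x = 1, y = -2, z = -2

Under the hood, np.linalg.solve uses a factorisation of the matrix rather than computing an inverse, which is the standard numerically stable approach for such systems.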
A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Geometry Analytic geometry In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form ax + by + cz + d = 0, where a, b, c and d are real numbers and x, y, z are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values a, b, c are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is, as the solution set of a single linear equation with values in R² or as the solution set of two linear equations with values in R³. A conic section is the intersection of a cone with equation x² + y² = z² and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic. The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians. Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra. Cartesian equations In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics. One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines). The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. Parametric equations A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example, x = cos(t), y = sin(t) are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve. 
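The relationship between the parametric and the Cartesian descriptions of the same curve can be checked numerically. The snippet below is an illustrative sketch (again assuming Python with NumPy, not taken from the article): it samples the parameter t of the unit circle and confirms that each sampled point satisfies the corresponding implicit equation.

import numpy as np

# Sample the parameter t of the unit circle x = cos(t), y = sin(t).
t = np.linspace(0.0, 2.0 * np.pi, 100)
x, y = np.cos(t), np.sin(t)

# Every sampled point also satisfies the Cartesian (implicit)
# equation x^2 + y^2 = 1 of the same curve.
assert np.allclose(x**2 + y**2, 1.0)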
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.). Number theory Diophantine equations A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is ax + by = c, where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns. Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. Algebraic and transcendental numbers An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental. Algebraic geometry Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations. Differential equations A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. 
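As a small illustration of what "solving" a differential equation means, the sketch below finds such a derivative-free expression symbolically. It assumes Python with the SymPy library, and the equation y′ = y is a standard textbook example chosen here for illustration, not one drawn from the article.

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve the ordinary differential equation y'(x) = y(x); the result
# expresses y without derivatives, up to an arbitrary constant C1.
solution = sp.dsolve(sp.Eq(y(x).diff(x), y(x)), y(x))
print(solution)  # Eq(y(x), C1*exp(x))

The arbitrary constant C1 reflects the fact that a first-order equation has a one-parameter family of solutions; fixing an initial condition such as y(0) = 1 singles out one of them.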
Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics. In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. Ordinary differential equations An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions. Partial differential equations A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations. Types of equations Equations can be classified according to the types of operations and quantities involved. Important types include: An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). 
These are further classified by degree: linear equation for degree one quadratic equation for degree two cubic equation for degree three quartic equation for degree four quintic equation for degree five sextic equation for degree six septic equation for degree seven octic equation for degree eight A Diophantine equation is an equation where the unknowns are required to be integers A transcendental equation is an equation involving a transcendental function of its unknowns A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters, appearing in the equations A functional equation is an equation in which the unknowns are functions rather than simple quantities Equations involving derivatives, integrals and finite differences: A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as f'(x) = f(x)². Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable. A functional differential equation or delay differential equation is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as f'(x) = f(x − 2) A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some positive integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process See also Formula History of algebra Indeterminate equation List of equations List of scientific equations named after people Term (logic) Theory of equations Cancelling out Notes References External links Winplot: General Purpose plotter that can draw and animate 2D and 3D mathematical equations. Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
Equation
[ "Mathematics" ]
4,460
[ "Mathematical objects", "Elementary algebra", "Equations", "Elementary mathematics", "Algebra" ]
9,299
https://en.wikipedia.org/wiki/Erasmus%20Darwin
Erasmus Robert Darwin (12 December 1731 – 18 April 1802) was an English physician. One of the key thinkers of the Midlands Enlightenment, he was also a natural philosopher, physiologist, slave-trade abolitionist, inventor, freemason, and poet. His poems included much natural history, including a statement of evolution and the relatedness of all forms of life. He was a member of the Darwin–Wedgwood family, which includes his grandsons Charles Darwin and Francis Galton. Darwin was a founding member of the Lunar Society of Birmingham, a discussion group of pioneering industrialists and natural philosophers. He turned down an invitation from George III to become Physician to the King. Early life and education Darwin was born in 1731 at Elston Hall, Nottinghamshire, near Newark-on-Trent, England, the youngest of seven children of Robert Darwin of Elston (1682–1754), a lawyer and physician, and his wife Elizabeth Hill (1702–97). The name Erasmus had been used by a number of his family and derives from his ancestor Erasmus Earle, Common Serjeant of England under Oliver Cromwell. His siblings were: Robert Waring Darwin of Elston (17 October 1724 – 4 November 1816) Elizabeth Darwin (15 September 1725 – 8 April 1800) William Alvey Darwin (3 October 1726 – 7 October 1783) Anne Darwin (12 November 1727 – 3 August 1813) Susannah Darwin (10 April 1729 – 29 September 1789) Rev. John Darwin, rector of Elston (28 September 1730 – 24 May 1805) He was educated at Chesterfield Grammar School, then later at St John's College, Cambridge. He obtained his medical education at the University of Edinburgh Medical School. Darwin settled in 1756 as a physician at Nottingham, but met with little success and so moved the following year to Lichfield to try to establish a practice there. A few weeks after his arrival, using a novel course of treatment, he restored the health of a young fisherman whose death seemed inevitable. This ensured his success in the new locale. Darwin was a highly successful physician for more than fifty years in the Midlands. In 1761, he was elected to the Royal Society. George III invited him to be Royal Physician, but Darwin declined. Personal life Darwin married twice and had 14 children, including two illegitimate daughters by an employee and, possibly, at least one further illegitimate daughter. In 1757 he married Mary (Polly) Howard (1740–1770), the daughter of Charles Howard, a Lichfield solicitor. They had four sons and one daughter, two of whom (a son and a daughter) died in infancy: Charles Darwin (1758–1778), uncle of the naturalist Erasmus Darwin Jr (1759–1799) Elizabeth Darwin (1763, survived 4 months) Robert Waring Darwin (1766–1848), father of the naturalist Charles Darwin William Alvey Darwin (1767, survived 19 days) The first Mrs. Darwin died in 1770. A governess, Mary Parker, was hired to look after Robert. By late 1771, employer and employee had become intimately involved, and together they had two illegitimate daughters: Susanna Parker (1772–1856) Mary Parker Jr (1774–1859) Susanna and Mary Jr later established a boarding school for girls. In 1782, Mary Sr (the governess) married Joseph Day (1745–1811), a Birmingham merchant, and moved away. There was also a rumour that Darwin fathered another child, this time with a married woman. A Lucy Swift gave birth in 1771 to a baby, also named Lucy, who was christened a daughter of her mother and William Swift. It has been suggested that the father was really Darwin. 
However, it is more likely that this child was the legitimate daughter of Lamech Swift, at that time owner of the Derby Silk Mill, and his wife Dorothy, who became a friend of the two Parker girls. Lucy Swift, later known as Lucy Hardcastle after her marriage, went on to be known as a botanist and teacher. In 1775, Darwin met Elizabeth Pole, daughter of Charles Colyear, 2nd Earl of Portmore, and wife of Colonel Edward Pole (1718–1780); but as she was married, Darwin could only make his feelings for her known through poetry. When Edward Pole died, Darwin married Elizabeth and moved to her home, Radburn Hall, west of Derby. The hall and village are these days known as Radbourne. In 1782, they moved to Full Street, Derby. They had four sons, one of whom died in infancy, and three daughters: Edward Darwin (1782–1829) Frances Ann Violetta Darwin (1783–1874), married Samuel Tertius Galton, was the mother of Francis Galton Emma Georgina Elizabeth Darwin (1784–1818) Sir Francis Sacheverel Darwin (1786–1859) Revd. John Darwin (1787–1818), rector of All Saints' Church, Elston Henry Darwin (1789–1790), died in infancy Harriet Darwin (1790–1825), married Admiral Thomas James Maling Darwin's personal appearance is described in unflattering detail in his Biographical Memoirs, printed by the Monthly Magazine in 1802. Darwin, the description reads, "was of middle stature, in person gross and corpulent; his features were coarse, and his countenance heavy; if not wholly void of animation, it certainly was by no means expressive. The print of him, from a painting of Mr. Wright, is a good likeness. In his gait and dress he was rather clumsy and slovenly, and frequently walked with his tongue hanging out of his mouth." Freemasonry Darwin had been a Freemason throughout his life, in the Time Immemorial Lodge of Canongate Kilwinning, No. 2, of Scotland. Later on, Sir Francis Darwin, one of his sons, was made a Mason in Tyrian Lodge, No. 253, at Derby, in 1807 or 1808. His son Reginald was made a Mason in Tyrian Lodge in 1804. Charles Darwin's name does not appear on the rolls of the Lodge, but it is very possible that he, like Francis, was a Mason. Death Darwin died suddenly on 18 April 1802, weeks after having moved to Breadsall Priory, just north of Derby. The Monthly Magazine of 1802, in its Biographical Memoirs of the Late Dr. Darwin, reports that "during the last few years, Dr. Darwin was much subject to inflammation in his breast and lungs; he had a very serious attack of this disease in the course of the last Spring, from which, after repeated bleedings, by himself and a surgeon, he with great difficulty recovered." Darwin's death, the Biographical Memoirs continues, "is variously accounted for: it is supposed to have been caused by the cold fit of an inflammatory fever. Dr. Fox, of Derby, considers the disease which occasioned it to have been angina pectoris; but Dr. Garlicke, of the same place, thinks this opinion not sufficiently well founded. Whatever was the disease, it is not improbable, surely, that the fatal event was hastened by the violent fit of passion with which he was seized in the morning." His body is buried in All Saints' Church, Breadsall. Erasmus Darwin is commemorated on one of the Moonstones, a series of monuments in Birmingham. 
Writings Botanical works and the Lichfield Botanical Society Darwin formed 'A Botanical Society, at Lichfield' (almost always incorrectly named the Lichfield Botanical Society), which, despite its name, was composed of only three men: Erasmus Darwin, Sir Brooke Boothby, and Mr John Jackson, proctor of Lichfield Cathedral. Its purpose was to translate the works of the Swedish botanist Carl Linnaeus from Latin into English. This took seven years. The result was two publications: A System of Vegetables between 1783 and 1785, and The Families of Plants in 1787. In these volumes, Darwin coined many of the English names of plants that we use today. Darwin then wrote The Loves of the Plants, a long poem, which was a popular rendering of Linnaeus' works. Darwin also wrote Economy of Vegetation, and together the two were published as The Botanic Garden. Among other writers he influenced were Anna Seward and Maria Jacson. Zoonomia Darwin's most important scientific work, Zoonomia (1794–1796), contains a system of pathology and a chapter on 'Generation'. In the latter, he anticipated some of the views of Jean-Baptiste Lamarck, which foreshadowed the modern theory of evolution. Erasmus Darwin's works were read and commented on by his grandson Charles Darwin the naturalist. Erasmus Darwin based his theories on David Hartley's psychological theory of associationism. The essence of his views is contained in the following passage, which he follows up with the conclusion that one and the same kind of living filament is and has been the cause of all organic life: Would it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which THE GREAT FIRST CAUSE endued with animality, with the power of acquiring new parts, attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end! Erasmus Darwin also anticipated survival of the fittest in Zoonomia, mainly when writing about the "three great objects of desire" for every organism: "lust, hunger, and security." A similar view appears in his account of how a species "should" propagate itself: his idea was that "the strongest and most active animal should propagate the species, which should thence become improved", a notion now described as survival of the fittest. His grandson Charles Darwin posited the different and fuller theory of natural selection. Charles' theory held that natural selection preserves inherited characteristics that are better adapted to the environment; these are not necessarily based on "strength" and "activity", which can ironically lead to the very overpopulation that drives natural selection. Erasmus Darwin was familiar with the earlier proto-evolutionary thinking of James Burnett, Lord Monboddo, and cited him in his 1803 work Temple of Nature. Poem on evolution Erasmus Darwin offered the first glimpse of his theory of evolution, obliquely, in a question at the end of a long footnote to his popular poem The Loves of the Plants (1789), which was republished throughout the 1790s in several editions as The Botanic Garden. 
His poetic concept was to anthropomorphise the stamen (male) and pistil (female) sexual organs as bride and groom. In this stanza on the flower Curcuma (also Flax and Turmeric) the "youths" are infertile, and he devotes the footnote to other examples of neutered organs in flowers and insect castes, finally associating these more broadly with many popular and well-known cases of vestigial organs (male nipples, the third and fourth wings of flies, etc.). Darwin's final long poem, The Temple of Nature, was published posthumously in 1803. The poem was originally titled The Origin of Society. It is considered his best poetic work. It centres on his own conception of evolution, tracing the progression of life from micro-organisms to civilised society, and contains a passage that describes the struggle for existence. His poetry was admired by Wordsworth, while Coleridge was intensely critical, writing, "I absolutely nauseate Darwin's poem". It often made reference to his interests in science; for example, botany and steam engines. Education of women The last two leaves of Darwin's A plan for the conduct of female education in boarding schools (1797) contain a book list, an apology for the work, and an advert for "Miss Parkers School". The school advertised on the last page is the one he set up in Ashbourne, Derbyshire, for his two illegitimate children, Susanna and Mary. Darwin regretted that a good education had not been generally available to women in Britain in his time, and drew on the ideas of Locke, Rousseau, and Genlis in organising his thoughts. Addressing the education of middle-class girls, Darwin argued that amorous romance novels were inappropriate and that they should seek simplicity in dress. He contends that young women should be educated in schools, rather than privately at home, and learn appropriate subjects. These subjects include physiognomy, physical exercise, botany, chemistry, mineralogy, and experimental philosophy. They should familiarise themselves with arts and manufactures through visits to sites like Coalbrookdale and Wedgwood's potteries; they should learn how to handle money, and study modern languages. Darwin's educational philosophy took the view that men and women should have different capabilities, skills, interests, and spheres of action, where the woman's education was designed to support and serve male accomplishment and financial reward, and to relieve him of daily responsibility for children and the chores of life. In the context of the times, this program may be read as a modernising influence in the sense that the woman was at least to learn about the "man's world", although she would not be allowed to participate in it. The text was written seven years after A Vindication of the Rights of Woman by Mary Wollstonecraft, which has the central argument that women should be educated in a rational manner to give them the opportunity to contribute to society. Some women of Darwin's era were receiving more substantial education and participating in the broader world. An example is Susanna Wright, who was raised in Lancashire and became an American colonist associated with the Midlands Enlightenment. It is not known whether Darwin and Wright knew each other, although they definitely knew many people in common. Other women who received substantial education and who participated in the broader world (albeit sometimes anonymously) whom Darwin definitely knew were Maria Jacson and Anna Seward. 
Lunar Society These dates indicate the year in which Darwin became friends with these people, who, in turn, became members of the Lunar Society. The Lunar Society existed from 1765 to 1813. Before 1765: Matthew Boulton, originally a buckle maker in Birmingham John Whitehurst of Derby, maker of clocks and scientific instruments, pioneer of geology After 1765: Josiah Wedgwood, potter, 1765 Dr. William Small, 1765, man of science, formerly Professor of Natural Philosophy at the College of William and Mary, where Thomas Jefferson was an appreciative pupil Richard Lovell Edgeworth, 1766, inventor James Watt, 1767, improver of the steam engine James Keir, 1767, pioneer of the chemical industry Thomas Day, 1768, eccentric and author Dr. William Withering, 1775, physician (the death of Dr. Small had left an opening for a physician in the group) Joseph Priestley, 1780, experimental chemist and discoverer of many substances Samuel Galton, 1782, a Quaker gunmaker with a taste for science, who took Darwin's place after Darwin moved to Derby Darwin also established a lifelong friendship with Benjamin Franklin, who shared Darwin's support for the American and French revolutions. The Lunar Society was instrumental as an intellectual driving force behind England's Industrial Revolution. The members of the Lunar Society, and especially Darwin, opposed the slave trade. He attacked it in The Botanic Garden (1789–1791), comprising The Loves of the Plants (1789) and The Economy of Vegetation (1791), and in Phytologia (1800). Other activities In 1761, Darwin was elected a fellow of the Royal Society. In addition to the Lunar Society, Erasmus Darwin belonged to the influential Derby Philosophical Society, as did his brother-in-law Samuel Fox (see family tree below). He experimented with the use of air and gases to alleviate infections and cancers in patients. A Pneumatic Institution was established at Clifton in 1799 for clinically testing these ideas. He conducted research into the formation of clouds, on which he published in 1788. He also inspired Robert Weldon's Somerset Coal Canal caisson lock. In 1792, Darwin was elected a member of the American Philosophical Society in Philadelphia. Percy Bysshe Shelley specifically mentions Darwin in the first sentence of the 1818 Preface to Frankenstein to support his contention that the creation of life is possible. His wife Mary Shelley wrote in her introduction to the 1831 edition of Frankenstein that she overheard her husband and Lord Byron talk about unspecified "experiments of Dr. Darwin", which led to the idea for the novel. Cosmological speculation Contemporary literature dates the cosmological theories of the Big Bang and Big Crunch to the 19th and 20th centuries. However, Erasmus Darwin had speculated on these sorts of events in The Botanic Garden, A Poem in Two Parts (Part 1, The Economy of Vegetation, 1791). Inventions Darwin was the inventor of several devices, though he did not patent any: he believed this would damage his reputation as a doctor. He encouraged his friends to patent their own modifications of his designs. A horizontal windmill, which he designed for Josiah Wedgwood (who would be Charles Darwin's other grandfather, see family tree below). A carriage that would not tip over (1766). A steering mechanism for his carriage, known today as the Ackermann linkage, which would be adopted for cars 130 years later (1759). 
A speaking machine, which was a mechanical larynx made of wood, silk, and leather and pronounced several sounds so well 'as to deceive all who heard it unseen' (at Clifton in 1799). A canal lift for barges. A minute artificial bird. A copying machine (1778). A variety of weather monitoring machines. Rocket engine In notes dating to 1779, Darwin made a sketch of a simple hydrogen-oxygen rocket engine, with gas tanks connected by plumbing and pumps to an elongated combustion chamber and expansion nozzle, a concept not to be seen again until a century later. Major publications Erasmus Darwin, A Botanical Society at Lichfield. A System of Vegetables, according to their classes, orders... translated from the 13th edition of Linnaeus' Systema Vegetabilium. 2 vols., 1783, Lichfield, J. Jackson, for Leigh and Sotheby, London. Erasmus Darwin, A Botanical Society at Lichfield. The Families of Plants with their natural characters...Translated from the last edition of Linnaeus' Genera Plantarum. 1787, Lichfield, J. Jackson, for J. Johnson, London. Erasmus Darwin, The Botanic Garden, Part I, The Economy of Vegetation. 1791 London, J. Johnson. Part II, The Loves of the Plants. 1789, London, J. Johnson. Erasmus Darwin, Zoonomia; or, The Laws of Organic Life, 1794, Part I. London, J. Johnson. Part I–III. 1796, London, J. Johnson. Erasmus Darwin, A plan for the conduct of female education in boarding schools. 1797 (the last two leaves contain a book list, an apology for the work, and an advert for "Miss Parkers School"). Erasmus Darwin, Phytologia; or, The Philosophy of Agriculture and Gardening. 1800, London, J. Johnson. Erasmus Darwin, The Temple of Nature; or, The Origin of Society. 1803, London, J. Johnson. Family tree Commemoration Erasmus Darwin House, his home in Lichfield, Staffordshire, is a museum dedicated to him and his life's work. A secondary school at Burntwood, near Lichfield, was renamed Erasmus Darwin Academy in 2011. A science building on the Clifton campus of Nottingham Trent University is named after him. In fiction Charles Sheffield, an author noted largely for hard science fiction, wrote a number of stories featuring Darwin in a role similar to that of Sherlock Holmes. These stories were collected in a book, The Amazing Dr. Darwin. The forgetting of Erasmus' designs for a rocket is a major plot point in Stephen Baxter's tale of alternate universes, Manifold: Origin. Phrases from Darwin's poem The Botanic Garden are used as chapter headings in The Pornographer of Vienna by Lewis Crofts. Darwin appears as a character in Sergey Lukyanenko's novel New Watch as a Dark Other, and a prophet living in Regent's Park Estate. See also Evolutionary ideas of the Renaissance and Enlightenment History of evolutionary thought Notes References Sources Biographies and criticism King-Hele, Desmond. 1963. Doctor Darwin. Scribner's, N.Y. King-Hele, Desmond. 1977. Doctor of Revolution: the life and genius of Erasmus Darwin. Faber, London. King-Hele, Desmond. 1999. Erasmus Darwin: a life of unequalled achievement. Giles de la Mare Publishers. King-Hele, Desmond (ed.) 2002. Charles Darwin's 'The Life of Erasmus Darwin'. Cambridge University Press. Krause, Ernst 1879. Erasmus Darwin, with a preliminary notice by Charles Darwin. Murray, London. Pearson, Hesketh. 1930. Doctor Darwin. Dent, London. Porter, Roy, 1989. 'Erasmus Darwin: doctor of evolution?' in History, Humanity and Evolution: Essays for John C. Greene, ed. James R. Moore. Further reading Darwin, Erasmus. (1794–96). Zoonomia. J. 
Johnson (reissued by Cambridge University Press, 2009) External links Erasmus Darwin House, Lichfield Revolutionary Players website "Preface and 'a preliminary notice'" by Charles Darwin in Ernst Krause, Erasmus Darwin (1879) Letter from Erasmus Darwin to Dr. William Withering at Mount Holyoke College Proto-evolutionary biologists People of the Industrial Revolution 18th-century British botanists English entomologists Members of the Lunar Society of Birmingham Fellows of the Royal Society Darwin–Wedgwood family People from Lichfield People from Newark and Sherwood (district) 1731 births 1802 deaths Alumni of St John's College, Cambridge Alumni of the University of Edinburgh Paintings by Joseph Wright of Derby 18th-century English medical doctors English physiologists English naturalists English poets English abolitionists English inventors People from Breadsall People educated at Chesterfield Grammar School
Erasmus Darwin
[ "Biology" ]
4,567
[ "Non-Darwinian evolution", "Biology theories", "Proto-evolutionary biologists" ]
9,309
https://en.wikipedia.org/wiki/Extractor%20%28mathematics%29
An (N, M, D, K, ε)-extractor is a bipartite graph with N nodes on the left and M nodes on the right such that each node on the left has D neighbors (on the right), which has the added property that for any subset A of the left vertices of size at least K, the distribution on right vertices obtained by choosing a random node in A and then following a random edge to get a node x on the right side is ε-close to the uniform distribution in terms of total variation distance. A disperser is a related graph. An equivalent way to view an extractor is as a bivariate function E : [N] × [D] → [M] in the natural way. With this view it turns out that the extractor property is equivalent to: for any source of randomness X that gives n bits with min-entropy log K, the distribution E(X, U_[D]) is ε-close to U_[M], where U_[T] denotes the uniform distribution on [T]. Extractors are interesting when they can be constructed with small K and D relative to N, and with M as close to KD (the total randomness in the input sources) as possible. Extractor functions were originally researched as a way to extract randomness from weakly random sources. See randomness extractor. Using the probabilistic method it is easy to show that extractor graphs with really good parameters exist. The challenge is to find explicit or polynomial time computable examples of such graphs with good parameters. Algorithms that compute extractor (and disperser) graphs have found many applications in computer science. References Ronen Shaltiel, Recent developments in extractors - a survey Graph families Pseudorandomness Theoretical computer science
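To make the definition above concrete, the following is a minimal brute-force sketch in Python. The function names and the toy graph at the end are illustrative assumptions, not a construction from the literature; the sketch simply checks the (K, ε)-extractor property of a small bipartite graph by measuring the total variation distance of each induced distribution from uniform:

```python
from itertools import combinations

def tv_distance(p, q):
    """Total variation distance between two distributions given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def is_extractor(neighbors, num_right, K, eps):
    """Check, by brute force, that every subset of left nodes of size at
    least K induces a distribution eps-close to uniform on the right:
    pick a uniform node in the subset, then follow a uniform edge."""
    uniform = {y: 1.0 / num_right for y in range(num_right)}
    left = list(neighbors)
    for size in range(K, len(left) + 1):
        for subset in combinations(left, size):
            dist = {}
            for v in subset:
                for y in neighbors[v]:
                    # node v chosen with prob 1/size, edge with prob 1/D
                    dist[y] = dist.get(y, 0.0) + 1.0 / (size * len(neighbors[v]))
            if tv_distance(dist, uniform) > eps:
                return False
    return True

# Toy graph: N = 4 left nodes, M = 4 right nodes, degree D = 2.
graph = {0: [0, 1], 1: [2, 3], 2: [0, 2], 3: [1, 3]}
print(is_extractor(graph, num_right=4, K=2, eps=0.25))  # True for this graph
```

Such exhaustive checking is only feasible for tiny graphs, which is why the explicit, polynomial-time constructions mentioned above matter in practice.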
Extractor (mathematics)
[ "Mathematics" ]
307
[ "Theoretical computer science", "Applied mathematics" ]
9,310
https://en.wikipedia.org/wiki/Enterprise%20resource%20planning
Enterprise resource planning (ERP) is the integrated management of main business processes, often in real time and mediated by software and technology. ERP is usually referred to as a category of business management software—typically a suite of integrated applications—that an organization can use to collect, store, manage and interpret data from many business activities. ERP systems can be on-premises or cloud-based. Cloud-based applications have grown in recent years due to the increased efficiencies arising from information being readily available from any location with Internet access. ERP provides an integrated and continuously updated view of the core business processes using common databases maintained by a database management system. ERP systems track business resources—cash, raw materials, production capacity—and the status of business commitments: orders, purchase orders, and payroll. The applications that make up the system share data across various departments (manufacturing, purchasing, sales, accounting, etc.) that provide the data. ERP facilitates information flow between all business functions and manages connections to outside stakeholders. According to Gartner, the global ERP market size is estimated at $35 billion in 2021. Though early ERP systems focused on large enterprises, smaller enterprises increasingly use ERP systems. The ERP system integrates varied organizational systems and facilitates error-free transactions and production, thereby enhancing the organization's efficiency. However, developing an ERP system differs from traditional system development. ERP systems run on a variety of computer hardware and network configurations, typically using a database as an information repository. Origin The Gartner Group first used the acronym ERP in the 1990s to include the capabilities of material requirements planning (MRP), and the later manufacturing resource planning (MRP II), as well as computer-integrated manufacturing. Without replacing these terms, ERP came to represent a larger whole that reflected the evolution of application integration beyond manufacturing. Not all ERP packages are developed from a manufacturing core; ERP vendors variously began assembling their packages with finance-and-accounting, maintenance, and human-resource components. By the mid-1990s ERP systems addressed all core enterprise functions. Governments and non-profit organizations also began to use ERP systems. An "ERP system selection methodology" is a formal process for selecting an ERP system. Existing methodologies include: Kuiper's funnel method, Dobrin's three-dimensional (3D) web-based decision support tool, and the Clarkston Potomac methodology. Expansion ERP systems experienced rapid growth in the 1990s. Because of the year 2000 problem, many companies took the opportunity to replace their old systems with ERP. ERP systems initially focused on automating back office functions that did not directly affect customers and the public. Front office functions, such as customer relationship management (CRM), which dealt directly with customers, e-business systems such as e-commerce and e-government, and supplier relationship management (SRM) became integrated later, when the internet simplified communicating with external parties. "ERP II" was coined in 2000 in an article by Gartner Publications entitled ERP Is Dead—Long Live ERP II. 
It describes web-based software that provides real-time access to ERP systems to employees and partners (such as suppliers and customers). The ERP II role expands traditional ERP resource optimization and transaction processing. Rather than just managing buying, selling, and so on, ERP II leverages information in the resources under its management to help the enterprise collaborate with other enterprises. ERP II is more flexible than the first generation ERP. Rather than confine ERP system capabilities within the organization, it goes beyond the corporate walls to interact with other systems. Enterprise application suite is an alternate name for such systems. ERP II systems are typically used to enable collaborative initiatives such as supply chain management (SCM), customer relationship management (CRM) and business intelligence (BI) among business partner organizations through the use of various electronic business technologies. A large proportion of companies pursue managerial targets within their existing ERP system rather than acquiring a new one. Developers now make more effort to integrate mobile devices with the ERP system. ERP vendors are extending ERP to these devices, along with other business applications, so that businesses do not have to rely on third-party applications. As an example, the e-commerce platform Shopify was able to make ERP tools from Microsoft and Oracle available on its app in October 2021. The technical stakes of modern ERP concern integration: hardware, applications, networking, and supply chains. ERP now covers more functions and roles, including decision making, stakeholder relationships, standardization, transparency, and globalization. Characteristics ERP systems typically include the following characteristics: An integrated system Operates in (or near) real time A common database that supports all the applications A consistent look and feel across modules Installation of the system with elaborate application/data integration by the Information Technology (IT) department, provided the implementation is not done in small steps Deployment options include: on-premises, cloud hosted, or SaaS Functional areas An ERP system covers the following common functional areas. 
In many ERP systems, these are called and grouped together as ERP modules: Financial accounting: general ledger, fixed assets, payables including vouchering, matching and payment, receivables and collections, cash management, financial consolidation Management accounting: budgeting, costing, cost management, activity based costing, billing, invoicing (optional) Human resources: recruiting, training, rostering, payroll, benefits, retirement and pension plans, diversity management, retirement, separation Manufacturing: engineering, bill of materials, work orders, scheduling, capacity, workflow management, quality control, manufacturing process, manufacturing projects, manufacturing flow, product life cycle management Order processing: order to cash, order entry, credit checking, pricing, available to promise, inventory, shipping, sales analysis and reporting, sales commissioning Supply chain management: supply chain planning, supplier scheduling, product configurator, order to cash, purchasing, inventory, claim processing, warehousing (receiving, putaway, picking and packing) Project management: project planning, resource planning, project costing, work breakdown structure, billing, time and expense, performance units, activity management Customer relationship management (CRM): sales and marketing, commissions, service, customer contact, call center support (CRM systems are not always considered part of ERP systems, but rather business support systems, BSS) Supplier relationship management (SRM): suppliers, orders, payments Data services: various "self-service" interfaces for customers, suppliers or employees Management of schools and educational institutes Contract management: creating, monitoring, and managing contracts, reducing administrative burdens and minimising legal risks. These modules often feature contract templates, electronic signature capabilities, automated alerts for contract milestones, and advanced search functionality. GRP – ERP use in government Government resource planning (GRP) is the equivalent of an ERP for the public sector and an integrated office automation system for government bodies. The software structure, modularization, core algorithms and main interfaces do not differ from other ERPs, and ERP software suppliers manage to adapt their systems to government agencies. Both system implementations, in private and public organizations, are adopted to improve productivity and overall business performance, but comparisons (private vs. public) of implementations show that the main factors influencing ERP implementation success in the public sector are cultural. Best practices Most ERP systems incorporate best practices. This means the software reflects the vendor's interpretation of the most effective way to perform each business process. Systems vary in how conveniently the customer can modify these practices. Use of best practices eases compliance with requirements such as International Financial Reporting Standards, Sarbanes-Oxley, or Basel II. They can also help comply with de facto industry standards, such as electronic funds transfer. This is because the procedure can be readily codified within the ERP software and replicated with confidence across multiple businesses that share that business requirement. Connectivity to plant floor information ERP systems connect to real-time data and transaction data in a variety of ways. 
These systems are typically configured by systems integrators, who bring unique knowledge on process, equipment, and vendor solutions. Direct integration – ERP systems have connectivity (communications to plant floor equipment) as part of their product offering. This requires that the vendors offer specific support for the plant floor equipment their customers operate. Database integration – ERP systems connect to plant floor data sources through staging tables in a database (a code sketch of this pattern appears below, after the two-tier ERP discussion). Plant floor systems deposit the necessary information into the database. The ERP system reads the information in the table. The benefit of staging is that ERP vendors do not need to master the complexities of equipment integration. Connectivity becomes the responsibility of the systems integrator. Enterprise appliance transaction modules (EATM) – These devices communicate directly with plant floor equipment and with the ERP system via methods supported by the ERP system. EATM can employ a staging table, web services, or system-specific program interfaces (APIs). An EATM offers the benefit of being an off-the-shelf solution. Custom-integration solutions – Many system integrators offer custom solutions. These systems tend to have the highest level of initial integration cost, and can have higher long-term maintenance and reliability costs. Long-term costs can be minimized through careful system testing and thorough documentation. Custom-integrated solutions typically run on workstation or server-class computers. Implementation ERP's scope usually implies significant changes to staff work processes and practices. Generally, three types of services are available to help implement such changes: consulting, customization, and support. Implementation time depends on business size, number of modules, customization, the scope of process changes, and the readiness of the customer to take ownership for the project. Modular ERP systems can be implemented in stages. The typical project for a large enterprise takes about 14 months and requires around 150 consultants. Small projects can require months; multinational and other large implementations can take years. Customization can substantially increase implementation times. Besides that, information processing influences various business functions; for example, some large corporations like Walmart use a just-in-time inventory system. This reduces inventory storage and increases delivery efficiency, and requires up-to-date data. Before 2014, Walmart used a system called Inforem developed by IBM to manage replenishment. Process preparation Implementing ERP typically requires changes in existing business processes. Poor understanding of needed process changes prior to starting implementation is a main reason for project failure. The difficulties could be related to the system, business process, infrastructure, training, or lack of motivation. It is therefore crucial that organizations thoroughly analyze processes before they deploy ERP software. Analysis can identify opportunities for process modernization. It also enables an assessment of the alignment of current processes with those provided by the ERP system. 
Research indicates that risk of business process mismatch is decreased by: Linking current processes to the organization's strategy Analyzing the effectiveness of each process Understanding existing automated solutions ERP implementation is considerably more difficult (and politically charged) in decentralized organizations, because they often have different processes, business rules, data semantics, authorization hierarchies, and decision centers. This may require migrating some business units before others, delaying implementation to work through the necessary changes for each unit, possibly reducing integration (e.g., linking via master data management) or customizing the system to meet specific needs. A potential disadvantage is that adopting "standard" processes can lead to a loss of competitive advantage. While this has happened, losses in one area are often offset by gains in other areas, increasing overall competitive advantage. Configuration Configuring an ERP system is largely a matter of balancing the way the organization wants the system to work with the way the system is designed to work out of the box. ERP systems typically include many configurable settings that in effect modify system operations. For example, in the ServiceNow platform, business rules can be written requiring the signature of a business owner within 2 weeks of a newly completed risk assessment. The tool can be configured to automatically email notifications to the business owner, and transition the risk assessment to various stages in the process depending on the owner's responses or lack thereof. Two-tier enterprise resource planning Two-tier ERP software and hardware lets companies run the equivalent of two ERP systems at once: one at the corporate level and one at the division or subsidiary level. For example, a manufacturing company could use an ERP system to manage across the organization using independent global or regional distribution, production or sales centers, and service providers to support the main company's customers. Each independent center or subsidiary may have its own business operations cycles, workflows, and business processes. Given the realities of globalization, enterprises continuously evaluate how to optimize their regional, divisional, and product or manufacturing strategies to support strategic goals and reduce time-to-market while increasing profitability and delivering value. With two-tier ERP, the regional distribution, production, or sales centers and service providers continue operating under their own business model—separate from the main company, using their own ERP systems. Since these smaller companies' processes and workflows are not tied to the main company's processes and workflows, they can respond to local business requirements in multiple locations. Factors that affect enterprises' adoption of two-tier ERP systems include: Manufacturing globalization, the economics of sourcing in emerging economies Potential for quicker, less costly ERP implementations at subsidiaries, based on selecting software more suited to smaller companies Extra effort (often involving the use of enterprise application integration) being required where data must pass between two ERP systems Two-tier ERP strategies give enterprises agility in responding to market demands and in aligning IT systems at a corporate level while inevitably resulting in more systems as compared to one ERP system used throughout the organization. 
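To illustrate the staging-table pattern described under "Connectivity to plant floor information" above, here is a minimal sketch. It uses Python's built-in sqlite3 module purely as a stand-in for whatever shared database an actual deployment would use, and the table and column names are hypothetical:

```python
import sqlite3

# Hypothetical staging table shared by plant-floor systems and the ERP.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS production_staging (
    id INTEGER PRIMARY KEY,
    work_order TEXT, quantity_produced INTEGER,
    recorded_at TEXT, processed INTEGER DEFAULT 0)""")

# A plant-floor system deposits a record (its only responsibility).
conn.execute(
    "INSERT INTO production_staging (work_order, quantity_produced, recorded_at) "
    "VALUES (?, ?, ?)", ("WO-1001", 250, "2024-05-01T08:00:00"))
conn.commit()

# The ERP side periodically reads unprocessed rows and marks them done.
rows = conn.execute(
    "SELECT id, work_order, quantity_produced FROM production_staging "
    "WHERE processed = 0").fetchall()
for row_id, work_order, qty in rows:
    # ...hand the record to the ERP's order/inventory modules here...
    conn.execute("UPDATE production_staging SET processed = 1 WHERE id = ?",
                 (row_id,))
conn.commit()
```

The design point is the decoupling: the plant-floor side only writes rows and the ERP side only reads and flags them, so neither needs to know the other's internals, which is exactly why the text assigns connectivity to the systems integrator rather than the ERP vendor.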
Customization ERP systems are theoretically based on industry best practices, and their makers intend that organizations deploy them "as is". ERP vendors do offer customers configuration options that let organizations incorporate their own business rules, but gaps in features often remain even after configuration is complete. ERP customers have several options to reconcile feature gaps, each with their own pros and cons. Technical solutions include rewriting part of the delivered software, writing a homegrown module to work within the ERP system, or interfacing to an external system. These three options constitute varying degrees of system customization—with the first being the most invasive and costly to maintain. Alternatively, there are non-technical options such as changing business practices or organizational policies to better match the delivered ERP feature set. Key differences between customization and configuration include: Customization is always optional, whereas the software must always be configured before use (e.g., setting up cost/profit center structures, organizational trees, purchase approval rules, etc.). The software is designed to handle various configurations and behaves predictably in any allowed configuration. The effect of configuration changes on system behavior and performance is predictable and is the responsibility of the ERP vendor. The effect of customization is less predictable. It is the customer's responsibility, and increases testing requirements. Configuration changes survive upgrades to new software versions. Some customizations (e.g., code that uses pre-defined "hooks" that are called before/after displaying data screens) survive upgrades, though they require retesting. Other customizations (e.g., those involving changes to fundamental data structures) are overwritten during upgrades and must be re-implemented. Advantages of customization include: Improving user acceptance Potential to obtain competitive advantage vis-à-vis companies using only standard features Customization's disadvantages include that it may: Increase time and resources required to implement and maintain Hinder seamless interfacing/integration between suppliers and customers due to the differences between systems Limit the company's ability to upgrade the ERP software in the future Create overreliance on customization, undermining the principles of ERP as a standardizing software platform Extensions ERP systems can be extended with third-party software, often via vendor-supplied interfaces. Extensions offer features such as: product data management product life cycle management customer relations management data mining e-procurement Data migration Data migration is the process of moving, copying, and restructuring data from an existing system to the ERP system. Migration is critical to implementation success and requires significant planning. Unfortunately, since migration is one of the final activities before the production phase, it often receives insufficient attention. The following steps can structure migration planning: Identify the data to be migrated. Determine the migration timing. Generate data migration templates for key data components. Freeze the toolset. Decide on the migration-related setup of key business accounts. Define data archiving policies and procedures. Often, data migration is incomplete because some of the data in the existing system is either incompatible or not needed in the new system. 
As such, the existing system may need to be kept as an archived database to refer back to once the new ERP system is in place. Advantages The most fundamental advantage of ERP is that the integration of a myriad of business processes saves time and expense. Management can make decisions faster and with fewer errors. Data becomes visible across the organization. Tasks that benefit from this integration include: Sales forecasting, which allows inventory optimization Chronological history of every transaction through relevant data compilation in every area of operation Order tracking, from acceptance through fulfillment Revenue tracking, from invoice through cash receipt Matching purchase orders (what was ordered), inventory receipts (what arrived), and costing (what the vendor invoiced) ERP systems centralize business data, which: Eliminates the need to synchronize changes between multiple systems—consolidation of finance, marketing, sales, human resource, and manufacturing applications Brings legitimacy and transparency to each bit of statistical data Facilitates standard product naming/coding Provides a comprehensive enterprise view (no "islands of information"), making real-time information available to management anywhere, anytime to make proper decisions Protects sensitive data by consolidating multiple security systems into a single structure Benefits ERP creates a more agile company that adapts better to change. It also makes a company more flexible and less rigidly structured so organization components operate more cohesively, enhancing the business—internally and externally. ERP can improve data security in a closed environment. A common control system, such as the kind offered by ERP systems, allows organizations the ability to more easily ensure key company data is not compromised. This changes, however, with a more open environment, requiring further scrutiny of ERP security features and internal company policies regarding security. ERP provides increased opportunities for collaboration. Data takes many forms in the modern enterprise, including documents, files, forms, audio and video, and emails. Often, each data medium has its own mechanism for allowing collaboration. ERP provides a collaborative platform that lets employees spend more time collaborating on content rather than mastering the learning curve of communicating in various formats across distributed systems. ERP also enhances decision-making capabilities. By consolidating data from various departments and functions into a single, unified platform, ERP systems provide decision-makers with real-time insights and comprehensive analytics. This enables more informed and data-driven decision-making processes across the organization, leading to improved strategic planning, resource allocation, and overall business performance. Moreover, ERP systems facilitate better forecasting and trend analysis, helping businesses anticipate market changes, identify opportunities, and mitigate risks more effectively. Disadvantages Customization can be problematic. Compared to the best-of-breed approach, ERP can be seen as meeting an organization's lowest common denominator needs, forcing the organization to find workarounds to meet unique demands. Re-engineering business processes to fit the ERP system may damage competitiveness or divert focus from other critical activities. ERP can cost more than less integrated or less comprehensive solutions. 
High ERP switching costs can increase the ERP vendor's negotiating power, which can increase support, maintenance, and upgrade expenses. Overcoming resistance to sharing sensitive information between departments can divert management attention. Integration of truly independent businesses can create unnecessary dependencies. Extensive training requirements take resources from daily operations. Harmonization of ERP systems can be a mammoth task (especially for big companies) and requires a lot of time, planning, and money. Critical success factors The application of critical success factors can prevent organizations from making costly mistakes, and the effective usage of CSFs can ensure project success and reduce failures during project implementations. Adoption rates Research published in 2011, based on a survey of 225 manufacturers, retailers and distributors, found "high" rates of interest and adoption of ERP systems, and that very few businesses were "completely untouched" by the concept of an ERP system. 27% of the companies surveyed had a fully operational system, 12% were at that time rolling out a system, and 26% had an existing ERP system which they were extending or upgrading. Postmodern ERP The term "postmodern ERP" was coined by Gartner in 2013, when it first appeared in the paper series "Predicts 2014". According to Gartner's definition of the postmodern ERP strategy, legacy, monolithic and highly customized ERP suites, in which all parts are heavily reliant on each other, should sooner or later be replaced by a mixture of both cloud-based and on-premises applications, which are more loosely coupled and can be easily exchanged if needed. The basic idea is that there should still be a core ERP solution that covers the most important business functions, while other functions are covered by specialist software solutions that merely extend the core ERP. This concept is similar to the "best-of-breed" approach to software execution, but it should not be confused with it. While in both cases, applications that make up the whole are relatively loosely connected and quite easily interchangeable, in the case of the latter there is no ERP solution whatsoever. Instead, every business function is covered by a separate software solution. There is, however, no golden rule as to what business functions should be part of the core ERP, and what should be covered by supplementary solutions. According to Gartner, every company must define its own postmodern ERP strategy, based on the company's internal and external needs, operations and processes. For example, a company may define that the core ERP solution should cover those business processes that must stay behind the firewall, and therefore choose to leave its core ERP on-premises. At the same time, another company may decide to host the core ERP solution in the cloud and move only a few ERP modules as supplementary solutions to on-premises. The main benefits that companies gain from implementing a postmodern ERP strategy are speed and flexibility when reacting to unexpected changes in business processes or on the organizational level. With the majority of applications having a relatively loose connection, it is fairly easy to replace or upgrade them whenever necessary. In addition to that, following the examples above, companies can select and combine cloud-based and on-premises solutions that are most suited for their ERP needs. 
The downside of postmodern ERP is that it will most likely lead to an increased number of software vendors that companies will have to manage, as well as pose additional integration challenges for central IT. See also List of ERP software packages Business process management Comparison of project management software References External links ERP software Computer-related introductions in 1990 Computer-aided engineering Computer occupations Computational fields of study Enterprise resource planning terminology Office and administrative support occupations Automation Automation software Business models Business terms Production planning Business planning Business process Customer relationship management Financial management Human resource management Supply chain management Product lifecycle management 20th-century inventions Management cybernetics
Enterprise resource planning
[ "Technology", "Engineering" ]
7,772
[ "Computing terminology", "Computational fields of study", "Computer occupations", "Automation", "Computer-aided engineering", "Construction", "Industrial engineering", "Automation software", "Control engineering", "Computing and society", "Enterprise resource planning terminology" ]
9,311
https://en.wikipedia.org/wiki/Endocrinology
Endocrinology (from endocrine + -ology) is a branch of biology and medicine dealing with the endocrine system, its diseases, and its specific secretions known as hormones. It is also concerned with the integration of developmental events (proliferation, growth, and differentiation) and with the psychological or behavioral activities of metabolism, growth and development, tissue function, sleep, digestion, respiration, excretion, mood, stress, lactation, movement, reproduction, and sensory perception caused by hormones. Specializations include behavioral endocrinology and comparative endocrinology. The endocrine system consists of several glands, all in different parts of the body, that secrete hormones directly into the blood rather than into a duct system. Therefore, endocrine glands are regarded as ductless glands. Hormones have many different functions and modes of action; one hormone may have several effects on different target organs, and, conversely, one target organ may be affected by more than one hormone. The endocrine system Endocrinology is the study of the endocrine system in the human body. This is a system of glands which secrete hormones. Hormones are chemicals that affect the actions of different organ systems in the body. Examples include thyroid hormone, growth hormone, and insulin. The endocrine system involves a number of feedback mechanisms, so that often one hormone (such as thyroid stimulating hormone) will control the action or release of another secondary hormone (such as thyroid hormone). If there is too much of the secondary hormone, it may provide negative feedback to the primary hormone, maintaining homeostasis. In the original 1902 definition by Bayliss and Starling, they specified that, to be classified as a hormone, a chemical must be produced by an organ, be released (in small amounts) into the blood, and be transported by the blood to a distant organ to exert its specific function. This definition holds for most "classical" hormones, but there are also paracrine mechanisms (chemical communication between cells within a tissue or organ), autocrine signals (a chemical that acts on the same cell), and intracrine signals (a chemical that acts within the same cell). A neuroendocrine signal is a "classical" hormone that is released into the blood by a neurosecretory neuron (see article on neuroendocrinology). Hormones Griffin and Ojeda identify three different classes of hormones based on their chemical composition: Amines Amines, such as norepinephrine, epinephrine, and dopamine (catecholamines), are derived from single amino acids, in this case tyrosine. Thyroid hormones such as 3,5,3'-triiodothyronine (T3) and 3,5,3',5'-tetraiodothyronine (thyroxine, T4) make up a subset of this class because they derive from the combination of two iodinated tyrosine amino acid residues. Peptide and protein Peptide hormones and protein hormones consist of three (in the case of thyrotropin-releasing hormone) to more than 200 (in the case of follicle-stimulating hormone) amino acid residues and can have a molecular mass as large as 31,000 grams per mole. All hormones secreted by the pituitary gland are peptide hormones, as are leptin from adipocytes, ghrelin from the stomach, and insulin from the pancreas. Steroid Steroid hormones are converted from their parent compound, cholesterol. Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids, mineralocorticoids, androgens, estrogens, and progestogens. 
Some forms of vitamin D, such as calcitriol, are steroid-like and bind to homologous receptors, but lack the characteristic fused ring structure of true steroids. As a profession Although every organ system secretes and responds to hormones (including the brain, lungs, heart, intestine, skin, and the kidneys), the clinical specialty of endocrinology focuses primarily on the endocrine organs, meaning the organs whose primary function is hormone secretion. These organs include the pituitary, thyroid, adrenals, ovaries, testes, and pancreas. An endocrinologist is a physician who specializes in treating disorders of the endocrine system, such as diabetes, hyperthyroidism, and many others (see list of diseases). Work The medical specialty of endocrinology involves the diagnostic evaluation of a wide variety of symptoms and variations and the long-term management of disorders of deficiency or excess of one or more hormones. The diagnosis and treatment of endocrine diseases are guided by laboratory tests to a greater extent than for most specialties. Many diseases are investigated through excitation/stimulation or inhibition/suppression testing. This might involve injection of a stimulating agent to test the function of an endocrine organ. Blood is then sampled to assess the changes in the relevant hormones or metabolites. An endocrinologist needs extensive knowledge of clinical chemistry and biochemistry to understand the uses and limitations of the investigations. A second important aspect of the practice of endocrinology is distinguishing human variation from disease. Atypical patterns of physical development and abnormal test results must be assessed to determine whether or not they indicate disease. Diagnostic imaging of endocrine organs may reveal incidental findings called incidentalomas, which may or may not represent disease. Endocrinology involves caring for the person as well as the disease. Most endocrine disorders are chronic diseases that need lifelong care. Some of the most common endocrine diseases include diabetes mellitus, hypothyroidism and the metabolic syndrome. Care of diabetes, obesity and other chronic diseases necessitates understanding the patient at the personal and social level as well as the molecular, and the physician–patient relationship can be an important therapeutic process. Apart from treating patients, many endocrinologists are involved in clinical science and medical research, teaching, and hospital management. Training Endocrinologists are specialists of internal medicine or pediatrics. Reproductive endocrinologists deal primarily with problems of fertility and menstrual function, often training first in obstetrics. Most qualify as an internist, pediatrician, or gynecologist for a few years before specializing, depending on the local training system. In the U.S. and Canada, training for board certification in internal medicine, pediatrics, or gynecology after medical school is called residency. Further formal training to subspecialize in adult, pediatric, or reproductive endocrinology is called a fellowship. Typical training for a North American endocrinologist involves 4 years of college, 4 years of medical school, 3 years of residency, and 2 years of fellowship. In the US, adult endocrinologists are board certified by the American Board of Internal Medicine (ABIM) or the American Osteopathic Board of Internal Medicine (AOBIM) in Endocrinology, Diabetes and Metabolism.
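The feedback regulation sketched in the overview above (a primary hormone such as TSH driving a secondary hormone such as thyroid hormone, which in turn suppresses the primary) is also what stimulation and suppression testing probes. The following toy simulation illustrates the idea only; the difference equations and all parameter values are invented for demonstration and are not a physiological model.

```python
# Toy negative-feedback loop: a pituitary signal ("tsh") drives a target-gland
# hormone ("t4"), and circulating t4 suppresses further tsh release.
# All dynamics and parameter values are illustrative, not physiological.

def simulate_axis(steps=50, setpoint=1.0, gain=0.5, clearance=0.2):
    tsh, t4 = 1.0, 0.0
    history = []
    for _ in range(steps):
        # The pituitary raises TSH when T4 is below the setpoint, lowers it when above.
        tsh = max(0.0, tsh + gain * (setpoint - t4))
        # The target gland secretes T4 in proportion to TSH; T4 is also cleared.
        t4 = max(0.0, t4 + gain * tsh - clearance * t4)
        history.append((tsh, t4))
    return history

if __name__ == "__main__":
    for step, (tsh, t4) in enumerate(simulate_axis()):
        if step % 10 == 0:
            print(f"step {step:2d}: TSH={tsh:.3f}, T4={t4:.3f}")
```

With these invented parameters the loop settles, after damped oscillations, into a steady state in which T4 sits at the setpoint, which is the qualitative behaviour the homeostasis paragraph describes; a suppression test corresponds to externally raising t4 and checking that tsh falls in response.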
Diseases treated by endocrinologists Diabetes mellitus: This is a chronic condition that affects how the body regulates blood sugar. There are two main types: type 1 diabetes, an autoimmune disease in which the body attacks the cells that produce insulin, and type 2 diabetes, in which the body either does not produce enough insulin or does not use it effectively. Thyroid disorders: These are conditions that affect the thyroid gland, a butterfly-shaped gland located in the front of the neck. The thyroid gland produces hormones that regulate metabolism, heart rate, and body temperature. Common thyroid disorders include hyperthyroidism (overactive thyroid) and hypothyroidism (underactive thyroid). Adrenal disorders: The adrenal glands are located on top of the kidneys. They produce hormones that help regulate blood pressure, blood sugar, and the body's response to stress. Common adrenal disorders include Cushing syndrome (excess cortisol production) and Addison's disease (adrenal insufficiency). Pituitary disorders: The pituitary gland is a pea-sized gland located at the base of the brain. It produces hormones that control many other hormone-producing glands in the body. Common pituitary disorders include acromegaly (excess growth hormone production) and Cushing's disease (excess ACTH production). Metabolic disorders: These are conditions that affect how the body processes food into energy. Common metabolic disorders include obesity, high cholesterol, and gout. Calcium and bone disorders: Endocrinologists also treat conditions that affect calcium levels in the blood, such as hyperparathyroidism (too much parathyroid hormone) and osteoporosis (weak bones). Sexual and reproductive disorders: Endocrinologists can also help diagnose and treat hormonal problems that affect sexual development and function, such as polycystic ovary syndrome (PCOS) and erectile dysfunction. Endocrine cancers: These are cancers that develop in the endocrine glands; endocrinologists can help diagnose and treat them. Diseases and medicine Diseases See main article at Endocrine diseases Endocrinology also involves the study of the diseases of the endocrine system. These diseases may relate to too little or too much secretion of a hormone, too little or too much action of a hormone, or problems with receiving the hormone. Societies and Organizations Because endocrinology encompasses so many conditions and diseases, there are many organizations that provide education to patients and the public. The Hormone Foundation is the public education affiliate of The Endocrine Society and provides information on all endocrine-related conditions. Other educational organizations that focus on one or more endocrine-related conditions include the American Diabetes Association, Human Growth Foundation, American Menopause Foundation, Inc., and American Thyroid Association. In North America the principal professional organizations of endocrinologists include The Endocrine Society, the American Association of Clinical Endocrinologists, the American Diabetes Association, the Lawson Wilkins Pediatric Endocrine Society, and the American Thyroid Association. In Europe, the European Society of Endocrinology (ESE) and the European Society for Paediatric Endocrinology (ESPE) are the main organisations representing professionals in the fields of adult and paediatric endocrinology, respectively.
In the United Kingdom, the Society for Endocrinology and the British Society for Paediatric Endocrinology and Diabetes are the main professional organisations. The European Society for Paediatric Endocrinology is the largest international professional association dedicated solely to paediatric endocrinology. There are numerous similar associations around the world. History The earliest study of endocrinology began in China. The Chinese were isolating sex and pituitary hormones from human urine and using them for medicinal purposes by 200 BC. They used many complex methods, such as sublimation of steroid hormones. Another method, described in Chinese texts (the earliest dating to 1110), used saponin (from the beans of Gleditsia sinensis) to extract hormones, but gypsum (containing calcium sulfate) was also known to have been used. Although most of the relevant tissues and endocrine glands had been identified by early anatomists, a more humoral approach to understanding biological function and disease was favoured by ancient Greek and Roman thinkers such as Aristotle, Hippocrates, Lucretius, Celsus, and Galen, according to Freeman et al., and these theories held sway until the advent of germ theory, physiology, and the organ basis of pathology in the 19th century. In 1849, Arnold Berthold noted that castrated cockerels did not develop combs and wattles or exhibit overtly male behaviour. He found that replacement of testes back into the abdominal cavity of the same bird or another castrated bird resulted in normal behavioural and morphological development, and he concluded (erroneously) that the testes secreted a substance that "conditioned" the blood that, in turn, acted on the body of the cockerel. In fact, one of two other things could have been true: that the testes modified or activated a constituent of the blood, or that the testes removed an inhibitory factor from the blood. It was not proven that the testes released a substance that engenders male characteristics until it was shown that the extract of testes could replace their function in castrated animals. Pure, crystalline testosterone was isolated in 1935. Graves' disease was named after Irish doctor Robert James Graves, who described a case of goiter with exophthalmos in 1835. The German Karl Adolph von Basedow also independently reported the same constellation of symptoms in 1840, while earlier reports of the disease were also published by the Italians Giuseppe Flajani and Antonio Giuseppe Testa, in 1802 and 1810 respectively, and by the English physician Caleb Hillier Parry (a friend of Edward Jenner) in the late 18th century. Thomas Addison was the first to describe Addison's disease, in 1849. In 1902 William Bayliss and Ernest Starling performed an experiment in which they observed that acid instilled into the duodenum caused the pancreas to begin secretion, even after they had removed all nervous connections between the two. The same response could be produced by injecting extract of jejunum mucosa into the jugular vein, showing that some factor in the mucosa was responsible. They named this substance "secretin" and coined the term hormone for chemicals that act in this way. Joseph von Mering and Oskar Minkowski made the observation in 1889 that removing the pancreas surgically led to an increase in blood sugar, followed by a coma and eventual death, the symptoms of diabetes mellitus. In 1922, Banting and Best realized that homogenizing the pancreas and injecting the derived extract reversed this condition.
Neurohormones were first identified by Otto Loewi in 1921. He incubated a frog's heart (innervated, with its vagus nerve attached) in a saline bath and left it in the solution for some time. The solution was then used to bathe a non-innervated second heart. If the vagus nerve on the first heart was stimulated, negative inotropic (beat amplitude) and chronotropic (beat rate) activity were seen in both hearts. This did not occur in either heart if the vagus nerve was not stimulated. The vagus nerve was adding something to the saline solution. The effect could be blocked using atropine, a known inhibitor of vagal stimulation of the heart. Clearly, something was being secreted by the vagus nerve and affecting the heart. The "vagusstuff" (as Loewi called it) was later identified as acetylcholine; the corresponding substance released by sympathetic nerves was later identified as norepinephrine. Loewi won the Nobel Prize for his discovery. Recent work in endocrinology focuses on the molecular mechanisms responsible for triggering the effects of hormones. The first example of such work was done in 1962 by Earl Sutherland. Sutherland investigated whether hormones enter cells to evoke action or stay outside of cells. He studied norepinephrine, which acts on the liver to convert glycogen into glucose via the activation of the phosphorylase enzyme. He homogenized the liver into a membrane fraction and a soluble fraction (phosphorylase is soluble), added norepinephrine to the membrane fraction, extracted its soluble products, and added them to the first soluble fraction. Phosphorylase was activated, indicating that norepinephrine's target receptor was on the cell membrane, not located intracellularly. He later identified the compound as cyclic AMP (cAMP) and with his discovery created the concept of second-messenger-mediated pathways. He, like Loewi, won the Nobel Prize for his groundbreaking work in endocrinology. See also Comparative endocrinology Endocrine disease Hormone Hormone replacement therapy Neuroendocrinology Pediatric endocrinology Reproductive endocrinology and infertility Wildlife endocrinology List of instruments used in endocrinology References Endocrine system Hormones
Endocrinology
[ "Biology" ]
3,413
[ "Organ systems", "Endocrine system" ]
9,312
https://en.wikipedia.org/wiki/Endocrine%20system
The endocrine system is a messenger system in an organism comprising feedback loops of hormones that are released by internal glands directly into the circulatory system and that target and regulate distant organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans, the major endocrine glands are the thyroid, parathyroid, pituitary, pineal, and adrenal glands, and the (male) testes and (female) ovaries. The hypothalamus, pancreas, and thymus also function as endocrine glands, among other functions. (The hypothalamus and pituitary glands are organs of the neuroendocrine system. One of the most important functions of the hypothalamus, which is located in the brain adjacent to the pituitary gland, is to link the endocrine system to the nervous system via the pituitary gland.) Other organs, such as the kidneys, also have roles within the endocrine system by secreting certain hormones. The study of the endocrine system and its disorders is known as endocrinology. The thyroid secretes thyroxine, the pituitary secretes growth hormone, the pineal secretes melatonin, the testes secrete testosterone, and the ovaries secrete estrogen and progesterone. Glands that signal each other in sequence are often referred to as an axis, such as the hypothalamic–pituitary–adrenal axis. In addition to the specialized endocrine organs mentioned above, many other organs that are part of other body systems have secondary endocrine functions, including bone, the kidneys, the liver, the heart and the gonads. For example, the kidney secretes the endocrine hormone erythropoietin. Hormones can be amino acid complexes, steroids, eicosanoids, leukotrienes, or prostaglandins. The endocrine system is contrasted both to exocrine glands, which secrete their products to the outside of the body through ducts, and to paracrine signalling between cells over a relatively short distance. Endocrine glands have no ducts, are vascular, and commonly have intracellular vacuoles or granules that store their hormones. In contrast, exocrine glands, such as salivary glands, mammary glands, and submucosal glands within the gastrointestinal tract, tend to be much less vascular and have ducts or a hollow lumen. Endocrinology is a branch of internal medicine. Structure Major endocrine systems The human endocrine system consists of several systems that operate via feedback loops. Several important feedback systems are mediated via the hypothalamus and pituitary. TRH – TSH – T3/T4 GnRH – LH/FSH – sex hormones CRH – ACTH – cortisol Renin – angiotensin – aldosterone Leptin vs. ghrelin Glands Endocrine glands are glands of the endocrine system that secrete their products, hormones, directly into interstitial spaces, where they are absorbed into the blood rather than passing through a duct. The major glands of the endocrine system include the pineal gland, pituitary gland, pancreas, ovaries, testes, thyroid gland, parathyroid gland, hypothalamus and adrenal glands. The hypothalamus and pituitary gland are neuroendocrine organs. The hypothalamus and the anterior pituitary are two of the three endocrine glands that are important in cell signaling. They are both part of the HPA axis, which is known to play a role in cell signaling in the nervous system. Hypothalamus: The hypothalamus is a key regulator of the autonomic nervous system. The hypothalamus has three sets of endocrine outputs: the magnocellular system, the parvocellular system, and autonomic intervention.
The magnocellular system is involved in the expression of oxytocin or vasopressin. The parvocellular system is involved in controlling the secretion of hormones from the anterior pituitary. Anterior pituitary: The main role of the anterior pituitary gland is to produce and secrete tropic hormones. Some examples of tropic hormones secreted by the anterior pituitary gland include TSH, ACTH, GH, LH, and FSH. Cells There are many types of cells that make up the endocrine system, and these cells typically make up larger tissues and organs that function within and outside of the endocrine system. Hypothalamus Anterior pituitary gland Pineal gland Posterior pituitary gland The posterior pituitary gland is a section of the pituitary gland. This organ does not produce any hormone but stores and secretes hormones such as antidiuretic hormone (ADH), which is synthesized by the supraoptic nucleus of the hypothalamus, and oxytocin, which is synthesized by the paraventricular nucleus of the hypothalamus. ADH functions to help the body retain water; this is important in maintaining a homeostatic balance between blood solutes and water. Oxytocin functions to induce uterine contractions, stimulate lactation, and allow for ejaculation. Thyroid gland Follicular cells of the thyroid gland produce and secrete T3 and T4 in response to elevated levels of TRH, produced by the hypothalamus, and subsequently elevated levels of TSH, produced by the anterior pituitary gland; T3 and T4 in turn regulate the metabolic activity and rate of all cells, including cell growth and tissue differentiation. Parathyroid gland Epithelial cells of the parathyroid glands are richly supplied with blood from the inferior and superior thyroid arteries and secrete parathyroid hormone (PTH). PTH acts on bone, the kidneys, and the GI tract to increase calcium reabsorption and phosphate excretion. In addition, PTH stimulates the conversion of vitamin D to its most active variant, 1,25-dihydroxyvitamin D3, which further stimulates calcium absorption in the GI tract. Thymus gland Adrenal glands Adrenal cortex Adrenal medulla Pancreas The pancreas contains roughly 1 to 2 million islets of Langerhans (tissue consisting of hormone-secreting cells) as well as acini, which secrete digestive enzymes. Alpha cells The alpha cells of the pancreas secrete glucagon, a hormone that helps maintain homeostatic blood sugar: it is secreted in response to low blood sugar levels and stimulates the glycogen stores in the liver to release sugar into the bloodstream, raising blood sugar back to normal levels. (Insulin, the hormone that lowers elevated blood sugar, is produced by the beta cells described below, not by the alpha cells.) Beta cells About 60% of the cells present in the islets of Langerhans are beta cells. Beta cells secrete insulin. Along with glucagon, insulin helps maintain glucose levels in the body. Insulin decreases the blood glucose level (it is a hypoglycemic hormone), whereas glucagon increases it. Delta cells F cells Ovaries Granulosa cells Testis Leydig cells Development The fetal endocrine system is one of the first systems to develop during prenatal development. Adrenal glands The fetal adrenal cortex can be identified within four weeks of gestation. The adrenal cortex originates from the thickening of the intermediate mesoderm. At five to six weeks of gestation, the mesonephros differentiates into a tissue known as the genital ridge. The genital ridge produces the steroidogenic cells for both the gonads and the adrenal cortex.
The adrenal medulla is derived from ectodermal cells. Cells that will become adrenal tissue move retroperitoneally to the upper portion of the mesonephros. At seven weeks of gestation, the adrenal cells are joined by sympathetic cells that originate from the neural crest to form the adrenal medulla. At the end of the eighth week, the adrenal glands have been encapsulated and have formed a distinct organ above the developing kidneys. At birth, the adrenal glands weigh approximately eight to nine grams (twice that of the adult adrenal glands) and are 0.5% of the total body weight. At 25 weeks, the adult adrenal cortex zone develops and is responsible for the primary synthesis of steroids during the early postnatal weeks. Thyroid gland The thyroid gland develops from two different clusterings of embryonic cells. One part is from the thickening of the pharyngeal floor, which serves as the precursor of the thyroxine (T4) producing follicular cells. The other part is from the caudal extensions of the fourth pharyngobranchial pouches, which results in the parafollicular calcitonin-secreting cells. These two structures are apparent by 16 to 17 days of gestation. Around the 24th day of gestation, the foramen cecum, a thin, flask-like diverticulum of the median anlage, develops. At approximately 24 to 32 days of gestation, the median anlage develops into a bilobed structure. By 50 days of gestation, the medial and lateral anlagen have fused together. At 12 weeks of gestation, the fetal thyroid is capable of storing iodine for the production of TRH, TSH, and free thyroid hormone. At 20 weeks, the fetus is able to implement feedback mechanisms for the production of thyroid hormones. During fetal development, T4 is the major thyroid hormone being produced, while triiodothyronine (T3) and its inactive derivative, reverse T3, are not detected until the third trimester. Parathyroid glands [Figure: lateral and ventral views of an embryo showing the third (inferior) and fourth (superior) parathyroid glands during the sixth week of embryogenesis.] Once the embryo reaches four weeks of gestation, the parathyroid glands begin to develop. The human embryo forms five sets of endoderm-lined pharyngeal pouches. The third and fourth pouches are responsible for developing into the inferior and superior parathyroid glands, respectively. The third pharyngeal pouch encounters the developing thyroid gland, and they migrate down to the lower poles of the thyroid lobes. The fourth pharyngeal pouch later encounters the developing thyroid gland and migrates to the upper poles of the thyroid lobes. At 14 weeks of gestation, the parathyroid glands begin to enlarge from 0.1 mm in diameter to approximately 1–2 mm at birth. The developing parathyroid glands are physiologically functional beginning in the second trimester. Studies in mice have shown that interfering with the HOX15 gene can cause parathyroid gland aplasia, which suggests the gene plays an important role in the development of the parathyroid gland. The genes TBX1, CRKL, GATA3, GCM2, and SOX3 have also been shown to play a crucial role in the formation of the parathyroid gland. Mutations in the TBX1 and CRKL genes are correlated with DiGeorge syndrome, while mutations in GATA3 have also resulted in a DiGeorge-like syndrome. Malformations in the GCM2 gene have resulted in hypoparathyroidism. Studies on SOX3 gene mutations have demonstrated that the gene plays a role in parathyroid development. These mutations also lead to varying degrees of hypopituitarism.
Pancreas The human fetal pancreas begins to develop by the fourth week of gestation. Five weeks later, the pancreatic alpha and beta cells have begun to emerge. By eight to ten weeks into development, the pancreas starts producing insulin, glucagon, somatostatin, and pancreatic polypeptide. During the early stages of fetal development, the number of pancreatic alpha cells outnumbers the number of pancreatic beta cells. The alpha cells reach their peak in the middle stage of gestation. From the middle stage until term, the beta cells continue to increase in number until they reach an approximate 1:1 ratio with the alpha cells. The insulin concentration within the fetal pancreas is 3.6 pmol/g at seven to ten weeks, which rises to 30 pmol/g at 16–25 weeks of gestation. Near term, the insulin concentration increases to 93 pmol/g. The endocrine cells have dispersed throughout the body within 10 weeks. At 31 weeks of development, the islets of Langerhans have differentiated. While the fetal pancreas has functional beta cells by 14 to 24 weeks of gestation, the amount of insulin that is released into the bloodstream is relatively low. In a study of pregnant women carrying fetuses in the mid-gestation and near-term stages of development, the fetuses did not have an increase in plasma insulin levels in response to injections of high levels of glucose. In contrast to insulin, the fetal plasma glucagon levels are relatively high and continue to increase during development. At the mid-stage of gestation, the glucagon concentration is 6 μg/g, compared to 2 μg/g in adult humans. Just like insulin, fetal glucagon plasma levels do not change in response to an infusion of glucose. However, an infusion of alanine into pregnant women was shown to increase the cord blood and maternal glucagon concentrations, demonstrating a fetal response to amino acid exposure. As such, while the fetal pancreatic alpha and beta islet cells have fully developed and are capable of hormone synthesis during the remaining fetal maturation, the islet cells are relatively immature in their capacity to produce glucagon and insulin. This is thought to be a result of the relatively stable levels of fetal serum glucose concentrations achieved via maternal transfer of glucose through the placenta. On the other hand, the stable fetal serum glucose levels could be attributed to the absence of pancreatic signaling initiated by incretins during feeding. In addition, the fetal pancreatic islet cells are unable to sufficiently produce cAMP, and rapidly degrade it via phosphodiesterase, as is necessary to secrete glucagon and insulin. During fetal development, the storage of glycogen is controlled by fetal glucocorticoids and placental lactogen. Fetal insulin is responsible for increasing glucose uptake and lipogenesis during the stages leading up to birth. Fetal cells contain a higher number of insulin receptors than adult cells, and fetal insulin receptors are not downregulated in cases of hyperinsulinemia. In comparison, fetal hepatic glucagon receptors are lowered in comparison to adult cells, and the glycemic effect of glucagon is blunted. This temporary physiological change aids the increased rate of fetal development during the final trimester. Poorly managed maternal diabetes mellitus is linked to fetal macrosomia, increased risk of miscarriage, and defects in fetal development. Maternal hyperglycemia is also linked to increased insulin levels and beta cell hyperplasia in the post-term infant.
Children of diabetic mothers are at an increased risk for conditions such as polycythemia, renal vein thrombosis, hypocalcemia, respiratory distress syndrome, jaundice, cardiomyopathy, congenital heart disease, and improper organ development. Gonads The reproductive system begins development at four to five weeks of gestation with germ cell migration. The bipotential gonad results from the collection of the medioventral region of the urogenital ridge. At the five-week point, the developing gonads break away from the adrenal primordium. Gonadal differentiation begins 42 days following conception. Male gonadal development For males, the testes form at six fetal weeks, and the Sertoli cells begin developing by the eighth week of gestation. SRY, the sex-determining locus, serves to differentiate the Sertoli cells. The Sertoli cells are the point of origin for anti-Müllerian hormone. Once synthesized, the anti-Müllerian hormone initiates the ipsilateral regression of the Müllerian tract and inhibits the development of female internal features. At 10 weeks of gestation, the Leydig cells begin to produce androgen hormones. The androgen hormone dihydrotestosterone is responsible for the development of the male external genitalia. The testicles descend during prenatal development in a two-stage process that begins at eight weeks of gestation and continues through the middle of the third trimester. During the transabdominal stage (8 to 15 weeks of gestation), the gubernacular ligament contracts and begins to thicken. The craniosuspensory ligament begins to break down. This stage is regulated by the secretion of insulin-like 3 (INSL3), a relaxin-like factor produced by the testicles, and by LGR8, the G protein-coupled receptor for INSL3. During the transinguinal phase (25 to 35 weeks of gestation), the testicles descend into the scrotum. This stage is regulated by androgens, the genitofemoral nerve, and calcitonin gene-related peptide. During the second and third trimesters, testicular development concludes with the diminution of the fetal Leydig cells and the lengthening and coiling of the seminiferous cords. Female gonadal development For females, the ovaries become morphologically visible by the 8th week of gestation. The absence of testosterone results in the diminution of the Wolffian structures. The Müllerian structures remain and develop into the fallopian tubes, uterus, and the upper region of the vagina. The urogenital sinus develops into the urethra and lower region of the vagina, the genital tubercle develops into the clitoris, the urogenital folds develop into the labia minora, and the urogenital swellings develop into the labia majora. At 16 weeks of gestation, the ovaries produce FSH and LH/hCG receptors. At 20 weeks of gestation, the theca cell precursors are present and oogonia mitosis is occurring. At 25 weeks of gestation, the ovary is morphologically defined and folliculogenesis can begin. Studies of gene expression show that a specific complement of genes, such as follistatin and multiple cyclin kinase inhibitors, are involved in ovarian development. An assortment of genes and proteins, such as WNT4, RSPO1, FOXL2, and various estrogen receptors, have been shown to prevent the development of testicles or the lineage of male-type cells. Pituitary gland The pituitary gland is formed within the rostral neural plate.
Rathke's pouch, a cavity of ectodermal cells of the oropharynx, forms between the fourth and fifth weeks of gestation, and upon full development it gives rise to the anterior pituitary gland. By seven weeks of gestation, the anterior pituitary vascular system begins to develop. During the first 12 weeks of gestation, the anterior pituitary undergoes cellular differentiation. At 20 weeks of gestation, the hypophyseal portal system has developed. Rathke's pouch grows towards the third ventricle and fuses with the diverticulum. This eliminates the lumen, and the structure becomes Rathke's cleft. The posterior pituitary lobe is formed from the diverticulum. Portions of the pituitary tissue may remain in the nasopharyngeal midline. In rare cases this results in functioning ectopic hormone-secreting tumors in the nasopharynx. The functional development of the anterior pituitary involves spatiotemporal regulation of transcription factors expressed in pituitary stem cells and dynamic gradients of local soluble factors. The coordination of the dorsal gradient of pituitary morphogenesis is dependent on neuroectodermal signals from infundibular bone morphogenetic protein 4 (BMP4). This protein is responsible for the development of the initial invagination of Rathke's pouch. Other essential proteins necessary for pituitary cell proliferation are fibroblast growth factor 8 (FGF8), Wnt4, and Wnt5. Ventral developmental patterning and the expression of transcription factors are influenced by the gradients of BMP2 and sonic hedgehog protein (SHH). These factors are essential for coordinating early patterns of cell proliferation. Six weeks into gestation, the corticotroph cells can be identified. By seven weeks of gestation, the anterior pituitary is capable of secreting ACTH. Within eight weeks of gestation, somatotroph cells begin to develop, with cytoplasmic expression of human growth hormone. Once a fetus reaches 12 weeks of development, the thyrotrophs begin expressing beta subunits for TSH, while gonadotrophs begin to express beta subunits for LH and FSH. Male fetuses predominantly produce LH-expressing gonadotrophs, while female fetuses produce an equal expression of LH- and FSH-expressing gonadotrophs. At 24 weeks of gestation, prolactin-expressing lactotrophs begin to emerge. Function Hormones A hormone is any of a class of signaling molecules produced by cells in glands in multicellular organisms that are transported by the circulatory system to target distant organs to regulate physiology and behaviour. Hormones have diverse chemical structures, mainly of three classes: eicosanoids, steroids, and amino acid/protein derivatives (amines, peptides, and proteins). The glands that secrete hormones comprise the endocrine system. The term hormone is sometimes extended to include chemicals produced by cells that affect the same cell (autocrine or intracrine signalling) or nearby cells (paracrine signalling). Hormones are used to communicate between organs and tissues for physiological regulation and behavioral activities, such as digestion, metabolism, respiration, tissue function, sensory perception, sleep, excretion, lactation, stress, growth and development, movement, reproduction, and mood. Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. This may lead to cell type-specific responses that include rapid changes to the activity of existing proteins, or slower changes in the expression of target genes.
Amino acid–based hormones (amines and peptide or protein hormones) are water-soluble and act on the surface of target cells via signal transduction pathways; steroid hormones, being lipid-soluble, move through the plasma membranes of target cells to act within their nuclei. Cell signalling The typical mode of cell signalling in the endocrine system is endocrine signaling, that is, using the circulatory system to reach distant target organs. However, there are also other modes, i.e., paracrine, autocrine, and neuroendocrine signaling. Purely neurocrine signaling between neurons, on the other hand, belongs completely to the nervous system. Autocrine Autocrine signaling is a form of signaling in which a cell secretes a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on the same cell, leading to changes in the cell. Paracrine Some endocrinologists and clinicians include the paracrine system as part of the endocrine system, but there is no consensus. Paracrine signals act more slowly and target cells in the same tissue or organ. An example of this is somatostatin, which is released by some pancreatic cells and targets other pancreatic cells. Juxtacrine Juxtacrine signaling is a type of intercellular communication that is transmitted via oligosaccharide, lipid, or protein components of a cell membrane, and may affect either the emitting cell or the immediately adjacent cells. It occurs between adjacent cells that possess broad patches of closely apposed plasma membrane linked by transmembrane channels known as connexons. The gap between the cells is usually only 2 to 4 nm. Clinical significance Disease Diseases of the endocrine system are common, including conditions such as diabetes mellitus, thyroid disease, and obesity. Endocrine disease is characterized by misregulated hormone release (a productive pituitary adenoma), inappropriate response to signaling (hypothyroidism), lack of a gland (diabetes mellitus type 1, diminished erythropoiesis in chronic kidney failure), or structural enlargement in a critical site such as the thyroid (toxic multinodular goitre). Hypofunction of endocrine glands can occur as a result of loss of reserve, hyposecretion, agenesis, atrophy, or active destruction. Hyperfunction can occur as a result of hypersecretion, loss of suppression, hyperplastic or neoplastic change, or hyperstimulation. Endocrinopathies are classified as primary, secondary, or tertiary. Primary endocrine disease inhibits the action of downstream glands. Secondary endocrine disease is indicative of a problem with the pituitary gland. Tertiary endocrine disease is associated with dysfunction of the hypothalamus and its releasing hormones. Hormones have also been implicated in signaling distant tissues to proliferate; for example, the estrogen receptor has been shown to be involved in certain breast cancers. Endocrine, paracrine, and autocrine signaling have all been implicated in proliferation, one of the required steps of oncogenesis. Other common diseases that result from endocrine dysfunction include Addison's disease, Cushing's disease and Graves' disease. Cushing's disease and Addison's disease are pathologies involving the dysfunction of the adrenal gland. Dysfunction in the adrenal gland could be due to primary or secondary factors and can result in hypercortisolism or hypocortisolism.
Cushing's disease is characterized by the hypersecretion of adrenocorticotropic hormone (ACTH) due to a pituitary adenoma, which ultimately causes endogenous hypercortisolism by stimulating the adrenal glands. Some clinical signs of Cushing's disease include obesity, moon face, and hirsutism. Addison's disease is an endocrine disease that results from hypocortisolism caused by adrenal gland insufficiency. Adrenal insufficiency is significant because it is correlated with decreased ability to maintain blood pressure and blood sugar, a defect that can prove to be fatal. Graves' disease involves the hyperactivity of the thyroid gland, which produces the T3 and T4 hormones. The effects of Graves' disease range from excess sweating, fatigue, heat intolerance and high blood pressure to swelling of the eyes that causes redness, puffiness and, in rare cases, reduced or double vision. Other animals A neuroendocrine system has been observed in all animals with a nervous system, and all vertebrates have a hypothalamus–pituitary axis. All vertebrates have a thyroid, which in amphibians is also crucial for the transformation of larvae into adult form. All vertebrates have adrenal gland tissue, with mammals unique in having it organized into layers. All vertebrates have some form of a renin–angiotensin axis, and all tetrapods have aldosterone as a primary mineralocorticoid. See also Endocrine disease Endocrinology List of human endocrine organs and actions Neuroendocrinology Nervous system Paracrine signalling Releasing hormones Tropic hormone References External links Endocrine cells Endocrine-related cutaneous conditions
Endocrine system
[ "Biology" ]
5,929
[ "Organ systems", "Endocrine system" ]
9,331
https://en.wikipedia.org/wiki/Euclid
Euclid (fl. 300 BC) was an ancient Greek mathematician active as a geometer and logician. Considered the "father of geometry", he is chiefly known for the Elements treatise, which established the foundations of geometry that largely dominated the field until the early 19th century. His system, now referred to as Euclidean geometry, involved innovations in combination with a synthesis of theories from earlier Greek mathematicians, including Eudoxus of Cnidus, Hippocrates of Chios, Thales and Theaetetus. With Archimedes and Apollonius of Perga, Euclid is generally considered among the greatest mathematicians of antiquity, and one of the most influential in the history of mathematics. Very little is known of Euclid's life, and most information comes from the scholars Proclus and Pappus of Alexandria many centuries later. Medieval Islamic mathematicians invented a fanciful biography, and medieval Byzantine and early Renaissance scholars mistook him for the earlier philosopher Euclid of Megara. It is now generally accepted that he spent his career in Alexandria and lived around 300 BC, after Plato's students and before Archimedes. There is some speculation that Euclid studied at the Platonic Academy and later taught at the Musaeum; he is regarded as bridging the earlier Platonic tradition in Athens with the later tradition of Alexandria. In the Elements, Euclid deduced the theorems from a small set of axioms. He also wrote works on perspective, conic sections, spherical geometry, number theory, and mathematical rigour. In addition to the Elements, Euclid wrote a central early text in the optics field, Optics, and lesser-known works including Data and Phaenomena. Euclid's authorship of On Divisions of Figures and Catoptrics has been questioned. He is thought to have written many lost works. Life Traditional narrative The English name 'Euclid' is the anglicized version of the Ancient Greek name Eukleides. It is derived from 'eu-' (εὖ; 'well') and 'klês' (-κλῆς; 'fame'), meaning "renowned, glorious". In English, by metonymy, 'Euclid' can mean his most well-known work, Euclid's Elements, or a copy thereof, and is sometimes synonymous with 'geometry'. As with many ancient Greek mathematicians, the details of Euclid's life are mostly unknown. He is accepted as the author of four mostly extant treatises—the Elements, Optics, Data, Phaenomena—but besides this, there is nothing known for certain of him. The traditional narrative mainly follows the 5th century AD account by Proclus in his Commentary on the First Book of Euclid's Elements, as well as a few anecdotes from Pappus of Alexandria in the early 4th century. According to Proclus, Euclid lived shortly after several of Plato's followers and before the mathematician Archimedes; specifically, Proclus placed Euclid during the rule of Ptolemy I. Euclid's birthdate is unknown; some scholars estimate around 330 or 325 BC, but others refrain from speculating. It is presumed that he was of Greek descent, but his birthplace is unknown. Proclus held that Euclid followed the Platonic tradition, but there is no definitive confirmation for this. It is unlikely he was a contemporary of Plato, so it is often presumed that he was educated by Plato's disciples at the Platonic Academy in Athens. The historian Thomas Heath supported this theory, noting that most capable geometers lived in Athens, including many of those whose work Euclid built on; the historian Michalis Sialaros considers this a mere conjecture.
In any event, the contents of Euclid's work demonstrate familiarity with the Platonic geometry tradition. In his Collection, Pappus mentions that Apollonius studied with Euclid's students in Alexandria, and this has been taken to imply that Euclid worked and founded a mathematical tradition there. The city was founded by Alexander the Great in 331 BC, and the rule of Ptolemy I from 306 BC onwards gave it a stability which was relatively unique amid the chaotic wars over dividing Alexander's empire. Ptolemy began a process of hellenization and commissioned numerous constructions, building the massive Musaeum institution, which was a leading center of education. Euclid is speculated to have been among the Musaeum's first scholars. Euclid's date of death is unknown. Identity and historicity Euclid is often referred to as 'Euclid of Alexandria' to differentiate him from the earlier philosopher Euclid of Megara, a pupil of Socrates included in dialogues of Plato, with whom he was historically conflated. Valerius Maximus, the 1st century AD Roman compiler of anecdotes, mistakenly substituted Euclid's name for Eudoxus (4th century BC) as the mathematician to whom Plato sent those asking how to double the cube. Perhaps on the basis of this mention of a mathematical Euclid roughly a century too early, Euclid became mixed up with Euclid of Megara in medieval Byzantine sources (now lost), eventually leading Euclid the mathematician to be ascribed details of both men's biographies and described as a native of Megara. The Byzantine scholar Theodore Metochites explicitly conflated the two Euclids, as did printer Erhard Ratdolt's 1482 edition of Campanus of Novara's Latin translation of the Elements. After the mathematician Bartolomeo Zamberti appended most of the extant biographical fragments about either Euclid to the preface of his 1505 translation of the Elements, subsequent publications passed on this identification. Later Renaissance scholars, particularly Peter Ramus, reevaluated this claim, proving it false via issues in chronology and contradiction in early sources. Medieval Arabic sources give vast amounts of information concerning Euclid's life, but are completely unverifiable. Most scholars consider them of dubious authenticity; Heath in particular contends that the fictionalization was done to strengthen the connection between a revered mathematician and the Arab world. There are also numerous anecdotal stories concerning Euclid, all of uncertain historicity, which "picture him as a kindly and gentle old man". The best known of these is Proclus' story about Ptolemy asking Euclid if there was a quicker path to learning geometry than reading his Elements, to which Euclid replied, "there is no royal road to geometry". This anecdote is questionable since a very similar interaction between Menaechmus and Alexander the Great is recorded by Stobaeus. Both accounts were written in the 5th century AD, neither indicates its source, and neither appears in ancient Greek literature. Any firm dating of Euclid's activity is called into question by a lack of contemporary references. The earliest original reference to Euclid is in Apollonius' prefatory letter to the Conics (early 2nd century BC): "The third book of the Conics contains many astonishing theorems that are useful for both the syntheses and the determinations of number of solutions of solid loci. Most of these, and the finest of them, are novel.
And when we discovered them we realized that Euclid had not made the synthesis of the locus on three and four lines but only an accidental fragment of it, and even that was not felicitously done." The Elements is speculated to have been at least partly in circulation by the 3rd century BC, as Archimedes and Apollonius take several of its propositions for granted; however, Archimedes employs an older variant of the theory of proportions than the one found in the Elements. The oldest physical copies of material included in the Elements, dating from roughly 100 AD, can be found on papyrus fragments unearthed in an ancient rubbish heap from Oxyrhynchus, Roman Egypt. The oldest extant direct citations to the Elements in works whose dates are firmly known are not until the 2nd century AD, by Galen and Alexander of Aphrodisias; by this time it was a standard school text. Some ancient Greek mathematicians mention Euclid by name, but he is usually referred to as "ὁ στοιχειώτης" ("the author of Elements"). In the Middle Ages, some scholars contended Euclid was not a historical personage and that his name arose from a corruption of Greek mathematical terms. Works Elements Euclid is best known for his thirteen-book treatise, the Elements (Stoicheia), considered his magnum opus. Much of its content originates from earlier mathematicians, including Eudoxus, Hippocrates of Chios, Thales and Theaetetus, while other theorems are mentioned by Plato and Aristotle. It is difficult to differentiate the work of Euclid from that of his predecessors, especially because the Elements essentially superseded much earlier and now-lost Greek mathematics. The classicist Markus Asper concludes that "apparently Euclid's achievement consists of assembling accepted mathematical knowledge into a cogent order and adding new proofs to fill in the gaps", and the historian Serafina Cuomo described it as a "reservoir of results". Despite this, Sialaros maintains that "the remarkably tight structure of the Elements reveals authorial control beyond the limits of a mere editor". The Elements does not exclusively discuss geometry as is sometimes believed. It is traditionally divided into three topics: plane geometry (books 1–6), basic number theory (books 7–10) and solid geometry (books 11–13), though book 5 (on proportions) and book 10 (on irrational lines) do not exactly fit this scheme. The heart of the text is the theorems scattered throughout. Using Aristotle's terminology, these may be generally separated into two categories: "first principles" and "second principles". The first group includes statements labeled as a "definition", a "postulate", or a "common notion"; only the first book includes postulates—later known as axioms—and common notions. The second group consists of propositions, presented alongside mathematical proofs and diagrams. It is unknown if Euclid intended the Elements as a textbook, but its method of presentation makes it a natural fit. As a whole, the authorial voice remains general and impersonal. Contents Book 1 of the Elements is foundational for the entire text. It begins with a series of 20 definitions for basic geometric concepts such as lines, angles and various regular polygons. Euclid then presents 10 assumptions, grouped into five postulates (axioms) and five common notions. These assumptions are intended to provide the logical basis for every subsequent theorem, i.e. serve as an axiomatic system. The common notions exclusively concern the comparison of magnitudes.
While postulates 1 through 4 are relatively straightforward, the 5th is known as the parallel postulate and particularly famous. Book 1 also includes 48 propositions, which can be loosely divided into those concerning basic theorems and constructions of plane geometry and triangle congruence (1–26); parallel lines (27–34); the area of triangles and parallelograms (35–45); and the Pythagorean theorem (46–48). The last of these includes the earliest surviving proof of the Pythagorean theorem, described by Sialaros as "remarkably delicate". Book 2 is traditionally understood as concerning "geometric algebra", though this interpretation has been heavily debated since the 1970s; critics describe the characterization as anachronistic, since the foundations of even nascent algebra occurred many centuries later. The second book has a more focused scope and mostly provides algebraic theorems to accompany various geometric shapes. It focuses on the area of rectangles and squares (see Quadrature), and leads up to a geometric precursor of the law of cosines. Book 3 focuses on circles, while the 4th discusses regular polygons, especially the pentagon. Book 5 is among the work's most important sections and presents what is usually termed the "general theory of proportion". Book 6 utilizes the "theory of ratios" in the context of plane geometry. It is built almost entirely on its first proposition: "Triangles and parallelograms which are under the same height are to one another as their bases". From Book 7 onwards, one commentator notes, "Euclid starts afresh. Nothing from the preceding books is used". Number theory is covered by books 7 to 10, the former beginning with a set of 22 definitions for parity, prime numbers and other arithmetic-related concepts. Book 7 includes the Euclidean algorithm, a method for finding the greatest common divisor of two numbers. The 8th book discusses geometric progressions, while book 9 includes the proposition, now called Euclid's theorem, that there are infinitely many prime numbers. Of the Elements, book 10 is by far the largest and most complex, dealing with irrational numbers in the context of magnitudes. The final three books (11–13) primarily discuss solid geometry. By introducing a list of 37 definitions, Book 11 contextualizes the next two. Although its foundational character resembles Book 1, unlike the latter it features no axiomatic system or postulates. The three sections of Book 11 include content on solid geometry (1–19), solid angles (20–23) and parallelepipedal solids (24–37). Other works In addition to the Elements, at least five works of Euclid have survived to the present day. They follow the same logical structure as the Elements, with definitions and proved propositions. Catoptrics concerns the mathematical theory of mirrors, particularly the images formed in plane and spherical concave mirrors, though the attribution is sometimes questioned. The Data is a somewhat short text which deals with the nature and implications of "given" information in geometrical problems. On Divisions survives only partially in Arabic translation, and concerns the division of geometrical figures into two or more equal parts or into parts in given ratios. It includes thirty-six propositions and is similar to Apollonius' Conics. The Optics is the earliest surviving Greek treatise on perspective. It includes an introductory discussion of geometrical optics and basic rules of perspective.
The Phaenomena, a treatise on spherical astronomy, survives in Greek; it is similar to On the Moving Sphere by Autolycus of Pitane, who flourished around 310 BC. Lost works Four other works are credibly attributed to Euclid, but have been lost. The Conics was a four-book survey on conic sections, which was later superseded by Apollonius' more comprehensive treatment of the same name. The work's existence is known primarily from Pappus, who asserts that the first four books of Apollonius' Conics are largely based on Euclid's earlier work. Doubt has been cast on this assertion by at least one modern historian, owing to sparse evidence and no other corroboration of Pappus' account. The Pseudaria was, according to Proclus (Commentary 70.1–18), a text in geometrical reasoning, written to advise beginners in avoiding common fallacies. Very little is known of its specific contents aside from its scope and a few extant lines. The Porisms was, based on accounts from Pappus and Proclus, probably a three-book treatise with approximately 200 propositions. The term 'porism' in this context does not refer to a corollary, but to "a third type of proposition—an intermediate between a theorem and a problem—the aim of which is to discover a feature of an existing geometrical entity, for example, to find the centre of a circle". The mathematician Michel Chasles speculated that these now-lost propositions included content related to the modern theories of transversals and projective geometry. The contents of the Surface Loci are virtually unknown, aside from speculation based on the work's title. Conjecture based on later accounts has suggested it discussed cones and cylinders, among other subjects. Legacy Euclid is generally considered, with Archimedes and Apollonius of Perga, as among the greatest mathematicians of antiquity. Many commentators cite him as one of the most influential figures in the history of mathematics. The geometrical system established by the Elements long dominated the field; however, today that system is often referred to as 'Euclidean geometry' to distinguish it from other non-Euclidean geometries discovered in the early 19th century. Among Euclid's many namesakes are the European Space Agency's (ESA) Euclid spacecraft, the lunar crater Euclides, and the minor planet 4354 Euclides. The Elements is often considered, after the Bible, to be the most frequently translated, published, and studied book in the Western World's history. With Aristotle's Metaphysics, the Elements is perhaps the most successful ancient Greek text, and was the dominant mathematical textbook in the Medieval Arab and Latin worlds. The first English edition of the Elements was published in 1570 by Henry Billingsley and John Dee. The mathematician Oliver Byrne published a well-known version of the Elements in 1847 entitled The First Six Books of the Elements of Euclid in Which Coloured Diagrams and Symbols Are Used Instead of Letters for the Greater Ease of Learners, which included colored diagrams intended to increase its pedagogical effect. David Hilbert authored a modern axiomatization of the Elements. Edna St. Vincent Millay wrote that "Euclid alone has looked on Beauty bare." References Notes Citations Sources Books Articles Online External links Works Euclid Collection at University College London (c.500 editions of works by Euclid), available online through the Stavros Niarchos Foundation Digital Library.
Scans of Johan Heiberg's edition of Euclid at wilbourhall.org The Elements PDF copy, with the original Greek and an English translation on facing pages, University of Texas. All thirteen books, in several languages, such as Spanish, Catalan, English, German, Portuguese, Arabic, Italian, Russian and Chinese. 4th-century BC births 4th-century BC Egyptian people 4th-century BC Greek people 4th-century BC writers 3rd-century BC deaths 3rd-century BC Egyptian people 3rd-century BC Greek people 3rd-century BC mathematicians 3rd-century BC writers Ancient Alexandrians Ancient Greek geometers Number theorists Philosophers of mathematics
Euclid
[ "Mathematics" ]
3,846
[ "Number theorists", "Number theory" ]
9,417
https://en.wikipedia.org/wiki/Euclidean%20geometry
Euclidean geometry is a mathematical system attributed to ancient Greek mathematician Euclid, which he described in his textbook on geometry, Elements. Euclid's approach consists in assuming a small set of intuitively appealing axioms (postulates) and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated earlier, Euclid was the first to organize these propositions into a logical system in which each result is proved from axioms and previously proved theorems. The Elements begins with plane geometry, still taught in secondary school (high school) as the first axiomatic system and the first examples of mathematical proofs. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language. For more than two thousand years, the adjective "Euclidean" was unnecessary because Euclid's axioms seemed so intuitively obvious (with the possible exception of the parallel postulate) that theorems proved from them were deemed absolutely true, and thus no other sorts of geometry were possible. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Albert Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only over short distances (relative to the strength of the gravitational field). Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms describing basic properties of geometric objects such as points and lines, to propositions about those objects. This is in contrast to analytic geometry, introduced almost 2,000 years later by René Descartes, which uses coordinates to express geometric properties by means of algebraic formulas. The Elements The Elements is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost. There are 13 books in the Elements: Books I–IV and VI discuss plane geometry. Many results about plane figures are proved, for example, "In any triangle, two angles taken together in any manner are less than two right angles." (Book I, proposition 17) and the Pythagorean theorem "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." (Book I, proposition 47) Books V and VII–X deal with number theory, with numbers treated geometrically as lengths of line segments or areas of surface regions. Notions such as prime numbers and rational and irrational numbers are introduced. It is proved that there are infinitely many prime numbers (see the computational sketch below). Books XI–XIII concern solid geometry. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base. The Platonic solids are constructed. Axioms Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a small number of simple axioms. Until the advent of non-Euclidean geometry, these axioms were considered to be obviously true in the physical world, so that all the theorems would be equally true. However, Euclid's reasoning from assumptions to conclusions remains valid independently from the physical reality.
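To illustrate the number-theoretic result flagged above: Euclid's argument in Book IX, proposition 20 shows that no finite list of primes can be complete, because some prime factor of their product plus one is missing from the list. The following minimal Python sketch restates that argument in modern computational terms; the function name and the trial-division details are illustrative assumptions, not anything found in Euclid.

def prime_outside(primes):
    # Euclid IX.20 as a computation: given a finite list of primes,
    # return a prime that is not in the list.
    n = 1
    for p in primes:
        n *= p
    n += 1  # n leaves remainder 1 on division by every listed prime
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d  # the smallest divisor > 1 of n is prime, and cannot be in the list
        d += 1
    return n  # otherwise n itself is prime

print(prime_outside([2, 3, 5]))  # 2*3*5 + 1 = 31, a prime not in the list

Because n is congruent to 1 modulo every listed prime, none of them divides n, so any prime factor of n is new.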
Near the beginning of the first book of the Elements, Euclid gives five postulates (axioms) for plane geometry, stated in terms of constructions (as translated by Thomas Heath): Let the following be postulated: To draw a straight line from any point to any point. To produce (extend) a finite straight line continuously in a straight line. To describe a circle with any centre and distance (radius). That all right angles are equal to one another. [The parallel postulate]: That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles. Although Euclid explicitly only asserts the existence of the constructed objects, in his reasoning he also implicitly assumes them to be unique. The Elements also include the following five "common notions": Things that are equal to the same thing are also equal to one another (the transitive property of a Euclidean relation). If equals are added to equals, then the wholes are equal (addition property of equality). If equals are subtracted from equals, then the differences are equal (subtraction property of equality). Things that coincide with one another are equal to one another (reflexive property). The whole is greater than the part. Modern scholars agree that Euclid's postulates do not provide the complete logical foundation that Euclid required for his presentation. Modern treatments use more extensive and complete sets of axioms. Parallel postulate To the ancients, the parallel postulate seemed less obvious than the others. They aspired to create a system of absolutely certain propositions, and to them, it seemed as if the parallel line postulate required proof from simpler statements. It is now known that such a proof is impossible since one can construct consistent systems of geometry (obeying the other axioms) in which the parallel postulate is true, and others in which it is false. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: his first 28 propositions are those that can be proved without it. Many alternative axioms can be formulated which are logically equivalent to the parallel postulate (in the context of the other axioms). For example, Playfair's axiom states: In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line. The "at most" clause is all that is needed since it can be proved from the remaining axioms that at least one parallel line exists. Methods of proof Euclidean geometry is constructive. Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory. Strictly speaking, the lines on paper are models of the objects defined within the formal system, rather than instances of those objects. For example, a Euclidean straight line has no width, but any real drawn line has some width.
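The constructive character just described can be made concrete in coordinates. As a hedged illustration (in modern analytic language that Euclid did not use), the first proposition of the Elements constructs an equilateral triangle on a segment AB by intersecting the circle about A through B with the circle about B through A. A minimal Python sketch computing one of the two intersection points directly:

import math

def equilateral_third_vertex(ax, ay, bx, by):
    # Elements I.1: the circles of radius |AB| about A and B meet at two
    # points; this returns the one to the left of the directed segment AB.
    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0  # midpoint of AB
    dx, dy = bx - ax, by - ay
    h = math.sqrt(3) / 2.0  # altitude of an equilateral triangle, per unit side
    return (mx - h * dy, my + h * dx)

print(equilateral_third_vertex(0.0, 0.0, 1.0, 0.0))  # (0.5, 0.8660...)

Flipping the sign of h gives the other intersection point; as noted later in this article, Euclid's axioms do not by themselves guarantee that these intersection points exist.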
Though nearly all modern mathematicians consider nonconstructive proofs just as sound as constructive ones, they are often considered less elegant, intuitive, or practically useful. Euclid's constructive proofs often supplanted fallacious nonconstructive ones, e.g. some Pythagorean proofs that assumed all numbers are rational, usually requiring a statement such as "Find the greatest common measure of ..." Euclid often used proof by contradiction. Notation and terminology Naming of points and figures Points are customarily named using capital letters of the alphabet. Other figures, such as lines, triangles, or circles, are named by listing a sufficient number of points to pick them out unambiguously from the relevant figure, e.g., triangle ABC would typically be a triangle with vertices at points A, B, and C. Complementary and supplementary angles Angles whose sum is a right angle are called complementary. Complementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the right angle. The number of rays in between the two original rays is infinite. Angles whose sum is a straight angle are supplementary. Supplementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the straight angle (180 degree angle). The number of rays in between the two original rays is infinite. Modern versions of Euclid's notation In modern terminology, angles would normally be measured in degrees or radians. Modern school textbooks often define separate figures called lines (infinite), rays (semi-infinite), and line segments (of finite length). Euclid, rather than discussing a ray as an object that extends to infinity in one direction, would normally use locutions such as "if the line is extended to a sufficient length", although he occasionally referred to "infinite lines". A "line" for Euclid could be either straight or curved, and he used the more specific term "straight line" when necessary. Some important or well known results Pons asinorum The pons asinorum (bridge of asses) states that in isosceles triangles the angles at the base equal one another, and, if the equal straight lines are produced further, then the angles under the base equal one another. Its name may be attributed to its frequent role as the first real test in the Elements of the intelligence of the reader and as a bridge to the harder propositions that followed. It might also be so named because of the geometrical figure's resemblance to a steep bridge that only a sure-footed donkey could cross. Congruence of triangles Triangles are congruent if they have all three sides equal (SSS), two sides and the angle between them equal (SAS), or two angles and a side equal (ASA) (Book I, propositions 4, 8, and 26). Triangles with three equal angles (AAA) are similar, but not necessarily congruent. Also, triangles with two equal sides and an adjacent angle are not necessarily equal or congruent. Triangle angle sum The sum of the angles of a triangle is equal to a straight angle (180 degrees). This causes an equilateral triangle to have three interior angles of 60 degrees. Also, it causes every triangle to have at least two acute angles and up to one obtuse or right angle. 
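The congruence criteria above translate directly into computation. For example, the SSS criterion says two triangles are congruent exactly when their three side lengths agree. A minimal Python sketch under that reading (the function names and the floating-point tolerance are illustrative assumptions):

import math

def sides(tri):
    # Sorted side lengths of a triangle given as three (x, y) vertices.
    (ax, ay), (bx, by), (cx, cy) = tri
    return sorted([math.hypot(bx - ax, by - ay),
                   math.hypot(cx - bx, cy - by),
                   math.hypot(ax - cx, ay - cy)])

def congruent_sss(t1, t2, tol=1e-9):
    # SSS: congruent iff the multisets of side lengths match (up to rounding).
    return all(abs(a - b) < tol for a, b in zip(sides(t1), sides(t2)))

t1 = [(0, 0), (4, 0), (0, 3)]
t2 = [(1, 1), (1, 5), (4, 1)]  # the same 3-4-5 triangle, moved and reflected
print(congruent_sss(t1, t2))   # True

An analogous check on angles alone would detect similarity but, as noted above, not congruence.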
Pythagorean theorem The celebrated Pythagorean theorem (book I, proposition 47) states that in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle). Thales' theorem Thales' theorem, named after Thales of Miletus, states that if A, B, and C are points on a circle where the line AC is a diameter of the circle, then the angle ABC is a right angle. Cantor supposed that Thales proved his theorem by means of Euclid Book I, Prop. 32 after the manner of Euclid Book III, Prop. 31. Scaling of area and volume In modern terminology, the area of a plane figure is proportional to the square of any of its linear dimensions, A ∝ L², and the volume of a solid to the cube, V ∝ L³. Euclid proved these results in various special cases such as the area of a circle and the volume of a parallelepipedal solid. Euclid determined some, but not all, of the relevant constants of proportionality. For instance, it was his successor Archimedes who proved that a sphere has 2/3 the volume of the circumscribing cylinder. System of measurement and arithmetic Euclidean geometry has two fundamental types of measurements: angle and distance. The angle scale is absolute, and Euclid uses the right angle as his basic unit, so that, for example, a 45-degree angle would be referred to as half of a right angle. The distance scale is relative; one arbitrarily picks a line segment with a certain nonzero length as the unit, and other distances are expressed in relation to it. Addition of distances is represented by a construction in which one line segment is copied onto the end of another line segment to extend its length, and similarly for subtraction. Measurements of area and volume are derived from distances. For example, a rectangle with a width of 3 and a length of 4 has an area that represents the product, 12. Because this geometrical interpretation of multiplication was limited to three dimensions, there was no direct way of interpreting the product of four or more numbers, and Euclid avoided such products, although they are implied, for example in the proof of book IX, proposition 20. Euclid refers to a pair of lines, or a pair of planar or solid figures, as "equal" (ἴσος) if their lengths, areas, or volumes are equal respectively, and similarly for angles. The stronger term "congruent" refers to the idea that an entire figure is the same size and shape as another figure. Alternatively, two figures are congruent if one can be moved on top of the other so that it matches up with it exactly. (Flipping it over is allowed.) Thus, for example, a 2×6 rectangle and a 3×4 rectangle are equal but not congruent, and the letter R is congruent to its mirror image. Figures that would be congruent except for their differing sizes are referred to as similar. Corresponding angles in a pair of similar shapes are equal and corresponding sides are in proportion to each other. In engineering Design and Analysis Stress Analysis: Euclidean geometry is pivotal in determining stress distribution in mechanical components, which is essential for ensuring structural integrity and durability. Gear Design: The design of gears, a crucial element in many mechanical systems, relies heavily on Euclidean geometry to ensure proper tooth shape and engagement for efficient power transmission.
Heat Exchanger Design: In thermal engineering, Euclidean geometry is used to design heat exchangers, where the geometric configuration greatly influences thermal efficiency. See shell-and-tube heat exchangers and plate heat exchangers for more details. Lens Design: In optical engineering, Euclidean geometry is critical in the design of lenses, where precise geometric shapes determine the focusing properties. Geometric optics analyzes the focusing of light by lenses and mirrors. Dynamics Vibration Analysis: Euclidean geometry is essential in analyzing and understanding the vibrations in mechanical systems, aiding in the design of systems that can withstand or utilize these vibrations effectively. Wing Design: The application of Euclidean geometry in aerodynamics is evident in the design of aircraft wings, airfoils, and hydrofoils, where geometric shape directly impacts lift and drag characteristics. Satellite Orbits: Euclidean geometry helps in calculating and predicting the orbits of satellites, essential for successful space missions and satellite operations. Also see astrodynamics, celestial mechanics, and elliptic orbit. CAD Systems 3D Modeling: In CAD (computer-aided design) systems, Euclidean geometry is fundamental for creating accurate 3D models of mechanical parts. These models are crucial for visualizing and testing designs before manufacturing. Design and Manufacturing: Much of CAM (computer-aided manufacturing) relies on Euclidean geometry. The design geometry in CAD/CAM typically consists of shapes bounded by planes, cylinders, cones, tori, and other similar Euclidean forms. Today, CAD/CAM is essential in the design of a wide range of products, from cars and airplanes to ships and smartphones. Evolution of Drafting Practices: Historically, advanced Euclidean geometry, including theorems like Pascal's theorem and Brianchon's theorem, was integral to drafting practices. However, with the advent of modern CAD systems, such in-depth knowledge of these theorems is less necessary in contemporary design and manufacturing processes. Circuit Design PCB Layouts: Printed circuit board (PCB) design utilizes Euclidean geometry for the efficient placement and routing of components, ensuring functionality while optimizing space. Efficient layout of electronic components on PCBs is critical for minimizing signal interference and optimizing circuit performance. Electromagnetic and Fluid Flow Fields Antenna Design: Euclidean geometry helps in designing antennas and antenna arrays, where the spatial arrangement and dimensions directly affect performance in transmitting and receiving electromagnetic waves. Field Theory: In the study of inviscid flow fields and electromagnetic fields, Euclidean geometry aids in visualizing and solving complex-potential flow problems. This is essential for understanding fluid velocity fields and electromagnetic field interactions in three-dimensional space; such a relationship is characterized by an irrotational, solenoidal field, or a conservative vector field. Controls Control System Analysis: The application of Euclidean geometry in control theory helps in the analysis and design of control systems, particularly in understanding and optimizing system stability and response.
Calculation Tools: Euclidean geometry is integral in using Jacobian matrices for transformations and control systems in both mechanical and electrical engineering fields, providing insights into system behavior and properties. The Jacobian serves as a linearized design matrix in statistical regression and curve fitting; see non-linear least squares. The Jacobian is also used in random matrices, moments, statistics, and diagnostics. Other general applications Because of Euclidean geometry's fundamental status in mathematics, it is impractical to give more than a representative sampling of applications here. As suggested by the etymology of the word, one of the earliest reasons for interest in geometry, and also one of its most common current uses, is surveying. In addition, it has been used in classical mechanics and in the cognitive and computational approaches to visual perception of objects. Certain practical results from Euclidean geometry (such as the right-angle property of the 3-4-5 triangle) were used long before they were proved formally. The fundamental types of measurements in Euclidean geometry are distances and angles, both of which can be measured directly by a surveyor. Historically, distances were often measured by chains, such as Gunter's chain, and angles using graduated circles and, later, the theodolite. An application of Euclidean solid geometry is the determination of packing arrangements, such as the problem of finding the most efficient packing of spheres in n dimensions. This problem has applications in error detection and correction. Geometry is used extensively in architecture. Geometry can be used to design origami. Some classical construction problems of geometry are impossible using compass and straightedge, but can be solved using origami. Later history Archimedes and Apollonius Archimedes, a colorful figure about whom many historical anecdotes are recorded, is remembered along with Euclid as one of the greatest of ancient mathematicians. Although the foundations of his work were put in place by Euclid, his work, unlike Euclid's, is believed to have been entirely original. He proved equations for the volumes and areas of various figures in two and three dimensions, and enunciated the Archimedean property of finite numbers. Apollonius of Perga is mainly known for his investigation of conic sections. 17th century: Descartes René Descartes (1596–1650) developed analytic geometry, an alternative method for formalizing geometry which focused on turning geometry into algebra. In this approach, a point on a plane is represented by its Cartesian (x, y) coordinates, a line is represented by its equation, and so on. In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered theorems. The equation d(P, Q) = √((q_x − p_x)² + (q_y − p_y)²) defining the distance between two points P = (p_x, p_y) and Q = (q_x, q_y) is then known as the Euclidean metric, and other metrics define non-Euclidean geometries. In terms of analytic geometry, the restriction of classical geometry to compass and straightedge constructions means a restriction to first- and second-order equations, e.g., y = 2x + 1 (a line), or x² + y² = 7 (a circle). Also in the 17th century, Girard Desargues, motivated by the theory of perspective, introduced the concept of idealized points, lines, and planes at infinity.
The result can be considered as a type of generalized geometry, projective geometry, but it can also be used to produce proofs in ordinary Euclidean geometry in which the number of special cases is reduced. 18th century Geometers of the 18th century struggled to define the boundaries of the Euclidean system. Many tried in vain to prove the fifth postulate from the first four. By 1763, at least 28 different proofs had been published, but all were found incorrect. Leading up to this period, geometers also tried to determine what constructions could be accomplished in Euclidean geometry. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible. Other constructions that were proved impossible include doubling the cube and squaring the circle. In the case of doubling the cube, the impossibility of the construction originates from the fact that the compass and straightedge method involves equations whose order is an integral power of two, while doubling a cube requires the solution of a third-order equation (a modern restatement of this argument follows at the end of this section). Euler discussed a generalization of Euclidean geometry called affine geometry, which retains the fifth postulate unmodified while weakening postulates three and four in a way that eliminates the notions of angle (whence right triangles become meaningless) and of equality of length of line segments in general (whence circles become meaningless) while retaining the notions of parallelism as an equivalence relation between lines, and equality of length of parallel line segments (so line segments continue to have a midpoint).
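The impossibility argument for doubling the cube, flagged above, can be restated in field-theoretic terms. This is a standard modern reconstruction, assuming the theory of field extensions, which came long after the 18th-century work it summarizes:

\[
x^3 = 2 \;\Longrightarrow\; x = \sqrt[3]{2}, \qquad [\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}] = 3,
\]
\[
x \text{ constructible} \;\Longrightarrow\; [\mathbb{Q}(x) : \mathbb{Q}] = 2^k \text{ for some integer } k \ge 0.
\]

Since 3 is not a power of 2, the side of the doubled cube cannot be produced with compass and straightedge.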
The Clifford torus on the surface of the 3-sphere is the simplest and most symmetric flat embedding of the Cartesian product of two circles (in the same sense that the surface of a cylinder is "flat"). Non-Euclidean geometry The century's most influential development in geometry occurred when, around 1830, János Bolyai and Nikolai Ivanovich Lobachevsky separately published work on non-Euclidean geometry, in which the parallel postulate is not valid. Since non-Euclidean geometry is provably relatively consistent with Euclidean geometry, the parallel postulate cannot be proved from the other postulates. In the 19th century, it was also realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the Elements. For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore must be an axiom itself. The very first geometric proof in the Elements is that any line segment is part of a triangle; Euclid constructs this in the usual way, by drawing circles around both endpoints and taking their intersection as the third vertex. His axioms, however, do not guarantee that the circles actually intersect, because they do not assert the geometrical property of continuity, which in Cartesian terms is equivalent to the completeness property of the real numbers. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski. 20th century and relativity Einstein's theory of special relativity involves a four-dimensional space-time, the Minkowski space, which is non-Euclidean. This shows that non-Euclidean geometries, which had been introduced many years earlier for showing that the parallel postulate cannot be proved, are also useful for describing the physical world. However, the three-dimensional "space part" of the Minkowski space remains the space of Euclidean geometry. This is not the case with general relativity, for which the geometry of the space part of space-time is not Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees due to gravity. A relatively weak gravitational field, such as the Earth's or the Sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting these deviations in rays of light from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the slight bending of starlight by the Sun during a solar eclipse in 1919, and such considerations are now an integral part of the software that runs the GPS system. As a description of the structure of space Euclid believed that his axioms were self-evident statements about physical reality. Euclid's proofs depend upon assumptions perhaps not obvious in Euclid's fundamental axioms, in particular that certain movements of figures do not change their geometrical properties such as the lengths of sides and interior angles, the so-called Euclidean motions, which include translations, reflections and rotations of figures.
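The invariance of lengths under Euclidean motions mentioned above is easy to check in coordinates. A minimal Python sketch (a modern, coordinate-based illustration rather than anything in Euclid; the names and tolerance are assumptions):

import math

def rotate(point, theta):
    # Rotate a point about the origin; one of the Euclidean motions.
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

def dist(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

p, q = (1.0, 2.0), (4.0, 6.0)
theta = 0.7  # any angle
# Rotating both points leaves their distance (here 5.0) unchanged.
assert abs(dist(p, q) - dist(rotate(p, theta), rotate(q, theta))) < 1e-9

Translations and reflections can be checked the same way, which is the coordinate counterpart of the assumption that figures may be moved without distortion.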
Taken as a physical description of space, postulate 2 (extending a line) asserts that space does not have holes or boundaries; postulate 4 (equality of right angles) says that space is isotropic and figures may be moved to any location while maintaining congruence; and postulate 5 (the parallel postulate) that space is flat (has no intrinsic curvature). As discussed above, Albert Einstein's theory of relativity significantly modifies this view. The ambiguous character of the axioms as originally formulated by Euclid makes it possible for different commentators to disagree about some of their other implications for the structure of space, such as whether or not it is infinite (see below) and what its topology is. Modern, more rigorous reformulations of the system typically aim for a cleaner separation of these issues. Interpreting Euclid's axioms in the spirit of this more modern approach, axioms 1–4 are consistent with either infinite or finite space (as in elliptic geometry), and all five axioms are consistent with a variety of topologies (e.g., a plane, a cylinder, or a torus for two-dimensional Euclidean geometry). Treatment of infinity Infinite objects Euclid sometimes distinguished explicitly between "finite lines" (e.g., Postulate 2) and "infinite lines" (book I, proposition 12). However, he typically did not make such distinctions unless they were necessary. The postulates do not explicitly refer to infinite lines, although for example some commentators interpret postulate 3, existence of a circle with any radius, as implying that space is infinite. The notion of infinitesimal quantities had previously been discussed extensively by the Eleatic School, but nobody had been able to put them on a firm logical basis, and paradoxes such as Zeno's remained unresolved to universal satisfaction. Euclid used the method of exhaustion rather than infinitesimals. Later ancient commentators, such as Proclus (410–485 CE), treated many questions about infinity as issues demanding proof and, e.g., Proclus claimed to prove the infinite divisibility of a line, based on a proof by contradiction in which he considered the cases of even and odd numbers of points constituting it. At the turn of the 20th century, Otto Stolz, Paul du Bois-Reymond, Giuseppe Veronese, and others produced controversial work on non-Archimedean models of Euclidean geometry, in which the distance between two points may be infinite or infinitesimal, in the Newton–Leibniz sense. Fifty years later, Abraham Robinson provided a rigorous logical foundation for Veronese's work. Infinite processes Ancient geometers may have considered the parallel postulate – that two parallel lines do not ever intersect – less certain than the others because it makes a statement about infinitely remote regions of space, and so cannot be physically verified. The modern formulation of proof by induction was not developed until the 17th century, but some later commentators consider it implicit in some of Euclid's proofs, e.g., the proof of the infinitude of primes. Supposed paradoxes involving infinite series, such as Zeno's paradox, predated Euclid. Euclid avoided such discussions, giving, for example, the expression for the partial sums of the geometric series in IX.35 without commenting on the possibility of letting the number of terms become infinite.
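For reference, the partial sum that IX.35 expresses in the language of proportions is, in modern notation (a reconstruction in symbols Euclid did not use):

\[
S_n = a + ar + ar^2 + \cdots + ar^{n-1} = a\,\frac{r^n - 1}{r - 1}, \qquad r \neq 1,
\]

and Euclid stops at this finite expression rather than considering the limit as the number of terms grows without bound.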
Logical basis Classical logic Euclid frequently used the method of proof by contradiction, and therefore the traditional presentation of Euclidean geometry assumes classical logic, in which every proposition is either true or false, i.e., for any proposition P, the proposition "P or not P" is automatically true. Modern standards of rigor Placing Euclidean geometry on a solid axiomatic basis was a preoccupation of mathematicians for centuries. The role of primitive notions, or undefined concepts, was clearly put forward by Alessandro Padoa of the Peano delegation at the 1900 Paris conference: that is, mathematics is context-independent knowledge within a hierarchical framework. Bertrand Russell expressed a similar view. Such foundational approaches range between foundationalism and formalism. Axiomatic formulations Euclid's axioms: In his dissertation to Trinity College, Cambridge, Bertrand Russell summarized the changing role of Euclid's geometry in the minds of philosophers up to that time. It was a conflict between certain knowledge, independent of experiment, and empiricism, requiring experimental input. This issue became clear as it was discovered that the parallel postulate was not necessarily valid and its applicability was an empirical matter, deciding whether the applicable geometry was Euclidean or non-Euclidean. Hilbert's axioms: Hilbert's axioms had the goal of identifying a simple and complete set of independent axioms from which the most important geometric theorems could be deduced. The outstanding objectives were to make Euclidean geometry rigorous (avoiding hidden assumptions) and to make clear the ramifications of the parallel postulate. Birkhoff's axioms: Birkhoff proposed four postulates for Euclidean geometry that can be confirmed experimentally with scale and protractor. This system relies heavily on the properties of the real numbers. The notions of angle and distance become primitive concepts. Tarski's axioms: Alfred Tarski (1902–1983) and his students defined elementary Euclidean geometry as the geometry that can be expressed in first-order logic and does not depend on set theory for its logical basis, in contrast to Hilbert's axioms, which involve point sets. Tarski proved that his axiomatic formulation of elementary Euclidean geometry is consistent and complete in a certain sense: there is an algorithm that decides, for every proposition, whether it is true or false. (This does not violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.) This is equivalent to the decidability of real closed fields, of which elementary Euclidean geometry is a model. See also Absolute geometry Analytic geometry Birkhoff's axioms Cartesian coordinate system Hilbert's axioms Incidence geometry List of interactive geometry software Metric space Non-Euclidean geometry Ordered geometry Parallel postulate Type theory Classical theorems Angle bisector theorem Butterfly theorem Ceva's theorem Heron's formula Menelaus' theorem Nine-point circle Pythagorean theorem
Euclidean geometry
[ "Mathematics" ]
6,977
[ "Elementary mathematics", "Elementary geometry" ]
9,424
https://en.wikipedia.org/wiki/Ericsson
Telefonaktiebolaget LM Ericsson, commonly known as Ericsson, is a Swedish multinational networking and telecommunications company headquartered in Stockholm, Sweden. The company sells infrastructure, software, and services in information and communications technology for telecommunications service providers and enterprises, including, among others, 3G, 4G, and 5G equipment, and Internet Protocol (IP) and optical transport systems. The company employs around 100,000 people and operates in more than 180 countries. Ericsson has over 57,000 granted patents. Ericsson has been a major contributor to the development of the telecommunications industry and is one of the leaders in 5G. The company was founded in 1876 by Lars Magnus Ericsson and is jointly controlled by the Wallenberg family through its holding company Investor AB, and the universal bank Handelsbanken through its investment company Industrivärden. The Wallenbergs and the Handelsbanken sphere acquired their voting-strong A-shares, and thus the control of Ericsson, after the fall of the Kreuger empire in the early 1930s. Ericsson is the inventor of Bluetooth technology. History Foundation Lars Magnus Ericsson began his association with telephones in his youth as an instrument maker. He worked for a firm that made telegraph equipment for the Swedish government agency Telegrafverket. In 1876, at the age of 30, he started a telegraph repair shop with help from his friend Carl Johan Andersson in central Stockholm and repaired foreign-made telephones. In 1878, Ericsson began making and selling his own telephone equipment. His telephones were not technically innovative. In 1878, he agreed to supply telephones and switchboards to Sweden's first telecommunications operating company, Stockholms Allmänna Telefonaktiebolag. International expansion As production grew in the late 1890s, and the Swedish market seemed to be reaching saturation, Ericsson expanded into foreign markets through a number of agents. The UK (Ericsson Telephones Ltd.) and Russia were early markets, where factories were later established to improve the chances of gaining local contracts and augment the output of the Swedish factory. In the UK, the National Telephone Company was a major customer; by 1897, Ericsson sold 28% of its output in the UK. The Nordic countries were also Ericsson customers; they were encouraged by the growth of telephone services in Sweden. Other countries and colonies were exposed to Ericsson products through the influence of their parent countries. These included Australia and New Zealand, which by the late 1890s were Ericsson's largest non-European markets. Mass production techniques were now firmly established; telephones were losing some of their ornate finish and decoration. Despite their successes elsewhere, Ericsson did not make significant sales in the United States, where AT&T's Western Electric Company (via the Bell System), Kellogg and Automatic Electric dominated the market. Ericsson eventually sold its U.S. assets. Sales in Mexico led to inroads into South American countries. South Africa and China were also generating significant sales. With his company now multinational, Lars Ericsson stepped down from the company in 1901. Automatic equipment Ericsson ignored the growth of automatic telephony in the United States and concentrated on manual exchange designs. Its first dial telephone was produced in 1921, although sales of the early automatic switching systems were slow until the equipment had proven itself on the world's markets.
Telephones of this period had a simpler design and finish, and many of the early automatic desk telephones in Ericsson's catalogues were magneto styles with a dial on the front and appropriate changes to the electronics. Elaborate decals decorated the cases. World War I, the subsequent Great Depression, and the loss of its Russian assets after the Revolution slowed the company's development, while sales to other countries fell by about half. Shareholding changes The acquisition of other telecommunications companies put pressure on Ericsson's finances; in 1925, Karl Fredric Wincrantz took control of the company by acquiring most of the shares. Wincrantz was partly funded by Ivar Kreuger, an international financier. The company was renamed Telefonaktiebolaget L M Ericsson. Kreuger started showing interest in the company, being a major owner of Wincrantz holding companies. Wallenberg era begins Ericsson was saved from bankruptcy and closure with the help of banks including Stockholms Enskilda Bank (now Skandinaviska Enskilda Banken) and other Swedish investment banks controlled by the Wallenberg family, and some Swedish government backing. Marcus Wallenberg Jr. negotiated a deal with several Swedish banks to rebuild Ericsson financially. The banks gradually increased their possession of LM Ericsson "A" shares, while International Telephone & Telegraph (ITT) was still the largest shareholder. In 1960, the Wallenberg family bought ITT's shares in Ericsson, and has since controlled the company. Market development In the 1920s and 1930s, the world telephone markets were being organized and stabilized by many governments. The fragmented town-by-town systems serviced by small, private companies that had evolved were integrated and offered for lease to a single company. Ericsson obtained some of these leases, which represented further sales of equipment to the growing networks, and came to have almost one-third of its sales under the control of its telephone operating companies. Further development Ericsson introduced the world's first fully automatic mobile telephone system, MTA, in 1956. It released one of the world's first hands-free speaker telephones in the 1960s. In 1954, it had released the Ericofon. Ericsson crossbar switching equipment was used by telephone administrations in many countries. In 1983 the company introduced the ERIPAX suite of network products and services. Emergence of the Internet (1995–2003) In the 1990s, during the emergence of the Internet, Ericsson was regarded as slow to realize its potential and falling behind in the area of IP technology. But the company had established an Internet project in 1995 called Infocom Systems to exploit opportunities leading from fixed-line telecom and IT. CEO Lars Ramqvist wrote in the 1996 annual report that in all three of its business areas – Mobile Telephones and Terminals, Mobile Systems, and Infocom Systems – "we will expand our operations as they relate to customer service and Internet Protocol (IP) access (Internet and intranet access)". The growth of GSM, which became a de facto world standard, combined with Ericsson's other mobile standards, such as D-AMPS and PDC, meant that by the start of 1997, Ericsson had an estimated 40% share of the world's mobile market, with around 54 million subscribers. There were also around 188 million AXE lines in place or on order in 117 countries. Telecom and chip companies worked in the 1990s to provide Internet access over mobile telephones.
Early versions such as Wireless Application Protocol (WAP) used packet data over the existing GSM network, in a form known as GPRS (General Packet Radio Service), but these services, known as 2.5G, were fairly rudimentary and did not achieve much mass-market success. The International Telecommunication Union (ITU) had prepared the specifications for a 3G mobile service that included several technologies. Ericsson pushed hard for the WCDMA (wideband CDMA) form based on the GSM standard and began testing it in 1996. Japanese operator NTT DoCoMo signed deals to partner with Ericsson and Nokia, who came together in 1997 to support WCDMA over rival standards. DoCoMo was the first operator with a live 3G network, using its own version of WCDMA called FOMA. Ericsson was a significant developer of the WCDMA version of GSM, while US-based chip developer Qualcomm promoted the alternative system CDMA2000, building on the popularity of CDMA in the US market. This resulted in a patent infringement lawsuit that was resolved in March 1999 when the two companies agreed to pay each other royalties for the use of their respective technologies and Ericsson purchased Qualcomm's wireless infrastructure business and some R&D resources. Ericsson issued a profit warning in March 2001. Over the coming year, sales to operators halved. Mobile telephones became a burden; the company's telephones unit made a loss of SEK 24 billion in 2000. A fire in a Philips chip factory in New Mexico in March 2000 caused severe disruption to Ericsson's phone production, dealing a coup de grâce to Ericsson's mobile phone hopes. Mobile phones would be spun off into a joint venture with Sony, Sony Ericsson Mobile Communications, in October 2001. Ericsson launched several rounds of restructuring, refinancing and job-cutting; during 2001, staff numbers fell from 107,000 to 85,000. A further 20,000 went the next year, and 11,000 more in 2003. A new rights issue raised SEK 30 billion to keep the company afloat. The company had survived as mobile Internet started growing. With record profits, it was in better shape than many of its competitors. Rebuilding and growing (2003–2018) The emergence of full mobile Internet began a period of growth for the global telecom industry, including Ericsson. After the launch of 3G services in 2003, people started to access the Internet using their telephones. Ericsson was working on ways to improve WCDMA as operators were buying and rolling it out; it was the first generation of 3G access. New advances included IMS (IP Multimedia Subsystem) and the next evolution of WCDMA, called High-Speed Packet Access (HSPA). It was initially deployed in the download version called HSDPA; the technology spread from the first test calls in the US in late 2005 to 59 commercial networks in September 2006. HSPA would provide the world's first mobile broadband. In July 2016, Hans Vestberg stepped down as Ericsson's CEO after heading the company for six years. Jan Frykhammar, who had been working for the company since 1991, stepped in as interim CEO while Ericsson searched for a full-time replacement. On 16 January 2017, following Ericsson's announcement on 26 October 2016, new CEO Börje Ekholm started and interim CEO Jan Frykhammar stepped down the following day. In June 2018, Ericsson, Inc. and Ericsson AB agreed to pay $145,893 to settle potential civil liability for an apparent violation of the International Emergency Economic Powers Act (IEEPA) and the Sudanese Sanctions Regulations, 31 C.F.R.
part 538 (SSR). Acquisitions and cooperation Around 2000, companies and governments began to push for standards for mobile Internet. In May 2000, the European Commission created the Wireless Strategic Initiative, a consortium of four telecommunications suppliers in Europe – Ericsson, Nokia, Alcatel (France) and Siemens (Germany) – to develop and test new prototypes for advanced wireless communications systems. Later that year, the consortium partners invited other companies to join them in a Wireless World Research Forum in 2001. In December 1999, Microsoft and Ericsson announced a strategic partnership to combine the former's web browser and server software with the latter's mobile-internet technologies. In 2000, the Dot-com bubble burst with marked economic implications for Sweden. Ericsson, the world's largest producer of mobile telecommunications equipment, shed thousands of jobs, as did the country's Internet consulting firms and dot-com start-ups. In the same year, Intel, the world's largest semiconductor chip manufacturer, signed a $1.5 billion deal to supply flash memory to Ericsson over the next three years. The short-lived partnership, called Ericsson Microsoft Mobile Venture, owned 70/30 percent by Ericsson and Microsoft respectively, ended in October 2001 when Ericsson announced it would absorb the former joint venture and adopt a licensing agreement with Microsoft instead. The same month, Ericsson and Sony announced the creation of the mobile phone manufacturing joint venture Sony Ericsson Mobile Communications. Ten years later, in February 2012, Ericsson sold its stake in the joint venture; Ericsson said it wanted to focus on the global wireless market as a whole. Lower stock prices and job losses affected many telecommunications companies in 2001. The major equipment manufacturers – Motorola (U.S.), Lucent Technologies (U.S.), Cisco Systems (U.S.), Marconi (UK), Siemens (Germany), Nokia (Finland), as well as Ericsson – all announced job cuts in their home countries and subsidiaries around the world. Ericsson's workforce worldwide fell during 2001 from 107,000 to 85,000. In September 2001, Ericsson purchased the remaining shares in EHPT from Hewlett-Packard. Founded in 1993, Ericsson Hewlett Packard Telecom (EHPT) was a joint venture made up of 60% Ericsson interests and 40% Hewlett-Packard interests. In 2002, ICT investor losses topped $2 trillion and share prices fell by 95% until August that year. More than half a million people lost their jobs in the global telecom industry over the two years. The collapse of U.S. carrier WorldCom, with more than $107 billion in assets, was the biggest in U.S. history. The sector's problems caused bankruptcies and job losses, and led to changes in the leadership of several major companies. Ericsson made 20,000 more staff redundant and raised about $3 billion from its shareholders. In June 2002, Infineon Technologies (then the sixth-largest semiconductor supplier and a subsidiary of Siemens) bought Ericsson's microelectronics unit for $400 million. Ericsson was an official backer in the 2005 launch of the .mobi top-level domain created specifically for the mobile internet. Co-operation with Hewlett-Packard did not end with EHPT; in 2003 Ericsson outsourced its IT to HP, which included Managed Services, Help Desk Support, Data Center Operations, and HP Utility Data Center. The contract was extended in 2008.
In October 2005, Ericsson acquired the bulk of the troubled UK telecommunications manufacturer Marconi Company, including its brand name that dates back to the creation of the original Marconi Company by the "father of radio" Guglielmo Marconi. In September 2006, Ericsson sold the greater part of its defense business Ericsson Microwave Systems, which mainly produced sensor and radar systems, to Saab AB, which renamed the company Saab Microwave Systems. In 2007, Ericsson acquired carrier edge-router maker Redback Networks, and then Entrisphere, a US-based company providing fiber-access technology. In September 2007, Ericsson acquired an 84% interest in German customer-care and billing software firm LHS, a stake later raised to 100%. In 2008, Ericsson sold its enterprise PBX division to Aastra Technologies, and acquired Tandberg Television, the television technology division of Norwegian company Tandberg. In 2009, Ericsson bought the CDMA2000 and LTE business of Nortel's carrier networks division for US$1.18 billion; Bizitek, a Turkish business support systems integrator; the Estonian manufacturing operations of electronic manufacturing company Elcoteq; and completed its acquisition of LHS. Acquisitions in 2010 included assets from the Strategy and Technology Group of inCode, a North American business and consulting-services company; Nortel's majority shareholding (50% plus one share) in LG-Nortel, a joint venture between LG Electronics and Nortel Networks providing sales, R&D and industrial capacity in South Korea, now known as Ericsson-LG; further Nortel carrier-division assets, deriving from Nortel's GSM business in the United States and Canada; Optimi Corporation, a U.S.–Spanish telecommunications vendor specializing in network optimization and management; and Pride, a consulting and systems-integration company operating in Italy. In 2011, Ericsson acquired manufacturing and research facilities, and staff from the Guangdong Nortel Telecommunication Equipment Company (GDNT) as well as Nortel's Multiservice Switch business. Ericsson acquired U.S. company Telcordia Technologies in January 2012, an operations and business support systems (OSS/BSS) company. In March, Ericsson announced it was buying the broadcast-services division of Technicolor, a media broadcast technology company. In April 2012 Ericsson completed the acquisition of BelAir Networks, a strong Wi-Fi network technology company. On 3 May 2013, Ericsson announced it would divest its power cable operations to Danish company NKT Holding. On 1 July 2013, Ericsson announced it would acquire the media management company Red Bee Media, subject to regulatory approval. The acquisition was completed on 9 May 2014. In September 2013, Ericsson completed its acquisition of Microsoft's Mediaroom business and television services, originally announced in April the same year. The acquisition made Ericsson the largest provider of IPTV and multi-screen services in the world, by market share; the business was renamed Ericsson Mediaroom. In September 2014, Ericsson acquired a majority stake in Apcera for cloud policy compliance. In October 2015, Ericsson completed the acquisition of Envivio, a software encoding company. In April 2016, Ericsson acquired the Polish and Ukrainian operations of software development company Ericpol, a long-time supplier to Ericsson. Approximately 2,300 Ericpol employees joined Ericsson, bringing software development competence in radio, cloud, and IP.
On 20 June 2017, Bloomberg disclosed that Ericsson had hired Morgan Stanley to explore the sale of its media businesses. The Red Bee Media business was kept in-house as an independent subsidiary company, as no suitable buyer was found, but a 51% stake of the remainder of the Media Solution division was sold to private equity firm One Equity Partners, the new company being named MediaKind. The transaction was completed on 31 January 2019. In February 2018, Ericsson acquired the location-based mobile data management platform Placecast. Ericsson has since integrated Placecast's platform and capabilities with its programmatic mobile ad subsidiary, Emodo. In May 2018, SoftBank partnered with Ericsson to trial new radio technology. In September 2020, Ericsson acquired US-based carrier equipment manufacturer Cradlepoint for $1.1 billion. In November 2021, Ericsson announced it had reached an agreement to acquire Vonage for $6.2 billion. The acquisition was completed in July 2022. In January 2024, Ericsson and MTN Group announced an expansion of their partnership to boost their mobile financial services in the African market, as the company appointed Michael Wallis-Brown as vice president responsible for global mobile financial services. In December 2024, Ericsson secured a multi-year extension deal worth billions with Bharti Airtel for the provision of 4G and 5G radio access network products and solutions. This agreement underscores the growing demand for advanced telecommunications infrastructure as the industry transitions to 5G technologies. Corporate governance Members of the board of directors of LM Ericsson have included: Leif Johansson, Jacob Wallenberg, Kristin S. Rinne, Helena Stjernholm, Sukhinder Singh Cassidy, Börje Ekholm, Ulf J. Johansson, Mikael Lännqvist, Zlatko Hadzic, Kjell-Åke Soting, Nora Denzel, Kristin Skogen Lund, Pehr Claesson, Karin Åberg and Roger Svensson. Research and development Ericsson has structured its R&D in three levels depending on when products or technologies will be introduced to customers and users. Its research and development organization is part of 'Group Function Technology' and addresses several facets of network architecture: wireless access networks; radio access technologies; broadband technologies; packet technologies; multimedia technologies; services software; EMF safety and sustainability; security; and global services. The head of research since 2012 has been Sara Mazur. Group Function Technology holds research co-operations with several major universities and research institutes, including Lund University in Sweden, Eötvös Loránd University in Hungary and Beijing Institute of Technology in China. Ericsson also holds research co-operations within several European research programs such as GigaWam and OASE. Ericsson holds 33,000 granted patents and is the number-one holder of GSM/GPRS/EDGE, WCDMA/HSPA, and LTE essential patents. In 2023, the World Intellectual Property Organization (WIPO)'s Annual PCT Review ranked Ericsson's number of patent applications published under the PCT System as 7th in the world, with 1,863 patent applications being published during 2023. Ericsson hosts a developer program called Ericsson Developer Connection designed to encourage development of applications and services. Ericsson also has an open innovation initiative for beta applications and beta APIs and tools called Ericsson Labs. The company hosts several internal innovation competitions among its employees.
In May 2022, it was announced that Ericsson and Intel were pooling R&D excellence to create high-performing Cloud RAN solutions. The organisations jointly launched a tech hub in California, USA. The hub focuses on the benefits that Ericsson Cloud RAN and Intel technology can bring to improving energy efficiency and network performance, reducing time to market, and monetizing new business opportunities such as enterprise applications. Products and services Ericsson's business includes technology research, development, network systems and software development, and running operations for telecom service providers. Ericsson offers end-to-end services for all major mobile communication standards, and has three main business units. Business Area Networks Business Area Networks, previously called Business Unit Networks, develops network infrastructure for communication needs over mobile and fixed connections. Its products include radio base stations, radio network controllers, mobile switching centers and service application nodes. Operators use Ericsson products to migrate from 2G to 3G and, most recently, to 4G networks. The company's network division has been described as a driver in the development of 2G, 3G, 4G/LTE and 5G technology, and the evolution towards all-IP, and it develops and deploys advanced LTE systems, but it is still developing the older GSM, WCDMA, and CDMA technologies. The company's networks portfolio also includes microwave transport, Internet Protocol (IP) networks, fixed-access services for copper and fiber, and mobile broadband modules, several levels of fixed broadband access, radio access networks from small pico cells to high-capacity macro cells, and controllers for radio base stations. Network services Ericsson's network rollout services employ in-house capabilities, subcontractors and central resources to make changes to live networks. Services such as technology deployment, network transformation, support services and network optimization are also provided. Business Area Digital Services This unit provides core networks, Operations Support Systems such as network management and analytics, and Business Support Systems such as billing and mediation. Within the Digital Services unit, there is an m-commerce offering, which focuses on service providers and facilitates their working with financial institutions and intermediaries. Ericsson has announced m-commerce deals with Western Union and African wireless carrier MTN. Business Area Managed Services The unit is active in 180 countries; it supplies managed services, systems integration, consulting, network rollout, design and optimization, broadcast services, learning services and support. The company also works with television and media, public safety, and utilities. Ericsson claims to manage networks that serve more than 1 billion subscribers worldwide, and to support customer networks that serve more than 2.5 billion subscribers. Broadcast services Ericsson's Broadcast Services unit evolved into a unit called Red Bee Media, which has been spun out into a joint venture. It deals with the playout of live and pre-recorded, commercial and public service television programmes, including presentation (continuity announcements), trailers, and ancillary access services such as closed-caption subtitles, audio description and in-vision sign language interpreters. Its media management services consist of Managed Media Preparation and Managed Media Internet Delivery.
Divested businesses Sony Ericsson Mobile Communications AB (Sony Ericsson) was a joint venture with Sony that merged the previous mobile telephone operations of both companies. It manufactured mobile telephones, accessories and personal computer (PC) cards. Sony Ericsson was responsible for product design and development, marketing, sales, distribution and customer services. On 16 February 2012, Sony announced it had completed the full acquisition of Sony Ericsson, after which it changed its name to Sony Mobile Communications, and nearly a year later it moved headquarters from Sweden to Japan. Mobile phones As a joint venture with Sony, Ericsson's mobile telephone production was moved into the company Sony Ericsson in 2001. The following is a list of mobile phones marketed under the brand name Ericsson.
Ericsson GS88 – Cancelled mobile telephone for which Ericsson invented the "Smartphone" name
Ericsson GA628 – Known for its Z80 CPU
Ericsson SH888 – First mobile telephone to have wireless modem capabilities
Ericsson A1018 – Dualband cellphone, notably easy to hack
Ericsson A2618 & Ericsson A2628 – Dualband cellphones using a graphical LCD display based on the PCF8548 I²C controller
Ericsson PF768
Ericsson GF768
Ericsson DH318 – One of the earliest TDMA/AMPS phones in the USA
Ericsson GH388
Ericsson T10 – Colourful cellphone
Ericsson T18 – Business model of the T10, with active flip
Ericsson T28 – Very slim telephone using lithium polymer batteries and a graphical LCD display based on the PCF8558 I²C controller
Ericsson T20s
Ericsson T29s – Similar to the T28s, but with WAP support
Ericsson T29m – Pre-alpha prototype for the T39m
Ericsson T36m – Prototype for the T39m, announced in yellow and blue; never reached the market due to the release of the T39m
Ericsson T39 – Similar to the T28, but with a GPRS modem, Bluetooth and triband capabilities
Ericsson T65
Ericsson T66
Ericsson T68m – The first Ericsson handset to have a color display, later branded as Sony Ericsson T68i
Ericsson R250s Pro – Fully dust- and water-resistant telephone
Ericsson R310s
Ericsson R320s
Ericsson R320s Titan – Limited edition with titanium front
Ericsson R320s GPRS – Prototype for testing GPRS networks
Ericsson R360m – Pre-alpha prototype for the R520m
Ericsson R380 – First cellphone to use the Symbian OS
Ericsson R520m – Similar to the T39, but in a candy bar form factor and with added features such as a built-in speakerphone and an optical proximity sensor
Ericsson R520m UMTS – Prototype to test UMTS networks
Ericsson R520m SyncML – Prototype to test the SyncML implementation
Ericsson R580m – Announced in several press releases; intended as a successor to the R380s, without an external antenna and with a color display
Ericsson R600
Telephones
Ericsson Dialog
Ericofon
Ericsson Mobile Platforms Ericsson Mobile Platforms existed for eight years; on 12 February 2009, Ericsson announced it would be merged with the mobile platform company of STMicroelectronics, ST-NXP Wireless, to create a 50/50 joint venture owned by Ericsson and STMicroelectronics. This joint venture was divested in 2013 and remaining activities can be found in Ericsson Modems and STMicroelectronics. Ericsson Mobile Platforms ceased being a legal entity in early 2009. Ericsson Enterprise Starting in 1983, Ericsson Enterprise provided communications systems and services for businesses, public entities and educational institutions.
It produced products for voice over Internet Protocol (VoIP)-based private branch exchanges (PBX), wireless local area networks (WLAN), and mobile intranets. Ericsson Enterprise operated mainly from Sweden but also operated through regional units and other partners/distributors. In 2008 it was sold to Aastra. Corruption On 7 December 2019, Ericsson agreed to pay more than $1.2 billion (€1.09 billion) to settle U.S. Department of Justice FCPA criminal and civil investigations into foreign corruption. US authorities accused the company of conducting a campaign of corruption between 2000 and 2016 across China, Indonesia, Vietnam, Kuwait and Djibouti. Ericsson admitted to paying bribes, falsifying books and records, and failing to implement reasonable internal accounting controls in an attempt to strengthen its position in the telecommunications industry. In 2022, an internal investigation into corruption inside the company was leaked by the International Consortium of Investigative Journalists. It detailed corruption in at least 10 countries. Ericsson has admitted "serious breaches of compliance rules". The leak also revealed that some subcontractors working on behalf of Ericsson paid bribes to the Islamic State in order to continue operating the telecom network in occupied regions of Iraq. See also
Cedergren
Damovo
Ericsson Nikola Tesla
Erlang (programming language)
Investor AB
List of networking hardware vendors
List of Sony Ericsson products
Red Jade
Tandberg Television
References Further reading
John Meurling; Richard Jeans (1994). A switch in time: AXE – creating a foundation for the information age. London: Communications Week International.
John Meurling; Richard Jeans (1997). The ugly duckling. Stockholm: Ericsson Mobile Communications.
John Meurling; Richard Jeans (2000). The Ericsson Chronicle: 125 years in telecommunications. Stockholm: Informationsförlaget.
The Mobile Phone Book: The Invention of the Mobile Telephone Industry.
Mobile media and applications – from concept to cash: successful service creation and launch.
Anders Pehrsson (1996). International Strategies in Telecommunications. London: Routledge Research.
Ericsson
[ "Technology" ]
6,256
[ "Computer hardware companies", "Computers" ]
9,425
https://en.wikipedia.org/wiki/Ethology
Ethology is a branch of zoology that studies the behaviour of non-human animals. It has its scientific roots in the work of Charles Darwin and of American and German ornithologists of the late 19th and early 20th century, including Charles O. Whitman, Oskar Heinroth, and Wallace Craig. The modern discipline of ethology is generally considered to have begun during the 1930s with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch, the three winners of the 1973 Nobel Prize in Physiology or Medicine. Ethology combines laboratory and field science, with a strong relation to neuroanatomy, ecology, and evolutionary biology. Etymology The modern term ethology derives from the Greek ἦθος (ēthos), meaning "character", and -λογία (-logia), meaning "the study of". The term was first popularized by the American entomologist William Morton Wheeler in 1902. History The beginnings of ethology Ethologists have been concerned particularly with the evolution of behaviour and its understanding in terms of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose 1872 book The Expression of the Emotions in Man and Animals influenced many ethologists. He pursued his interest in behaviour by encouraging his protégé George Romanes, who investigated animal learning and intelligence using an anthropomorphic method, anecdotal cognitivism, that did not gain scientific support. Other early ethologists, such as Eugène Marais, Charles O. Whitman, Oskar Heinroth, Wallace Craig and Julian Huxley, instead concentrated on behaviours that can be called instinctive in that they occur in all members of a species under specified circumstances. Their starting point for studying the behaviour of a new species was to construct an ethogram, a description of the main types of behaviour with their frequencies of occurrence. This provided an objective, cumulative database of behaviour. Growth of the field Due to the work of Konrad Lorenz and Niko Tinbergen, ethology developed strongly in continental Europe during the years prior to World War II. After the war, Tinbergen moved to the University of Oxford, and ethology became stronger in the UK, with the additional influence of William Thorpe, Robert Hinde, and Patrick Bateson at the University of Cambridge. Lorenz, Tinbergen, and von Frisch were jointly awarded the Nobel Prize in Physiology or Medicine in 1973 for their work in developing ethology. Ethology is now a well-recognized scientific discipline, with its own journals such as Animal Behaviour, Applied Animal Behaviour Science, Animal Cognition, Behaviour, Behavioral Ecology and Ethology. In 1972, the International Society for Human Ethology was founded along with its journal, Human Ethology. Social ethology In 1972, the English ethologist John H. Crook distinguished comparative ethology from social ethology, and argued that much of the ethology that had existed so far was really comparative ethology—examining animals as individuals—whereas, in the future, ethologists would need to concentrate on the behaviour of social groups of animals and the social structure within them. E. O. Wilson's book Sociobiology: The New Synthesis appeared in 1975, and since that time, the study of behaviour has been much more concerned with social aspects. It has been driven by the Darwinism associated with Wilson, Robert Trivers, and W. D. Hamilton. The related development of behavioural ecology has helped transform ethology.
Furthermore, a substantial rapprochement with comparative psychology has occurred, so the modern scientific study of behaviour offers a spectrum of approaches. In 2020, Tobias Starzak and Albert Newen from the Institute of Philosophy II at the Ruhr University Bochum postulated that animals may have beliefs. Determinants of behaviour Behaviour is determined by three major factors, namely inborn instincts, learning, and environmental factors. The latter include abiotic and biotic factors. Abiotic factors such as temperature or light conditions have dramatic effects on animals, especially if they are ectothermic or nocturnal. Biotic factors include members of the same species (e.g. sexual behaviour), predators (fight or flight), or parasites and diseases. Instinct Webster's Dictionary defines instinct as "A largely inheritable and unalterable tendency of an organism to make a complex and specific response to environmental stimuli without involving reason". This covers fixed action patterns like the beak movements of bird chicks and the waggle dance of honeybees. Fixed action patterns An important development, associated with the name of Konrad Lorenz though probably due more to his teacher, Oskar Heinroth, was the identification of fixed action patterns. Lorenz popularized these as instinctive responses that would occur reliably in the presence of identifiable stimuli called sign stimuli or "releasing stimuli". Fixed action patterns are now considered to be instinctive behavioural sequences that are relatively invariant within the species and that almost inevitably run to completion. One example of a releaser is the beak movements of many bird species performed by newly hatched chicks, which stimulate the mother to regurgitate food for her offspring. Other examples are the classic studies by Tinbergen on the egg-retrieval behaviour and the effects of a "supernormal stimulus" on the behaviour of graylag geese. One investigation of this kind was the study of the waggle dance ("dance language") in bee communication by Karl von Frisch. Learning Habituation Habituation is a simple form of learning and occurs in many animal taxa. It is the process whereby an animal ceases responding to a stimulus. Often, the response is an innate behaviour. Essentially, the animal learns not to respond to irrelevant stimuli. For example, prairie dogs (Cynomys ludovicianus) give alarm calls when predators approach, causing all individuals in the group to quickly scramble down burrows. When prairie dog towns are located near trails used by humans, giving alarm calls every time a person walks by is expensive in terms of time and energy. Habituation to humans is therefore an important behaviour in this context. Associative learning Associative learning in animal behaviour is any learning process in which a new response becomes associated with a particular stimulus. The first studies of associative learning were made by the Russian physiologist Ivan Pavlov, who observed that dogs trained to associate food with the ringing of a bell would salivate on hearing the bell. Imprinting Imprinting enables the young to discriminate the members of their own species, which is vital for reproductive success. This important type of learning only takes place in a very limited period of time.
Konrad Lorenz observed that the young of birds such as geese and chickens followed their mothers spontaneously from almost the first day after they were hatched, and he discovered that this response could be elicited by an arbitrary stimulus if the eggs were incubated artificially and the stimulus were presented during a critical period that continued for a few days after hatching. Cultural learning Observational learning Imitation Imitation is an advanced behaviour whereby an animal observes and exactly replicates the behaviour of another. The National Institutes of Health reported that capuchin monkeys preferred the company of researchers who imitated them to that of researchers who did not. The monkeys not only spent more time with their imitators but also preferred to engage in a simple task with them even when provided with the option of performing the same task with a non-imitator. Imitation has been observed in recent research on chimpanzees; not only did these chimps copy the actions of another individual, but when given a choice, they preferred to imitate the actions of the higher-ranking elder chimpanzee rather than those of the lower-ranking young chimpanzee. Stimulus and local enhancement Animals can learn using observational learning but without the process of imitation. One way is stimulus enhancement, in which individuals become interested in an object as the result of observing others interacting with the object. Increased interest in an object can result in object manipulation, which allows for new object-related behaviours by trial-and-error learning. Haggerty (1909) devised an experiment in which a monkey climbed up the side of a cage, placed its arm into a wooden chute, and pulled a rope in the chute to release food. Another monkey was provided an opportunity to obtain the food after watching a monkey go through this process on four occasions. The second monkey used a different method and finally succeeded after trial and error. In local enhancement, a demonstrator attracts an observer's attention to a particular location. Local enhancement has been observed to transmit foraging information among birds, rats and pigs. The stingless bee (Trigona corvina) uses local enhancement to locate other members of its colony and food resources. Social transmission A well-documented example of social transmission of a behaviour occurred in a group of macaques on Hachijojima Island, Japan. The macaques lived in the inland forest until the 1960s, when a group of researchers started giving them potatoes on the beach: soon, they started venturing onto the beach, picking the potatoes from the sand, and cleaning and eating them. About one year later, an individual was observed bringing a potato to the sea, putting it into the water with one hand, and cleaning it with the other. This behaviour was soon expressed by the individuals living in contact with her; when they gave birth, this behaviour was also expressed by their young—a form of social transmission. Teaching Teaching is a highly specialized aspect of learning in which the "teacher" (demonstrator) adjusts their behaviour to increase the probability of the "pupil" (observer) achieving the desired end-result of the behaviour. For example, orcas are known to intentionally beach themselves to catch pinniped prey. Mother orcas teach their young to catch pinnipeds by pushing them onto the shore and encouraging them to attack the prey.
Because the mother orca is altering her behaviour to help her offspring learn to catch prey, this is evidence of teaching. Teaching is not limited to mammals. Many insects, for example, have been observed demonstrating various forms of teaching to obtain food. Ants, for example, will guide each other to food sources through a process called "tandem running", in which an ant guides a companion ant to a source of food. It has been suggested that the pupil ant is able to learn this route to obtain food in the future or to teach the route to other ants. This behaviour of teaching is also exemplified by crows, specifically New Caledonian crows. The adults (whether individuals or families) teach their young adolescent offspring how to construct and utilize tools. For example, Pandanus branches are used to extract insects and other larvae from holes within trees. Mating and the fight for supremacy Individual reproduction is the most important phase in the proliferation of individuals or genes within a species: for this reason, there exist mating rituals, which can be very complex even though they are often regarded as fixed action patterns. The stickleback's complex mating ritual, studied by Tinbergen, is regarded as a notable example. Often in social life, animals fight for the right to reproduce, as well as for social supremacy. A common example of fighting for social and sexual supremacy is the so-called pecking order among poultry. Whenever a group of poultry cohabit for a certain length of time, they establish a pecking order. In these groups, one chicken dominates the others and can peck without being pecked. A second chicken can peck all the others except the first, and so on. Chickens higher in the pecking order may at times be distinguished by their healthier appearance when compared to lower-level chickens. While the pecking order is being established, frequent and violent fights can happen, but once established, it is broken only when other individuals enter the group, in which case the pecking order re-establishes from scratch. Social behaviour Several animal species, including humans, tend to live in groups. Group size is a major aspect of their social environment. Social life is probably a complex and effective survival strategy. It may be regarded as a sort of symbiosis among individuals of the same species: a society is composed of a group of individuals belonging to the same species living within well-defined rules on food management, role assignments and reciprocal dependence. When biologists interested in evolution theory first started examining social behaviour, some apparently unanswerable questions arose, such as how the emergence of sterile castes, as in bees, could be explained through an evolving mechanism that emphasizes the reproductive success of as many individuals as possible, or why, amongst animals living in small groups like squirrels, an individual would risk its own life to save the rest of the group. These behaviours may be examples of altruism. Not all behaviours are altruistic, however. For example, vengeful behaviour was at one point claimed to have been observed exclusively in Homo sapiens, but other species have since been reported to be vengeful, including chimpanzees, and there are anecdotal reports of vengeful camels. Altruistic behaviour has been explained by the gene-centred view of evolution. Benefits and costs of group living One advantage of group living is decreased predation.
If the number of predator attacks stays the same despite increasing prey group size, each prey has a reduced risk of predator attacks through the dilution effect. Further, according to the selfish herd theory, the fitness benefits associated with group living vary depending on the location of an individual within the group. The theory suggests that conspecifics positioned at the centre of a group will reduce the likelihood of predation, while those at the periphery will become more vulnerable to attack. In groups, prey can also actively reduce their predation risk through more effective defence tactics, or through earlier detection of predators through increased vigilance. Another advantage of group living is an increased ability to forage for food. Group members may exchange information about food sources, facilitating the process of resource location. Honeybees are a notable example of this, using the waggle dance to communicate the location of flowers to the rest of their hive. Predators also receive benefits from hunting in groups, through using better strategies and being able to take down larger prey. Some disadvantages accompany living in groups. Living in close proximity to other animals can facilitate the transmission of parasites and disease, and groups that are too large may also experience greater competition for resources and mates. Group size Theoretically, social animals should have optimal group sizes that maximize the benefits and minimize the costs of group living. However, in nature, most groups are stable at slightly larger than optimal sizes. Because it generally benefits an individual to join an optimally-sized group, despite slightly decreasing the advantage for all members, groups may continue to increase in size until it is more advantageous to remain alone than to join an overly full group. Tinbergen's four questions for ethologists Tinbergen argued that ethology needed to include four kinds of explanation in any instance of behaviour:
Function – How does the behaviour affect the animal's chances of survival and reproduction? Why does the animal respond that way instead of some other way?
Causation – What are the stimuli that elicit the response, and how has it been modified by recent learning?
Development – How does the behaviour change with age, and what early experiences are necessary for the animal to display the behaviour?
Evolutionary history – How does the behaviour compare with similar behaviour in related species, and how might it have begun through the process of phylogeny?
These explanations are complementary rather than mutually exclusive—all instances of behaviour require an explanation at each of these four levels. For example, the function of eating is to acquire nutrients (which ultimately aids survival and reproduction), but the immediate cause of eating is hunger (causation). Hunger and eating are evolutionarily ancient and are found in many species (evolutionary history), and develop early within an organism's lifespan (development). It is easy to confuse such questions—for example, to argue that people eat because they are hungry and not to acquire nutrients—without realizing that the reason people experience hunger is because it causes them to acquire nutrients. See also
Animal behavior consultant
Anthrozoology
Behavioral ecology
Cognitive ethology
Deception in animals
Human ethology
List of abnormal behaviours in animals
Tool use by non-human animals
References Further reading Burkhardt, Richard W. Jr.
"On the Emergence of Ethology as a Scientific Discipline." Conspectus of History 1.7 (1981). External links Ethology Articles containing video clips
Ethology
[ "Biology" ]
3,425
[ "Behavioural sciences", "Ethology", "Behavior", "Subfields of zoology" ]
9,426
https://en.wikipedia.org/wiki/Electromagnetic%20radiation
In physics, electromagnetic radiation (EMR) is the set of waves of an electromagnetic (EM) field, which propagate through space and carry momentum and electromagnetic radiant energy. Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. There, depending on the frequency of oscillation, different wavelengths of the electromagnetic spectrum are produced. In homogeneous, isotropic media, the oscillations of the two fields are on average perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. Electromagnetic radiation is commonly referred to as "light", EM, EMR, or electromagnetic waves. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength, the electromagnetic spectrum includes: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum, and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field, while the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena. In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and proportional to frequency according to Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is the Planck constant. Thus, higher frequency photons have more energy. For example, a gamma ray photon carries many orders of magnitude more energy than an extremely low frequency radio wave photon. The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of lower-energy ultraviolet or lower frequencies (i.e., near ultraviolet, visible light, infrared, microwaves, and radio waves) is non-ionizing because its photons do not individually have enough energy to ionize atoms or molecules or to break chemical bonds. The effect of non-ionizing radiation on chemical systems and living tissue is primarily simply heating, through the combined energy transfer of many photons.
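As a quick numerical illustration of Planck's equation, the sketch below computes single-photon energies at two frequencies. The 100 Hz and 10^20 Hz values are rough order-of-magnitude stand-ins for an extremely low frequency radio wave and a gamma ray, chosen for illustration rather than taken from this article.

```python
# Single-photon energy from frequency via the Planck relation E = h * f.
PLANCK_H = 6.626e-34  # Planck constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Return the energy in joules of one photon at the given frequency."""
    return PLANCK_H * frequency_hz

elf_photon = photon_energy(1e2)     # ~100 Hz: extremely low frequency radio (illustrative)
gamma_photon = photon_energy(1e20)  # ~1e20 Hz: gamma ray (illustrative)

print(f"ELF photon:   {elf_photon:.3e} J")
print(f"Gamma photon: {gamma_photon:.3e} J")
print(f"Energy ratio: {gamma_photon / elf_photon:.0e}")  # ~1e18 for these example frequencies
```

For these example frequencies the ratio comes out around eighteen orders of magnitude, which is the sense in which a gamma-ray photon dwarfs a radio photon even though both travel at the same speed.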
In contrast, high frequency ultraviolet, X-rays and gamma rays are ionizing – individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. Ionizing radiation can cause chemical reactions and damage living cells beyond simply heating, and can be a health hazard. Physics Theory Maxwell's equations James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. Near and far fields Maxwell's equations established that some charges and currents (sources) produce local electromagnetic fields near them that do not radiate. Currents directly produce magnetic fields, but such fields are of a magnetic-dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric-dipole-type electrical field, but this also declines with distance. These fields make up the near field. Neither of these behaviours is responsible for EM radiation. Instead, they only efficiently transfer energy to a receiver very close to the source, such as inside a transformer. The near field has strong effects on its source, with any energy withdrawn by a receiver causing increased load (decreased electrical reactance) on the source. The near field does not propagate freely into space, carrying energy away without a distance limit, but rather oscillates, returning its energy to the transmitter if it is not absorbed by a receiver. By contrast, the far field is composed of radiation that is free of the transmitter, in the sense that the transmitter requires the same power to send changes in the field out regardless of whether anything absorbs the signal; e.g. a radio station does not need to increase its power when more receivers use the signal. This far part of the electromagnetic field is electromagnetic radiation. The far fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation from an isotropic source decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to the dipole parts of the EM field, the near field, which varies in intensity according to an inverse cube power law and thus does not transport a conserved amount of energy over distances, but instead fades with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil).
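A small sketch makes the contrast between the far field's inverse-square falloff and the near field's inverse-cube falloff concrete. The 100 W source power is an arbitrary illustrative value, and the near-field dipole term is shown only in relative units since its absolute strength depends on the source geometry.

```python
import math

def far_field_power_density(power_w: float, r_m: float) -> float:
    """Inverse-square law: power density S = P / (4 * pi * r^2) for an isotropic source."""
    return power_w / (4 * math.pi * r_m ** 2)

for r in (1.0, 2.0, 10.0):
    s = far_field_power_density(100.0, r)  # 100 W source, illustrative value
    near_relative = 1.0 / r ** 3           # near-field dipole intensity falls off as 1/r^3
    print(f"r = {r:5.1f} m   far field: {s:9.4f} W/m^2   near field (relative): {near_relative:.4f}")
```

Doubling the distance cuts the radiated power density by a factor of four but the near-field term by a factor of eight, which is why only the far field carries a conserved amount of energy out to arbitrary distances.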
In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the near field, and do not comprise electromagnetic radiation. Properties Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves. The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect. In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount. EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances, while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed; however, this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair. Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.
Electromagnetic waves can be polarized, reflected, refracted, or diffracted, and can interfere with each other. Wave model In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. This follows from the source-free divergence equations $\nabla \cdot \mathbf{E} = 0$ and $\nabla \cdot \mathbf{B} = 0$, which require that any electromagnetic wave must be a transverse wave, with the electric field and the magnetic field both perpendicular to the direction of wave propagation. The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space. In far-field EM radiation, which is described by the two source-free Maxwell curl operator equations, a time-change in one type of field is proportional to the curl of the other. These derivatives require that the E and B fields in EMR are in-phase (see the derivation section below). An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion. A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation $v = f\lambda$, where $v$ is the speed of the wave ($c$ in a vacuum, or less in other media), $f$ is the frequency and $\lambda$ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization. Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.
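The relation v = fλ above makes frequency-to-wavelength conversion a one-liner. The sketch below assumes vacuum propagation, and the example frequencies are approximate, illustrative band values rather than figures from this article.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def vacuum_wavelength(frequency_hz: float) -> float:
    """Wavelength in metres from frequency, using lambda = c / f in vacuum."""
    return C / frequency_hz

# Approximate, illustrative frequencies across the spectrum:
for name, f in [("FM radio", 1e8), ("microwave oven", 2.45e9),
                ("green light", 5.5e14), ("X-ray", 3e18)]:
    print(f"{name:>14}: f = {f:.2e} Hz -> lambda = {vacuum_wavelength(f):.2e} m")
```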
The energy in electromagnetic waves is sometimes called radiant energy. Particle model and quantum theory An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years, and it later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, given by $E = hf = \frac{hc}{\lambda}$, where $h$ is the Planck constant, $\lambda$ is the wavelength and $c$ is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave. Likewise, the momentum $p$ of a photon is also proportional to its frequency and inversely proportional to its wavelength: $p = \frac{E}{c} = \frac{hf}{c} = \frac{h}{\lambda}$. The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect. As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence. Wave–particle duality The modern theory that explains the nature of light includes the notion of wave–particle duality.
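Returning to the Planck–Einstein relation and the photon momentum formula quoted above, both lend themselves to a direct calculation. A minimal sketch, with 550 nm (roughly green visible light) as an arbitrary example wavelength:

```python
H = 6.626e-34      # Planck constant, J*s
C = 299_792_458.0  # speed of light in vacuum, m/s
J_PER_EV = 1.602e-19  # joules per electronvolt

def photon_energy_joules(wavelength_m: float) -> float:
    """Planck-Einstein relation: E = h * c / lambda."""
    return H * C / wavelength_m

def photon_momentum(wavelength_m: float) -> float:
    """Photon momentum: p = h / lambda."""
    return H / wavelength_m

lam = 550e-9  # 550 nm, illustrative green-light wavelength
energy = photon_energy_joules(lam)
print(f"E = {energy:.3e} J = {energy / J_PER_EV:.2f} eV")  # about 2.3 eV
print(f"p = {photon_momentum(lam):.3e} kg*m/s")
```

The roughly 2.3 eV result matches the scale of the visible-light photon energies discussed later in this article, where a few electron volts is enough to rearrange bonds in individual molecules.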
Wave and particle effects of electromagnetic radiation Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. The absorption and emission bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the light beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature. These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy, for example, can determine what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift. Propagation speed When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is the Planck constant, 6.626 × 10⁻³⁴ J·s, and f is the frequency of the wave. In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to the speed in a vacuum.
Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions. In 1862–64, James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves. Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. Within a month, he had discovered the main properties of X-rays. The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made) unless they result from bremsstrahlung X-radiation caused by the interaction of fast-moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers. Electromagnetic spectrum EM radiation (the designation 'radiation' excludes static electric and magnetic and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal waves (monochromatic radiation), which in turn can each be classified into these regions of the EMR spectrum.
For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the zero-point wave field of the electromagnetic vacuum. The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe. Radio and microwave When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. Electromagnetic radiation phenomena with wavelengths ranging from one meter to one millimeter are called microwaves, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both. Infrared Like radio and microwave, infrared (IR) is reflected by metals (as is most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see thermal radiation section below). Infrared radiation is divided into spectral subregions. While different subdivision schemes exist, the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm). Visible light Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.
As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light. Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels. Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons. Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation. Visible light is able to affect only a tiny percentage of all molecules, and usually not in a permanent or damaging way; rather, the photon excites an electron, which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception. When a photon is absorbed, the retinal permanently changes structure from cis to trans, and requires a protein to convert it back, i.e. reset it to be able to function as a light detector again. Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A. Ultraviolet As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects. At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electronvolts (eV), corresponding to wavelengths shorter than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called "extreme UV". Ionizing UV is strongly filtered by the Earth's atmosphere. X-rays and gamma rays Electromagnetic radiation composed of photons that carry the minimum ionization energy or more (which includes the entire spectrum with shorter wavelengths) is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles).
Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter. Atmosphere and magnetosphere Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted. Visible light is well transmitted in air, a property known as an atmospheric window, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor and CO2. Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves. Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected, and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space when their frequency is less than about 10 MHz (wavelength longer than about 30 m). Thermal and electromagnetic radiation as a form of heat The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire. Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms, most of the energy ultimately becomes thermal energy, all in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.
Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the thermal energy of) a material, when it is absorbed. The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material. The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy. Biological effects Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to near ultraviolet) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low-frequency fields that are too weak to cause significant heating have no biological effect. Nonetheless, some research suggests that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter do not strictly qualify as EM radiation) and modulated RF and microwave fields can have biological effects, though the significance of this is unclear. The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B – possibly carcinogenic. This group contains possible carcinogens such as lead, DDT, and styrene. At higher frequencies (some of visible light and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer. Thus, at UV frequencies and higher, electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, together with X-ray and gamma radiation, is referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage produced per unit of energy, or power) than the rest of the electromagnetic spectrum. Use as a weapon The heat ray is an application of EMR that makes use of microwave frequencies to create an unpleasant heating effect in the upper layer of the skin.
A publicly known heat ray weapon called the Active Denial System was developed by the US military as an experimental weapon to deny the enemy access to an area. A death ray is a theoretical weapon that delivers a heat-ray effect based on electromagnetic energy at levels capable of injuring human tissue. An inventor of a death ray, Harry Grindell Matthews, claimed to have lost sight in his left eye while working in the 1920s on his death ray weapon based on a microwave magnetron (a normal microwave oven creates a tissue-damaging cooking effect inside the oven at around 2 kV/m). Derivation from electromagnetic theory Electromagnetic waves are predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. There are nontrivial solutions of the homogeneous Maxwell's equations (without charges or currents), describing waves of changing electric and magnetic fields. Beginning with Maxwell's equations in free space:

∇ · E = 0   (1)
∇ × E = −∂B/∂t   (2)
∇ · B = 0   (3)
∇ × B = μ₀ε₀ ∂E/∂t   (4)

where E and B are the electric field (measured in V/m or N/C) and the magnetic field (measured in T or Wb/m2), respectively; ∇· and ∇× yield the divergence and the curl of a vector field; ∂B/∂t and ∂E/∂t are partial derivatives (rate of change in time, with location fixed) of the magnetic and electric field; μ₀ is the permeability of a vacuum (4π × 10⁻⁷ H/m), and ε₀ is the permittivity of a vacuum (8.85 × 10⁻¹² F/m). Besides the trivial solution E = B = 0, useful solutions can be derived with the following vector identity, valid for all vectors A in some vector field:

∇ × (∇ × A) = ∇(∇ · A) − ∇²A

Taking the curl of the second Maxwell equation (2) yields:

∇ × (∇ × E) = −∂(∇ × B)/∂t   (5)

Evaluating the left hand side of (5) with the above identity and simplifying using (1) yields:

∇ × (∇ × E) = ∇(∇ · E) − ∇²E = −∇²E   (6)

Evaluating the right hand side of (5) by exchanging the sequence of derivatives and inserting the fourth Maxwell equation (4) yields:

−∂(∇ × B)/∂t = −μ₀ε₀ ∂²E/∂t²   (7)

Combining (6) and (7) again gives a vector-valued differential equation for the electric field, solving the homogeneous Maxwell equations:

∇²E = μ₀ε₀ ∂²E/∂t²

Taking the curl of the fourth Maxwell equation (4) results in a similar differential equation for a magnetic field solving the homogeneous Maxwell equations:

∇²B = μ₀ε₀ ∂²B/∂t²

Both differential equations have the form of the general wave equation for waves propagating with speed c₀:

∇²f = (1/c₀²) ∂²f/∂t²

where f is a function of time and location, which gives the amplitude of the wave at some time at a certain location. This is also written as:

□f = 0

where □ denotes the so-called d'Alembert operator, which in Cartesian coordinates is given as:

□ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² − (1/c₀²) ∂²/∂t²

Comparing the terms for the speed of propagation yields, in the case of the electric and magnetic fields:

c₀ = 1/√(μ₀ε₀)

This is the speed of light in vacuum. Thus Maxwell's equations connect the vacuum permittivity ε₀, the vacuum permeability μ₀, and the speed of light, c₀, via the above equation. This relationship had been discovered by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's electrodynamics; however, Maxwell was the first to produce a field theory consistent with waves traveling at the speed of light. These are only two equations versus the original four, so more information pertains to these waves hidden within Maxwell's equations. A generic vector wave for the electric field has the form

E(x, t) = E₀ f(k̂ · x − c₀t)

Here, E₀ is a constant vector, f is any second differentiable function, k̂ is a unit vector in the direction of propagation, and x is a position vector. f(k̂ · x − c₀t) is a generic solution to the wave equation; in other words, it describes a generic wave traveling in the k̂ direction. From the first of Maxwell's equations, we get

∇ · E = k̂ · E₀ f′(k̂ · x − c₀t) = 0

Thus,

E · k̂ = 0

which implies that the electric field is orthogonal to the direction the wave propagates.
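As a numerical check of the relation c₀ = 1/√(μ₀ε₀) derived above, here is a minimal sketch using the two constants quoted in the text:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
EPS0 = 8.854e-12          # vacuum permittivity, F/m

c0 = 1 / math.sqrt(MU0 * EPS0)
print(c0)  # ~2.998e8 m/s: the speed of light in vacuum
```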
The second of Maxwell's equations yields the magnetic field, namely,

B = (1/c₀) k̂ × E

Thus,

B · k̂ = 0 and B · E = 0

The remaining equations will be satisfied by this choice of E and B. The electric and magnetic field waves in the far-field travel at the speed of light. They have a special restricted orientation and proportional magnitudes, E₀ = c₀B₀, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are in phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time derivative on the other side of the equations, which gives the other field, is first order in time, resulting in the same phase shift for both fields in each mathematical operation. From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left. This picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field. More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. Many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation, which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation. See also Antenna measurement Bioelectromagnetics Bolometer CONELRAD Electromagnetic pulse Electromagnetic radiation and health Evanescent wave coupling Finite-difference time-domain method Gravitational wave Helicon Impedance of free space Radiation reaction Health effects of sunlight exposure Sinusoidal plane-wave solutions of the electromagnetic wave equation References Further reading External links The Feynman Lectures on Physics Vol. I Ch. 28: Electromagnetic Radiation Electromagnetic Waves from Maxwell's Equations on Project PHYSNET.
Electromagnetic radiation
[ "Physics", "Chemistry" ]
8,551
[ "Transport phenomena", "Physical phenomena", "Electromagnetic radiation", "Waves", "Radiation" ]
9,476
https://en.wikipedia.org/wiki/Electron
The electron (e⁻, or β⁻ in nuclear reactions) is a subatomic particle whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons outside them allows the composition of the two, known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge "electron" in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment. Electrons participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign.
When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons. History Discovery of effect of electric force The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the Neo-Latin term electrica, to refer to those substances with a property similar to that of amber, which attract small objects after being rubbed. Both electric and electricity are derived from the Latin electrum (also the root of the alloy of the same name), which came from the Greek word for amber, ēlektron. Discovery of two kinds of charges In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repelled by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later, Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity". Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron. The word electron is a combination of the words electric and ion. The suffix -on, which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.
Discovery of free electrons outside matter While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed that the radiation emitted from the cathode caused phosphorescent light to appear on the tube wall near the cathode, and that the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. Furthermore, he also discovered that these rays are deflected by magnets, just like lines of current. In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays. Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons. Goldstein also experimented with double cathodes and hypothesized that one ray may repulse another, although he didn't believe that any particles might be involved. During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode-ray tube to have a high vacuum inside. He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter, in which the mean free path of the particles is so long that collisions may be ignored. In 1883, the not yet well-known German physicist Heinrich Hertz tried to prove that cathode rays are electrically neutral, and obtained what he interpreted as a confident absence of deflection in an electrostatic, as opposed to magnetic, field. However, as J. J. Thomson explained in 1897, Hertz had placed the deflecting electrodes in a highly conductive area of the tube, resulting in a strong screening effect close to their surface. The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given electric and magnetic field, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. This is because it was assumed that the charge carriers were much heavier hydrogen or nitrogen atoms. Schuster's estimates would subsequently turn out to be largely correct. In 1892, Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest among scientists, including the New Zealand physicist Ernest Rutherford, who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms. In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules, as was believed earlier. By 1899 he showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. Thomson measured m/e for cathode ray "corpuscles", and made good estimates of the charge e, leading to a value for the mass m, finding a value 1400 times less massive than the least massive ion known: hydrogen. In the same year, Emil Wiechert and Walter Kaufmann also calculated the e/m ratio but did not take the step of interpreting their results as showing a new particle, while J. J. Thomson would subsequently in 1899 give estimates for the electron charge and mass as well: e ~  and m ~  The name "electron" was adopted for these particles by the scientific community, mainly due to the advocacy of G. F. FitzGerald, J. Larmor, and H. A. Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered). The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time. Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons. Atomic theory By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.
In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines, and it was unsuccessful in explaining the spectra of more complex atoms. Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells, each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law. In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting. Quantum mechanics In his 1924 dissertation (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits, thereby creating interference patterns. In 1927, George Paget Thomson and Alexander Reid discovered that the interference effect was produced when a beam of electrons was passed through thin celluloid foils, and later metal films; it was also observed by the American physicists Clinton Davisson and Lester Germer using the reflection of electrons from a crystal of nickel.
Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen. In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons and using electron as a generic term to describe both the positively and negatively charged variants. In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. It was explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s. Particle accelerators With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light. With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.
This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics. Confinement of individual electrons Individual electrons can now be easily confined in ultra-small CMOS transistors operated at cryogenic temperature, over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single-particle formalism, by replacing its mass with the effective-mass tensor. Characteristics Classification In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions because they all have half-odd integer spin; the electron has spin 1/2. Fundamental properties The invariant mass of an electron is approximately 9.109 × 10⁻³¹ kg, or 5.486 × 10⁻⁴ atomic mass units. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe. Electrons have an electric charge of −1.602 × 10⁻¹⁹ coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by e⁻, and the positron is symbolized by e⁺. The electron has an intrinsic angular momentum or spin of ħ/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant equal to about 9.274 × 10⁻²⁴ J/T. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.
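The rest energy and Bohr magneton figures quoted above follow directly from the other constants; below is a small consistency check (our own sketch, with rounded CODATA values):

```python
M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s
E = 1.602e-19     # elementary charge, C
HBAR = 1.055e-34  # reduced Planck constant, J*s

print(M_E * C**2 / E / 1e6)  # rest energy m_e*c^2: ~0.511 MeV
print(E * HBAR / (2 * M_E))  # Bohr magneton e*hbar/(2*m_e): ~9.27e-24 J/T
```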
The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters. An upper bound of the electron radius of 10⁻¹⁸ meters can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.818 × 10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron. There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2 × 10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6 × 10²⁸ years, at a 90% confidence level. Quantum properties As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r₁, r₂) = −ψ(r₂, r₁), where the variables r₁ and r₂ correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead. In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.
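The "classical electron radius" mentioned above comes from equating the electrostatic self-energy of a charged sphere (up to a model-dependent numerical factor) to the rest energy, giving r_e = e²/(4πε₀m_ec²). A quick sketch of that simplistic calculation:

```python
import math

E = 1.602e-19     # elementary charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F/m
M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s

r_e = E**2 / (4 * math.pi * EPS0 * M_E * C**2)
print(r_e)  # ~2.82e-15 m, the classical electron radius
```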
Virtual particles In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ. Thus, for a virtual electron, Δt is at most ħ/(m_ec²), or about 1.3 × 10⁻²¹ seconds. While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron. The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics. The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance.
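The virtual-pair lifetime quoted above is simply ħ divided by the electron's rest energy; multiplying it by c gives the reduced Compton wavelength, which is the connection the last sentence draws. A minimal check:

```python
HBAR = 1.055e-34  # reduced Planck constant, J*s
M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s

dt = HBAR / (M_E * C**2)
print(dt)      # ~1.3e-21 s, the maximum Delta-t for a virtual electron
print(dt * C)  # ~3.9e-13 m: the reduced Compton wavelength hbar/(m_e*c)
```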
When an electron is moving through a magnetic field, it is subject to the Lorentz force, which acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself. Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/m_ec, which is known as the Compton wavelength. For an electron, it has a value of 2.43 × 10⁻¹² m. When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering. The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α = e²/(4πε₀ħc), which is approximately equal to 1/137. When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus. In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom.
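Both numbers quoted in this section, the electron's Compton wavelength and the fine-structure constant, can be reproduced from the basic constants; a minimal sketch:

```python
import math

H = 6.626e-34     # Planck constant, J*s
HBAR = H / (2 * math.pi)
M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s
E = 1.602e-19     # elementary charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F/m

print(H / (M_E * C))  # Compton wavelength h/(m_e*c): ~2.43e-12 m
alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)
print(alpha, 1 / alpha)  # fine-structure constant: ~0.0073, i.e. ~1/137
```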
Both the electron and electron neutrino can undergo a neutral current interaction via a Z⁰ exchange, and this is responsible for neutrino–electron elastic scattering. Atoms and molecules An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle, each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron. The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital, called paired electrons, cancel each other out. The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei. Conductivity If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality, the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise, a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations. At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material. Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current. When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain. Electrons inside conducting solids, which are themselves quasiparticles, behave, when tightly confined at temperatures close to absolute zero, as though they had split into three other quasiparticles: spinons, orbitons and holons.
The former carries spin and magnetic moment, the next carries its orbital location, while the latter carries electrical charge. Motion and energy According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation. The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy K_e of an electron moving with velocity v is K_e = (γ − 1)m_ec², where m_e is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV. Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λ_e = h/p, where h is the Planck constant and p is the momentum. For the 51 GeV electron above, the wavelength is about 2.4 × 10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus. Formation The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron–electron pairs annihilated each other and emitted energetic photons: e⁺ + e⁻ ↔ γ + γ. An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe. For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron–positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process: n → p + e⁻ + ν̄e. For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation. Roughly one million years after the big bang, the first generation of stars began to form.
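For the 51 GeV example above, the Lorentz factor is enormous and pc is essentially equal to E, so the de Broglie wavelength reduces to hc/E. A sketch of that arithmetic:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

energy_ev = 51e9                       # beam energy of the example above
gamma = energy_ev / 0.511e6            # E / (m_e c^2)
wavelength = H * C / (energy_ev * EV)  # ultrarelativistic limit, p*c ~ E
print(gamma)       # ~1.0e5: the electron is extremely relativistic
print(wavelength)  # ~2.4e-17 m, well below nuclear size
```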
Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni). At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants. When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass–energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes. Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0 × 10²⁰ eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion: π⁻ → μ⁻ + ν̄μ. A muon, in turn, can decay to form an electron or positron; for example: μ⁻ → e⁻ + ν̄e + νμ. Observation Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes. The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Once detected, spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant. The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden in February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time. The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material. Plasma applications Particle beams Electron beams are used in welding. They allow energy densities up to across a narrow focus diameter of and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding. Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in vacuum, and the tendency of electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits. Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products. Under intense irradiation, electron beams can fluidise or quasi-melt glasses without a significant rise in temperature: for example, intense electron irradiation lowers the viscosity by many orders of magnitude and reduces its activation energy stepwise. Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays. Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. Because the intensity of this radiation depends upon spin, it polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles.
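For scale, the electron magnetic moment whose eleven-digit Penning-trap measurement is described at the start of this passage is close to one Bohr magneton. The sketch below computes the magneton from first principles; the g-factor is an approximate literature value supplied here as an assumption, not a figure from the text.

e    = 1.602176634e-19   # elementary charge, C
me   = 9.1093837e-31     # electron mass, kg
hbar = 1.054571817e-34   # reduced Planck constant, J*s

mu_B = e * hbar / (2.0 * me)   # Bohr magneton, mu_B = e*hbar/(2*me)
print(mu_B)                    # ~9.274e-24 J/T

g = 2.00231930436              # electron g-factor (approximate literature value)
print((g / 2.0) * mu_B)        # electron magnetic moment, slightly above mu_B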
Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics. Imaging Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°. The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy, as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high-resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain. Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material and then being projected by lenses onto a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface. Other applications In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery. Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology.
However, they have been largely supplanted by solid-state devices such as the transistor.
Electron
[ "Physics", "Chemistry", "Materials_science" ]
10,984
[ "Electron", "Physical phenomena", "Elementary particles", "Molecular physics", "Charge carriers", "Spintronics", "Electrical phenomena", "Subatomic particles", "Condensed matter physics", "Matter" ]
9,477
https://en.wikipedia.org/wiki/Europium
Europium is a chemical element; it has symbol Eu and atomic number 63. Europium is a silvery-white metal of the lanthanide series that reacts readily with air to form a dark oxide coating. It is the most chemically reactive, least dense, and softest of the lanthanide elements. It is soft enough to be cut with a knife. Europium was isolated in 1901 and named after the continent of Europe. Europium usually assumes the oxidation state +3, like other members of the lanthanide series, but compounds having oxidation state +2 are also common. All europium compounds with oxidation state +2 are slightly reducing. Europium has no significant biological role and is relatively non-toxic compared to other heavy metals. Most applications of europium exploit the phosphorescence of europium compounds. Europium is one of the rarest of the rare-earth elements on Earth. Etymology Its discoverer, Eugène-Anatole Demarçay, named the element after the continent of Europe. Characteristics Physical properties Europium is a ductile metal with a hardness similar to that of lead. It crystallizes in a body-centered cubic lattice. Some properties of europium are strongly influenced by its half-filled electron shell. Europium has the second-lowest melting point and the lowest density of all lanthanides. Chemical properties Europium is the most reactive rare-earth element. It rapidly oxidizes in air, so that bulk oxidation of a centimeter-sized sample occurs within several days. Its reactivity with water is comparable to that of calcium, and the reaction is 2 Eu + 6 H2O → 2 Eu(OH)3 + 3 H2 Because of this high reactivity, samples of solid europium rarely have the shiny appearance of the fresh metal, even when coated with a protective layer of mineral oil. Europium ignites in air at 150 to 180 °C to form europium(III) oxide: 4 Eu + 3 O2 → 2 Eu2O3 Europium dissolves readily in dilute sulfuric acid to form pale pink solutions of [Eu(H2O)9]3+: 2 Eu + 3 H2SO4 + 18 H2O → 2 [Eu(H2O)9]3+ + 3 SO42− + 3 H2 Eu(II) vs. Eu(III) Although usually trivalent, europium readily forms divalent compounds. This behavior is unusual for most lanthanides, which almost exclusively form compounds with an oxidation state of +3. The +2 state has an electron configuration 4f7 because the half-filled f-shell provides more stability. In terms of size and coordination number, europium(II) and barium(II) are similar. The sulfates of both barium and europium(II) are also highly insoluble in water. Divalent europium is a mild reducing agent, oxidizing in air to form Eu(III) compounds. In anaerobic, and particularly geothermal, conditions the divalent form is sufficiently stable that it tends to be incorporated into minerals of calcium and the other alkaline earths. This ion-exchange process is the basis of the "negative europium anomaly", the low europium content in many lanthanide minerals such as monazite, relative to the chondritic abundance. Bastnäsite tends to show less of a negative europium anomaly than does monazite, and hence is the major source of europium today. The development of easy methods to separate divalent europium from the other (trivalent) lanthanides made europium accessible even when present in low concentration, as it usually is. Isotopes Naturally occurring europium is composed of two isotopes, 151Eu and 153Eu, which occur in almost equal proportions; 153Eu is slightly more abundant (52.2% natural abundance).
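A quick arithmetic check on the abundance figure just quoted: weighting the two isotopic masses by their abundances reproduces europium's standard atomic weight. The isotopic masses below are standard nuclide values supplied as assumptions; only the 52.2% abundance comes from the text.

# Isotopic masses in unified atomic mass units (assumed literature values)
m_151, m_153 = 150.9199, 152.9212
a_153 = 0.522            # natural abundance of 153Eu, from the text
a_151 = 1.0 - a_153      # 151Eu makes up the remainder

atomic_weight = a_151 * m_151 + a_153 * m_153
print(round(atomic_weight, 3))   # ~151.96, close to the accepted value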
While 153Eu is stable, 151Eu was found in 2007 to be unstable to alpha decay, with a half-life of , giving about one alpha decay per two minutes in every kilogram of natural europium. This value is in reasonable agreement with theoretical predictions. Besides the natural radioisotope 151Eu, 35 artificial radioisotopes have been characterized, the most stable being 150Eu with a half-life of 36.9 years, 152Eu with a half-life of 13.516 years, and 154Eu with a half-life of 8.593 years. All the remaining radioactive isotopes have half-lives shorter than 4.7612 years, and the majority of these have half-lives shorter than 12.2 seconds; the known isotopes of europium range from 130Eu to 170Eu. This element also has 17 meta states, with the most stable being 150mEu (t1/2 = 12.8 hours), 152m1Eu (t1/2 = 9.3116 hours) and 152m2Eu (t1/2 = 96 minutes). The primary decay mode for isotopes lighter than 153Eu is electron capture, and the primary mode for heavier isotopes is beta minus decay. The primary decay products before 153Eu are isotopes of samarium (Sm) and the primary products after are isotopes of gadolinium (Gd). Europium as a nuclear fission product Europium is produced by nuclear fission; 155Eu (half-life 4.7612 years) has a fission yield of 330 parts per million (ppm) for uranium-235 and thermal neutrons. The fission product yields of europium isotopes are low, as europium lies near the top of the mass range for fission products. As with other lanthanides, many isotopes of europium, especially those that have odd mass numbers or are neutron-poor like 152Eu, have high cross sections for neutron capture, often high enough to be neutron poisons. 151Eu is the beta decay product of samarium-151, but since this has a long decay half-life and a short mean time to neutron absorption, most 151Sm instead ends up as 152Sm. 152Eu (half-life 13.516 years) and 154Eu (half-life 8.593 years) cannot be beta decay products because 152Sm and 154Sm are non-radioactive, but 154Eu is the only long-lived "shielded" nuclide, other than 134Cs, to have a fission yield of more than 2.5 parts per million fissions. A larger amount of 154Eu is produced by neutron activation of a significant portion of the non-radioactive 153Eu; however, much of this is further converted to 155Eu. Occurrence Europium is not found in nature as a free element. Many minerals contain europium, with the most important sources being bastnäsite, monazite, xenotime and loparite-(Ce). No europium-dominant minerals are known yet, despite a single find of a tiny possible Eu–O or Eu–O–C system phase in the Moon's regolith. Depletion or enrichment of europium in minerals relative to other rare-earth elements is known as the europium anomaly. Europium is commonly included in trace element studies in geochemistry and petrology to understand the processes that form igneous rocks (rocks that cooled from magma or lava). The nature of the europium anomaly found helps reconstruct the relationships within a suite of igneous rocks. The median crustal abundance of europium is 2 ppm; values of the less abundant elements may vary with location by several orders of magnitude. Divalent europium (Eu2+) in small amounts is the activator of the bright blue fluorescence of some samples of the mineral fluorite (CaF2). The reduction from Eu3+ to Eu2+ is induced by irradiation with energetic particles.
The most outstanding examples of this originated around Weardale and adjacent parts of northern England; it was from the fluorite found here that fluorescence took its name in 1852, although it was not until much later that europium was determined to be the cause. In astrophysics, the signature of europium in stellar spectra can be used to classify stars and inform theories of how or where a particular star was born. For instance, astronomers used the relative levels of europium to iron within the star LAMOST J112456.61+453531.3 to propose that the accretion process for the star occurred late. Production Europium is associated with the other rare-earth elements and is, therefore, mined together with them. Separation of the rare-earth elements occurs during later processing. Rare-earth elements are found in the minerals bastnäsite, loparite-(Ce), xenotime, and monazite in mineable quantities. Bastnäsite is a group of related fluorocarbonates, Ln(CO3)(F,OH). Monazite is a group of related orthophosphate minerals (Ln denotes a mixture of all the lanthanides except promethium), loparite-(Ce) is an oxide, and xenotime is an orthophosphate (Y,Yb,Er,...)PO4. Monazite also contains thorium and yttrium, which complicates handling because thorium and its decay products are radioactive. For the extraction from the ore and the isolation of individual lanthanides, several methods have been developed. The choice of method is based on the concentration and composition of the ore and on the distribution of the individual lanthanides in the resulting concentrate. Roasting the ore, followed by acidic and basic leaching, is used mostly to produce a concentrate of lanthanides. If cerium is the dominant lanthanide, then it is converted from cerium(III) to cerium(IV) and then precipitated. Further separation by solvent extraction or ion-exchange chromatography yields a fraction which is enriched in europium. This fraction is reduced with zinc, zinc amalgam, electrolysis or other methods, converting the europium(III) to europium(II). Europium(II) reacts in a way similar to that of alkaline earth metals and therefore it can be precipitated as a carbonate or co-precipitated with barium sulfate. Europium metal is available through the electrolysis of a mixture of molten EuCl3 and NaCl (or CaCl2) in a graphite cell, which serves as cathode, using graphite as anode. The other product is chlorine gas. A few large deposits account for a significant share of world production. The Bayan Obo iron ore deposit in Inner Mongolia contains significant amounts of bastnäsite and monazite and is, with an estimated 36 million tonnes of rare-earth element oxides, the largest known deposit. The mining operations at the Bayan Obo deposit made China the largest supplier of rare-earth elements in the 1990s. Only 0.2% of the rare-earth element content is europium. The second large source for rare-earth elements between 1965 and its closure in the late 1990s was the Mountain Pass rare earth mine in California. The bastnäsite mined there is especially rich in the light rare-earth elements (La–Gd, Sc, and Y) and contains only 0.1% europium. Another large source for rare-earth elements is the loparite found on the Kola Peninsula. Besides niobium, tantalum and titanium, it contains up to 30% rare-earth elements and is the largest source for these elements in Russia. Compounds Europium compounds tend to exist in a trivalent oxidation state under most conditions.
Commonly these compounds feature Eu(III) bound by 6–9 oxygenic ligands. The Eu(III) sulfates, nitrates and chlorides are soluble in water or polar organic solvents. Lipophilic europium complexes often feature acetylacetonate-like ligands, such as EuFOD. Halides Europium metal reacts with all the halogens: 2 Eu + 3 X2 → 2 EuX3 (X = F, Cl, Br, I) This route gives white europium(III) fluoride (EuF3), yellow europium(III) chloride (EuCl3), gray europium(III) bromide (EuBr3), and colorless europium(III) iodide (EuI3). Europium also forms the corresponding dihalides: yellow-green europium(II) fluoride (EuF2), colorless europium(II) chloride (EuCl2) (although it has a bright blue fluorescence under UV light), colorless europium(II) bromide (EuBr2), and green europium(II) iodide (EuI2). Chalcogenides and pnictides Europium forms stable compounds with all of the chalcogens, but the heavier chalcogens (S, Se, and Te) stabilize the lower oxidation state. Three oxides are known: europium(II) oxide (EuO), europium(III) oxide (Eu2O3), and the mixed-valence oxide Eu3O4, consisting of both Eu(II) and Eu(III). Otherwise, the main chalcogenides are europium(II) sulfide (EuS), europium(II) selenide (EuSe) and europium(II) telluride (EuTe): all three of these are black solids. Europium(II) sulfide is prepared by sulfiding the oxide at temperatures sufficiently high to decompose the Eu2O3: Eu2O3 + 3 H2S → 2 EuS + 3 H2O + S The main nitride of europium is europium(III) nitride (EuN). History Although europium is present in most of the minerals containing the other rare elements, due to the difficulties in separating the elements it was not until the late 1800s that the element was isolated. William Crookes observed the phosphorescent spectra of the rare elements including those eventually assigned to europium. Europium was first found in 1892 by Paul Émile Lecoq de Boisbaudran, who obtained basic fractions from samarium-gadolinium concentrates which had spectral lines not accounted for by samarium or gadolinium. However, the discovery of europium is generally credited to French chemist Eugène-Anatole Demarçay, who suspected samples of the recently discovered element samarium were contaminated with an unknown element in 1896 and who was able to isolate it in 1901; he then named it europium. When the europium-doped yttrium orthovanadate red phosphor was discovered in the early 1960s, and understood to be about to cause a revolution in the color television industry, there was a scramble for the limited supply of europium on hand among the monazite processors, as the typical europium content in monazite is about 0.05%. However, the Molycorp bastnäsite deposit at the Mountain Pass rare earth mine, California, whose lanthanides had an unusually high europium content of 0.1%, was about to come on-line and provide sufficient europium to sustain the industry. Prior to europium, the color-TV red phosphor was very weak, and the other phosphor colors had to be muted, to maintain color balance. With the brilliant red europium phosphor, it was no longer necessary to mute the other colors, and a much brighter color TV picture was the result. Europium has continued to be in use in the TV industry ever since as well as in computer monitors. Californian bastnäsite now faces stiff competition from Bayan Obo, China, with an even "richer" europium content of 0.2%. 
Frank Spedding, celebrated for his development of the ion-exchange technology that revolutionized the rare-earth industry in the mid-1950s, once related the story of how he was lecturing on the rare earths in the 1930s, when an elderly gentleman approached him with an offer of a gift of several pounds of europium oxide. This was an unheard-of quantity at the time, and Spedding did not take the man seriously. However, a package duly arrived in the mail, containing several pounds of genuine europium oxide. The elderly gentleman turned out to be Herbert Newby McCoy, who had developed a famous method of europium purification involving redox chemistry. Applications Relative to most other elements, commercial applications for europium are few and rather specialized. Almost invariably, its phosphorescence is exploited, either in the +2 or +3 oxidation state. It is a dopant in some types of glass in lasers and other optoelectronic devices. Europium oxide (Eu2O3) is widely used as a red phosphor in television sets and fluorescent lamps, and as an activator for yttrium-based phosphors. Color TV screens contain between 0.5 and 1 g of europium oxide. Whereas trivalent europium gives red phosphors, the luminescence of divalent europium depends strongly on the composition of the host structure; luminescence from the UV to the deep red can be achieved. The two classes of europium-based phosphor (red and blue), combined with the yellow/green terbium phosphors, give "white" light, the color temperature of which can be varied by altering the proportion or specific composition of the individual phosphors. This phosphor system is typically encountered in helical fluorescent light bulbs. Combining the same three classes is one way to make trichromatic systems in TV and computer screens; as an additive, europium can be particularly effective in improving the intensity of the red phosphor. Europium is also used in the manufacture of fluorescent glass, increasing the general efficiency of fluorescent lamps. One of the more common persistent after-glow phosphors, besides copper-doped zinc sulfide, is europium-doped strontium aluminate. Europium fluorescence is used to interrogate biomolecular interactions in drug-discovery screens. It is also used in the anti-counterfeiting phosphors in euro banknotes. An application that has almost fallen out of use with the introduction of affordable superconducting magnets is the use of europium complexes, such as Eu(fod)3, as shift reagents in NMR spectroscopy. Chiral shift reagents, such as Eu(hfc)3, are still used to determine enantiomeric purity. Europium compounds are used to label antibodies for sensitive detection of antigens in body fluids, a form of immunoassay. When these europium-labeled antibodies bind to specific antigens, the resulting complex can be detected with laser-excited fluorescence. Precautions There are no clear indications that europium is particularly toxic compared to other heavy metals. Europium chloride, nitrate and oxide have been tested for toxicity: europium chloride shows an acute intraperitoneal LD50 toxicity of 550 mg/kg and an acute oral LD50 toxicity of 5000 mg/kg. Europium nitrate shows a slightly higher intraperitoneal LD50 toxicity of 320 mg/kg, while the oral toxicity is above 5000 mg/kg. The metal dust presents a fire and explosion hazard.
Europium
[ "Physics", "Chemistry" ]
4,199
[ "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
9,478
https://en.wikipedia.org/wiki/Erbium
Erbium is a chemical element; it has symbol Er and atomic number 68. A silvery-white solid metal when artificially isolated, natural erbium is always found in chemical combination with other elements. It is a lanthanide, a rare-earth element, originally found in the gadolinite mine in Ytterby, Sweden, which is the source of the element's name. Erbium's principal uses involve its pink-colored Er3+ ions, which have optical fluorescent properties particularly useful in certain laser applications. Erbium-doped glasses or crystals can be used as optical amplification media, where Er3+ ions are optically pumped at around 980 or and then radiate light at in stimulated emission. This process results in an unusually mechanically simple laser optical amplifier for signals transmitted by fiber optics. The wavelength is especially important for optical communications because standard single-mode optical fibers have minimal loss at this particular wavelength. In addition to optical fiber amplifier-lasers, a large variety of medical applications (e.g. dermatology, dentistry) rely on the erbium ion's emission (see Er:YAG laser) when lit at another wavelength, which is highly absorbed in water in tissues, making its effect very superficial. Such shallow tissue deposition of laser energy is helpful in laser surgery, and for the efficient production of steam which produces enamel ablation by common types of dental laser. Characteristics Physical properties A trivalent element, pure erbium metal is malleable (or easily shaped), soft yet stable in air, and does not oxidize as quickly as some other rare-earth metals. Its salts are rose-colored, and the element has characteristic sharp absorption spectra bands in visible light, ultraviolet, and near infrared. Otherwise it looks much like the other rare earths. Its sesquioxide is called erbia. Erbium's properties are to a degree dictated by the kind and amount of impurities present. Erbium does not play any known biological role, but is thought to be able to stimulate metabolism. Erbium is ferromagnetic below 19 K, antiferromagnetic between 19 and 80 K, and paramagnetic above 80 K. Erbium can form propeller-shaped atomic clusters Er3N, where the distance between the erbium atoms is 0.35 nm. Those clusters can be isolated by encapsulating them into fullerene molecules, as confirmed by transmission electron microscopy. Like most rare-earth elements, erbium is usually found in the +3 oxidation state. However, it is possible for erbium to also be found in the 0, +1 and +2 oxidation states. Chemical properties Erbium metal retains its luster in dry air but will tarnish slowly in moist air, and burns readily to form erbium(III) oxide: 4 Er + 3 O2 → 2 Er2O3 Erbium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form erbium hydroxide: 2 Er (s) + 6 H2O (l) → 2 Er(OH)3 (aq) + 3 H2 (g) Erbium metal reacts with all the halogens: 2 Er (s) + 3 F2 (g) → 2 ErF3 (s) [pink] 2 Er (s) + 3 Cl2 (g) → 2 ErCl3 (s) [violet] 2 Er (s) + 3 Br2 (g) → 2 ErBr3 (s) [violet] 2 Er (s) + 3 I2 (g) → 2 ErI3 (s) [violet] Erbium dissolves readily in dilute sulfuric acid to form solutions containing hydrated Er(III) ions, which exist as rose-red [Er(OH2)9]3+ hydration complexes: 2 Er (s) + 3 H2SO4 (aq) → 2 Er3+ (aq) + 3 SO42− (aq) + 3 H2 (g) Isotopes Naturally occurring erbium is composed of 6 stable isotopes, 162Er, 164Er, 166Er, 167Er, 168Er, and 170Er, with 166Er being the most abundant (33.503% natural abundance).
32 radioisotopes have been characterized, with the most stable being Er with a half-life of , Er with a half-life of , Er with a half-life of , Er with a half-life of , and Er with a half-life of . All of the remaining radioactive isotopes have half-lives that are less than , and the majority of these have half-lives that are less than 4 minutes. This element also has 26 meta states, with the most stable being Er with a half-life of . The known isotopes of erbium range from Er to Er. The primary decay mode before the most abundant stable isotope, Er, is electron capture, and the primary mode after is beta decay. The primary decay products before Er are element 67 (holmium) isotopes, and the primary products after are element 69 (thulium) isotopes. Er has been identified as useful for Auger therapy, as it decays via electron capture and emits no gamma radiation. It can also be used as a radioactive tracer to label antibodies and peptides, though it cannot be detected by any kind of imaging for the study of its biological distribution. The isotope can be produced via the bombardment of Er with Tm or Er with Ho, the latter of which is more convenient due to Ho being a stable primordial isotope, though it requires an initial supply of Er. Compounds Oxides Erbium(III) oxide (also known as erbia) is the only known oxide of erbium, first isolated by Carl Gustaf Mosander in 1843, and first obtained in pure form in 1905 by Georges Urbain and Charles James. It has a cubic structure resembling the bixbyite motif. The Er3+ centers are octahedral. The formation of erbium oxide is accomplished by burning erbium metal, erbium oxalate or other oxyacid salts of erbium. Erbium oxide is insoluble in water and slightly soluble in heated mineral acids. The pink-colored compound is used as a phosphor activator and to produce infrared-absorbing glass. Halides Erbium(III) fluoride is a pinkish powder that can be produced by reacting erbium(III) nitrate and ammonium fluoride. It can be used to make infrared light-transmitting materials and up-converting luminescent materials, and is an intermediate in the production of erbium metal prior to its reduction with calcium. Erbium(III) chloride is a violet compound that can be formed by first heating erbium(III) oxide and ammonium chloride to produce the ammonium salt of the pentachloride ([NH4]2ErCl5), then heating it in a vacuum at 350–400 °C. It forms monoclinic crystals with the point group C2/m. Erbium(III) chloride hexahydrate also forms monoclinic crystals, with the point group P2/n (P2/c) – C42h. In this compound, erbium is octa-coordinated, with isolated chloride ions completing the structure. Erbium(III) bromide is a violet solid. It is used, like other metal bromide compounds, in water treatment, chemical analysis and for certain crystal growth applications. Erbium(III) iodide is a slightly pink compound that is insoluble in water. It can be prepared by directly reacting erbium with iodine. Organoerbium compounds Organoerbium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus largely restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. History Erbium (for Ytterby, a village in Sweden) was discovered by Carl Gustaf Mosander in 1843.
Mosander was working with a sample of what was thought to be the single metal oxide yttria, derived from the mineral gadolinite. He discovered that the sample contained at least two metal oxides in addition to pure yttria, which he named "erbia" and "terbia" after the village of Ytterby where the gadolinite had been found. Mosander was not certain of the purity of the oxides and later tests confirmed his uncertainty. Not only did the "yttria" contain yttrium, erbium, and terbium; in the ensuing years, chemists, geologists and spectroscopists discovered five additional elements: ytterbium, scandium, thulium, holmium, and gadolinium. Erbia and terbia, however, were confused at this time. Marc Delafontaine, a Swiss spectroscopist, mistakenly switched the names of the two elements in his work separating the oxides erbia and terbia. After 1860, terbia was renamed erbia and after 1877 what had been known as erbia was renamed terbia. Fairly pure Er2O3 was independently isolated in 1905 by Georges Urbain and Charles James. Reasonably pure erbium metal was not produced until 1934, when Wilhelm Klemm and Heinrich Bommer reduced the anhydrous chloride with potassium vapor. Occurrence The concentration of erbium in the Earth's crust is about 2.8 mg/kg and in seawater 0.9 ng/L. (Concentration of less abundant elements may vary with location by several orders of magnitude, making the relative abundance unreliable.) Like other rare earths, this element is never found as a free element in nature but is found in monazite and bastnäsite ores. It has historically been very difficult and expensive to separate rare earths from each other in their ores, but ion-exchange chromatography methods developed in the late 20th century have greatly reduced the cost of production of all rare-earth metals and their chemical compounds. The principal commercial sources of erbium are from the minerals xenotime and euxenite, and most recently, the ion adsorption clays of southern China. Consequently, China has now become the principal global supplier of this element. In the high-yttrium versions of these ore concentrates, yttrium is about two-thirds of the total by weight, and erbia is about 4–5%. When the concentrate is dissolved in acid, the erbia liberates enough erbium ion to impart a distinct and characteristic pink color to the solution. This color behavior is similar to what Mosander and the other early workers in the lanthanides saw in their extracts from the gadolinite minerals of Ytterby. Production Crushed minerals are attacked by hydrochloric or sulfuric acid that transforms insoluble rare-earth oxides into soluble chlorides or sulfates. The acidic filtrates are partially neutralized with caustic soda (sodium hydroxide) to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3. The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of rare-earth metals. The salts are separated by ion exchange. In this process, rare-earth ions are sorbed onto a suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare-earth ions are then selectively washed out by a suitable complexing agent.
Erbium metal is obtained from its oxide or salts by heating with calcium under an argon atmosphere. Applications Lasers and optics A large variety of medical applications (e.g., dermatology, dentistry) utilize the erbium ion's emission (see Er:YAG laser), which is highly absorbed in water (absorption coefficient about ). Such shallow tissue deposition of laser energy is necessary for laser surgery, and the efficient production of steam for laser enamel ablation in dentistry. Common applications of erbium lasers in dentistry include ceramic cosmetic dentistry and removal of brackets in orthodontic braces; such laser applications have been noted as more time-efficient than performing the same procedures with rotary dental instruments. Erbium-doped optical silica-glass fibers are the active element in erbium-doped fiber amplifiers (EDFAs), which are widely used in optical communications. The same fibers can be used to create fiber lasers. In order to work efficiently, erbium-doped fiber is usually co-doped with glass modifiers/homogenizers, often aluminium or phosphorus. These dopants help prevent clustering of Er ions and transfer the energy more efficiently between excitation light (also known as the optical pump) and the signal. Co-doping of optical fiber with Er and Yb is used in high-power Er/Yb fiber lasers. Erbium can also be used in erbium-doped waveguide amplifiers. Other applications When added to vanadium as an alloy, erbium lowers hardness and improves workability. An erbium-nickel alloy Er3Ni has an unusually high specific heat capacity at liquid-helium temperatures and is used in cryocoolers; a mixture of 65% Er3Co and 35% Er0.9Yb0.1Ni by volume improves the specific heat capacity even more. Erbium oxide has a pink color, and is sometimes used as a colorant for glass, cubic zirconia and porcelain. The glass is then often used in sunglasses and jewellery, or where infrared absorption is needed. Erbium is used in nuclear technology in neutron-absorbing control rods or as a burnable poison in nuclear fuel design. Biological role and precautions Erbium does not have a biological role, but erbium salts can stimulate metabolism. Humans consume 1 milligram of erbium a year on average. The highest concentration of erbium in humans is in the bones, but there is also erbium in the human kidneys and liver. Erbium is slightly toxic if ingested, but erbium compounds are generally not toxic. Ionic erbium behaves similarly to ionic calcium, and can potentially bind to proteins such as calmodulin. When introduced into the body, nitrates of erbium, similar to other rare earth nitrates, increase triglyceride levels in the liver and cause leakage of hepatic (liver-related) enzymes to the blood, though they uniquely (along with gadolinium and dysprosium nitrates) increase RNA polymerase II activity. Ingestion and inhalation are the main routes of exposure to erbium and other rare earths, as they do not diffuse through unbroken skin. Metallic erbium in dust form presents a fire and explosion hazard.
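Returning to the erbium-doped fiber amplifiers described under Lasers and optics above, the sketch below compares pump and signal photon energies; the difference (the quantum defect) ends up as heat in the glass. The 980 nm pump and ~1550 nm signal wavelengths are the values commonly cited for EDFAs, assumed here since the exact emission wavelength is not spelled out in the text.

h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c / lambda, returned in electronvolts."""
    return h * c / (wavelength_m * eV)

pump   = photon_energy_ev(980e-9)    # ~1.27 eV (assumed typical pump)
signal = photon_energy_ev(1550e-9)   # ~0.80 eV (assumed typical signal)
print(pump, signal, pump - signal)   # ~0.47 eV per photon lost as heat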
Erbium
[ "Physics", "Chemistry" ]
3,294
[ "Chemical elements", "Redox", "Ferromagnetic materials", "Reducing agents", "Materials", "Atoms", "Matter" ]
9,479
https://en.wikipedia.org/wiki/Einsteinium
Einsteinium is a synthetic chemical element; it has symbol Es and atomic number 99. It is named after Albert Einstein and is a member of the actinide series and the seventh transuranium element. Einsteinium was discovered as a component of the debris of the first hydrogen bomb explosion in 1952. Its most common isotope, einsteinium-253 (253Es; half-life 20.47 days), is produced artificially from decay of californium-253 in a few dedicated high-power nuclear reactors with a total yield on the order of one milligram per year. The reactor synthesis is followed by a complex process of separating einsteinium-253 from other actinides and products of their decay. Other isotopes are synthesized in various laboratories, but in much smaller amounts, by bombarding heavy actinide elements with light ions. Due to the small amounts of einsteinium produced and the short half-life of its most common isotope, there are no practical applications for it except basic scientific research. In particular, einsteinium was used to synthesize, for the first time, 17 atoms of the new element mendelevium in 1955. Einsteinium is a soft, silvery, paramagnetic metal. Its chemistry is typical of the late actinides, with a preponderance of the +3 oxidation state; the +2 oxidation state is also accessible, especially in solids. The high radioactivity of 253Es produces a visible glow and rapidly damages its crystalline metal lattice, with released heat of about 1000 watts per gram. Studying its properties is difficult due to 253Es's decay to berkelium-249 and then californium-249 at a rate of about 3% per day. The longest-lived isotope of einsteinium, 252Es (half-life 471.7 days), would be more suitable for investigation of physical properties, but it has proven far more difficult to produce and is available only in minute quantities, not in bulk. Einsteinium is the element with the highest atomic number which has been observed in macroscopic quantities in its pure form as einsteinium-253. Like all synthetic transuranium elements, isotopes of einsteinium are very radioactive and are considered highly dangerous to health on ingestion. History Einsteinium was first identified in December 1952 by Albert Ghiorso and co-workers at the University of California, Berkeley, in collaboration with the Argonne and Los Alamos National Laboratories, in the fallout from the Ivy Mike nuclear test. The test was done on November 1, 1952, at Enewetak Atoll in the Pacific Ocean and was the first successful test of a thermonuclear weapon. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 244Pu, which could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two beta decays: 238U + 6 n → 244U → 244Pu (after two β− decays) At the time, the multiple neutron absorption was thought to be an extremely rare process, but the identification of 244Pu indicated that still more neutrons could have been captured by the uranium, producing new elements heavier than californium. Ghiorso and co-workers analyzed filter papers which had been flown through the explosion cloud on airplanes (the same sampling technique that had been used to discover 239Pu). Larger amounts of radioactive material were later isolated from coral debris of the atoll, and these were delivered to the U.S.
The separation of suspected new elements was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperatures; fewer than 200 atoms of einsteinium were recovered in the end. Nevertheless, element 99, einsteinium, and in particular 253Es, could be detected via its characteristic high-energy alpha decay at 6.6 MeV. It was produced by the capture of 15 neutrons by uranium-238 nuclei followed by seven beta decays, and had a half-life of 20.5 days. Such multiple neutron absorption was made possible by the high neutron flux density during the detonation, so that newly generated heavy isotopes had plenty of available neutrons to absorb before they could disintegrate into lighter elements. Neutron capture initially raised the mass number without changing the atomic number of the nuclide, and the concomitant beta decays resulted in a gradual increase in the atomic number: 238U + 15 n → 253U → 253Cf (after six β− decays) → 253Es (after one more β− decay) Some 238U atoms, however, could absorb two additional neutrons (for a total of 17), resulting in 255Es, as well as in the 255Fm isotope of another new element, fermium. The discovery of the new elements and the associated new data on multiple neutron capture were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions and competition with the Soviet Union in nuclear technologies. However, the rapid capture of so many neutrons would provide needed direct experimental confirmation of the r-process multi-neutron absorption needed to explain the cosmic nucleosynthesis (production) of certain heavy elements (heavier than nickel) in supernovas, before beta decay. Such a process is needed to explain the existence of many stable elements in the universe. Meanwhile, isotopes of element 99 (as well as of new element 100, fermium) were produced in the Berkeley and Argonne laboratories, in a nuclear reaction between nitrogen-14 and uranium-238, and later by intense neutron irradiation of plutonium or californium: 252Cf + n → 253Cf → 253Es (β−, half-life 17.81 days); 253Es + n → 254Es → 254Fm (β−) These results were published in several articles in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The Berkeley team also reported some results on the chemical properties of einsteinium and fermium. The Ivy Mike results were declassified and published in 1955. In their discovery of elements 99 and 100, the American teams had competed with a group at the Nobel Institute for Physics, Stockholm, Sweden. In late 1953 – early 1954, the Swedish group succeeded in synthesizing light isotopes of element 100, in particular 250Fm, by bombarding uranium with oxygen nuclei. These results were also published in 1954. Nevertheless, the priority of the Berkeley team was generally recognized, as its publications preceded the Swedish article, and they were based on the previously undisclosed results of the 1952 thermonuclear explosion; thus the Berkeley team was given the privilege of naming the new elements.
As the effort which had led to the design of Ivy Mike was codenamed Project PANDA, element 99 had been jokingly nicknamed "Pandemonium", but the official names suggested by the Berkeley group derived from two prominent scientists, Einstein and Fermi: "We suggest for the name for the element with the atomic number 99, einsteinium (symbol E) after Albert Einstein and for the name for the element with atomic number 100, fermium (symbol Fm), after Enrico Fermi." Both Einstein and Fermi died between the time the names were originally proposed and when they were announced. The discovery of these new elements was announced by Albert Ghiorso at the first Geneva Atomic Conference held on 8–20 August 1955. The symbol for einsteinium was first given as "E" and later changed to "Es" by IUPAC. Characteristics Physical Einsteinium is a synthetic, silvery, radioactive metal. In the periodic table, it is located to the right of the actinide californium, to the left of the actinide fermium and below the lanthanide holmium, with which it shares many similarities in physical and chemical properties. Its density of 8.84 g/cm3 is lower than that of californium (15.1 g/cm3) and is nearly the same as that of holmium (8.79 g/cm3), despite einsteinium being much heavier per atom than holmium. Einsteinium's melting point (860 °C) is also relatively low – below californium (900 °C), fermium (1527 °C) and holmium (1461 °C). Einsteinium is a soft metal, with a bulk modulus of only 15 GPa, one of the lowest among non-alkali metals. Unlike the lighter actinides californium, berkelium, curium and americium, which crystallize in a double hexagonal structure at ambient conditions, einsteinium is believed to have a face-centered cubic (fcc) symmetry with the space group Fm3m and the lattice constant . However, there is a report of room-temperature hexagonal einsteinium metal with and , which converted to the fcc phase upon heating to 300 °C. The self-damage induced by the radioactivity of einsteinium is so strong that it rapidly destroys the crystal lattice, and the energy release during this process, 1000 watts per gram of 253Es, induces a visible glow. These processes may contribute to the relatively low density and melting point of einsteinium. Further, due to the small size of available samples, the melting point of einsteinium was often deduced by observing the sample being heated inside an electron microscope. Thus, surface effects in small samples could reduce the melting point. The metal is trivalent and has a noticeably high volatility. In order to reduce the self-radiation damage, most measurements of solid einsteinium and its compounds are performed right after thermal annealing. Also, some compounds are studied under the atmosphere of the reductant gas, for example H2O + HCl for EsOCl, so that the sample is partly regrown during its decomposition. Apart from the self-destruction of solid einsteinium and its compounds, other intrinsic difficulties in studying this element include scarcity – the most common isotope, 253Es, is available only once or twice a year in sub-milligram amounts – and self-contamination due to rapid conversion of einsteinium to berkelium and then to californium at a rate of about 3.3% per day: 253Es → 249Bk (α, half-life 20 days) → 249Cf (β−, half-life 314 days) Thus, most einsteinium samples are contaminated, and their intrinsic properties are often deduced by extrapolating back experimental data accumulated over time.
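The ~3.3%-per-day contamination quoted above follows directly from the 20.47-day half-life of 253Es. A minimal sketch of the back-extrapolation, treating only the first decay step and ignoring, for simplicity, the subsequent 314-day decay of 249Bk:

import math

T_HALF_ES253 = 20.47                      # days, from the text
LAMBDA = math.log(2) / T_HALF_ES253       # decay constant per day

def es253_fraction_remaining(days):
    """Fraction of an initially pure 253Es sample still 253Es after t days."""
    return math.exp(-LAMBDA * days)

print(LAMBDA)                             # ~0.0339 per day, i.e. ~3.3%/day
print(1 - es253_fraction_remaining(1))    # loss over the first day, ~3.3%
print(es253_fraction_remaining(20.47))    # 0.5 after one half-life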
Other experimental techniques to circumvent the contamination problem include selective optical excitation of einsteinium ions by a tunable laser, such as in studying its luminescence properties. Magnetic properties have been studied for einsteinium metal, its oxide and fluoride. All three materials showed Curie–Weiss paramagnetic behavior from liquid helium to room temperature. The effective magnetic moments were deduced as for Es2O3 and for EsF3, which are the highest values among actinides, and the corresponding Curie temperatures are 53 and 37 K. Chemical Like all actinides, einsteinium is rather reactive. Its trivalent oxidation state is most stable in solids and aqueous solution, where it induces a pale pink color. The existence of divalent einsteinium is firmly established, especially in the solid phase; such a +2 state is not observed in many other actinides, including protactinium, uranium, neptunium, plutonium, curium and berkelium. Einsteinium(II) compounds can be obtained, for example, by reducing einsteinium(III) with samarium(II) chloride. Isotopes Eighteen isotopes and four nuclear isomers are known for einsteinium, with mass numbers 240–257. All are radioactive; the most stable one, 252Es, has half-life 471.7 days. The next most stable isotopes are 254Es (half-life 275.7 days), 255Es (39.8 days), and 253Es (20.47 days). All the other isotopes have half-lives shorter than 40 hours, most shorter than 30 minutes. Of the isomers, the most stable is 254mEs with a half-life of 39.3 hours. Nuclear fission Einsteinium has a high rate of nuclear fission that results in a low critical mass. This mass is 9.89 kilograms for a bare sphere of 254Es, and can be lowered to 2.9 kg by adding a 30-centimeter-thick steel neutron reflector, or even to 2.26 kg with a 20-cm-thick reflector made of water. However, even this small critical mass far exceeds the total amount of einsteinium isolated so far, especially of the rare 254Es. Natural occurrence Due to the short half-life of all isotopes of einsteinium, any primordial einsteinium—that is, einsteinium that could have been present on Earth at its formation—has long since decayed. Synthesis of einsteinium from naturally-occurring uranium and thorium in the Earth's crust requires multiple neutron capture, an extremely unlikely event. Therefore, all einsteinium on Earth is produced in laboratories, high-power nuclear reactors, or nuclear testing, and exists only within a few years from the time of the synthesis. The transuranic elements americium to fermium, including einsteinium, were once created in the natural nuclear fission reactor at Oklo, but any quantities produced then would have long since decayed away. Einsteinium was theoretically observed in the spectrum of Przybylski's Star. However, the lead author of the studies finding einsteinium and other short-lived actinides in Przybylski's Star, Vera F. Gopka, admitted that "the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are not any atomic data for these lines except for their wavelengths (Sansonetti et al. 2004), enabling one to calculate their profiles with more or less real intensities." The signature spectra of einsteinium's isotopes have since been comprehensively analyzed experimentally (in 2021), though there is no published research confirming whether the theorized einsteinium signatures proposed to be found in the star's spectrum match the lab-determined results.
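As context for the Curie–Weiss behavior reported above, a minimal sketch of the susceptibility law χ(T) = C/(T − θ) in SI molar form. The 53 K Curie temperature is taken from the text; the effective moment of 10 Bohr magnetons is a placeholder assumption, since the measured moment values are not reproduced here.

import math

mu_B = 9.274e-24        # Bohr magneton, J/T
k_B  = 1.380649e-23     # Boltzmann constant, J/K
N_A  = 6.02214076e23    # Avogadro constant, 1/mol
mu_0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def curie_weiss_chi_molar(T, mu_eff_in_muB, theta):
    """SI molar susceptibility chi = C/(T - theta),
    with Curie constant C = mu_0 * N_A * mu_eff^2 / (3 k_B)."""
    mu_eff = mu_eff_in_muB * mu_B
    C = mu_0 * N_A * mu_eff**2 / (3.0 * k_B)
    return C / (T - theta)

# Placeholder moment of 10 mu_B; theta = 53 K as quoted in the text
print(curie_weiss_chi_molar(300.0, 10.0, 53.0))   # m^3/mol at room temperature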
Synthesis and extraction Einsteinium is produced in minute quantities by bombarding lighter actinides with neutrons in dedicated high-flux nuclear reactors. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL), Tennessee, U.S., and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium (Z>96) elements. These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, though the quantities produced at NIIAR are not widely reported. In a "typical processing campaign" at ORNL, tens of grams of curium are irradiated to produce decigram quantities of californium, milligrams of berkelium (249Bk) and einsteinium and picograms of fermium. The first microscopic sample of 253Es, weighing about 10 nanograms, was prepared in 1961 at HFIR. A special magnetic balance was designed to estimate its weight. Larger batches were produced later, starting from several kilograms of plutonium, with einsteinium yields (mostly 253Es) of 0.48 milligram in 1967–1970 and 3.2 milligrams in 1971–1973, followed by steady production of about 3 milligrams per year between 1974 and 1978. These quantities, however, refer to the integral amount in the target right after irradiation. Subsequent separation procedures reduced the amount of isotopically pure einsteinium roughly tenfold. Laboratory synthesis Heavy neutron irradiation of plutonium results in four major isotopes of einsteinium: 253Es (α-emitter; half-life 20.47 days, spontaneous fission half-life 7×10 years); 254mEs (β-emitter, half-life 39.3 hours), 254Es (α-emitter, half-life 276 days) and 255Es (β-emitter, half-life 39.8 days). An alternative route involves bombardment of uranium-238 with high-intensity nitrogen or oxygen ion beams. Es (half-life 4.55 min) was produced by irradiating Am with carbon or U with nitrogen ions. The latter reaction was first realized in 1967 in Dubna, Russia, and the involved scientists were awarded the Lenin Komsomol Prize. Es was produced by irradiating Cf with deuterium ions. It mainly β-decays to Cf with a half-life of minutes, but also releases 6.87-MeV α-particles; the ratio of β's to α-particles is about 400. The isotopes 249–252Es were obtained by bombarding 249Bk with α-particles. One to four neutrons are released, so four different isotopes are formed in one reaction: 249Bk + α → 249,250,251,252Es 253Es was produced by irradiating a 0.1–0.2 milligram 252Cf target with a thermal neutron flux of (2–5)×10 neutrons/(cm·s) for 500–900 hours: 252Cf + n → 253Cf → 253Es (β−, half-life 17.81 days) In 2020, scientists at ORNL created about 200 nanograms of 254Es, allowing some chemical properties of the element to be studied for the first time. Synthesis in nuclear explosions The analysis of the debris at the 10-megaton Ivy Mike nuclear test was a part of a long-term project. One of the goals was studying the efficiency of production of transuranic elements in high-power nuclear explosions. The motive for these experiments was that synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful man-made neutron sources, providing densities of the order 10 neutrons/cm within a microsecond, or about 10 neutrons/(cm·s).
In comparison, the flux of HFIR is 5 neutrons/(cm·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the mainland U.S. The laboratory received samples for analysis as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, none of these were found even after a series of megaton explosions conducted between 1954 and 1956 at the atoll. The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions in a confined space might give improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were tried, as well as a mixed plutonium-neptunium charge, but they were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Product isolation was problematic, as the explosions spread debris by melting and vaporizing the surrounding rocks at depths of 300–600 meters. Drilling to such depths to extract the products was both slow and inefficient in terms of collected volumes. Of the nine underground tests between 1962 and 1969, the last one was the most powerful and had the highest yield of transuranics. Milligrams of einsteinium that would normally take a year of irradiation in a high-power reactor were produced within a microsecond. However, the major practical problem of the entire proposal was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only ~4 of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only ~1 of the total charge. The amount of transuranic elements in this 500-kg batch was only 30 times higher than in a 0.4-kg rock picked up 7 days after the test, which showed the highly non-linear dependence of the transuranics yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after the explosion, so that the explosion would expel radioactive material from the epicenter through the shafts and to collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds of kilograms of material, but with an actinide concentration 3 times lower than in samples obtained after drilling. Whereas such a method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides. Though no new elements (except einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranics were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. Separation The separation procedure for einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy-element target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. However, the produced amounts in such experiments are relatively low.
The yields are much higher for reactor irradiation, but there, the product is a mixture of various actinide isotopes, as well as lanthanides produced in nuclear fission decays. In this case, isolation of einsteinium is a tedious procedure which involves several repeated steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important, because the most common einsteinium isotope produced in nuclear reactors, ²⁵³Es, decays with a half-life of only 20 days to ²⁴⁹Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions. Trivalent actinides can be separated from lanthanide fission products by a cation-exchange resin column using a 90% water/10% ethanol solution saturated with hydrochloric acid (HCl) as eluant. It is usually followed by anion-exchange chromatography using 6 molar HCl as eluant. A cation-exchange resin column (Dowex-50 exchange column) treated with ammonium salts is then used to separate fractions containing elements 99, 100 and 101. These elements can then be identified simply based on their elution position/time, using α-hydroxyisobutyrate solution (α-HIB), for example, as eluant. The 3+ actinides can also be separated via solvent extraction chromatography, using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase, and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column. The einsteinium separated by this method has the advantage of being free of organic complexing agents, as compared to the separation using a resin column. Preparation of the metal Einsteinium is highly reactive, so strong reducing agents are required to obtain the pure metal from its compounds. This can be achieved by reduction of einsteinium(III) fluoride with metallic lithium: EsF₃ + 3 Li → Es + 3 LiF However, owing to its low melting point and high rate of self-radiation damage, einsteinium has a higher vapor pressure than lithium fluoride. This makes the reduction reaction rather inefficient. It was tried in the early preparation attempts and quickly abandoned in favor of reduction of einsteinium(III) oxide with lanthanum metal: Es₂O₃ + 2 La → 2 Es + La₂O₃ Chemical compounds Oxides Einsteinium(III) oxide (Es₂O₃) was obtained by burning einsteinium(III) nitrate. It forms colorless cubic crystals, which were first characterized from microgram samples sized about 30 nanometers. Two other phases, monoclinic and hexagonal, are known for this oxide. The formation of a certain Es₂O₃ phase depends on the preparation technique and sample history, and there is no clear phase diagram. Interconversions between the three phases can occur spontaneously, as a result of self-irradiation or self-heating. The hexagonal phase is isotypic with lanthanum oxide, where the Es³⁺ ion is surrounded by a 6-coordinated group of O²⁻ ions. Halides Einsteinium halides are known for the oxidation states +2 and +3. The most stable state is +3 for all halides from fluoride to iodide. Einsteinium(III) fluoride (EsF₃) can be precipitated from Es(III) chloride solutions upon reaction with fluoride ions. An alternative preparation procedure is to expose Es(III) oxide to chlorine trifluoride (ClF₃) or F₂ gas at a pressure of 1–2 atmospheres and a temperature of 300–400°C.
The EsF₃ crystal structure is hexagonal, as in californium(III) fluoride (CfF₃), where the Es³⁺ ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prism arrangement. Es(III) chloride (EsCl₃) can be prepared by annealing Es(III) oxide in an atmosphere of dry hydrogen chloride vapors at about 500°C for some 20 minutes. It crystallizes upon cooling at about 425°C into an orange solid with a hexagonal structure of the UCl₃ type, where einsteinium atoms are 9-fold coordinated by chlorine atoms in a tricapped trigonal prism geometry. Einsteinium(III) bromide (EsBr₃) is a pale-yellow solid with a monoclinic structure of the AlCl₃ type, where the einsteinium atoms are octahedrally coordinated by bromine (coordination number 6). The divalent compounds of einsteinium are obtained by reducing the trivalent halides with hydrogen: 2 EsX₃ + H₂ → 2 EsX₂ + 2 HX; X = F, Cl, Br, I Einsteinium(II) chloride (EsCl₂), einsteinium(II) bromide (EsBr₂), and einsteinium(II) iodide (EsI₂) have been produced and characterized by optical absorption, with no structural information available yet. Known oxyhalides of einsteinium include EsOCl, EsOBr and EsOI. These salts are synthesized by treating a trihalide with a vapor mixture of water and the corresponding hydrogen halide: for example, EsCl₃ + H₂O/HCl to obtain EsOCl. Organoeinsteinium compounds Einsteinium's high radioactivity has a potential use in radiation therapy, and organometallic complexes have been synthesized in order to deliver einsteinium to an appropriate organ in the body. Experiments have been performed on injecting einsteinium citrate (as well as fermium compounds) into dogs. Einsteinium(III) was also incorporated into β-diketone chelate complexes, since analogous complexes with lanthanides had previously shown the strongest UV-excited luminescence among metallorganic compounds. When preparing einsteinium complexes, the Es³⁺ ions were diluted 1000-fold with Gd³⁺ ions. This reduced the radiation damage, so that the compounds did not disintegrate during the 20 minutes required for the measurements. The resulting luminescence from Es³⁺ was much too weak to be detected. This was explained by the unfavorable relative energies of the individual constituents of the compound, which hindered efficient energy transfer from the chelate matrix to Es³⁺ ions. A similar conclusion was drawn for americium, berkelium and fermium. Luminescence of Es³⁺ ions was, however, observed in inorganic hydrochloric acid solutions as well as in organic solution with di(2-ethylhexyl)orthophosphoric acid. It shows a broad peak at about 1064 nanometers (half-width about 100 nm) which can be resonantly excited by green light (ca. 495 nm wavelength). The luminescence has a lifetime of several microseconds and a quantum yield below 0.1%. The non-radiative decay rates in Es³⁺, relatively high compared with those in lanthanides, were associated with the stronger interaction of the f-electrons with the inner electrons of Es³⁺. Applications There is almost no use for any isotope of einsteinium outside basic scientific research aimed at the production of higher transuranium elements and superheavy elements. In 1955, mendelevium was synthesized by irradiating a target consisting of about 10⁹ atoms of ²⁵³Es in the 60-inch cyclotron at Berkeley Laboratory. The resulting ²⁵³Es(α,n)²⁵⁶Md reaction yielded 17 atoms of the new element with the atomic number of 101.
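The mendelevium experiment just described lends itself to a simple thin-target activation estimate. The sketch below is illustrative only: the formula (product atoms ≈ target atoms × cross section × fluence) is the standard thin-target approximation, the 10⁹ target atoms and 17 product atoms are the figures from the text, but the cross section is an assumed placeholder, not a measured value.

# Thin-target activation estimate for a reaction such as 253Es(alpha,n)256Md.
# N_TARGET and the 17-atom result come from the text above; SIGMA_CM2 is an
# assumed illustrative value (1 millibarn), not a measured cross section.

N_TARGET = 1e9        # ~10^9 target atoms of 253Es
SIGMA_CM2 = 1e-27     # 1 millibarn, placeholder only

def product_atoms(fluence_per_cm2: float) -> float:
    """Expected product atoms, neglecting target burn-up and product decay."""
    return N_TARGET * SIGMA_CM2 * fluence_per_cm2

def fluence_for(n_products: float) -> float:
    """Alpha fluence (per cm^2) implied by a product count under these assumptions."""
    return n_products / (N_TARGET * SIGMA_CM2)

print(f"{fluence_for(17):.1e} alphas/cm^2")   # ~1.7e+19 with the assumed cross section

Such an inversion only constrains the product of cross section and fluence: with 17 observed atoms, halving the assumed cross section doubles the implied fluence.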
The rare isotope ²⁵⁴Es is favored for production of superheavy elements due to its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms. Hence ²⁵⁴Es was used as a target in the attempted synthesis of ununennium (element 119) in 1985 by bombarding it with calcium-48 ions at the superHILAC linear particle accelerator at Berkeley, California. No atoms were identified, setting an upper limit for the cross section of this reaction at 300 nanobarns: ²⁵⁴₉₉Es + ⁴⁸₂₀Ca → ³⁰²₁₁₉Uue* → no atoms. ²⁵⁴Es was used as the calibration marker in the chemical analysis spectrometer ("alpha-scattering surface analyzer") of the Surveyor 5 lunar probe. The large mass of this isotope reduced the spectral overlap between signals from the marker and the studied lighter elements of the lunar surface. Safety Most of the available einsteinium toxicity data is from research on animals. Upon ingestion by rats, only ~0.01% of it ends up in the bloodstream. From there, about 65% goes to the bones, where it would remain for ~50 years if not for its radioactive decay (not to speak of the 3-year maximum lifespan of rats); 25% goes to the lungs (biological half-life ~20 years, though this is again rendered irrelevant by the short half-life of einsteinium); 0.035% to the testicles or 0.01% to the ovaries, where einsteinium stays indefinitely. About 10% of the ingested amount is excreted. The distribution of einsteinium over bone surfaces is uniform and is similar to that of plutonium. References Bibliography External links Einsteinium at The Periodic Table of Videos (University of Nottingham) Age-related factors in radionuclide metabolism and dosimetry: Proceedings – contains several health related studies of einsteinium Chemical elements Chemical elements with face-centered cubic structure Actinides Synthetic elements Albert Einstein
Einsteinium
[ "Physics", "Chemistry" ]
6,723
[ "Chemical elements", "Synthetic materials", "Synthetic elements", "Radioactivity", "Atoms", "Matter" ]
9,499
https://en.wikipedia.org/wiki/Ethernet
Ethernet is a family of wired computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses a thick coaxial cable as a shared medium. This was largely superseded by 10BASE2, which used a thinner and more flexible cable that was both cheaper and easier to use. More modern Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 Mbit/s to the latest 400 Gbit/s, with rates up to 1.6 Tbit/s under development. The Ethernet standards include several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers. Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet. History Ethernet was developed at Xerox PARC between 1973 and 1974 as a means to allow Alto computers to communicate with each other. It was inspired by ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation, and was originally called the Alto Aloha Network. Metcalfe's idea was essentially to confine the Aloha-like signals to a cable, instead of broadcasting them into the air. The idea was first documented in a memo that Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. Ron Crane, Yogen Dalal, Robert Garner, Hal Murray, Roy Ogus, Dave Redell and John Shoch facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980. Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a standard. As part of that process Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published on September 30, 1980, as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (Digital Intel Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit EtherType field.
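The DIX layout just described (48-bit destination and source addresses followed by a 16-bit EtherType) is simple enough to parse in a few lines. The following Python sketch is a minimal illustration, not reference code from any standard; the EtherType value 0x0800 used in the example is the well-known assignment for IPv4.

import struct

def parse_ethernet_ii(frame: bytes):
    """Split a DIX/Ethernet II frame into destination MAC, source MAC, EtherType, payload."""
    if len(frame) < 14:
        raise ValueError("frame shorter than the 14-byte Ethernet II header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda b: ":".join(f"{octet:02x}" for octet in b)
    return as_mac(dst), as_mac(src), ethertype, frame[14:]

frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"...payload..."
dst, src, etype, payload = parse_ethernet_ii(frame)
print(dst, src, hex(etype))   # ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x800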
Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols. Ethernet was able to adapt to market needs, shifting with 10BASE2 to inexpensive thin coaxial cable and, from 1990, to the now-ubiquitous twisted pair with 10BASE-T. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. In the 1980s, IBM's own PC Network product competed with Ethernet for the PC, and through the 1980s, LAN hardware, in general, was not common on PCs. However, in the mid to late 1980s, PC networking did become popular in offices and schools for printer and fileserver sharing, and among the many diverse competing LAN technologies of that decade, Ethernet was one of the most popular. Parallel port based Ethernet adapters were produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that Ethernet ports began to appear on some PCs and most workstations. This process was greatly sped up with the introduction of 10BASE-T and its relatively small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements. In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications and is quickly replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. Standardization In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The DIX group with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox) submitted the so-called Blue Book CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE by the establishment of a competing Task Group "Local Networks" within the European standards body ECMA TC24. In March 1982, ECMA TC24 with its corporate members reached an agreement on a standard for CSMA/CD based on the IEEE 802 draft. Because the DIX proposal was the most technically complete and because of the speedy action taken by ECMA, which decisively contributed to the conciliation of opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December 1982. IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985. Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Fromm as the liaison officer working to integrate with International Electrotechnical Commission (IEC) Technical Committee 83 and International Organization for Standardization (ISO) Technical Committee 97 Sub Committee 6. The ISO 8802-3 standard was published in 1989. Evolution Ethernet has evolved to include higher bandwidth, improved medium access control methods, and different physical media. The multidrop coaxial cable was replaced with physical point-to-point links connected by Ethernet repeaters or switches. Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with a globally unique 48-bit MAC address so that each Ethernet station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations. An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying, because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together. Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats. Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants. Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, by 2004 most manufacturers built Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card. Shared medium Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium.
The method used was similar to those used in radio systems, with the common cable providing the communication channel likened to the luminiferous aether in 19th-century physics, and it was from this reference that the name Ethernet was derived. Original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to every attached machine. A scheme known as carrier-sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies. Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable. Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called thick Ethernet or thicknet. Its successor, 10BASE2, called thin Ethernet or thinnet, used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly. Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active. A collision happens when two stations attempt to transmit at the same time. They corrupt transmitted data and require stations to re-transmit. The lost data and re-transmission reduce throughput. In the worst case, where multiple active hosts connected with maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 studied performance of an existing Ethernet installation under both normal and artificially generated heavy load. The report claimed that 98% throughput on the LAN was observed. This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. This report was controversial, as modeling showed that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results. Performance on real networks is significantly better. In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full duplex mode of operation which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, switch and station can send and receive simultaneously, and therefore modern Ethernets are completely collision-free.
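The collision recovery behaviour described above is conventionally implemented as truncated binary exponential backoff. The sketch below uses the commonly cited IEEE 802.3 parameters (512-bit slot time at 10 Mbit/s, exponent capped after 10 collisions, frame abandoned after 16 attempts); it is a sketch of the scheduling rule, not of a full MAC.

import random

SLOT_TIME_BITS = 512   # 802.3 slot time for 10 Mbit/s Ethernet, in bit times
BACKOFF_LIMIT = 10     # the exponent stops growing after 10 collisions
ATTEMPT_LIMIT = 16     # the frame is abandoned after 16 failed attempts

def backoff_slots(collisions: int) -> int:
    """Random wait, in slot times, after the given number of collisions on one frame."""
    if collisions >= ATTEMPT_LIMIT:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collisions, BACKOFF_LIMIT)
    return random.randint(0, 2 ** k - 1)

# After a 3rd collision a station waits 0-7 slot times, i.e. at most
# 7 * 512 = 3584 bit times, about 0.36 ms at 10 Mbit/s.
print(backoff_slots(3))

The randomization breaks the symmetry between colliding stations, and doubling the range after each collision adapts the retry rate to the (unknown) number of contenders.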
Repeaters and hubs For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size. Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. Once repeaters with more than two ports became available, it was possible to wire the network in a star topology. Early experiments with star topologies (called Fibernet) using optical fiber were published by 1978. Shared cable Ethernet is always hard to install in offices because its bus topology is in conflict with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted-pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s. Ethernet on unshielded twisted-pair cables (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s. In 1987 SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet. These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network. Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed. Bridging and switching While repeaters can isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. The entire network is one collision domain, and all hosts have to be able to detect collisions anywhere on the network. This limits the number of repeaters between the farthest nodes and creates practical limits on how many machines can communicate on an Ethernet network. Segments joined by repeaters have to all operate at the same speed, making phased-in upgrades impossible. To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. At initial startup, Ethernet bridges work somewhat like Ethernet repeaters, passing all traffic between segments. By observing the source addresses of incoming frames, the bridge then builds an address table associating addresses to segments. Once an address is learned, the bridge forwards network traffic destined for that address only to the associated segment, improving overall performance. Broadcast traffic is still forwarded to all network segments. 
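The learn-and-forward behaviour just described fits in a few lines of code. This is a minimal sketch, assuming a fixed port set and ignoring everything a real bridge must also do (entry ageing, multicast handling, and a loop-prevention protocol such as STP):

class LearningBridge:
    """Minimal flood-and-learn transparent bridge."""

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                      # learned MAC address -> port

    def handle(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port        # learn where the sender lives
        if dst_mac == self.BROADCAST or dst_mac not in self.table:
            return self.ports - {in_port}    # flood broadcast/unknown destinations
        out_port = self.table[dst_mac]
        if out_port == in_port:
            return set()                     # destination on the ingress segment: filter
        return {out_port}                    # known unicast: forward out a single port

bridge = LearningBridge([1, 2, 3])
print(bridge.handle("aa", LearningBridge.BROADCAST, 1))   # floods: {2, 3}
print(bridge.handle("bb", "aa", 2))                       # learned earlier: {1}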
Bridges also overcome the limits on total segments between two hosts and allow the mixing of speeds, both of which are critical to the incremental deployment of faster Ethernet variants. In 1989, Motorola Codex introduced their 6310 EtherSpan, and Kalpana introduced their EtherSwitch; these were examples of the first commercial Ethernet switches. Early switches such as this used cut-through switching where only the header of the incoming packet is examined before it is either dropped or forwarded to another segment. This reduces the forwarding latency. One drawback of this method is that it does not readily allow a mixture of different link speeds. Another is that packets that have been corrupted are still propagated through the network. The eventual remedy for this was a return to the original store and forward approach of bridging, where the packet is read into a buffer on the switch in its entirety, its frame check sequence verified and only then the packet is forwarded. In modern network equipment, this process is typically done using application-specific integrated circuits allowing packets to be forwarded at wire speed. When a twisted pair or fiber link segment is used and neither end is connected to a repeater, full-duplex Ethernet becomes possible over that segment. In full-duplex mode, both devices can transmit and receive to and from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (for example, 200 Mbit/s for Fast Ethernet). The elimination of the collision domain for these connections also means that all the link's bandwidth can be used by the two devices on that segment and that segment length is not limited by the constraints of collision detection. Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the improved isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology. Advanced networking Simple switched Ethernet networks, while a great improvement over repeater-based Ethernet, suffer from single points of failure, attacks that trick switches or hosts into sending data to a machine even if it is not intended for it, scalability and security issues with regard to switching loops, broadcast radiation, and multicast traffic. Advanced networking features in switches use Shortest Path Bridging (SPB) or the Spanning Tree Protocol (STP) to maintain a loop-free, meshed network, allowing physical loops for redundancy (STP) or load-balancing (SPB). Shortest Path Bridging includes the use of the link-state routing protocol IS-IS to allow larger networks with shortest path routes between devices. 
Advanced networking features also ensure port security, provide protection features such as MAC lockdown and broadcast radiation filtering, use VLANs to keep different classes of users separate while using the same physical infrastructure, employ multilayer switching to route between different classes, and use link aggregation to add bandwidth to overloaded links and to provide some redundancy. In 2016, Ethernet replaced InfiniBand as the most popular system interconnect of TOP500 supercomputers. Varieties The Ethernet physical layer evolved over a considerable time span and encompasses coaxial, twisted pair and fiber-optic physical media interfaces, with speeds from 1 Mbit/s to 400 Gbit/s. The first introduction of twisted-pair CSMA/CD was StarLAN, standardized as 802.3 1BASE5. While 1BASE5 had little market penetration, it defined the physical apparatus (wire, plug/jack, pin-out, and wiring plan) that would be carried over to 10BASE-T through 10GBASE-T. The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. Fiber optic variants of Ethernet (that commonly use SFP modules) are also very popular in larger networks, offering high performance, better electrical isolation and longer distance (tens of kilometers with some versions). In general, network protocol stack software will work similarly on all varieties. Frame structure In IEEE 802.3, a datagram is called a packet or frame. Packet is used to describe the overall transmission unit and includes the preamble, start frame delimiter (SFD) and carrier extension (if present). The frame begins after the start frame delimiter with a frame header featuring source and destination MAC addresses and the EtherType field giving either the protocol type for the payload protocol or the length of the payload. The middle section of the frame consists of payload data including any headers for other protocols (for example, Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of data in transit. Notably, Ethernet packets have no time-to-live field, leading to possible problems in the presence of a switching loop. Autonegotiation Autonegotiation is the procedure by which two connected devices choose common transmission parameters, e.g. speed and duplex mode. Autonegotiation was initially an optional feature, first introduced with 100BASE-TX (the 1995 IEEE 802.3u Fast Ethernet standard), and is backward compatible with 10BASE-T. The specification was improved in the 1998 release of IEEE 802.3. Autonegotiation is mandatory for 1000BASE-T and faster. Error conditions Switching loop A switching loop or bridge loop occurs in computer networks when there is more than one Layer 2 (OSI model) path between two endpoints (e.g. multiple connections between two network switches or two ports on the same switch connected to each other). The loop creates broadcast storms: as broadcasts and multicasts are forwarded by switches out every port, the switch or switches repeatedly rebroadcast the broadcast messages, flooding the network. Since the Layer 2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever. A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops.
The solution is to allow physical loops, but create a loop-free logical topology using the SPB protocol or the older STP on the network switches. Jabber A node that is sending longer than the maximum transmission window for an Ethernet packet is considered to be jabbering. Depending on the physical topology, jabber detection and remedy differ somewhat. A MAU is required to detect and stop abnormally long transmission from the DTE (longer than 20–150 ms) in order to prevent permanent network disruption. On an electrically shared medium (10BASE5, 10BASE2, 1BASE5), jabber can only be detected by each end node, stopping reception. No further remedy is possible. A repeater/repeater hub uses a jabber timer that ends retransmission to the other ports when it expires. The timer runs for 25,000 to 50,000 bit times for 1 Mbit/s, 40,000 to 75,000 bit times for 10 and 100 Mbit/s, and 80,000 to 150,000 bit times for 1 Gbit/s (for example, 75,000 bit times at 10 Mbit/s corresponds to 7.5 ms). Jabbering ports are partitioned off the network until a carrier is no longer detected. End nodes utilizing a MAC layer will usually detect an oversized Ethernet frame and cease receiving. A bridge/switch will not forward the frame. A non-uniform frame size configuration in the network using jumbo frames may be detected as jabber by end nodes. Jumbo frames are not part of the official IEEE 802.3 Ethernet standard. A packet detected as jabber by an upstream repeater and subsequently cut off has an invalid frame check sequence and is dropped. Runt frames Runts are packets or frames smaller than the minimum allowed size. They are dropped and not propagated. See also 5-4-3 rule Chaosnet Ethernet Alliance Ethernet crossover cable Fiber media converter ISO/IEC 11801 Link Layer Discovery Protocol List of interface bit rates LocalTalk PHY Physical coding sublayer Power over Ethernet Point-to-Point Protocol over Ethernet (PPPoE) Sneakernet Wake-on-LAN (WoL) Notes References Further reading Version 1.0 of the DIX specification. External links IEEE 802.3 Ethernet working group IEEE 802.3-2015 – superseded IEEE 802.3-2018 standard American inventions IEEE standards Computer-related introductions in 1980
Ethernet
[ "Technology" ]
5,258
[ "Computer standards", "IEEE standards" ]
9,506
https://en.wikipedia.org/wiki/Edward%20Jenner
Edward Jenner (17 May 1749 – 26 January 1823) was an English physician and scientist who pioneered the concept of vaccines and created the smallpox vaccine, the world's first vaccine. The terms vaccine and vaccination are derived from Variolae vaccinae ('pustules of the cow'), the term devised by Jenner to denote cowpox. He used it in 1798 in the title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox. In the West, Jenner is often called "the father of immunology", and his work is said to have saved "more lives than any other man". In Jenner's time, smallpox killed around 10% of the global population, with the number as high as 20% in towns and cities where infection spread more easily. In 1821, he was appointed physician to King George IV, and was also made mayor of Berkeley and justice of the peace. He was a member of the Royal Society. In the field of zoology, he was among the first modern scholars to describe the brood parasitism of the cuckoo (Aristotle also noted this behaviour in his History of Animals). In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons. Early life Edward Jenner was born on 17 May 1749 in Berkeley, Gloucestershire, England as the eighth of nine children. His father, the Reverend Stephen Jenner, was the vicar of Berkeley, so Jenner received a strong basic education. Education and training When he was young, he went to school in Wotton-under-Edge at Katherine Lady Berkeley's School and in Cirencester. During this time, he was inoculated (by variolation) for smallpox, which had a lifelong effect upon his general health. At the age of 14, he was apprenticed for seven years to Daniel Ludlow, a surgeon of Chipping Sodbury, South Gloucestershire, where he gained most of the experience needed to become a surgeon himself. In 1770, aged 21, Jenner became apprenticed in surgery and anatomy under surgeon John Hunter and others at St George's Hospital, London. William Osler records that Hunter gave Jenner William Harvey's advice, well known in medical circles (and characteristic of the Age of Enlightenment), "Don't think; try." Hunter remained in correspondence with Jenner over natural history and proposed him for the Royal Society. Returning to his native countryside by 1773, Jenner became a successful family doctor and surgeon, practising on dedicated premises at Berkeley. In 1792, "with twenty years' experience of general practice and surgery, Jenner obtained the degree of MD from the University of St Andrews". Later life Jenner and others formed the Fleece Medical Society or Gloucestershire Medical Society, so called because it met in the parlour of the Fleece Inn, Rodborough, Gloucestershire. Members dined together and read papers on medical subjects. Jenner contributed papers on angina pectoris, ophthalmia, and cardiac valvular disease and commented on cowpox. He also belonged to a similar society which met in Alveston, near Bristol. He became a master mason on 30 December 1802, in Lodge of Faith and Friendship #449. From 1812 to 1813, he served as worshipful master of Royal Berkeley Lodge of Faith and Friendship. Zoology Jenner was elected fellow of the Royal Society in 1788, following his publication of a careful study of the previously misunderstood life of the nested cuckoo, a study that combined observation, experiment, and dissection. 
Jenner described how the newly hatched cuckoo pushed its host's eggs and fledgling chicks out of the nest (contrary to existing belief that the adult cuckoo did it). Having observed this behaviour, Jenner demonstrated an anatomical adaptation for it: the baby cuckoo has a depression in its back, not present after 12 days of life, that enables it to cup eggs and other chicks. The adult does not remain long enough in the area to perform this task. Jenner's findings were published in Philosophical Transactions of the Royal Society in 1788. "The singularity of its shape is well adapted to these purposes; for, different from other newly hatched birds, its back from the scapula downwards is very broad, with a considerable depression in the middle. This depression seems formed by nature for the design of giving a more secure lodgement to the egg of the Hedge-sparrow, or its young one, when the young Cuckoo is employed in removing either of them from the nest. When it is about twelve days old, this cavity is quite filled up, and then the back assumes the shape of nestling birds in general." Jenner's nephew assisted in the study. Jenner's understanding of the cuckoo's behaviour was not entirely believed until the artist Jemima Blackburn, a keen observer of birdlife, saw a blind nestling pushing out a host's egg. Blackburn's description and illustration were enough to convince Charles Darwin to revise a later edition of On the Origin of Species. Jenner's interest in zoology played a large role in his first experiment with inoculation. Not only did he have a profound understanding of human anatomy due to his medical training, but he also understood animal biology and its role in disease transmission across the boundary between animals and humans. At the time, there was no way of knowing how important this connection would be to the history and discovery of vaccinations. We see this connection now; many present-day vaccines include animal-derived components from cows, rabbits, and chicken eggs, which can be attributed to the work of Jenner and his cowpox/smallpox vaccination. Marriage and human medicine Jenner married Catherine Kingscote (who died in 1815 from tuberculosis) in March 1788. He might have met her while he and other fellows were experimenting with balloons. Jenner's trial balloon descended into Kingscote Park, Gloucestershire, owned by Catherine's father Anthony Kingscote. They had three children: Edward Robert (1789–1810), Robert Fitzharding (1792–1854) and Catherine (1794–1833). He earned his MD from the University of St Andrews in 1792. He is credited with advancing the understanding of angina pectoris. In his correspondence with Heberden, he wrote: "How much the heart must suffer from the coronary arteries not being able to perform their functions".
Voltaire also states that the Circassians used the inoculation from times immemorial, and the custom may have been borrowed by the Turks from the Circassians. In 1766, Daniel Bernoulli analysed smallpox morbidity and mortality data to demonstrate the efficacy of inoculation. By 1768, English physician John Fewster had realised that prior infection with cowpox rendered a person immune to smallpox. In the years following 1770, at least five investigators in England and Germany (Sevel, Jensen, Jesty 1774, Rendell, Plett 1791) successfully tested in humans a cowpox vaccine against smallpox. For example, Dorset farmer Benjamin Jesty successfully vaccinated and presumably induced immunity with cowpox in his wife and two children during a smallpox epidemic in 1774, but it was not until Jenner's work that the procedure became widely understood. Jenner may have been aware of Jesty's procedures and success. A similar observation was later made in France by Jacques Antoine Rabaut-Pommier in 1780. Jenner postulated that the pus in the blisters of individuals affected by cowpox (a disease similar to smallpox, but much less virulent) protected them from smallpox. On 14 May 1796, Jenner tested his hypothesis by inoculating James Phipps, an eight-year-old boy who was the son of Jenner's gardener. He scraped pus from cowpox blisters on the hands of Sarah Nelmes, a milkmaid who had caught cowpox from a cow called Blossom, whose hide now hangs on the wall of the St. George's Medical School library (now in Tooting). Phipps was the 17th case described in Jenner's first paper on vaccination. Jenner inoculated Phipps in both arms that day, subsequently producing in Phipps a fever and some uneasiness, but no full-blown infection. Later, he injected Phipps with variolous material, the routine method of immunization at that time. No disease followed. The boy was later challenged with variolous material and again showed no sign of infection. No unexpected side effects occurred, and neither Phipps nor any other recipients underwent any future 'breakthrough' cases. Jenner's biographer John Baron would later speculate that Jenner understood one could be inoculated against smallpox by being exposed to cowpox by observing the unblemished complexion of milkmaids, rather than building on the work of his predecessors. The milkmaids story is still widely repeated even though it appears to be a myth. Donald Hopkins has written, "Jenner's unique contribution was not that he inoculated a few persons with cowpox, but that he then proved [by subsequent challenges] that they were immune to smallpox. Moreover, he demonstrated that the protective cowpox pus could be effectively inoculated from person to person, not just directly from cattle." Jenner successfully tested his hypothesis on 23 additional subjects. Jenner continued his research and reported it to the Royal Society, which did not publish the initial paper. After revisions and further investigations, he published his findings on the 23 cases, including his 11-month-old son Robert. Some of his conclusions were correct, some erroneous; modern microbiological and microscopic methods would make his studies easier to reproduce. The medical establishment deliberated at length over his findings before accepting them. Eventually, vaccination was accepted, and in 1840, the British government banned variolation (the use of smallpox material to induce immunity) and provided vaccination using cowpox free of charge (see Vaccination Act).
The success of his discovery soon spread around Europe and was used en masse in the Spanish Balmis Expedition (1803–1806), a three-year-long mission to the Americas, the Philippines, Macao, China, led by Francisco Javier de Balmis with the aim of giving thousands the smallpox vaccine. The expedition was successful, and Jenner wrote: "I don't imagine the annals of history furnish an example of philanthropy so noble, so extensive as this". Napoleon, who at the time was at war with Britain, had all his French troops vaccinated, awarded Jenner a medal, and at the request of Jenner, he released two English prisoners of war and permitted their return home. Napoleon remarked he could not "refuse anything to one of the greatest benefactors of mankind". Jenner's continuing work on vaccination prevented him from continuing his ordinary medical practice. He was supported by his colleagues and the King in petitioning Parliament, and was granted £10,000 in 1802 for his work on vaccination. In 1807, he was granted another £20,000 after the Royal College of Physicians confirmed the widespread efficacy of vaccination. Later life Jenner was later elected a foreign honorary member of the American Academy of Arts and Sciences in 1802, a member of the American Philosophical Society in 1804, and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1803 in London, he became president of the Jennerian Society, concerned with promoting vaccination to eradicate smallpox. The Jennerian ceased operations in 1809. Jenner became a member of the Medical and Chirurgical Society on its founding in 1805 (now the Royal Society of Medicine) and presented several papers there. In 1808, with government aid, the National Vaccine Establishment was founded, but Jenner felt dishonoured by the men selected to run it and resigned his directorship. Returning to London in 1811, Jenner observed a significant number of cases of smallpox after vaccination. He found that in these cases the severity of the illness was notably diminished by previous vaccination. In 1821, he was appointed physician extraordinary to King George IV, and was also made mayor of Berkeley and magistrate (justice of the peace). He continued to investigate natural history, and in 1823, the last year of his life, he presented his "Observations on the Migration of Birds" to the Royal Society. Jenner was a Freemason. Death Jenner was found in a state of apoplexy on 25 January 1823, with his right side paralysed. He did not recover and died the next day of an apparent stroke, his second, on 26 January 1823, aged 73. He was buried in the family vault at the Church of St Mary, Berkeley. Religious views Neither fanatic nor lax, Jenner was a Christian who in his personal correspondence showed himself quite spiritual. Some days before his death, he stated to a friend: "I am not surprised that men are not grateful to me; but I wonder that they are not grateful to God for the good which He has made me the instrument of conveying to my fellow creatures". Legacy In 1980, the World Health Organization declared smallpox an eradicated disease. This was the result of coordinated public health efforts, but vaccination was an essential component. Although the disease was declared eradicated, some pus samples still remain in laboratories in Centers for Disease Control and Prevention in Atlanta in the US, and in State Research Center of Virology and Biotechnology VECTOR in Koltsovo, Novosibirsk Oblast, Russia. 
Jenner's vaccine laid the foundation for contemporary discoveries in immunology. In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons following a UK-wide vote. Commemorated on postage stamps issued by the Royal Mail, in 1999 he featured in their World Changers issue along with Charles Darwin, Michael Faraday and Alan Turing. The lunar crater Jenner is named in his honour. Monuments and buildings Jenner's house in the village of Berkeley, Gloucestershire, is now a small museum, housing, among other things, the horns of the cow, Blossom. A statue of Jenner by Robert William Sievier was erected in the nave of Gloucester Cathedral. Another statue was erected in Trafalgar Square and later moved to Kensington Gardens. Near the Gloucestershire village of Uley, Downham Hill is locally known as "Smallpox Hill" for its possible role in Jenner's studies of the disease. London's St. George's Hospital Medical School has a Jenner Pavilion, where his bust may be found. A group of villages in Somerset County, Pennsylvania, United States, was named in Jenner's honour by early 19th-century English settlers, including Jenners, Jenner Township, Jenner Crossroads, and Jennerstown, Pennsylvania. Jennersville, Pennsylvania, is located in Chester County. The Edward Jenner Institute for Vaccine Research is an infectious disease vaccine research centre, also the Jenner Institute, part of the University of Oxford. A section at Gloucestershire Royal Hospital is known as the Edward Jenner Unit; it is where blood is drawn. A ward at Northwick Park Hospital is called Jenner Ward. Jenner Gardens at Cheltenham, Gloucestershire, opposite one of the scientist's former offices, is a small garden and cemetery. A statue of Jenner was erected at the Tokyo National Museum in 1896 to commemorate the centenary of Jenner's discovery of vaccination. A monument outside the walls of the upper town of Boulogne sur Mer, France. A street in Stoke Newington, north London: Jenner Road, N16 Built around 1970, The Jenner Health Centre, 201 Stanstead Road, Forest Hill, London, SE23 1HU Jenner's name is featured on the Frieze of the London School of Hygiene & Tropical Medicine. Twenty-three names of public health and tropical medicine pioneers were chosen to feature on the Keppel Street building when it was constructed in 1926. Minor planet 5168 Jenner is named in his honour. Publications 1798 An Inquiry Into the Causes and Effects of the Variolæ Vaccinæ 1799 Further Observations on the Variolæ Vaccinæ, or Cow-Pox 1800 A Continuation of Facts and Observations relative to the Variolæ Vaccinæ 40pp. 1801 The Origin of the Vaccine Inoculation See also History of science Koyama Shisei, Japanese vaccinologist (1807–1862) who improved upon the Jennerian smallpox vaccine References Further reading Papers at the Royal College of Physicians Fisher, Richard B., Edward Jenner 1749–1823, Andre Deutsch, London, 1991. Bennett, Michael, War against smallpox: Edward Jenner and the global spread of vaccination, Cambridge University Press, Cambridge, 2020. Ordnance Survey showing reference to Smallpox Hill: http://explore.ordnancesurvey.co.uk/os_routes/show/1539 LeFanu WR. 1951 A bio-bibliography of Edward Jenner, 1749–1823. London: Harvey and Blythe; 1951. pp. 103–108.
External links The Three Original Publications on Vaccination Against Smallpox A digitized copy of An inquiry into the causes and effects of the variola vaccine (1798), from the Posner Memorial Collection at Carnegie Mellon Dr Jenner's House, Museum and Garden, Berkeley The Evolution of Modern Medicine. Osler, W (FTP) 1749 births 1823 deaths 18th-century English medical doctors 19th-century English medical doctors Alumni of St George's, University of London Alumni of the University of St Andrews British immunologists English biologists English Freemasons English justices of the peace Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society History of medicine in the United Kingdom Mayors of places in Gloucestershire Members of the Royal Swedish Academy of Sciences People educated at Cirencester Grammar School People educated at Katharine Lady Berkeley's School People from Berkeley, Gloucestershire Smallpox vaccines Smallpox Vaccinologists
Edward Jenner
[ "Biology" ]
3,879
[ "Vaccination", "Vaccinologists" ]
9,510
https://en.wikipedia.org/wiki/Electronic%20music
Electronic music broadly is a group of music genres that employ electronic musical instruments, circuitry-based music technology and software, or general-purpose electronics (such as personal computers) in its creation. It includes both music made using electronic and electromechanical means (electroacoustic music). Pure electronic instruments depended entirely on circuitry-based sound generation, for instance using devices such as an electronic oscillator, theremin, or synthesizer. Electromechanical instruments can have mechanical parts such as strings, hammers, and electric elements including magnetic pickups, power amplifiers and loudspeakers. Such electromechanical devices include the telharmonium, Hammond organ, electric piano and electric guitar. The first electronic musical devices were developed at the end of the 19th century. During the 1920s and 1930s, some electronic instruments were introduced and the first compositions featuring them were written. By the 1940s, magnetic audio tape allowed musicians to tape sounds and then modify them by changing the tape speed or direction, leading to the development of electroacoustic tape music in the 1940s, in Egypt and France. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds. Music produced solely from electronic generators was first produced in Germany in 1953 by Karlheinz Stockhausen. Electronic music was also created in Japan and the United States beginning in the 1950s and algorithmic composition with computers was first demonstrated in the same decade. During the 1960s, digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop, and EDM. In the early 1980s mass-produced digital synthesizers, such as the Yamaha DX7, became popular, and MIDI (Musical Instrument Digital Interface) was developed. In the same decade, with a greater reliance on synthesizers and the adoption of programmable drum machines, electronic popular music came to the fore. During the 1990s, with the proliferation of increasingly affordable music technology, electronic music production became an established part of popular culture. In Berlin starting in 1989, the Love Parade became the largest street party with over 1 million visitors, inspiring other such popular celebrations of electronic music. Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Pop electronic music is most recognizable in its 4/4 form and more connected with the mainstream than preceding forms which were popular in niche markets. Origins: late 19th century to early 20th century At the turn of the 20th century, experimentation with emerging electronics led to the first electronic musical instruments. These initial inventions were not sold, but were instead used in demonstrations and public performances. The audiences were presented with reproductions of existing music instead of new compositions for the instruments. 
While some were considered novelties and produced simple tones, the Telharmonium synthesized the sound of several orchestral instruments with reasonable precision. It achieved viable public interest and made commercial progress into streaming music through telephone networks. Critics of musical conventions at the time saw promise in these developments. Ferruccio Busoni encouraged the composition of microtonal music allowed for by electronic instruments. He predicted the use of machines in future music, writing the influential Sketch of a New Esthetic of Music (1907). Futurists such as Francesco Balilla Pratella and Luigi Russolo began composing music with acoustic noise to evoke the sound of machinery. They predicted expansions in timbre allowed for by electronics in the influential manifesto The Art of Noises (1913). Early compositions Developments of the vacuum tube led to electronic instruments that were smaller, amplified, and more practical for performance. In particular, the theremin, ondes Martenot and trautonium were commercially produced by the early 1930s. From the late 1920s, the increased practicality of electronic instruments influenced composers such as Joseph Schillinger and Maria Schuppel to adopt them. They were typically used within orchestras, and most composers wrote parts for the theremin that could otherwise be performed with string instruments. Avant-garde composers criticized the predominant use of electronic instruments for conventional purposes. The instruments offered expansions in pitch resources that were exploited by advocates of microtonal music such as Charles Ives, Dimitrios Levidis, Olivier Messiaen and Edgard Varèse. Further, Percy Grainger used the theremin to abandon fixed tonation entirely, while Russian composers such as Gavriil Popov treated it as a source of noise in otherwise-acoustic noise music. Recording experiments Developments in early recording technology paralleled that of electronic instruments. The first means of recording and reproducing audio was invented in the late 19th century with the mechanical phonograph. Record players became a common household item, and by the 1920s composers were using them to play short recordings in performances. The introduction of electrical recording in 1925 was followed by increased experimentation with record players. Paul Hindemith and Ernst Toch composed several pieces in 1930 by layering recordings of instruments and vocals at adjusted speeds. Influenced by these techniques, John Cage composed Imaginary Landscape No. 1 in 1939 by adjusting the speeds of recorded tones. Composers began to experiment with newly developed sound-on-film technology. Recordings could be spliced together to create sound collages, such as those by Tristan Tzara, Kurt Schwitters, Filippo Tommaso Marinetti, Walter Ruttmann and Dziga Vertov. Further, the technology allowed sound to be graphically created and modified. These techniques were used to compose soundtracks for several films in Germany and Russia, in addition to the popular Dr. Jekyll and Mr. Hyde in the United States. Experiments with graphical sound were continued by Norman McLaren from the late 1930s. Development: 1940s to 1950s Electroacoustic tape music The first practical audio tape recorder was unveiled in 1935. Improvements to the technology were made using the AC biasing technique, which significantly improved recording fidelity. As early as 1942, test recordings were being made in stereo. 
Although these developments were initially confined to Germany, recorders and tapes were brought to the United States following the end of World War II. These were the basis for the first commercially produced tape recorder in 1948.
In 1944, before the use of magnetic tape for compositional purposes, Egyptian composer Halim El-Dabh, while still a student in Cairo, used a cumbersome wire recorder to record sounds of an ancient zaar ceremony. Using facilities at the Middle East Radio studios, El-Dabh processed the recorded material using reverberation, echo, voltage controls and re-recording. What resulted is believed to be the earliest tape music composition. The resulting work was entitled The Expression of Zaar, and it was presented in 1944 at an art gallery event in Cairo. While his initial experiments in tape-based composition were not widely known outside of Egypt at the time, El-Dabh is also known for his later work in electronic music at the Columbia-Princeton Electronic Music Center in the late 1950s.
Musique concrète
Following his work with the Studio d'Essai at Radiodiffusion Française (RDF) during the early 1940s, Pierre Schaeffer is credited with originating the theory and practice of musique concrète. In the late 1940s, experiments in sound-based composition using shellac record players were first conducted by Schaeffer. In 1950, the techniques of musique concrète were expanded when magnetic tape machines were used to explore sound manipulation practices such as speed variation (pitch shift) and tape splicing.
On 5 October 1948, RDF broadcast Schaeffer's Étude aux chemins de fer. This was the first "movement" of Cinq études de bruits, and marked the beginning of studio realizations and musique concrète (or acousmatic art). Schaeffer employed a disc cutting lathe, four turntables, a four-channel mixer, filters, an echo chamber, and a mobile recording unit. Not long after this, Pierre Henry began collaborating with Schaeffer, a partnership that would have profound and lasting effects on the direction of electronic music. Another associate of Schaeffer, Edgard Varèse, began work on Déserts, a work for chamber orchestra and tape. The tape parts were created at Pierre Schaeffer's studio and were later revised at Columbia University.
In 1950, Schaeffer gave the first public (non-broadcast) concert of musique concrète at the École Normale de Musique de Paris. "Schaeffer used a PA system, several turntables, and mixers. The performance did not go well, as creating live montages with turntables had never been done before." Later that same year, Pierre Henry collaborated with Schaeffer on Symphonie pour un homme seul (1950), the first major work of musique concrète. In Paris in 1951, in what was to become an important worldwide trend, RTF established the first studio for the production of electronic music. Also in 1951, Schaeffer and Henry produced an opera, Orpheus, for concrete sounds and voices.
By 1951 the work of Schaeffer, composer-percussionist Pierre Henry, and sound engineer Jacques Poullin had received official recognition, and the Groupe de Recherches de Musique Concrète, Club d'Essai de la Radiodiffusion-Télévision Française was established at RTF in Paris, the ancestor of the ORTF.
Elektronische Musik, Germany
Karlheinz Stockhausen worked briefly in Schaeffer's studio in 1952, and afterward for many years at the WDR Cologne's Studio for Electronic Music.
1954 saw the advent of what would now be considered authentic electric plus acoustic compositions—acoustic instrumentation augmented/accompanied by recordings of manipulated or electronically generated sound. Three major works were premiered that year: Varèse's Déserts, for chamber ensemble and tape sounds, and two works by Otto Luening and Vladimir Ussachevsky: Rhapsodic Variations for the Louisville Symphony and A Poem in Cycles and Bells, both for orchestra and tape. Because Varèse had been working at Schaeffer's studio, the tape part of his work contains far more concrète sounds than electronic ones. "A group made up of wind instruments, percussion and piano alternate with the mutated sounds of factory noises and ship sirens and motors, coming from two loudspeakers."
At the German premiere of Déserts in Hamburg, which was conducted by Bruno Maderna, the tape controls were operated by Karlheinz Stockhausen. The title Déserts suggested to Varèse not only "all physical deserts (of sand, sea, snow, of outer space, of empty streets), but also the deserts in the mind of man; not only those stripped aspects of nature that suggest bareness, aloofness, timelessness, but also that remote inner space no telescope can reach, where man is alone, a world of mystery and essential loneliness."
In Cologne, what would become the most famous electronic music studio in the world was officially opened at the radio studios of the NWDR in 1953, though it had been in the planning stages as early as 1950, and early compositions were made and broadcast in 1951. The brainchild of Werner Meyer-Eppler, Robert Beyer, and Herbert Eimert (who became its first director), the studio was soon joined by Karlheinz Stockhausen and Gottfried Michael Koenig. In his 1949 thesis Elektronische Klangerzeugung: Elektronische Musik und Synthetische Sprache, Meyer-Eppler had conceived the idea of synthesizing music entirely from electronically produced signals; in this way, elektronische Musik was sharply differentiated from French musique concrète, which used sounds recorded from acoustical sources. In 1953, Stockhausen composed his Studie I, followed in 1954 by Elektronische Studie II—the first electronic piece to be published as a score.
In 1955, more experimental and electronic studios began to appear. Notable were the creation of the Studio di fonologia musicale di Radio Milano, a studio at the NHK in Tokyo founded by Toshiro Mayuzumi, and the Philips studio at Eindhoven, the Netherlands, which moved to the University of Utrecht as the Institute of Sonology in 1960.
"With Stockhausen and Mauricio Kagel in residence, [Cologne] became a year-round hive of charismatic avant-gardism." Stockhausen twice combined electronically generated sounds with relatively conventional orchestras—in Mixtur (1964) and Hymnen, dritte Region mit Orchester (1967). Stockhausen stated that his listeners had told him his electronic music gave them an experience of "outer space", sensations of flying, or being in a "fantastic dream world".
United States
In the United States, electronic music was being created as early as 1939, when John Cage published Imaginary Landscape, No. 1, using two variable-speed turntables, frequency recordings, muted piano, and cymbal, but no electronic means of production. Cage composed five more "Imaginary Landscapes" between 1942 and 1952 (one withdrawn), mostly for percussion ensemble, though No. 4 is for twelve radios and No. 5, written in 1952, uses 42 recordings and is to be realized as a magnetic tape.
According to Otto Luening, Cage also performed Williams Mix at Donaueschingen in 1954, using eight loudspeakers, three years after his alleged collaboration. Williams Mix was a success at the Donaueschingen Festival, where it made a "strong impression".
The Music for Magnetic Tape Project was formed by members of the New York School (John Cage, Earle Brown, Christian Wolff, David Tudor, and Morton Feldman), and lasted three years, until 1954. Cage wrote of this collaboration: "In this social darkness, therefore, the work of Earle Brown, Morton Feldman, and Christian Wolff continues to present a brilliant light, for the reason that at the several points of notation, performance, and audition, action is provocative."
Cage completed Williams Mix in 1953 while working with the Music for Magnetic Tape Project. The group had no permanent facility and had to rely on borrowed time in commercial sound studios, including the studio of Bebe and Louis Barron.
Columbia-Princeton Center
In the same year, Columbia University purchased its first tape recorder—a professional Ampex machine—to record concerts. Vladimir Ussachevsky, who was on the music faculty of Columbia University, was placed in charge of the device and almost immediately began experimenting with it. Herbert Russcol writes: "Soon he was intrigued with the new sonorities he could achieve by recording musical instruments and then superimposing them on one another." Ussachevsky said later: "I suddenly realized that the tape recorder could be treated as an instrument of sound transformation."
On Thursday, 8 May 1952, Ussachevsky presented several demonstrations of tape music/effects that he created at his Composers Forum, in the McMillin Theatre at Columbia University. These included Transposition, Reverberation, Experiment, Composition, and Underwater Valse. In an interview, he stated: "I presented a few examples of my discovery in a public concert in New York together with other compositions I had written for conventional instruments." Otto Luening, who had attended this concert, remarked: "The equipment at his disposal consisted of an Ampex tape recorder . . . and a simple box-like device designed by the brilliant young engineer, Peter Mauzey, to create feedback, a form of mechanical reverberation. Other equipment was borrowed or purchased with personal funds."
Just three months later, in August 1952, Ussachevsky traveled to Bennington, Vermont, at Luening's invitation to present his experiments. There, the two collaborated on various pieces. Luening described the event: "Equipped with earphones and a flute, I began developing my first tape-recorder composition. Both of us were fluent improvisors and the medium fired our imaginations." They played some early pieces informally at a party, where "a number of composers almost solemnly congratulated us saying, 'This is it' ('it' meaning the music of the future)."
Word quickly reached New York City. Oliver Daniel telephoned and invited the pair to "produce a group of short compositions for the October concert sponsored by the American Composers Alliance and Broadcast Music, Inc., under the direction of Leopold Stokowski at the Museum of Modern Art in New York. After some hesitation, we agreed. . . . Henry Cowell placed his home and studio in Woodstock, New York, at our disposal. With the borrowed equipment in the back of Ussachevsky's car, we left Bennington for Woodstock and stayed two weeks. . . .
In late September 1952, the travelling laboratory reached Ussachevsky's living room in New York, where we eventually completed the compositions."
Two months later, on 28 October, Vladimir Ussachevsky and Otto Luening presented the first Tape Music concert in the United States. The concert included Luening's Fantasy in Space (1952)—"an impressionistic virtuoso piece" using manipulated recordings of flute—and Low Speed (1952), an "exotic composition that took the flute far below its natural range." Both pieces were created at the home of Henry Cowell in Woodstock, New York. After several concerts caused a sensation in New York City, Ussachevsky and Luening were invited onto a live broadcast of NBC's Today Show to do an interview demonstration—the first televised electroacoustic performance. Luening described the event: "I improvised some [flute] sequences for the tape recorder. Ussachevsky then and there put them through electronic transformations."
The score for Forbidden Planet, by Louis and Bebe Barron, was entirely composed using custom-built electronic circuits and tape recorders in 1956 (but no synthesizers in the modern sense of the word).
USSR
In 1929, Nikolai Obukhov invented the "sounding cross" (la croix sonore), comparable in principle to the theremin. In the 1930s, Nikolai Ananyev invented "sonar", engineer Alexander Gurov the neoviolena, and I. Ilsarov the ilston; A. Ivanov also developed an electronic instrument of his own. Composer and inventor Arseny Avraamov was engaged in scientific work on sound synthesis and conducted a number of experiments that would later form the basis of Soviet electro-musical instruments.
In 1956, Vyacheslav Mescherin created the Ensemble of Electro-Musical Instruments, which used theremins, electric harps, electric organs, and the first synthesizer in the USSR, the "Ekvodin", and also created the first Soviet reverb machine. The style in which Mescherin's ensemble played is known as "Space age pop". In 1957, engineer Igor Simonov assembled a working model of a noise recorder (electroeoliphone), with the help of which it was possible to extract various timbres and consonances of a noise nature. In 1958, Evgeny Murzin designed the ANS synthesizer, one of the world's first polyphonic musical synthesizers.
Founded by Murzin in 1966, the Moscow Experimental Electronic Music Studio became the base for a new generation of experimenters – Eduard Artemyev, Sándor Kallós, Sofia Gubaidulina, Alfred Schnittke, and Vladimir Martynov. By the end of the 1960s, musical groups playing light electronic music appeared in the USSR. At the state level, this music began to be used to attract foreign tourists to the country and for broadcasting to foreign countries. In the mid-1970s, composer Alexander Zatsepin designed an "orchestrolla" – a modification of the mellotron.
The Baltic Soviet Republics also had their own pioneers: in the Estonian SSR, Sven Grünberg; in the Lithuanian SSR, Giedrius Kuprevičius; and in the Latvian SSR, Opus and Zodiac.
Australia
The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the "Colonel Bogey March", of which no known recordings exist, only an accurate reconstruction. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice.
The oldest known recordings of computer-generated music were played in the autumn of 1951 by the Ferranti Mark 1 computer, a commercial version of the Baby machine from the University of Manchester. The music program was written by Christopher Strachey.
Japan
Among the earliest electric musical instruments in Japan was the Yamaha Magna Organ, an electroacoustic instrument built in 1935. After World War II, Japanese composers such as Minao Shibata began to learn of the development of electronic musical instruments in other countries. By the late 1940s, Japanese composers began experimenting with electronic music, and institutional sponsorship enabled them to experiment with advanced equipment. Their infusion of Asian music into the emerging genre would eventually support Japan's prominence in the development of music technology several decades later.
Following the foundation of the electronics company Sony in 1946, composers Toru Takemitsu and Minao Shibata independently explored possible uses of electronic technology to produce music. Takemitsu had ideas similar to musique concrète, of which he was unaware, while Shibata foresaw the development of synthesizers and predicted a drastic change in music. Sony began producing popular magnetic tape recorders for government and public use.
The avant-garde collective Jikken Kōbō (Experimental Workshop), founded in 1950, was offered access to emerging audio technology by Sony. The company hired Toru Takemitsu to demonstrate its tape recorders with compositions and performances of electronic tape music. The first electronic tape pieces by the group were "Toraware no Onna" ("Imprisoned Woman") and "Piece B", composed in 1951 by Kuniharu Akiyama. Many of the electroacoustic tape pieces they produced were used as incidental music for radio, film, and theatre. They also held concerts employing a slide show synchronized with a recorded soundtrack. Composers outside of the Jikken Kōbō, such as Yasushi Akutagawa, Saburo Tominaga, and Shirō Fukai, were also experimenting with radiophonic tape music between 1952 and 1953.
Musique concrète was introduced to Japan by Toshiro Mayuzumi, who was influenced by a Pierre Schaeffer concert. From 1952, he composed tape music pieces for a comedy film, a radio broadcast, and a radio drama. However, Schaeffer's concept of the sound object was not influential among Japanese composers, who were mainly interested in overcoming the restrictions of human performance. This led several Japanese electroacoustic musicians to make use of serialism and twelve-tone techniques, evident in Yoshirō Irino's 1951 dodecaphonic piece "Concerto da Camera", in the organization of electronic sounds in Mayuzumi's "X, Y, Z for Musique Concrète", and later in Shibata's electronic music by 1956.
Modelled on the NWDR studio in Cologne, an NHK electronic music studio was established in Tokyo by Mayuzumi in 1954, and it became one of the world's leading electronic music facilities. The studio was equipped with technologies such as tone-generating and audio processing equipment, recording and radiophonic equipment, ondes Martenots, Monochords and Melochords, sine-wave oscillators, tape recorders, ring modulators, band-pass filters, and four- and eight-channel mixers. Musicians associated with the studio included Toshiro Mayuzumi, Minao Shibata, Joji Yuasa, Toshi Ichiyanagi, and Toru Takemitsu.
The studio's first electronic compositions were completed in 1955, including Mayuzumi's five-minute pieces "Studie I: Music for Sine Wave by Proportion of Prime Number", "Music for Modulated Wave by Proportion of Prime Number" and "Invention for Square Wave and Sawtooth Wave", produced using the studio's various tone-generating capabilities, and Shibata's 20-minute stereo piece "Musique Concrète for Stereophonic Broadcast".
Mid-to-late 1950s
The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. "... Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly." Later developments included the work of Max Mathews at Bell Laboratories, who developed the influential MUSIC I program in 1957, one of the first computer programs to play electronic music. Vocoder technology was also a major development in this early era.
In 1956, Stockhausen composed Gesang der Jünglinge, the first major work of the Cologne studio, based on a text from the Book of Daniel. An important technological development of that year was the invention of the Clavivox synthesizer by Raymond Scott, with subassembly by Robert Moog.
In 1957, Kid Baltan (Dick Raaymakers) and Tom Dissevelt released their debut album, Song of the Second Moon, recorded at the Philips studio in the Netherlands. The public remained interested in the new sounds being created around the world, as can be deduced from the inclusion of Varèse's Poème électronique, which was played through over four hundred loudspeakers at the Philips Pavilion of the 1958 Brussels World Fair.
That same year, Mauricio Kagel, an Argentine composer, composed Transición II. The work was realized at the WDR studio in Cologne. Two musicians performed on the piano, one in the traditional manner, the other playing on the strings, frame, and case. Two other performers used tape to unite the live sounds with their future and their past: prerecorded material to be heard later in the performance, and recordings made earlier in it.
In 1958, Columbia-Princeton developed the RCA Mark II Sound Synthesizer, the first programmable synthesizer. Prominent composers such as Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Charles Wuorinen, Halim El-Dabh, Bülent Arel and Mario Davidovsky used the RCA Synthesizer extensively in various compositions. One of the most influential composers associated with the early years of the studio was Egypt's Halim El-Dabh who, after having developed the earliest known electronic tape music in 1944, became more famous for Leiyla and the Poet, a 1959 series of electronic compositions that stood out for its immersion and fusion of electronic and folk music, in contrast to the more mathematical approach used by serial composers of the time such as Babbitt. El-Dabh's Leiyla and the Poet, released as part of the album Columbia-Princeton Electronic Music Center in 1961, would be cited as a strong influence by a number of musicians, ranging from Neil Rolnick, Charles Amirkhanian and Alice Shields to rock musicians Frank Zappa and The West Coast Pop Art Experimental Band.
Following the emergence of differences within the GRMC (Groupe de Recherches de Musique Concrète), Pierre Henry, Philippe Arthuys, and several of their colleagues resigned in April 1958.
Schaeffer created a new collective, called Groupe de Recherches Musicales (GRM), and set about recruiting new members including Luc Ferrari, Beatriz Ferreyra, François-Bernard Mâche, Iannis Xenakis, Bernard Parmegiani, and Mireille Chamass-Kyrou. Later arrivals included Ivo Malec, Philippe Carson, Romuald Vandelle, Edgardo Canton and François Bayle.
Expansion: 1960s
These were fertile years for electronic music—not just for academia, but for independent artists as synthesizer technology became more accessible. By this time, a strong community of composers and musicians working with new sounds and instruments was established and growing. 1960 witnessed the composition of Otto Luening's Gargoyles for violin and tape as well as the premiere of Karlheinz Stockhausen's Kontakte for electronic sounds, piano, and percussion. The piece existed in two versions—one for 4-channel tape, and the other for tape with human performers. "In Kontakte, Stockhausen abandoned traditional musical form based on linear development and dramatic climax. This new approach, which he termed 'moment form', resembles the 'cinematic splice' techniques in early twentieth-century film."
The theremin had been in use since the 1920s, but it attained a degree of popular recognition through its use in science-fiction film soundtrack music in the 1950s (e.g., Bernard Herrmann's classic score for The Day the Earth Stood Still).
In the UK in this period, the BBC Radiophonic Workshop (established in 1958) came to prominence, thanks in large measure to its work on the BBC science-fiction series Doctor Who. One of the most influential British electronic artists in this period was Workshop staffer Delia Derbyshire, who is now famous for her 1963 electronic realisation of the iconic Doctor Who theme, composed by Ron Grainer. Other composers of electronic music active in the UK included Ernest Berk (who established his first studio in 1955), Tristram Cary, Hugh Davies, Brian Dennis, George Newson, Daphne Oram and Peter Zinovieff.
During the time of his UNESCO fellowship for studies in electronic music (1958), Israeli composer Josef Tal went on a study tour of the US and Canada. He summarized his conclusions in two articles that he submitted to UNESCO. In 1961, he established the Centre for Electronic Music in Israel at The Hebrew University of Jerusalem. In 1962, Canadian composer Hugh Le Caine arrived in Jerusalem to install his Creative Tape Recorder in the centre. In the 1990s, Tal conducted, together with Dr. Shlomo Markel, in cooperation with the Technion – Israel Institute of Technology and the Volkswagen Foundation, a research project ('Talmark') aimed at the development of a novel musical notation system for electronic music.
Milton Babbitt composed his first electronic work using the synthesizer—his Composition for Synthesizer (1961)—which he created using the RCA synthesizer at the Columbia-Princeton Electronic Music Center.
Collaborations also occurred across oceans and continents. In 1961, American composer Vladimir Ussachevsky invited Edgard Varèse to the Columbia-Princeton Studio (CPEMC). Upon arrival, Varèse embarked upon a revision of his work Déserts. He was assisted by Mario Davidovsky and Bülent Arel.
The intense activity occurring at CPEMC and elsewhere inspired the establishment of the San Francisco Tape Music Center in 1963 by Morton Subotnick, with additional members Pauline Oliveros, Ramon Sender, Anthony Martin, and Terry Riley.
Later, the Center moved to Mills College, directed by Pauline Oliveros, and has since been renamed the Center for Contemporary Music.
Pietro Grossi, a cellist and composer born in Venice in 1917, was an Italian pioneer of computer composition and tape music who first experimented with electronic techniques in the early 1960s. He founded the S 2F M (Studio di Fonologia Musicale di Firenze) in 1963 to experiment with electronic sound and composition.
Simultaneously in San Francisco, composer Stan Shaff and equipment designer Doug McEachern presented the first "Audium" concert at San Francisco State College (1962), followed by work at the San Francisco Museum of Modern Art (SFMOMA, 1963), conceived of as the controlled movement of sound in space over time. Twelve speakers surrounded the audience, and four speakers were mounted on a rotating, mobile-like construction above. In an SFMOMA performance the following year (1964), the San Francisco Chronicle music critic Alfred Frankenstein commented, "the possibilities of the space-sound continuum have seldom been so extensively explored". In 1967, the first Audium, a "sound-space continuum", opened, holding weekly performances through 1970. In 1975, enabled by seed money from the National Endowment for the Arts, a new Audium opened, designed floor to ceiling for spatial sound composition and performance. "In contrast, there are composers who manipulated sound space by locating multiple speakers at various locations in a performance space and then switching or panning the sound between the sources. In this approach, the composition of spatial manipulation is dependent on the location of the speakers and usually exploits the acoustical properties of the enclosure. Examples include Varese's Poeme Electronique (tape music performed in the Philips Pavilion of the 1958 World Fair, Brussels) and Stan Schaff's Audium installation, currently active in San Francisco." Through weekly programs (over 4,500 in 40 years), Shaff "sculpts" sound, performing now-digitized spatial works live through 176 speakers.
Jean-Jacques Perrey experimented with Pierre Schaeffer's techniques on tape loops and was among the first to use the recently released Moog synthesizer developed by Robert Moog. With this instrument he composed works both with Gershon Kingsley and solo. A well-known example of the use of Moog's full-sized Moog modular synthesizer is the 1968 Switched-On Bach album by Wendy Carlos, which triggered a craze for synthesizer music.
In 1969, David Tudor brought a Moog modular synthesizer and Ampex tape machines to the National Institute of Design in Ahmedabad with the support of the Sarabhai family, forming the foundation of India's first electronic music studio. Here a group of composers including Jinraj Joshipura, Gita Sarabhai, SC Sharma, IS Mathur and Atul Desai developed experimental sound compositions between 1969 and 1973.
Computer music
Musical melodies were first generated by the computer CSIRAC in Australia in 1950. There were newspaper reports from America and England that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support the reports (some of which were obviously speculative). People evidently speculated about computers playing music, possibly because computers make noises, but there is no evidence that any actually did.
As noted above, the world's first computer to play music was CSIRAC, designed and built by Trevor Pearcey and Maston Beard and programmed by mathematician Geoff Hill from the very early 1950s; it played standard repertoire rather than being used to extend musical thinking or composition practice, as is current computer-music practice.
The first music to be performed by a computer in England was a rendition of the British National Anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood"; this is recognised as the earliest recording of a computer playing music. Researchers at the University of Canterbury, Christchurch, declicked and restored this recording in 2016, and the results may be heard on SoundCloud.
The late 1950s, 1960s, and 1970s also saw the development of large mainframe computer synthesis. Starting in 1957, Max Mathews of Bell Labs developed the MUSIC programs, culminating in MUSIC V, a direct digital synthesis language. Laurie Spiegel later developed the algorithmic musical composition software "Music Mouse" (1986) for Macintosh, Amiga, and Atari computers.
Stochastic music
An important new development was the advent of computers to compose music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, a composing method that uses mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962), Morsima-Amorsima, ST/10, and Atrées. He developed the computer system UPIC for translating graphical images into musical results and composed Mycènes Alpha (1978) with it.
Live electronics
In Europe in 1964, Karlheinz Stockhausen composed Mikrophonie I for tam-tam, hand-held microphones, filters, and potentiometers, and Mixtur for orchestra, four sine-wave generators, and four ring modulators. In 1965 he composed Mikrophonie II for choir, Hammond organ, and ring modulators.
In 1966–1967, Reed Ghazala discovered and began to teach "circuit bending"—the application of the creative short circuit, a process of chance short-circuiting that creates experimental electronic instruments, exploring sonic elements mainly of timbre, with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music concept.
Cosey Fanni Tutti's performance art and musical career explored the concept of "acceptable" music, and she went on to explore the use of sound as a means of desire or discomfort.
Wendy Carlos performed selections from her album Switched-On Bach on stage with a synthesizer, together with the St. Louis Symphony Orchestra; another live performance was with the Kurzweil Baroque Ensemble for "Bach at the Beacon" in 1997. In June 2018, Suzanne Ciani released LIVE Quadraphonic, a live album documenting her first solo performance on a Buchla synthesizer in 40 years. It was one of the first quadraphonic vinyl releases in over 30 years.
Japanese instruments
In the 1950s, Japanese electronic musical instruments began influencing the international music industry.
Ikutaro Kakehashi, who founded Ace Tone in 1960, developed his own version of electronic percussion, which was already popular on overseas electronic organs. At the 1964 NAMM Show, he revealed it as the R-1 Rhythm Ace, a hand-operated percussion device that played electronic drum sounds manually as the user pushed buttons, in a similar fashion to modern electronic drum pads.
In 1963, Korg released the Donca-Matic DA-20, an electro-mechanical drum machine. In 1965, Nippon Columbia patented a fully electronic drum machine. Korg released the Donca-Matic DC-11 electronic drum machine in 1966, which they followed with the Korg Mini Pops, developed as an option for the Yamaha Electone electric organ. Korg's Stageman and Mini Pops series were notable for "natural metallic percussion" sounds and for incorporating controls for drum "breaks and fill-ins".
In 1967, Ace Tone founder Ikutaro Kakehashi patented a preset rhythm-pattern generator using a diode matrix circuit, similar to the one in Seeburg's prior patent filed in 1964 (see Drum machine#History), which he released as the FR-1 Rhythm Ace drum machine the same year. It offered 16 preset patterns and four buttons to manually play each instrument sound (cymbal, claves, cowbell and bass drum). The rhythm patterns could also be cascaded together by pushing multiple rhythm buttons simultaneously, making more than a hundred combinations of rhythm patterns possible.
Ace Tone's Rhythm Ace drum machines found their way into popular music from the late 1960s, followed by Korg drum machines in the 1970s. Kakehashi later left Ace Tone and founded Roland Corporation in 1972, with Roland synthesizers and drum machines becoming highly influential for the next several decades. Roland would go on to have a major impact on popular music, doing more to shape popular electronic music than any other company.
Turntablism has origins in the invention of direct-drive turntables. Early belt-drive turntables were unsuitable for turntablism, since they had a slow start-up time and were prone to wear and tear and breakage, as the belt would break from backspin or scratching. The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka, Japan. It eliminated belts and instead employed a motor to directly drive a platter on which a vinyl record rests. In 1969, Matsushita released it as the SP-10, the first direct-drive turntable on the market and the first in their influential Technics series of turntables. It was succeeded by the Technics SL-1100 and SL-1200 in the early 1970s, which were widely adopted by hip hop musicians, with the SL-1200 remaining the most widely used turntable in DJ culture for several decades.
Jamaican dub music
In Jamaica, a form of popular electronic music emerged in the 1960s: dub music, rooted in sound system culture. Dub music was pioneered by studio engineers such as Sylvan Morris, King Tubby, Errol Thompson, Lee "Scratch" Perry, and Scientist, who produced reggae-influenced experimental music with electronic sound technology, in recording studios and at sound system parties.
Their experiments included forms of tape-based composition comparable to aspects of musique concrète; an emphasis on repetitive rhythmic structures (often stripped of their harmonic elements) comparable to minimalism; the electronic manipulation of spatiality; the electronic manipulation of pre-recorded musical materials from mass media; deejays toasting over pre-recorded music, comparable to live electronic music; remixing music; turntablism; and the mixing and scratching of vinyl. Despite the limited electronic equipment available to dub pioneers such as King Tubby and Lee "Scratch" Perry, their experiments in remix culture were musically cutting-edge. King Tubby, for example, was a sound system proprietor and electronics technician whose small front-room studio in the Waterhouse ghetto of western Kingston was a key site of dub music creation.
Late 1960s to early 1980s
Rise of popular electronic music
In the late 1960s, pop and rock musicians, including the Beach Boys and the Beatles, began to use electronic instruments, like the theremin and Mellotron, to supplement and define their sound. Among the first bands to use the Moog synthesizer were the Doors on Strange Days and the Monkees on Pisces, Aquarius, Capricorn & Jones Ltd. In his book Electronic and Experimental Music, Thom Holmes recognises the Beatles' 1966 recording "Tomorrow Never Knows" as the song that "ushered in a new era in the use of electronic music in rock and pop music" due to the band's incorporation of tape loops and reversed and speed-manipulated tape sounds.
Also in the late 1960s, the music duos Silver Apples and Beaver and Krause, and experimental rock bands like White Noise, the United States of America, Fifty Foot Hose, and Gong, are regarded as pioneers of the electronic rock and electronica genres for their work in melding psychedelic rock with oscillators and synthesizers. The 1969 instrumental "Popcorn", written by Gershon Kingsley for Music to Moog By, became a worldwide success due to the 1972 version made by Hot Butter.
The Moog synthesizer was brought to the mainstream in 1968 by Switched-On Bach, a bestselling album of Bach compositions arranged for Moog synthesizer by American composer Wendy Carlos. The album achieved critical and commercial success, winning the 1970 Grammy Awards for Best Classical Album, Best Classical Performance – Instrumental Soloist or Soloists (With or Without Orchestra), and Best Engineered Classical Recording. In 1969, David Borden formed the world's first synthesizer ensemble, Mother Mallard's Portable Masterpiece Company, in Ithaca, New York.
By the end of the 1960s, the Moog synthesizer took a leading place in the sound of emerging progressive rock, with bands including Pink Floyd, Yes, Emerson, Lake & Palmer, and Genesis making it part of their sound. Instrumental prog rock was particularly significant in continental Europe, allowing bands like Kraftwerk, Tangerine Dream, Cluster, Can, Neu!, and Faust to circumvent the language barrier. Their synthesiser-heavy "krautrock", along with the work of Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock.
Ambient dub was pioneered by King Tubby and other Jamaican sound artists, using DJ-inspired ambient electronics, complete with drop-outs, echo, equalization and psychedelic electronic effects. It featured layering techniques and incorporated elements of world music, deep basslines and harmonic sounds.
Techniques such as long echo delays were also used. Other notable artists within the genre include Dreadzone, Higher Intelligence Agency, The Orb, Ott, Loop Guru, Woob and Transglobal Underground.
Dub music influenced electronic musical techniques later adopted by hip hop music when Jamaican immigrant DJ Kool Herc introduced Jamaica's sound system culture and dub music techniques to America in the early 1970s. One such technique that became popular in hip hop culture was playing two copies of the same record on two turntables in alternation, extending the break-dancers' favorite section. The turntable eventually went on to become the most visible electronic musical instrument, and occasionally the most virtuosic, in the 1980s and 1990s.
Electronic rock was also produced by several Japanese musicians, including Isao Tomita with Electric Samurai: Switched on Rock (1972), which featured Moog synthesizer renditions of contemporary pop and rock songs, and Osamu Kitajima with the progressive rock album Benzaiten (1974).
The mid-1970s saw the rise of electronic art music, with musicians such as Jean-Michel Jarre, Vangelis, Tomita and Klaus Schulze becoming significant influences on the development of new-age music. For some years, the hi-tech appeal of these works created a trend of listing the electronic musical equipment employed on album sleeves as a distinctive feature. Electronic music began to appear regularly in radio programming and best-seller charts, with acts such as the French band Space, with their debut studio album Magic Fly, and Jarre, with Oxygène. Between 1977 and 1981, Kraftwerk released albums such as Trans-Europe Express, The Man-Machine and Computer World, which influenced subgenres of electronic music.
In this era, the sound of rock musicians such as Mike Oldfield and The Alan Parsons Project (credited with the first rock song to feature a digital vocoder, "The Raven", in 1975) was also arranged and blended with electronic effects and music, a practice that became much more prominent in the mid-1980s. Jeff Wayne achieved long-lasting success with his 1978 electronic rock musical version of The War of the Worlds.
Film scores also benefited from the electronic sound. During the 1970s and 1980s, Wendy Carlos composed the scores for A Clockwork Orange, The Shining and Tron. In 1977, Gene Page recorded a disco version of the hit theme by John Williams from Steven Spielberg's film Close Encounters of the Third Kind. Page's version peaked on the R&B chart at #30. The score for the 1978 film Midnight Express, composed by Italian synth pioneer Giorgio Moroder, won the Academy Award for Best Original Score in 1979, as did Vangelis's score for the 1981 film Chariots of Fire.
After the arrival of punk rock, a form of basic electronic rock emerged, increasingly using new digital technology to replace other instruments. The American duo Suicide, who arose from the punk scene in New York, utilized drum machines and synthesizers in a hybrid between electronics and punk on their eponymous 1977 album.
Synth-pop pioneering bands which enjoyed success for years included Ultravox with their 1977 track "Hiroshima Mon Amour" on Ha!-Ha!-Ha!, Yellow Magic Orchestra with their self-titled album (1978), The Buggles with their prominent 1979 debut single "Video Killed the Radio Star", Gary Numan with his solo debut album The Pleasure Principle and the single "Cars" in 1979, Orchestral Manoeuvres in the Dark with their 1979 single "Electricity", featured on their eponymous debut album, Depeche Mode with their first single "Dreaming of Me", recorded in 1980 and released in 1981 on the album Speak & Spell, A Flock of Seagulls with their 1981 single "Talking", New Order with "Ceremony" in 1981, and The Human League with their 1981 hit "Don't You Want Me" from their third album Dare.
The definition of MIDI and the development of digital audio made the creation of purely electronic sounds much easier, with audio engineers, producers and composers frequently exploring the possibilities of virtually every new model of electronic sound equipment launched by manufacturers. Synth-pop sometimes used synthesizers to replace all other instruments, but it was more common for bands to have one or more keyboardists in their line-ups along with guitarists, bassists, and/or drummers. These developments led to the growth of synth-pop, which, after it was adopted by the New Romantic movement, allowed synthesizers to dominate the pop and rock music of the early 1980s, until the style began to fall from popularity in the mid-to-late 1980s.
Along with the aforementioned successful pioneers, key acts included Yazoo, Duran Duran, Spandau Ballet, Culture Club, Talk Talk, Japan, and Eurythmics. Synth-pop was taken up across the world, with international hits for acts including Men Without Hats, Trans-X and Lime from Canada, Telex from Belgium, Peter Schilling, Sandra, Modern Talking, Propaganda and Alphaville from Germany, Yello from Switzerland and Azul y Negro from Spain. The synth sound is also a key feature of Italo-disco.
Some synth-pop bands, such as the American band Devo and the Spanish band Aviador Dro, created futuristic visual styles to reinforce the idea that electronic sounds were linked primarily with technology. Keyboard synthesizers became so common that even heavy metal bands, a genre often regarded by fans of both sides as the opposite of electronic pop in aesthetics, sound and lifestyle, achieved worldwide success with synth-prominent songs such as Van Halen's "Jump" (1983) and Europe's "The Final Countdown" (1986).
Proliferation of electronic music research institutions
EMS, formerly known as Electroacoustic Music in Sweden, is the Swedish national centre for electronic music and sound art. The research organisation started in 1964 and is based in Stockholm.
STEIM (1969–2021) was a center for research and development of new musical instruments in the electronic performing arts, located in Amsterdam, Netherlands. It was founded by Misha Mengelberg, Louis Andriessen, Peter Schat, Dick Raaymakers, Reinbert de Leeuw, and Konrad Boehmer. This group of Dutch composers had fought for the reformation of Amsterdam's feudal music structures; they insisted on Bruno Maderna's appointment as musical director of the Concertgebouw Orchestra and secured the first public funding for experimental and improvised electronic music in the Netherlands.
From 1981 to 2008, Michel Waisvisz was artistic director, and his live-electronic instruments such as the Cracklebox and The Hands inspired international artists to work at STEIM, which ran a residency program from 1992.
IRCAM in Paris became a major center for computer music research and realization, and for the development of the Sogitec 4X computer system, featuring then-revolutionary real-time digital signal processing. Pierre Boulez's Répons (1981) for 24 musicians and 6 soloists used the 4X to transform and route the soloists to a loudspeaker system.
Keyboard synthesizers
Released in 1970 by Moog Music, the Minimoog was among the first widely available, portable, and relatively affordable synthesizers. It became the most widely used synthesizer of its time in both popular and electronic art music. Patrick Gleeson, playing live with Herbie Hancock at the beginning of the 1970s, pioneered the use of synthesizers in a touring context, where they were subject to stresses the early machines were not designed for.
In 1974, the WDR studio in Cologne acquired an EMS Synthi 100 synthesizer, which many composers used to produce notable electronic works—including Rolf Gehlhaar's Fünf deutsche Tänze (1975), Karlheinz Stockhausen's Sirius (1975–1976), and John McGuire's Pulse Music III (1978).
Thanks to the miniaturization of electronics in the 1970s, by the start of the 1980s keyboard synthesizers had become lighter and more affordable, integrating into a single slim unit all the necessary audio-synthesis electronics and the piano-style keyboard itself, in sharp contrast with the bulky machinery and "cable spaghetti" employed throughout the 1960s and 1970s. The trend began with analog synthesizers and continued with digital synthesizers and samplers as well (see below).
Digital synthesizers
In 1975, the Japanese company Yamaha licensed the algorithms for frequency modulation synthesis (FM synthesis) from John Chowning, who had experimented with it at Stanford University since 1971. Yamaha's engineers began adapting Chowning's algorithm for use in a digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation. In 1980, Yamaha released the first FM digital synthesizer, the Yamaha GS-1, though at a high price. In 1983, Yamaha introduced the first stand-alone digital synthesizer, the DX7, which also used FM synthesis and would become one of the best-selling synthesizers of all time. The DX7 was known for its recognizable bright tonalities, partly due to its high sampling rate of 57 kHz.
The Korg Poly-800 is a synthesizer released by Korg in 1983. Its initial list price of $795 made it the first fully programmable synthesizer that sold for less than $1000. It had 8-voice polyphony, with one digitally controlled oscillator (DCO) per voice.
The Casio CZ-101 was the first and best-selling phase distortion synthesizer in the Casio CZ line. Released in November 1984, it was one of the first (if not the first) fully programmable polyphonic synthesizers available for under $500.
The Roland D-50 is a digital synthesizer produced by Roland and released in April 1987. Its features include subtractive synthesis, on-board effects, a joystick for data manipulation, and an analogue-synthesis-styled layout design.
The external Roland PG-1000 (1987–1990) programmer could also be attached to the D-50 for more complex manipulation of its sounds.
Samplers
A sampler is an electronic or digital musical instrument which uses sound recordings (or "samples") of real instrument sounds (e.g., a piano, violin or trumpet), excerpts from recorded songs (e.g., a five-second bass guitar riff from a funk song) or found sounds (e.g., sirens and ocean waves). The samples are loaded or recorded by the user or by a manufacturer. These sounds are then played back using the sampler program itself, a MIDI keyboard, sequencer or another triggering device (e.g., electronic drums) to perform or compose music. Because these samples are usually stored in digital memory, the information can be quickly accessed. A single sample may often be pitch-shifted to different pitches to produce musical scales and chords.
Before computer memory-based samplers, musicians used tape replay keyboards, which store recordings on analog tape. When a key is pressed, the tape head contacts the moving tape and plays a sound. The Mellotron was the most notable model, used by many groups in the late 1960s and the 1970s, but such systems were expensive and heavy due to the multiple tape mechanisms involved, and the range of the instrument was limited to three octaves at the most. To change sounds, a new set of tapes had to be installed in the instrument. The emergence of the digital sampler made sampling far more practical.
The earliest digital sampling was done on the EMS Musys system, developed by Peter Grogono (software), David Cockerell (hardware and interfacing), and Peter Zinovieff (system design and operation) at their London (Putney) studio c. 1969. The first commercially available sampling synthesizer was the Computer Music Melodian by Harry Mendell (1976). First released in 1977–1978, the Synclavier I, which used FM synthesis re-licensed from Yamaha and was sold mostly to universities, proved to be highly influential among both electronic music composers and music producers, including Mike Thorne, an early adopter from the commercial world, owing to its versatility, its cutting-edge technology, and its distinctive sounds. The first polyphonic digital sampling synthesizer was the Australian-produced Fairlight CMI, first available in 1979. These early sampling synthesizers used wavetable sample-based synthesis.
Birth of MIDI
In 1980, a group of musicians and music merchants met to standardize an interface that new instruments could use to communicate control instructions with other instruments and computers. This standard was dubbed Musical Instrument Digital Interface (MIDI) and resulted from a collaboration between leading manufacturers, initially Sequential Circuits, Oberheim, and Roland, and later other participants including Yamaha, Korg, and Kawai. A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized.
MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer (a minimal sketch of the underlying message format appears below). MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.
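To illustrate how simple the wire format behind this control scheme is, the following sketch builds the raw bytes of the two most common MIDI 1.0 channel messages. It assumes only the core of the published specification (a status byte of 0x90 for Note On or 0x80 for Note Off, combined with a channel number, followed by two 7-bit data bytes); the helper names are invented for this example and do not come from any particular library.

import binascii

def note_on(channel: int, key: int, velocity: int) -> bytes:
    """Return the three raw bytes of a MIDI 1.0 Note On message."""
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, key, velocity])

def note_off(channel: int, key: int) -> bytes:
    """Return the three raw bytes of a MIDI 1.0 Note Off message."""
    assert 0 <= channel <= 15 and 0 <= key <= 127
    return bytes([0x80 | channel, key, 0])

# Middle C (key number 60) struck at moderate velocity on channel 1 (index 0):
print(binascii.hexlify(note_on(0, 60, 64)).decode())   # -> 903c40
print(binascii.hexlify(note_off(0, 60)).decode())      # -> 803c00

Three bytes per event is the entire cost of a keystroke on the wire, which is why a single controller could drive a whole studio of instruments in real time over MIDI's modest serial link.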
Miller Puckette developed graphic signal-processing software for the 4X called Max (after Max Mathews) and later ported it to the Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing the availability of algorithmic composition to most composers with a modest computer programming background.
Sequencers and drum machines
The early 1980s saw the rise of bass synthesizers, the most influential being the Roland TB-303, a bass synthesizer and sequencer released in late 1981 that later became a fixture in electronic dance music, particularly acid house. One of the first to use it was Charanjit Singh in 1982, though it would not be popularized until Phuture's "Acid Tracks" in 1987.
Music sequencers began to be used around the mid-20th century, with Tomita's albums of the mid-1970s being later examples. In 1978, Yellow Magic Orchestra were using computer-based technology in conjunction with a synthesiser to produce popular music, making early use of the microprocessor-based Roland MC-8 Microcomposer sequencer.
Drum machines, also known as rhythm machines, came into use around the late 1950s; a later example is Osamu Kitajima's progressive rock album Benzaiten (1974), which used a rhythm machine along with electronic drums and a synthesizer. In 1977, Ultravox's "Hiroshima Mon Amour" was one of the first singles to use the metronome-like percussion of a Roland TR-77 drum machine. In 1980, Roland Corporation released the TR-808, one of the first and most popular programmable drum machines. The first band to use it was Yellow Magic Orchestra in 1980, and it would later gain widespread popularity with the release of Marvin Gaye's "Sexual Healing" and Afrika Bambaataa's "Planet Rock" in 1982. The TR-808 was a fundamental tool in the later Detroit techno scene of the late 1980s, and was the drum machine of choice for Derrick May and Juan Atkins.
Chiptunes
The characteristic lo-fi sound of chip music was initially the result of the technical limitations of early computers' sound chips and sound cards; however, the sound has since become sought after in its own right. Common cheap and popular sound chips of the first home computers of the 1980s include the SID of the Commodore 64 and the General Instrument AY series and its clones (like the Yamaha YM2149), used in the ZX Spectrum, Amstrad CPC, MSX compatibles and Atari ST models, among others.
Late 1980s to 1990s
Rise of dance music
Synth-pop continued into the late 1980s, with a format that moved closer to dance music, including the work of acts such as the British duos Pet Shop Boys, Erasure and The Communards, who achieved success through much of the 1990s. The trend has continued to the present day, with modern nightclubs worldwide regularly playing electronic dance music (EDM). Today, electronic dance music has radio stations, websites, and publications like Mixmag dedicated solely to the genre. Despite the industry's attempt to create a specific EDM brand, the initialism remains in use as an umbrella term for multiple genres, including dance-pop, house, techno, electro, and trance, as well as their respective subgenres. Moreover, the genre has found commercial and cultural significance in the United States and North America, thanks to the wildly popular big room house/EDM sound that has been incorporated into U.S. pop music, and to the rise of large-scale commercial raves such as Electric Daisy Carnival, Tomorrowland and Ultra Music Festival.
Electronica
On the other hand, a broad group of electronic-based music styles intended for listening rather than strictly for dancing became known under the "electronica" umbrella, which was also a music scene in the early 1990s in the United Kingdom. According to a 1997 Billboard article, "the union of the club community and independent labels" provided the experimental and trend-setting environment in which electronica acts developed and eventually reached the mainstream, citing American labels such as Astralwerks (the Chemical Brothers, Fatboy Slim, the Future Sound of London, Fluke), Moonshine (DJ Keoki), Sims, Daft Punk and City of Angels (the Crystal Method) for popularizing the latest version of electronic music.
Indie electronic
The category "indie electronic" (or "indietronica") has been used to refer to a wave of groups with roots in independent rock who embraced electronic elements (such as synthesizers, samplers, drum machines, and computer programs) and influences such as early electronic composition, krautrock, synth-pop, and dance music. Recordings are commonly made on laptops using digital audio workstations. The first wave of indie electronic artists began in the 1990s with acts such as Stereolab (who used vintage gear) and Disco Inferno (who embraced modern sampling technology), and the genre expanded in the 2000s as home recording and software synthesizers came into common use. Other acts included Broadcast, Lali Puna, Múm, the Postal Service, Skeletons, and School of Seven Bells. Independent labels associated with the style include Warp, Morr Music, Sub Pop, and Ghostly International.
2000s and 2010s
As computer technology has become more accessible and music software has advanced, interacting with music production technology is now possible using means that bear no relationship to traditional musical performance practices: for instance, laptop performance (laptronica), live coding and Algorave. In general, the term Live PA refers to any live performance of electronic music, whether with laptops, synthesizers, or other devices.
Beginning around the year 2000, software-based virtual studio environments emerged, with products such as Propellerhead's Reason and Ableton Live finding popular appeal. Such tools provide viable and cost-effective alternatives to typical hardware-based production studios, and thanks to advances in microprocessor technology, it is now possible to create high-quality music using little more than a single laptop computer. Such advances have democratized music creation, leading to a massive increase in the amount of home-produced electronic music available to the general public via the internet. Software-based instruments and effect units (so-called "plugins") can be incorporated in a computer-based studio using the VST platform. Some of these instruments are more or less exact replicas of existing hardware (such as the Roland D-50, ARP Odyssey, Yamaha DX7, or Korg M1).
Circuit bending
Circuit bending is the modification of battery-powered toys and synthesizers to create new, unintended sound effects. It was pioneered by Reed Ghazala in the 1960s, and Ghazala coined the name "circuit bending" in 1992.
Modular synth revival
Following circuit-bending culture, musicians also began to build their own modular synthesizers, causing renewed interest in the designs of the early 1960s. Eurorack became a popular system; a software sketch of the modular signal-path idea these systems embody follows.
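The defining idea of a modular synthesizer is a chain of small single-purpose units (an oscillator feeding an amplifier, for example) patched together into one signal path. The following minimal sketch expresses that idea in software; the module names are loose stand-ins chosen for the example, not a model of any particular Eurorack hardware.

import math

SAMPLE_RATE = 44100  # samples per second

def vco(freq_hz):
    """Oscillator module: yields an endless sine wave at freq_hz."""
    n = 0
    while True:
        yield math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
        n += 1

def vca(source, gain):
    """Amplifier module: scales whatever signal is patched into it."""
    for sample in source:
        yield gain * sample

# "Patch" the modules into a chain and render half a second of audio.
patch = vca(vco(440.0), 0.5)
samples = [next(patch) for _ in range(SAMPLE_RATE // 2)]
print(f"rendered {len(samples)} samples, peak amplitude {max(samples):.3f}")

Because each module only consumes and produces a stream of samples, any output can be patched into any input, which is the same property that makes hardware modular systems open-ended.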
See also Clavioline Electronic sackbut List of electronic music genres New Interfaces for Musical Expression Ondioline Spectral music Tracker music Timeline of electronic music genres Live electronic music List of electronic music festivals Further reading Dorschel, Andreas, Gerhard Eckel, and Deniz Peters (eds.), 2012. Bodily Expression in Electronic Music: Perspectives on Reclaiming Performativity. Routledge Research in Music 2. London and New York: Routledge. Strange, Allen, 1983. Electronic Music: Systems, Technics, and Controls, 2nd ed. Dubuque, Iowa: W.C. Brown Co. External links History of Electroacoustic Music – Timeline Electronic Music Foundation History and Development of Electronic Music
Electronic music
[ "Technology", "Engineering" ]
13,577
[ "Multimedia", "Electrical engineering", "New media", "Audio engineering" ]
9,518
https://en.wikipedia.org/wiki/Edmund%20Husserl
Edmund Gustav Albrecht Husserl (8 April 1859 – 27 April 1938) was an Austrian-German philosopher and mathematician who established the school of phenomenology. In his early work, he elaborated critiques of historicism and of psychologism in logic based on analyses of intentionality. In his mature work, he sought to develop a systematic foundational science based on the so-called phenomenological reduction. Arguing that transcendental consciousness sets the limits of all possible knowledge, Husserl redefined phenomenology as a transcendental-idealist philosophy. Husserl's thought profoundly influenced 20th-century philosophy, and he remains a notable figure in contemporary philosophy and beyond. Husserl studied mathematics, taught by Karl Weierstrass and Leo Königsberger, and philosophy, taught by Franz Brentano and Carl Stumpf. He taught philosophy as a Privatdozent at Halle from 1887, then as professor, first at Göttingen from 1901, then at Freiburg from 1916 until he retired in 1928, after which he remained highly productive. In 1933, under the racial laws of the Nazi Party, Husserl was expelled from the library of the University of Freiburg due to his Jewish family background and months later resigned from the Deutsche Akademie. Following an illness, he died in Freiburg in 1938. Life and career Youth and education Husserl was born in 1859 in Proßnitz in the Margraviate of Moravia in the Austrian Empire (today Prostějov in the Czech Republic). He was born into a Jewish family, the second of four children. His father was a milliner. His childhood was spent in Prostějov, where he attended the secular primary school. Husserl then traveled to Vienna to study at the Realgymnasium there, and next attended the Staatsgymnasium in Olmütz. At the University of Leipzig from 1876 to 1878, Husserl studied mathematics, physics, and astronomy. At Leipzig, he was inspired by philosophy lectures given by Wilhelm Wundt, one of the founders of modern psychology. Then he moved to the Frederick William University of Berlin (the present-day Humboldt University of Berlin) in 1878, where he continued his study of mathematics under Leopold Kronecker and Karl Weierstrass. In Berlin he found a mentor in Tomáš Garrigue Masaryk, then a former philosophy student of Franz Brentano and later the first president of Czechoslovakia. There Husserl also attended Friedrich Paulsen's philosophy lectures. In 1881 he left for the University of Vienna to complete his mathematics studies under the supervision of Leo Königsberger (a former student of Weierstrass). At Vienna in 1883 he obtained his PhD with the work Beiträge zur Variationsrechnung (Contributions to the Calculus of Variations). Evidently as a result of his becoming familiar with the New Testament during his twenties, Husserl asked to be baptized into the Lutheran Church in 1886. Husserl's father Adolf had died in 1884. Herbert Spiegelberg writes, "While outward religious practice never entered his life any more than it did that of most academic scholars of the time, his mind remained open for the religious phenomenon as for any other genuine experience." At times Husserl saw his goal as one of moral "renewal". Although a steadfast proponent of a radical and rational autonomy in all things, Husserl could also speak "about his vocation and even about his mission under God's will to find new ways for philosophy and science," observes Spiegelberg. Following his PhD in mathematics, Husserl returned to Berlin to work as the assistant to Karl Weierstrass.
Yet Husserl had already felt the desire to pursue philosophy. When Professor Weierstrass became very ill, Husserl became free to return to Vienna where, after a short period of military service, he devoted his attention to philosophy. In 1884 at the University of Vienna he attended the lectures of Franz Brentano on philosophy and philosophical psychology. Brentano introduced him to the writings of Bernard Bolzano, Hermann Lotze, J. Stuart Mill, and David Hume. Husserl was so impressed by Brentano that he decided to dedicate his life to philosophy; indeed, Franz Brentano is often credited as being his most important influence, e.g., with regard to intentionality. On academic advice, two years later in 1886 Husserl followed Carl Stumpf, a former student of Brentano, to the University of Halle, seeking to obtain the habilitation that would qualify him to teach at the university level. There, under Stumpf's supervision, he wrote Über den Begriff der Zahl (On the Concept of Number) in 1887, which would serve later as the basis for his first important work, Philosophie der Arithmetik (1891). In 1887 Husserl married Malvine Steinschneider, a union that would last over fifty years. In 1892 their daughter Elizabeth was born, in 1893 their son Gerhart, and in 1894 their son Wolfgang. Elizabeth would marry in 1922, and Gerhart in 1923; Wolfgang, however, became a casualty of the First World War. Gerhart would become a philosopher of law, contributing to the subject of comparative law and teaching in the United States and, after the war, in Austria. Professor of philosophy Following his marriage Husserl began his long teaching career in philosophy. He started in 1887 as a Privatdozent at the University of Halle. In 1891 he published his Philosophie der Arithmetik. Psychologische und logische Untersuchungen which, drawing on his prior studies in mathematics and philosophy, proposed a psychological context as the basis of mathematics. It drew the adverse notice of Gottlob Frege, who criticized its psychologism. In 1901 Husserl moved with his family to the University of Göttingen, where he taught as an extraordinarius professor. Just prior to this a major work of his, Logische Untersuchungen (Halle, 1900–1901), was published. Volume One contains seasoned reflections on "pure logic" in which he carefully refutes "psychologism". This work was well received and became the subject of a seminar given by Wilhelm Dilthey; Husserl in 1905 traveled to Berlin to visit Dilthey. Two years later in Italy he paid a visit to Franz Brentano, his inspiring old teacher, and to the mathematician Constantin Carathéodory. Kant and Descartes were also now influencing his thought. In 1910 he became joint editor of the journal Logos. During this period Husserl had delivered lectures on internal time consciousness, which several decades later his former students Edith Stein and Martin Heidegger edited for publication. In 1912 at Freiburg the journal Jahrbuch für Philosophie und Phänomenologische Forschung ("Yearbook for Philosophy and Phenomenological Research") was founded by Husserl and his school; it published articles of their phenomenological movement from 1913 to 1930. His important work Ideen was published in its first issue (Vol. 1, Issue 1, 1913). Before beginning Ideen, Husserl's thought had reached the stage where "each subject is 'presented' to itself, and to each all others are 'presentiated' (Vergegenwärtigung), not as parts of nature but as pure consciousness".
Ideen advanced his transition to a "transcendental interpretation" of phenomenology, a view later criticized by, among others, Jean-Paul Sartre. In Ideen Paul Ricœur sees the development of Husserl's thought as leading "from the psychological cogito to the transcendental cogito". As phenomenology further evolves, it leads (when viewed from another vantage point in Husserl's 'labyrinth') to "transcendental subjectivity". Also in Ideen Husserl explicitly elaborates the phenomenological and eidetic reductions. Ivan Ilyin and Karl Jaspers visited Husserl at Göttingen. In October 1914 both his sons were sent to fight on the Western Front of World War I, and the following year one of them, Wolfgang Husserl, was badly injured. On 8 March 1916, on the battlefield of Verdun, Wolfgang was killed in action. The next year his other son Gerhart Husserl was wounded in the war but survived. His own mother, Julia, died. In November 1917 one of his outstanding students and later a noted philosophy professor in his own right, Adolf Reinach, was killed in the war while serving in Flanders. Husserl had transferred in 1916 to the University of Freiburg (in Freiburg im Breisgau) where he continued bringing his work in philosophy to fruition, now as a full professor. Edith Stein served as his personal assistant during his first few years in Freiburg, followed later by Martin Heidegger from 1920 to 1923. The mathematician Hermann Weyl began corresponding with him in 1918. Husserl gave four lectures on phenomenological method at University College London in 1922. The University of Berlin in 1923 called on him to relocate there, but he declined the offer. In 1926 Heidegger dedicated his book Sein und Zeit (Being and Time) to him "in grateful respect and friendship." Husserl remained in his professorship at Freiburg until he requested retirement, teaching his last class on 25 July 1928. A Festschrift to celebrate his seventieth birthday was presented to him on 8 April 1929. Despite retirement, Husserl gave several notable lectures. The first, at Paris in 1929, led to Méditations cartésiennes (Paris 1931). Husserl here reviews the phenomenological epoché (or phenomenological reduction), presented earlier in his pivotal Ideen (1913), in terms of a further reduction of experience to what he calls a 'sphere of ownness.' From within this sphere, which Husserl enacts to show the impossibility of solipsism, the transcendental ego finds itself always already paired with the lived body of another ego, another monad. This 'a priori' interconnection of bodies, given in perception, is what founds the interconnection of consciousnesses known as transcendental intersubjectivity, which Husserl would go on to describe at length in volumes of unpublished writings. There has been a debate over whether or not Husserl's description of ownness and its movement into intersubjectivity is sufficient to reject the charge of solipsism, to which Descartes, for example, was subject. One argument against Husserl's description works this way: instead of infinity and the Deity being the ego's gateway to the Other, as in Descartes, Husserl's ego in the Cartesian Meditations itself becomes transcendent. It remains, however, alone (unconnected). Only the ego's grasp "by analogy" of the Other (e.g., by conjectural reciprocity) allows the possibility for an 'objective' intersubjectivity, and hence for community. In 1933, the racial laws of the new National Socialist German Workers Party were enacted.
On 6 April Husserl was banned from using the library at the University of Freiburg, or any other academic library; the following week, after a public outcry, he was reinstated. Yet his colleague Heidegger was elected Rector of the university on 21–22 April, and joined the Nazi Party. By contrast, in July Husserl resigned from the Deutsche Akademie. Husserl later lectured at Prague in 1935 and Vienna in 1936; these lectures resulted in a very differently styled work that, while innovative, is no less problematic: Die Krisis (Belgrade 1936). Husserl describes here the cultural crisis gripping Europe, then approaches a philosophy of history, discussing Galileo, Descartes, several British philosophers, and Kant. The previously apolitical Husserl had specifically avoided such historical discussions, pointedly preferring to go directly to an investigation of consciousness. Merleau-Ponty and others question whether Husserl here does not undercut his own position, in that Husserl had attacked historicism in principle, while specifically designing his phenomenology to be rigorous enough to transcend the limits of history. On the contrary, Husserl may be indicating here that historical traditions are merely features given to the pure ego's intuition, like any other. A longer section follows on the "lifeworld" [Lebenswelt], one not observed by the objective logic of science, but a world seen through subjective experience. Yet a problem arises similar to that dealing with 'history' above, a chicken-and-egg problem. Does the lifeworld contextualize and thus compromise the gaze of the pure ego, or does the phenomenological method nonetheless raise the ego to a transcendent standpoint? These last writings presented the fruits of his professional life. Since his university retirement Husserl had "worked at a tremendous pace, producing several major works." After suffering a fall in the autumn of 1937, the philosopher became ill with pleurisy. Edmund Husserl died in Freiburg on 27 April 1938, having just turned 79. His wife Malvine survived him. Eugen Fink, his research assistant, delivered his eulogy. Gerhard Ritter was the only Freiburg faculty member to attend the funeral, as an anti-Nazi protest. Heidegger and the Nazi era Husserl was rumoured to have been denied the use of the library at Freiburg as a result of the anti-Jewish legislation of April 1933. Relatedly, among other disabilities, Husserl was unable to publish his works in Nazi Germany. It was also rumoured that his former pupil Martin Heidegger informed Husserl that he was discharged, but it was actually the previous rector. Apparently Husserl and Heidegger had moved apart during the 1920s, which became clearer after 1928 when Husserl retired and Heidegger succeeded to his university chair. In the summer of 1929 Husserl had studied carefully selected writings of Heidegger, coming to the conclusion that on several of their key positions they differed: e.g., Heidegger substituted Dasein ["Being-there"] for the pure ego, thus transforming phenomenology into an anthropology, a type of psychologism strongly disfavored by Husserl. Such observations of Heidegger, along with a critique of Max Scheler, were put into a lecture Husserl gave to various Kant Societies in Frankfurt, Berlin, and Halle during 1931 entitled Phänomenologie und Anthropologie. In the war-time 1941 edition of Heidegger's primary work, Being and Time (Sein und Zeit, first published in 1927), the original dedication to Husserl was removed.
This was not a repudiation of the relationship between the two philosophers, however, but rather the result of censorship suggested by Heidegger's publisher, who feared that the book might otherwise be banned by the Nazi regime. The dedication can still be found in a footnote on page 38, thanking Husserl for his guidance and generosity. Husserl had died three years earlier. In post-war editions of Sein und Zeit the dedication to Husserl is restored. The complex, troubled, and sundered philosophical relationship between Husserl and Heidegger has been widely discussed. On 4 May 1933, Professor Edmund Husserl addressed the recent regime change in Germany and its consequences: The future alone will judge which was the true Germany in 1933, and who were the true Germans—those who subscribe to the more or less materialistic-mythical racial prejudices of the day, or those Germans pure in heart and mind, heirs to the great Germans of the past whose tradition they revere and perpetuate. After his death, Husserl's manuscripts, amounting to approximately 40,000 pages in "Gabelsberger" stenography, together with his complete research library, were smuggled in 1939 to the Catholic University of Leuven in Belgium by the Franciscan priest Herman Van Breda. There they were deposited to form the Husserl-Archives of the Higher Institute of Philosophy. Much of the material in his research manuscripts has since been published in the Husserliana critical edition series. Development of his thought Several early themes In his first works, Husserl combined mathematics, psychology, and philosophy with the goal of providing a sound foundation for mathematics. He analyzed the psychological process needed to obtain the concept of number and then built up a theory on this analysis. He used methods and concepts taken from his teachers. From Weierstrass he derived the idea of generating the concept of number by counting a certain collection of objects. From Brentano and Stumpf he took the distinction between proper and improper presentation. Husserl explained this with an example: if someone is standing in front of a house, they have a proper, direct presentation of that house, but if they are looking for it and ask for directions, then these directions (e.g. the house on the corner of this and that street) are an indirect, improper presentation. In other words, the person can have a proper presentation of an object if it is actually present, and an improper (or symbolic, as Husserl also calls it) one if they can only indicate that object through signs, symbols, etc. Husserl's Logical Investigations (1900–1901) is considered the starting point for the formal theory of wholes and their parts known as mereology. Another important element that Husserl took over from Brentano was intentionality, the notion that the main characteristic of consciousness is that it is always intentional. While often simplistically summarised as "aboutness" or the relationship between mental acts and the external world, Brentano defined it as the main characteristic of mental phenomena, by which they could be distinguished from physical phenomena. Every mental phenomenon, every psychological act, has a content, is directed at an object (the intentional object). Every belief, desire, etc. has an object that it is about: the believed, the wanted. Brentano used the expression "intentional inexistence" to indicate the status of the objects of thought in the mind.
The property of being intentional, of having an intentional object, was the key feature distinguishing mental phenomena from physical phenomena, because physical phenomena lack intentionality altogether. The elaboration of phenomenology Some years after the 1900–1901 publication of his main work, the Logische Untersuchungen (Logical Investigations), Husserl made some key conceptual elaborations which led him to assert that to study the structure of consciousness, one would have to distinguish between the act of consciousness and the phenomena at which it is directed (the objects as intended). Knowledge of essences would only be possible by "bracketing" all assumptions about the existence of an external world. This procedure he called "epoché". These new concepts prompted the publication of the Ideen (Ideas) in 1913, in which they were at first incorporated, and a plan for a second edition of the Logische Untersuchungen. From the Ideen onward, Husserl concentrated on the ideal, essential structures of consciousness. The metaphysical problem of establishing the reality of what people perceive, as distinct from the perceiving subject, was of little interest to Husserl in spite of his being a transcendental idealist. Husserl proposed that the world of objects—and of ways in which people direct themselves toward and perceive those objects—is normally conceived of in what he called the "natural attitude", which is characterized by a belief that objects exist distinct from the perceiving subject and exhibit properties that people see as emanating from them (this attitude is also called physicalist objectivism). Husserl proposed a radical new phenomenological way of looking at objects by examining how people, in their many ways of being intentionally directed toward them, actually "constitute" them (to be distinguished from materially creating objects or objects merely being figments of the imagination); in the phenomenological standpoint, the object ceases to be something simply "external" and ceases to be seen as providing indicators about what it is, and becomes a grouping of perceptual and functional aspects that imply one another under the idea of a particular object or "type". The notion of objects as real is not expelled by phenomenology, but "bracketed" as a way in which people regard objects instead of a feature that inheres in an object's essence founded in the relation between the object and the perceiver. To better understand the world of appearances and objects, phenomenology attempts to identify the invariant features of how objects are perceived and pushes attributions of reality into their role as attributions about the things people perceive (or an assumption underlying how people perceive objects). The major dividing line in Husserl's thought is the turn to transcendental idealism. In a later period, Husserl began to wrestle with the complicated issues of intersubjectivity, specifically, how communication about an object can be assumed to refer to the same ideal entity (Cartesian Meditations, Meditation V). Husserl tries new methods of bringing his readers to understand the importance of phenomenology to scientific inquiry (and specifically to psychology) and what it means to "bracket" the natural attitude. The Crisis of the European Sciences is Husserl's unfinished work that deals most directly with these issues.
In it, Husserl for the first time attempts a historical overview of the development of Western philosophy and science, emphasizing the challenges presented by their increasingly one-sidedly empirical and naturalistic orientation. Husserl declares that mental and spiritual reality possess their own reality independent of any physical basis, and that a science of the mind ('Geisteswissenschaft') must be established on as scientific a foundation as the natural sciences have managed: "It is my conviction that intentional phenomenology has for the first time made spirit as spirit the field of systematic scientific experience, thus effecting a total transformation of the task of knowledge." Husserl's thought Husserl's thought is revolutionary in several ways, most notably in the distinction between "natural" and "phenomenological" modes of understanding. In the former, sense-perception in correspondence with the material realm constitutes the known reality, and understanding is premised on the accuracy of the perception and the objective knowability of what is called the "real world". Phenomenological understanding strives to be rigorously "presuppositionless" by means of what Husserl calls "phenomenological reduction". This reduction is not conditioned but rather transcendental: in Husserl's terms, pure consciousness of absolute Being. In Husserl's work, consciousness of any given thing calls for discerning its meaning as an "intentional object". Such an object does not simply strike the senses, to be interpreted or misinterpreted by mental reason; it has already been selected and grasped, grasping being an etymological connotation of percipere, the root of "perceive". Meaning and object From Logical Investigations (1900/1901) to Experience and Judgment (published in 1939), Husserl expressed clearly the difference between meaning and object. He identified several different kinds of names. For example, there are names that have the role of properties that uniquely identify an object. Each of these names expresses a meaning and designates the same object. Examples of this are "the victor in Jena" and "the loser in Waterloo", or "the equilateral triangle" and "the equiangular triangle"; in both cases the two names express different meanings but designate the same object. There are names which have no meaning, but have the role of designating an object: "Aristotle", "Socrates", and so on. Finally, there are names which designate a variety of objects. These are called "universal names"; their meaning is a "concept" and refers to a series of objects (the extension of the concept). The way people know sensible objects is called "sensible intuition". Husserl also identifies a series of "formal words" which are necessary to form sentences and have no sensible correlates. Examples of formal words are "a", "the", "more than", "over", "under", "two", "group", and so on. Every sentence must contain formal words to designate what Husserl calls "formal categories". There are two kinds of categories: meaning categories and formal-ontological categories. Meaning categories relate judgments; they include forms of conjunction, disjunction, forms of plural, among others. Formal-ontological categories relate objects and include notions such as set, cardinal number, ordinal number, part and whole, relation, and so on. The way people know these categories is through a faculty of understanding called "categorial intuition".
Through sensible intuition, consciousness constitutes what Husserl calls a "situation of affairs" (Sachlage). It is a passive constitution where objects themselves are presented. On the basis of this situation of affairs, through categorial intuition, people are able to constitute a "state of affairs" (Sachverhalt). Through objective acts of consciousness (acts of constituting categorially), one situation of affairs can serve as the basis for constituting multiple states of affairs. For example, suppose a and b are two sensible objects in a certain situation of affairs. That situation can be used as the basis to assert "a<b" and "b>a", two distinct judgments which designate two different states of affairs founded on the same situation of affairs (schematized below). For Husserl a sentence has a proposition or judgment as its meaning, and refers to a state of affairs which has a situation of affairs as a reference base. Formal and regional ontology Husserl sees ontology as a science of essences. Sciences of essences are contrasted with factual sciences: the former are knowable a priori and provide the foundation for the latter, which are knowable a posteriori. Ontology as a science of essences is not interested in actual facts, but in the essences themselves, whether they have instances or not. Husserl distinguishes between formal ontology, which investigates the essence of objectivity in general, and regional ontologies, which study regional essences that are shared by all entities belonging to the region. Regions correspond to the highest genera of concrete entities: material nature, personal consciousness and interpersonal spirit. Husserl's method for studying ontology and sciences of essence in general is called eidetic variation. It involves imagining an object of the kind under investigation and varying its features. The changed feature is inessential to this kind if the object can survive its change; otherwise it belongs to the kind's essence. For example, a triangle remains a triangle if one of its sides is extended, but it ceases to be a triangle if a fourth side is added. Regional ontology involves applying this method to the essences corresponding to the highest genera. Philosophy of logic and mathematics Husserl believed that truth-in-itself has as ontological correlate being-in-itself, just as meaning categories have formal-ontological categories as correlates. Logic is a formal theory of judgment that studies the formal a priori relations among judgments using meaning categories. Mathematics, on the other hand, is formal ontology; it studies all the possible forms of being (of objects). Hence for both logic and mathematics, the different formal categories are the objects of study, not the sensible objects themselves. The problem with the psychological approach to mathematics and logic is that it fails to account for the fact that these disciplines deal with formal categories, and not simply with abstractions from sensibility alone. The reason mathematics does not deal with sensible objects lies in another faculty of understanding called "categorial abstraction." Through this faculty people are able to set aside the sensible components of judgments and focus on the formal categories themselves. Thanks to "eidetic reduction" (or "essential intuition"), people are able to grasp the possibility, impossibility, necessity and contingency among concepts and among formal categories. Categorial intuition, along with categorial abstraction and eidetic reduction, forms the basis for logical and mathematical knowledge.
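Returning to the earlier example of a and b, the relation between the two notions can be schematized as follows (the notation is mine, not Husserl's): one passively given situation founds two categorially articulated states of affairs, each expressible as a distinct judgment.

\[ \underbrace{\text{the relative magnitudes of } a \text{ and } b}_{\text{one situation of affairs (Sachlage)}} \;\Longrightarrow\; \underbrace{a < b}_{\text{Sachverhalt 1}} \quad\text{and}\quad \underbrace{b > a}_{\text{Sachverhalt 2}} \]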
Husserl criticized the logicians of his day for not focusing on the relation between the subjective processes of knowing and the objective knowledge of pure logic they yield. All subjective activities of consciousness need an ideal correlate, and objective logic (constituted noematically), as it is constituted by consciousness, needs a noetic correlate (the subjective activities of consciousness). Husserl stated that logic has three strata, each further away from consciousness and psychology than those that precede it. The first stratum is what Husserl called a "morphology of meanings" concerning a priori ways to relate judgments to make them meaningful. In this stratum people elaborate a "pure grammar" or a logical syntax, and he would call its rules "laws to prevent non-sense", which would be similar to what logic today calls "formation rules". Mathematics, as logic's ontological correlate, also has a similar stratum, a "morphology of formal-ontological categories". The second stratum would be called by Husserl "logic of consequence" or the "logic of non-contradiction", which explores all possible forms of true judgments. He includes here classical syllogistic logic, propositional logic, and predicate logic. This is a semantic stratum, and the rules of this stratum would be the "laws to avoid counter-sense" or "laws to prevent contradiction". They are very similar to what logic today calls "transformation rules" (both rule types are glossed at the end of this passage). Mathematics also has a similar stratum, which is based, among other things, on a pure theory of pluralities and a pure theory of numbers. They provide a science of the conditions of possibility of any theory whatsoever. Husserl also talked about what he called the "logic of truth", which consists of the formal laws of possible truth and its modalities, and precedes the third logical stratum. The third stratum is metalogical, what he called a "theory of all possible forms of theories." It explores all possible theories in an a priori fashion, rather than the possibility of theory in general. Theories of possible relations between pure forms of theories could be established; these logical relations could in turn be investigated using deduction. The logician is free to see the extension of this deductive, theoretical sphere of pure logic. The ontological correlate to the third stratum is the "theory of manifolds". In formal ontology, it is a free investigation where a mathematician can assign several meanings to several symbols, and all their possible valid deductions in a general and indeterminate manner. It is, properly speaking, the most universal mathematics of all. Through the posit of certain indeterminate objects (formal-ontological categories) as well as any combination of mathematical axioms, mathematicians can explore the apodeictic connections between them, as long as consistency is preserved. According to Husserl, this view of logic and mathematics accounted for the objectivity of a series of mathematical developments of his time, such as n-dimensional manifolds (both Euclidean and non-Euclidean), Hermann Grassmann's theory of extensions, William Rowan Hamilton's quaternions, Sophus Lie's theory of transformation groups, and Cantor's set theory. Jacob Klein was one student of Husserl who pursued this line of inquiry, seeking to "desedimentize" mathematics and the mathematical sciences. Husserl and psychologism Philosophy of arithmetic and Frege After obtaining his PhD in mathematics, Husserl began analyzing the foundations of mathematics from a psychological point of view.
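As a rough modern gloss on the two lower strata described above (the illustration and notation are mine, not Husserl's): the first stratum's "laws to prevent non-sense" behave like formation rules, which define well-formedness, while the second stratum's "laws to avoid counter-sense" behave like transformation rules, which license inference.

\[ \text{formation: if } \varphi \text{ and } \psi \text{ are well-formed, then } (\varphi \land \psi) \text{ is well-formed} \]
\[ \text{transformation (modus ponens): } \frac{\varphi \qquad \varphi \rightarrow \psi}{\psi} \]

The third, metalogical stratum then takes whole systems of such rules as its objects, treating the pure forms of theories themselves a priori.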
In his habilitation thesis, On the Concept of Number (1887), and in his Philosophy of Arithmetic (1891), Husserl sought, by employing Brentano's descriptive psychology, to define the natural numbers in a way that advanced the methods and techniques of Karl Weierstrass, Richard Dedekind, Georg Cantor, Gottlob Frege, and other contemporary mathematicians. Later, in the first volume of his Logical Investigations, the Prolegomena of Pure Logic, Husserl, while attacking the psychologistic point of view in logic and mathematics, also appears to reject much of his early work, although the forms of psychologism analysed and refuted in the Prolegomena did not apply directly to his Philosophy of Arithmetic. Some scholars question whether Frege's negative review of the Philosophy of Arithmetic helped turn Husserl towards modern Platonism, but he had already discovered the work of Bernard Bolzano independently around 1890/91. In his Logical Investigations, Husserl explicitly mentioned Bolzano, G. W. Leibniz and Hermann Lotze as inspirations for his newer position. Husserl's review of Ernst Schröder, published before Frege's landmark 1892 article, clearly distinguishes sense from reference; thus Husserl's notions of noema and object also arose independently. Likewise, in his criticism of Frege in the Philosophy of Arithmetic, Husserl remarks on the distinction between the content and the extension of a concept. Moreover, the distinction between the subjective mental act, namely the content of a concept, and the (external) object, was developed independently by Brentano and his school, and may have surfaced as early as Brentano's 1870s lectures on logic. Scholars such as J. N. Mohanty, Claire Ortiz Hill, and Guillermo E. Rosado Haddock, among others, have argued that Husserl's so-called change from psychologism to Platonism came about independently of Frege's review. For example, the review falsely accuses Husserl of subjectivizing everything, so that no objectivity is possible, and falsely attributes to him a notion of abstraction whereby objects disappear until all that remains are numbers as mere ghosts. Contrary to what Frege states, in Husserl's Philosophy of Arithmetic there are already two different kinds of representations: subjective and objective. Moreover, objectivity is clearly defined in that work. Frege's attack seems to be directed at certain foundational doctrines then current in Weierstrass's Berlin School, of which Husserl and Cantor cannot be said to be orthodox representatives. Furthermore, various sources indicate that Husserl changed his mind about psychologism as early as 1890, a year before he published the Philosophy of Arithmetic. Husserl stated that by the time he published that book, he had already changed his mind—that he had doubts about psychologism from the very outset. He attributed this change of mind to his reading of Leibniz, Bolzano, Lotze, and David Hume. Husserl makes no mention of Frege as a decisive factor in this change. In his Logical Investigations, Husserl mentions Frege only twice, once in a footnote to point out that he had retracted three pages of his criticism of Frege's The Foundations of Arithmetic, and again to question Frege's use of the word Bedeutung to designate "reference" rather than "meaning" (sense). In a letter dated 24 May 1891, Frege thanked Husserl for sending him a copy of the Philosophy of Arithmetic and Husserl's review of Ernst Schröder's Vorlesungen über die Algebra der Logik.
In the same letter, Frege used the review of Schröder's book to analyze Husserl's notions of the sense and reference of concept words. Hence Frege recognized, as early as 1891, that Husserl distinguished between sense and reference. Consequently, Frege and Husserl independently elaborated a theory of sense and reference before 1891. Commentators argue that Husserl's notion of noema has nothing to do with Frege's notion of sense, because noemata are necessarily fused with noeses, which are the conscious activities of consciousness. Noemata have three different levels: the substratum, which is never presented to the consciousness and is the support of all the properties of the object; the noematic senses, which are the different ways the objects are presented to us; and the modalities of being (possible, doubtful, existent, non-existent, absurd, and so on). Consequently, in intentional activities, even non-existent objects can be constituted, and form part of the whole noema. Frege, however, did not conceive of objects as forming parts of senses: if a proper name denotes a non-existent object, it does not have a reference; hence concepts with no objects have no truth value in arguments. Moreover, Husserl did not maintain that predicates of sentences designate concepts. According to Frege the reference of a sentence is a truth value; for Husserl it is a "state of affairs." Frege's notion of "sense" is unrelated to Husserl's noema, while the latter's notions of "meaning" and "object" differ from those of Frege. In detail, Husserl's conception of logic and mathematics differs from that of Frege, who held that arithmetic could be derived from logic. For Husserl this is not the case: mathematics (with the exception of geometry) is the ontological correlate of logic, and while both fields are related, neither one is strictly reducible to the other. Husserl's criticism of psychologism Reacting against authors such as John Stuart Mill, Christoph von Sigwart and his own former teacher Brentano, Husserl criticised their psychologism in mathematics and logic, i.e. their conception of these abstract and a priori sciences as having an essentially empirical foundation and a prescriptive or descriptive nature. According to psychologism, logic would not be an autonomous discipline, but a branch of psychology, either proposing a prescriptive and practical "art" of correct judgement (as Brentano and some of his more orthodox students did) or a description of the factual processes of human thought. Husserl pointed out that the failure of anti-psychologists to defeat psychologism was a result of being unable to distinguish between the foundational, theoretical side of logic and the applied, practical side. Pure logic does not deal at all with "thoughts" or "judgings" as mental episodes but with a priori laws and conditions for any theory and any judgments whatsoever, conceived as propositions in themselves. "Here 'Judgement' has the same meaning as 'proposition', understood, not as a grammatical, but as an ideal unity of meaning. This is the case with all the distinctions of acts or forms of judgement, which provide the foundations for the laws of pure logic. Categorial, hypothetical, disjunctive, existential judgements, and however else we may call them, in pure logic are not names for classes of judgements, but for ideal forms of propositions."
Since "truth-in-itself" has "being-in-itself" as its ontological correlate, and since psychologists reduce truth (and hence logic) to empirical psychology, the inevitable consequence is scepticism. Psychologists have also not been successful in showing how induction or psychological processes can justify the absolute certainty of logical principles, such as the principles of identity and non-contradiction. It is therefore futile to base certain logical laws and principles on uncertain processes of the mind. This confusion made by psychologism (and related disciplines such as biologism and anthropologism) can be attributed to three specific prejudices: 1. The first prejudice is the supposition that logic is somehow normative in nature. Husserl argues that logic is theoretical, i.e., that logic itself proposes a priori laws which are themselves the basis of the normative side of logic. Since mathematics is related to logic, he cites an example from mathematics: a formula like "(a + b)(a – b) = a² – b²" does not offer any insight into how to think mathematically; it just expresses a truth (a one-line verification is given after this list). A proposition such as "The product of the sum and the difference of a and b should give the difference of the squares of a and b" does express a normative proposition, but this normative statement is based on the theoretical statement "(a + b)(a – b) = a² – b²". 2. For psychologists, the acts of judging, reasoning, deriving, and so on, are all psychological processes. Therefore, it is the role of psychology to provide the foundation of these processes. Husserl states that this effort made by psychologists is a "metábasis eis állo génos" (Gr. μετάβασις εἰς ἄλλο γένος, "a transgression to another field"). It is a metábasis because psychology cannot provide any foundations for a priori laws, which are themselves the basis for all correct thought. Psychologists have the problem of confusing intentional activities with the objects of these activities. It is important to distinguish between the act of judging and the judgment itself, the act of counting and the number itself, and so on. Counting five objects is undeniably a psychological process, but the number 5 is not. 3. Judgments can be true or not true. Psychologists argue that judgments are true because they become "evidently" true to us. This evidence, a psychological process that purportedly "guarantees" truth, is indeed a psychological process. Husserl responds by saying that truth itself, as well as logical laws, remain valid regardless of psychological "evidence" that they are true. No psychological process can explain the a priori objectivity of these logical truths. This criticism of psychologism, together with the distinction between psychological acts and their intentional objects and the difference between the normative side of logic and the theoretical side, derives from a Platonist conception of logic. This means that logical and mathematical laws should be regarded as being independent of the human mind, and also that meanings have an autonomy of their own. It is essentially the difference between the real (everything subject to time) and the ideal or irreal (everything that is atemporal), such as logical truths, mathematical entities, mathematical truths and meanings in general.
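As for the algebraic identity cited under the first prejudice, its purely theoretical character can be seen from a one-line expansion that appeals only to the distributive law, not to any norms of thinking:

\[ (a+b)(a-b) \;=\; a^2 - ab + ba - b^2 \;=\; a^2 - b^2 . \]

The normative reading ("one should compute the product this way") adds nothing to, and depends entirely on, this theoretical truth.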
Influence David Carr commented on Husserl's following in his 1970 dissertation at Yale: "It is well known that Husserl was always disappointed at the tendency of his students to go their own way, to embark upon fundamental revisions of phenomenology rather than engage in the communal task" as originally intended by the radical new science. Nonetheless, he did attract philosophers to phenomenology. Martin Heidegger is the best known of Husserl's students, the one whom Husserl chose as his successor at Freiburg. Heidegger's magnum opus Being and Time was dedicated to Husserl. They shared their thoughts and worked alongside each other for over a decade at the University of Freiburg, Heidegger being Husserl's assistant during 1920–1923. Heidegger's early work followed his teacher, but with time he began to develop insights distinctively his own. Husserl became increasingly critical of Heidegger's work, especially in 1929, and included pointed criticism of Heidegger in lectures he gave during 1931. Heidegger, while acknowledging his debt to Husserl, adopted a political position offensive and harmful to Husserl after the Nazis came to power in 1933, Husserl being of Jewish origin and Heidegger then infamously a Nazi proponent. Academic discussion of Husserl and Heidegger is extensive. At Göttingen in 1913 Adolf Reinach (1884–1917) "was now Husserl's right hand. He was above all the mediator between Husserl and the students, for he understood extremely well how to deal with other persons, whereas Husserl was pretty much helpless in this respect." He was an original editor of Husserl's new journal, the Jahrbuch; one of his works (giving a phenomenological analysis of the law of obligations) appeared in its first issue. Reinach was widely admired and a remarkable teacher. Husserl, in his 1917 obituary, wrote, "He wanted to draw only from the deepest sources, he wanted to produce only work of enduring value. And through his wise restraint he succeeded in this." Edith Stein was Husserl's student at Göttingen and Freiburg while she wrote her doctoral thesis The Empathy Problem as it Developed Historically and Considered Phenomenologically (1916). She then became his assistant at Freiburg in 1916–18. She later adapted her phenomenology to the school of modern Thomism. Ludwig Landgrebe became assistant to Husserl in 1923. From 1939 he collaborated with Eugen Fink at the Husserl-Archives in Leuven. In 1954 he became leader of the Husserl-Archives. Landgrebe is known as one of Husserl's closest associates, but also for his independent views relating to history, religion and politics as seen from the viewpoints of existentialist philosophy and metaphysics. Eugen Fink was a close associate of Husserl during the 1920s and 1930s. He wrote the Sixth Cartesian Meditation, which Husserl said was the truest expression and continuation of his own work. Fink delivered the eulogy for Husserl in 1938. Roman Ingarden, an early student of Husserl at Freiburg, corresponded with Husserl into the mid-1930s. Ingarden did not accept, however, the later transcendental idealism of Husserl, which he thought would lead to relativism. Ingarden wrote his works in German and Polish. In his Spór o istnienie świata (Ger.: "Der Streit um die Existenz der Welt", Eng.: "Dispute over the Existence of the World") he developed his own realistic position, which also helped to spread phenomenology in Poland.
Max Scheler met Husserl in Halle in 1901 and found in his phenomenology a methodological breakthrough for his own philosophy. Scheler, who was at Göttingen when Husserl taught there, was one of the original few editors of the journal Jahrbuch für Philosophie und Phänomenologische Forschung (1913). Scheler's work Formalism in Ethics and Nonformal Ethics of Value appeared in the new journal (1913 and 1916) and drew acclaim. The personal relationship between the two men, however, became strained due to Scheler's legal troubles, and Scheler returned to Munich. Although Scheler later criticised Husserl's idealistic logical approach and proposed instead a "phenomenology of love", he states that he remained "deeply indebted" to Husserl throughout his work. Nicolai Hartmann was once thought to be at the center of phenomenology, but perhaps no longer. In 1921 the prestige of Hartmann the Neo-Kantian, who was Professor of Philosophy at Marburg, was added to the Movement; he "publicly declared his solidarity with the actual work of die Phänomenologie." Yet Hartmann's connections were with Max Scheler and the Munich circle; Husserl himself evidently did not consider him a phenomenologist. His philosophy, however, is said to include an innovative use of the method. Emmanuel Levinas in 1929 gave a presentation at one of Husserl's last seminars in Freiburg. Also that year he wrote a long review of Husserl's Ideen (1913), published by a French journal. With Gabrielle Peiffer, Levinas translated into French Husserl's Méditations cartésiennes (1931). He was at first impressed with Heidegger and began a book on him, but broke off the project when Heidegger became involved with the Nazis. After the war he wrote on Jewish spirituality; most of his family had been murdered by the Nazis in Lithuania. Levinas then began to write works that would become widely known and admired. Alfred Schutz's Phenomenology of the Social World seeks to rigorously ground Max Weber's interpretive sociology in Husserl's phenomenology. Husserl was impressed by this work and asked Schutz to be his assistant. Jean-Paul Sartre was also largely influenced by Husserl, although he later came to disagree with key points in his analyses. Sartre rejected Husserl's transcendental interpretations begun in his Ideen (1913) and instead followed Heidegger's ontology. Maurice Merleau-Ponty's Phenomenology of Perception is influenced by Edmund Husserl's work on perception, intersubjectivity, intentionality, space, and temporality, including Husserl's theory of retention and protention. Merleau-Ponty's description of 'motor intentionality' and sexuality, for example, retains the important structure of the noetic/noematic correlation of Ideen I, yet further concretizes what it means for Husserl when consciousness particularizes itself into modes of intuition. Merleau-Ponty's most clearly Husserlian work is, perhaps, "The Philosopher and His Shadow." Depending on the interpretation of Husserl's accounts of eidetic intuition, given in Husserl's Phenomenological Psychology and Experience and Judgment, it may be that Merleau-Ponty accepted neither the "eidetic reduction" nor the "pure essence" said to result. Merleau-Ponty was the first student to study at the Husserl-archives in Leuven. Gabriel Marcel explicitly rejected existentialism, because of Sartre, but not phenomenology, which has enjoyed a wide following among French Catholics. He appreciated Husserl, Scheler, and, with apprehension, Heidegger.
Expressions of his such as "ontology of sensibility", when referring to the body, indicate the influence of phenomenological thought. Kurt Gödel is known to have read the Cartesian Meditations. He expressed very strong appreciation for Husserl's work, especially with regard to "bracketing" or "epoché". Hermann Weyl's interest in intuitionistic logic and impredicativity appears to have resulted from his reading of Husserl. He was introduced to Husserl's work through his wife, Helene Joseph, herself a student of Husserl at Göttingen. Colin Wilson used Husserl's ideas extensively in developing his "New Existentialism", particularly in regard to the "intentionality of consciousness", which he mentions in a number of his books. Rudolf Carnap was also influenced by Husserl, not only concerning Husserl's notion of essential insight, which Carnap used in his Der Raum, but also concerning the notions of "formation rules" and "transformation rules", which are founded on Husserl's philosophy of logic. Karol Wojtyla, who would later become Pope John Paul II, was influenced by Husserl. Phenomenology appears in his major work, The Acting Person (1969). Originally published in Polish, it was translated by Andrzej Potocki and edited by Anna-Teresa Tymieniecka in the Analecta Husserliana. The Acting Person combines phenomenological work with Thomistic ethics. Paul Ricœur translated many works of Husserl into French and also wrote many of his own studies of the philosopher. Among other works, Ricœur employed phenomenology in his Freud and Philosophy (1965). Jacques Derrida wrote several critical studies of Husserl early in his academic career. These included his dissertation, The Problem of Genesis in Husserl's Philosophy, and also his introduction to The Origin of Geometry. Derrida continued to make reference to Husserl in works such as Of Grammatology. Stanisław Leśniewski and Kazimierz Ajdukiewicz were inspired by Husserl's formal analysis of language. Accordingly, they employed phenomenology in the development of categorial grammar. José Ortega y Gasset visited Husserl at Freiburg in 1934. He credited phenomenology with having "liberated him" from narrow neo-Kantian thought. While perhaps not a phenomenologist himself, he introduced the philosophy to Iberia and Latin America. Wilfrid Sellars, an influential figure in the so-called "Pittsburgh School" (Robert Brandom, John McDowell), had been a student of Marvin Farber, a pupil of Husserl, and was influenced by phenomenology through him. In his 1942 essay The Myth of Sisyphus, absurdist philosopher Albert Camus acknowledges Husserl as a previous philosopher who described and attempted to deal with the feeling of the absurd, but claims he committed "philosophical suicide" by elevating reason and ultimately arriving at ubiquitous Platonic forms and an abstract god. Hans Blumenberg received his habilitation in 1950 with a dissertation on ontological distance, an inquiry into the crisis of Husserl's phenomenology. Roger Scruton, despite some disagreements with Husserl, drew upon his work in Sexual Desire (1986). The influence of the Husserlian phenomenological tradition in the 21st century extends beyond the confines of the European and North American legacies. It has already started to impact (indirectly) scholarship in Eastern and Oriental thought, including research on the impetus of philosophical thinking in the history of ideas in Islam. Bibliography In German 1887. Über den Begriff der Zahl. Psychologische Analysen (On the Concept of Number; habilitation thesis) 1891.
Philosophie der Arithmetik. Psychologische und logische Untersuchungen (Philosophy of Arithmetic) 1900. Logische Untersuchungen. Erster Teil: Prolegomena zur reinen Logik (Logical Investigations, Vol. 1: Prolegomena to Pure Logic) 1901. Logische Untersuchungen. Zweiter Teil: Untersuchungen zur Phänomenologie und Theorie der Erkenntnis (Logical Investigations, Vol. 2) 1911. Philosophie als strenge Wissenschaft (included in Phenomenology and the Crisis of Philosophy: Philosophy as Rigorous Science and Philosophy and the Crisis of European Man) 1913. Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie. Erstes Buch: Allgemeine Einführung in die reine Phänomenologie (Ideas: General Introduction to Pure Phenomenology) 1923–24. Erste Philosophie. Zweiter Teil: Theorie der phänomenologischen Reduktion (First Philosophy, Vol. 2: Phenomenological Reductions) 1925. Erste Philosophie. Erster Teil: Kritische Ideengeschichte (First Philosophy, Vol. 1: Critical History of Ideas) 1928. Vorlesungen zur Phänomenologie des inneren Zeitbewusstseins (Lectures on the Phenomenology of the Consciousness of Internal Time) 1929. Formale und transzendentale Logik. Versuch einer Kritik der logischen Vernunft (Formal and Transcendental Logic) 1930. Nachwort zu meinen "Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie" (Postscript to my "Ideas") 1936. Die Krisis der europäischen Wissenschaften und die transzendentale Phänomenologie: Eine Einleitung in die phänomenologische Philosophie (The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy) 1939. Erfahrung und Urteil. Untersuchungen zur Genealogie der Logik. (Experience and Judgment) 1950. Cartesianische Meditationen (translation of Méditations cartésiennes (Cartesian Meditations, 1931)) 1952. Ideen II: Phänomenologische Untersuchungen zur Konstitution (Ideas II: Studies in the Phenomenology of Constitution) 1952. Ideen III: Die Phänomenologie und die Fundamente der Wissenschaften (Ideas III: Phenomenology and the Foundations of the Sciences) 1973. Zur Phänomenologie der Intersubjektivität (On the Phenomenology of Intersubjectivity) In English Philosophy of Arithmetic, Willard, Dallas, trans., 2003 [1891]. Dordrecht: Kluwer. Logical Investigations, 1973 [1900, 2nd revised edition 1913], Findlay, J. N., trans. London: Routledge. "Philosophy as Rigorous Science", translated in Quentin Lauer, S.J., editor, 1965 [1910] Phenomenology and the Crisis of Philosophy. New York: Harper & Row. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy – First Book: General Introduction to a Pure Phenomenology, 1982 [1913]. Kersten, F., trans. The Hague: Nijhoff. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy – Second Book: Studies in the Phenomenology of Constitution, 1989. R. Rojcewicz and A. Schuwer, translators. Dordrecht: Kluwer. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy – Third Book: Phenomenology and the Foundations of the Sciences, 1980, Klein, T. E., and Pohl, W. E., translators. Dordrecht: Kluwer. On the Phenomenology of the Consciousness of Internal Time (1893–1917), 1990 [1928]. Brough, J.B., trans. Dordrecht: Kluwer. Cartesian Meditations, 1960 [1931]. Cairns, D., trans. Dordrecht: Kluwer. Formal and Transcendental Logic, 1969 [1929], Cairns, D., trans. The Hague: Nijhoff. Experience and Judgement, 1973 [1939], Churchill, J. S., and Ameriks, K., translators. London: Routledge. 
The Crisis of European Sciences and Transcendental Phenomenology, 1970 [1936/54], Carr, D., trans. Evanston: Northwestern University Press. "Universal Teleology". Telos 4 (Fall 1969). New York: Telos Press. Anthologies Willard, Dallas, trans., 1994. Early Writings in the Philosophy of Logic and Mathematics. Dordrecht: Kluwer. Welton, Donn, ed., 1999. The Essential Husserl. Bloomington: Indiana University Press. See also Early phenomenology Experimental phenomenology List of phenomenologists Notes Citations Further reading Adorno, Theodor W., 2013. Against Epistemology. Cambridge: Polity Press. Bernet, Rudolf, et al., 1993. Introduction to Husserlian Phenomenology. Evanston: Northwestern University Press. Derrida, Jacques, 1954 (French), 2003 (English). The Problem of Genesis in Husserl's Philosophy. Chicago & London: University of Chicago Press. --------, 1962 (French), 1976 (English). Introduction to Husserl's The Origin of Geometry. Includes Derrida's translation of Appendix III of Husserl's 1936 The Crisis of European Sciences and Transcendental Phenomenology. --------, 1967 (French), 1973 (English). Speech and Phenomena (La Voix et le Phénomène), and other Essays on Husserl's Theory of Signs. Fink, Eugen, 1995. Sixth Cartesian Meditation. The Idea of a Transcendental Theory of Method with textual notations by Edmund Husserl. Translated with an introduction by Ronald Bruzina. Bloomington: Indiana University Press. Hill, C. O., 1991. Word and Object in Husserl, Frege, and Russell: The Roots of Twentieth-Century Philosophy. Ohio Univ. Press. Hopkins, Burt C., 2011. The Philosophy of Husserl. Durham: Acumen. Levinas, Emmanuel, 1963 (French), 1973 (English). The Theory of Intuition in Husserl's Phenomenology. Evanston: Northwestern University Press. Köchler, Hans, 1982. Edmund Husserl's Theory of Meaning. The Hague: Martinus Nijhoff. Mohanty, J. N., 1982. Husserl and Frege. Bloomington: Indiana University Press. Moran, D., and Cohen, J., 2012. The Husserl Dictionary. London: Continuum Press. Natanson, Maurice, 1973. Edmund Husserl: Philosopher of Infinite Tasks. Evanston: Northwestern University Press. Ricœur, Paul, 1967. Husserl: An Analysis of His Phenomenology. Evanston: Northwestern University Press. Rollinger, R. D., 2008. Austrian Phenomenology: Brentano, Husserl, Meinong, and Others on Mind and Language. Frankfurt am Main: Ontos-Verlag. Sokolowski, Robert, 1999. Introduction to Phenomenology. New York: Cambridge University Press. Smith, David Woodruff, 2007. Husserl. London: Routledge. Zahavi, Dan, 2003. Husserl's Phenomenology. Stanford: Stanford University Press. External links Husserl archives Husserl-Archives Leuven, the main Husserl-Archive in Leuven, International Centre for Phenomenological Research. Husserliana: Edmund Husserl Gesammelte Werke, the ongoing critical edition of Husserl's works. Husserliana: Materialien, edition for lectures and shorter works. Edmund Husserl Collected Works, English translation of Husserl's works. Husserl-Archives at the University of Cologne. Husserl-Archives Freiburg. Archives Husserl de Paris, at the École normale supérieure, Paris. Other links Papers on Edmund Husserl by Barry Smith English translation of "Vienna Lecture" (1935): "Philosophy and the Crisis of European Humanity" The Husserl Page by Bob Sandmeyer. Includes a number of online texts in German and English. Husserl.net, open content project. "Edmund Husserl: Formal Ontology and Transcendental Logic." Resource guide on Husserl's logic and formal ontology, with annotated bibliography. 
The Husserl Circle. Cartesian Meditations in Internet Archive Ideas, Part I in Internet Archive Edmund Husserl on the Open Commons of Phenomenology. Complete bibliography and links to all German texts, including Husserliana vols. I–XXVIII
Edmund Husserl
[ "Mathematics" ]
13,460
[ "Philosophers of mathematics" ]
9,531
https://en.wikipedia.org/wiki/Electrical%20engineering
Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems that use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use. Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science. Electrical engineers typically hold a degree in electrical engineering, electronic engineering, or electrical and electronic engineering. Practicing engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the National Society of Professional Engineers (NSPE), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE). Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software. History Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist, and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term "electricity". He also designed the versorium: a device that detects the presence of statically charged objects. In 1762, Swedish professor Johan Wilcke invented a device later named the electrophorus that produced a static electric charge. By 1800, Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery. 19th century In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in A Treatise on Electricity and Magnetism. In 1782, Georges-Louis Le Sage developed and presented in Berlin what was probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. 
It was an electrostatic telegraph that moved gold leaf through electrical conduction. In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804, he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative, though it was greatly influenced by and based upon two discoveries made in Europe in 1800—Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlisle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers), where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy. Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries, the definitions were immediately recognized in relevant legislation. During these years, the study of electricity was largely considered to be a subfield of physics since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first-degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts. In about 1885, Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at the University of Missouri established the electrical engineering department in 1886. Afterwards, universities and institutes of technology gradually started to offer electrical engineering programs to their students all over the world. During these decades, the use of electrical engineering increased dramatically. 
In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts—direct current (DC)—to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine, allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse-backed AC system and a Thomas Edison-backed DC power system, with AC being adopted as the overall standard. Early 20th century During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland. Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901. In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode. In 1920, Albert Hull developed the magnetron, which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936. In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. 
In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives. In 1948, Claude Shannon published "A Mathematical Theory of Communication" which mathematically describes the passage of information with uncertainty (electrical noise). Solid-state electronics The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices. The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959. The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking. The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC). The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution. Subfields One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today, electrical engineering has many subdisciplines, the most common of which are listed below. 
Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes, certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right. Power and energy Power and energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high-voltage systems, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. Telecommunications Telecommunications engineering focuses on the transmission of information across a communication channel such as a coaxial cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer. Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption, as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static. Control engineering Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation. Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. 
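To make the feedback idea concrete, the following is a minimal sketch in Python of a proportional cruise-control loop. The first-order vehicle model (drive force minus linear drag) and the gain and drag values are illustrative assumptions, not a real controller design.

# Proportional cruise control: a toy feedback loop (assumed model and gains).
def simulate_cruise(target_speed=25.0, steps=200, dt=0.1):
    speed = 0.0        # vehicle speed in m/s
    gain = 0.8         # proportional gain (assumed)
    drag = 0.05        # linear drag coefficient (assumed)
    for _ in range(steps):
        error = target_speed - speed       # feedback: set point minus measurement
        throttle = max(0.0, gain * error)  # controller output
        accel = throttle - drag * speed    # toy vehicle dynamics
        speed += accel * dt                # integrate one time step
    return speed

print(round(simulate_cruise(), 2))  # settles near, but slightly below, 25 m/s

The small steady-state error left by a purely proportional controller is one reason practical designs add integral action, as in the PID controllers that control theory analyzes.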
Control engineers also work in robotics to design autonomous systems using control algorithms that interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries. Electronics Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example is a pneumatic signal conditioner. Prior to the Second World War, the subject was commonly known as radio engineering and was largely restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering. Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today. Microelectronics and nanoelectronics Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors, etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002. Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and materials science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics. Signal processing Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals. 
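As a minimal illustration of the digital side, the sketch below applies a moving-average filter, one of the simplest FIR low-pass filters, to a noisy sampled sine wave; the signal, sample rate, and window length are arbitrary illustrative choices.

import math
import random

def moving_average(samples, window=5):
    # Each output sample is the mean of the most recent `window` inputs
    # (shorter at the start, where fewer samples are available).
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

# A 1 Hz sine sampled at 100 Hz with additive Gaussian noise.
noisy = [math.sin(2 * math.pi * n / 100) + random.gauss(0.0, 0.2)
         for n in range(300)]
smoothed = moving_average(noisy, window=9)

Averaging attenuates the high-frequency noise while largely preserving the slowly varying sine, which is the low-pass behavior that production DSP systems implement with carefully designed filter coefficients rather than a flat window.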
Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing, and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering as many existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems. DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, as well as image processing, video processing, audio processing, and speech processing. Instrumentation Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier–Seebeck effect to measure the temperature difference between two points. Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control. Computers Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering. Photonics and optics Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers and novel materials (e.g. metamaterials). 
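Returning to the thermocouple example above, a first-order conversion from measured Seebeck voltage to temperature difference can be sketched as below; the sensitivity of roughly 41 microvolts per degree Celsius is the commonly quoted figure for a type K thermocouple near room temperature, and real instruments use standardized polynomial tables and cold-junction compensation rather than a single constant.

SEEBECK_UV_PER_C = 41.0  # approximate type K sensitivity near room temperature

def thermocouple_delta_t(measured_microvolts):
    # Linearized conversion: delta-T is roughly V / S over small ranges.
    return measured_microvolts / SEEBECK_UV_PER_C

print(thermocouple_delta_t(1025.0))  # about 25 degrees C above the reference junction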
Related disciplines Mechatronics is an engineering discipline that deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles. Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems. The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future, it is hoped such devices will help build tiny implantable medical devices and improve optical communication. In aerospace engineering and robotics, recent examples include electric propulsion and ion propulsion. Education Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years, and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study. At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered. Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree. Professional practice In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body. 
After completing a certified degree program, the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified, the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union). The advantages of licensure vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law. Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. An MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer. In Australia, Canada, and the United States, electrical engineers make up around 0.25% of the labor force. Tools and work From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunications systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery. Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others. 
Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunications systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering. A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high-frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low-voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology has its own test sets, often specific to a particular data format, and the same is true of television broadcasting. For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important. The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, on board a naval ship, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers. Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. 
For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets. See also Barnacle (slang) Comparison of EDA software Electrical Technologist Electronic design automation Glossary of electrical and electronics engineering Index of electrical engineering articles Information engineering International Electrotechnical Commission (IEC) List of electrical engineers List of engineering branches List of mechanical, electrical and electronic equipment manufacturing companies by revenue List of Russian electrical engineers Occupations in electrical/electronics engineering Outline of electrical engineering Timeline of electrical and electronic engineering Notes References Bibliography Martini, L., "BSCCO-2233 multilayered conductors", in Superconducting Materials for High Energy Colliders, pp. 173–181, World Scientific, 2001. Schmidt, Rüdiger, "The LHC accelerator and its challenges", in Kramer, M.; Soler, F.J.P. (eds), Large Hadron Collider Phenomenology, pp. 217–250, CRC Press, 2004. Further reading External links International Electrotechnical Commission (IEC) MIT OpenCourseWare in-depth look at Electrical Engineering – online courses with video lectures. IEEE Global History Network A wiki-based site with many resources about the history of IEEE, its members, their professions and electrical and informational technologies and sciences.
Electrical engineering
[ "Technology", "Engineering" ]
6,637
[ "Computer engineering", "Electronic engineering", "Electrical and computer engineering", "nan", "Electrical engineering" ]
9,532
https://en.wikipedia.org/wiki/Electromagnetism
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles. The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators. Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies. In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. 
The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light. History Ancient world Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures). 19th century Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments: Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel. Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement. In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. 
Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength (oersted) is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor is it known whether current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community. An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning being "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars." A fundamental force The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range. All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction. Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena. 
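As a back-of-the-envelope illustration of why electromagnetic forces dominate atomic interactions, Coulomb's law gives the attraction between the proton and the electron in a hydrogen atom, taking the Bohr radius as the separation and using standard constants:

F = \frac{1}{4\pi\varepsilon_0}\frac{|q_1 q_2|}{r^2}
  = \frac{\left(8.99\times10^{9}\,\mathrm{N\,m^2/C^2}\right)\left(1.60\times10^{-19}\,\mathrm{C}\right)^2}{\left(5.29\times10^{-11}\,\mathrm{m}\right)^2}
  \approx 8.2\times10^{-8}\,\mathrm{N}

Tiny as it looks, this is roughly 10^39 times stronger than the gravitational attraction between the same two particles, which is why gravity plays no role in chemistry.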
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects. The effective forces generated by the momentum of electrons' movement are a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves. Classical electrodynamics In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments were conducted on 10 May 1752 by Thomas-François Dalibard of France, using an iron rod instead of a kite; he successfully extracted electrical sparks from a cloud. One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. 
(For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.) Today, few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields. Extension to nonlinear phenomena The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics. Quantities and units Here is a list of common units related to electromagnetism: ampere (electric current, SI unit) coulomb (electric charge) farad (capacitance) henry (inductance) ohm (resistance) siemens (conductance) tesla (magnetic flux density) volt (electric potential) watt (power) weber (magnetic flux) In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units. Applications The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices. See also Abraham–Lorentz force Aeromagnetic surveys Computational electromagnetics Double-slit experiment Electrodynamic droplet deformation Electromagnet Electromagnetic induction Electromagnetic wave equation Electromagnetic scattering Electromechanics Geophysics Introduction to electromagnetism Magnetostatics Magnetoquasistatic field Optics Relativistic electromagnetism Wheeler–Feynman absorber theory References Further reading Web sources Textbooks General coverage External links Magnetic Field Strength Converter Electromagnetic Force – from Eric Weisstein's World of Physics
Electromagnetism
[ "Physics", "Mathematics" ]
3,282
[ "Physical phenomena", "Force", "Electromagnetism", "Physical quantities", "Fundamental interactions", "Particle physics", "Electrodynamics", "Dynamical systems" ]
9,540
https://en.wikipedia.org/wiki/Electricity%20generation
Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage, using, for example, the pumped-storage method. Consumable electricity is not freely available in nature, so it must be "produced", transforming other forms of energy to electricity. Production is carried out in power stations, also called "power plants". Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission, but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields generated by fast-moving charged particles produced by the fusion reaction (see magnetohydrodynamics). Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power are forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power. History The fundamental principles of electricity generation were discovered in the 1820s and early 1830s by British scientist Michael Faraday. His method, still used today, is for electricity to be generated by the movement of a loop of wire, or Faraday disc, between the poles of a magnet. Central power stations became economically practical with the development of alternating current (AC) power transmission, using power transformers to transmit power at high voltage and with low loss. Commercial electricity production started with the coupling of the dynamo to the hydraulic turbine. The mechanical production of electric power began the Second Industrial Revolution and made possible several inventions using electricity, with the major contributors being Thomas Alva Edison and Nikola Tesla. Previously the only way to produce electricity was by chemical reactions or using battery cells, and the only practical use of electricity was for the telegraph. Electricity generation at central power stations started in 1882, when a steam engine driving a dynamo at Pearl Street Station produced a DC current that powered public lighting on Pearl Street, New York. The new technology was quickly adopted by many cities around the world, which adapted their gas-fueled street lights to electric power. Soon after, electric lights were used in public buildings, in businesses, and to power public transport, such as trams and trains. The first power plants used water power or coal. Today a variety of energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources. In the 1880s the popularity of electricity grew massively with the introduction of the incandescent light bulb.
Although there are 22 recognised inventors of the light bulb prior to Joseph Swan and Thomas Edison, Edison and Swan's invention became by far the most successful and popular of all. During the early years of the 19th century, massive jumps in electrical sciences were made, and by the later 19th century the advancement of electrical technology and engineering had led to electricity being part of everyday life. With the introduction of many electrical inventions and their implementation into everyday life, the demand for electricity within homes grew dramatically. With this increase in demand, the potential for profit was seen by many entrepreneurs who began investing in electrical systems to eventually create the first electricity public utilities. This process in history is often described as electrification. The earliest distribution of electricity came from companies operating independently of one another. A consumer would purchase electricity from a producer, and the producer would distribute it through their own power grid. As technology improved, so did the productivity and efficiency of generation. Inventions such as the steam turbine had a massive impact on the efficiency of electrical generation but also on the economics of generation as well. This conversion of heat energy into mechanical work was similar to that of steam engines, but at a significantly larger scale and far more productively. The improvements of these large-scale generation plants were critical to the process of centralised generation as they would become vital to the entire power system that we now use today. Throughout the middle of the 20th century many utilities began merging their distribution networks due to economic and efficiency benefits. Along with the invention of long-distance power transmission, coordinated networks of power plants began to form. This system was then secured by regional system operators to ensure stability and reliability. The electrification of homes began in Northern Europe and North America in the 1920s in large cities and urban areas. It was not until the 1930s that rural areas saw the large-scale establishment of electrification. Methods of generation Several fundamental methods exist to convert other forms of energy into electrical energy. Utility-scale generation is achieved by rotating electric generators or by photovoltaic systems. A small proportion of electric power distributed by utilities is provided by batteries. Other forms of electricity generation used in niche applications include the triboelectric effect, the piezoelectric effect, the thermoelectric effect, and betavoltaics. Generators Electric generators transform kinetic energy into electricity. This is the most widely used form of electricity generation and is based on Faraday's law. It can be seen experimentally by rotating a magnet within closed loops of conducting material, e.g. copper wire. Almost all commercial electrical generation uses electromagnetic induction, in which mechanical energy forces a generator to rotate. Electrochemistry Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. rechargeable batteries) are used for storage systems rather than primary generation systems.
Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge. Photovoltaic effect The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost, and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems. Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by around 20% per year, led by increases in Germany, Japan, United States, China, and India. Economics The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in widely varying residential selling prices. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand. All power grids have varying loads on them. The daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal. Nuclear power plants can produce a huge amount of power from a single unit. However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high. Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed for moving turbines and the generation of power. It may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle. Generating equipment Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover, such as an engine or the turbines described above, drives a rotating magnetic field past stationary coils of wire, thereby turning mechanical energy into electricity. The only commercial scale forms of electricity production that do not employ a generator are photovoltaic solar and fuel cells. Turbines Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction.
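The electromagnetic induction just mentioned is governed by Faraday's law, which relates the voltage induced in a loop to the rate of change of the magnetic flux through it. This is the standard statement of the law, included here for reference:

```latex
% Faraday's law of induction: the EMF around a closed loop equals the
% negative rate of change of the magnetic flux through the loop.
\mathcal{E} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
```

Rotating a coil between the poles of a magnet changes the flux continuously, which is why a spinning generator produces a voltage.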
There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power. Most electric generation is driven by heat engines. The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine, invented by Sir Charles Parsons in 1884, currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include: Steam: water is boiled by coal burned in a thermal power plant; about 41% of all electricity is generated this way. Nuclear: fission heat created in a nuclear reactor creates steam; less than 15% of electricity is generated this way. Renewable energy: the steam is generated by biomass, solar thermal energy, or geothermal power. Natural gas: turbines are driven directly by gases produced by combustion. Combined cycle: driven by both steam and natural gas; they generate power by burning natural gas in a gas turbine and use residual heat to generate steam; at least 20% of the world's electricity is generated by natural gas. Water: energy is captured by a water turbine from the movement of water: from falling water, the rise and fall of tides or ocean thermal currents (see ocean thermal energy conversion); currently, hydroelectric plants provide approximately 16% of the world's electricity. Wind: the windmill was a very early wind turbine; in 2018 around 5% of the world's electricity was produced from wind. Turbines can also use heat-transfer liquids other than steam. Supercritical carbon dioxide based cycles can provide higher conversion efficiency due to faster heat exchange, higher energy density and simpler power cycle infrastructure. Supercritical carbon dioxide blends, which are currently in development, can further increase efficiency by optimizing the cycle's critical pressure and temperature points. Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may be used for backup generation or as a prime source of power within isolated villages. World production Total world generation in 2021 was 28,003 TWh, including coal (36%), gas (23%), hydro (15%), nuclear (10%), wind (6.6%), solar (3.7%), oil and other fossil fuels (3.1%), biomass (2.4%) and geothermal and other renewables (0.33%). Production by country China produced a third of the world's electricity in 2021, largely from coal. The United States produces half as much as China but uses far more natural gas and nuclear. Environmental concerns Variations between countries generating electrical power affect concerns about the environment. In France only 10% of electricity is generated from fossil fuels, the US is higher at 70% and China is at 80%. The cleanliness of electricity depends on its source. Methane leaks (from natural gas to fuel gas-fired power plants) and carbon dioxide emissions from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain. Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and particulate matter in the US.
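Returning to the world-production figures above, the percentage shares of the 28,003 TWh total for 2021 can be converted back to absolute amounts with a few lines of Python; all numbers are those quoted in this article:

```python
# 2021 world electricity generation: convert the quoted percentage
# shares of the 28,003 TWh total into absolute TWh per source.
total_twh = 28_003
shares = {
    "coal": 36, "gas": 23, "hydro": 15, "nuclear": 10, "wind": 6.6,
    "solar": 3.7, "oil and other fossil": 3.1, "biomass": 2.4,
    "geothermal and other renewables": 0.33,
}

for source, pct in shares.items():
    print(f"{source:32s} {total_twh * pct / 100:8.0f} TWh")
# Coal alone comes to roughly 10,000 TWh; the shares sum to ~100%.
```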
According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also create district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output. A fundamental issue regarding centralised generation and the current electrical generation methods in use today is the significant negative environmental effects that many of the generation processes have. Processes such as coal and gas not only release carbon dioxide as they combust, but their extraction from the ground also impacts the environment. Open pit coal mines use large areas of land to extract coal and limit the potential for productive land use after the excavation. Natural gas extraction releases large amounts of methane into the atmosphere, greatly increasing global greenhouse gas emissions. Although nuclear power plants do not release carbon dioxide through electricity generation, there are risks associated with nuclear waste and safety concerns associated with the use of nuclear sources. Per unit of electricity generated, the life-cycle greenhouse gas emissions of coal- and gas-fired power are almost always at least ten times those of other generation methods. Centralised and distributed generation Centralised generation is electricity generation by large-scale centralised facilities, sent through transmission lines to consumers. These facilities are usually located far away from consumers and distribute the electricity through high voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept being that multi-megawatt or gigawatt scale large stations create electricity for a large number of people. The vast majority of electricity used is created from centralised generation. Most centralised power generation comes from large power plants run by fossil fuels such as coal or natural gas, though nuclear or large hydroelectricity plants are also commonly used. Centralised generation is fundamentally the opposite of distributed generation. Distributed generation is the small-scale generation of electricity to smaller groups of consumers. This can also include independently producing electricity by either solar or wind power. In recent years distributed generation has seen a surge in popularity due to its propensity to use renewable energy generation methods such as rooftop solar. Technologies Centralised energy sources are large power plants that produce huge amounts of electricity to a large number of consumers. Most power plants used in centralised generation are thermal power plants, meaning that they use a fuel to heat steam to produce a pressurised gas which in turn spins a turbine and generates electricity. This is the traditional way of producing energy. This process relies on several forms of technology to produce widespread electricity, these being natural coal, gas and nuclear forms of thermal generation.
More recently, solar and wind have become large scale. Solar Wind Coal Natural gas Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine where natural gas is added along with oxygen which in turn combusts and expands through the turbine to force a generator to spin. Natural gas power plants are more efficient than coal power generation; however, they contribute to climate change, though not as heavily as coal generation. Not only do they produce carbon dioxide from the combustion of natural gas, but the extraction of gas also releases a significant amount of methane into the atmosphere. Nuclear Nuclear power plants create electricity through steam turbines where the heat input is from the process of nuclear fission. Currently, nuclear power produces 11% of all electricity in the world. Most nuclear reactors use uranium as a source of fuel. In a process called nuclear fission, energy, in the form of heat, is released when atomic nuclei are split. Electricity is created through the use of a nuclear reactor where heat produced by nuclear fission is used to produce steam which in turn spins turbines and powers the generators. Although there are several types of nuclear reactors, all fundamentally use this process. Normal emissions due to nuclear power plants are primarily waste heat and radioactive spent fuel. In a reactor accident, significant amounts of radioisotopes can be released to the environment, posing a long-term hazard to life. This hazard has been a continuing concern of environmentalists. Accidents such as the Three Mile Island accident, Chernobyl disaster and the Fukushima nuclear disaster illustrate this problem. Electricity generation capacity by country The table lists 45 countries with their total electricity capacities. The data is from 2022. According to the Energy Information Administration, the total global electricity capacity in 2022 was nearly 8.9 terawatts (TW), more than four times the total global electricity capacity in 1981. The global average per-capita electricity capacity was about 1,120 watts in 2022, nearly two and a half times the global average per-capita electricity capacity in 1981. Iceland has the highest installed capacity per capita in the world, at about 8,990 watts. All developed countries have an average per-capita electricity capacity above the global average per-capita electricity capacity, with the United Kingdom having the lowest average per-capita electricity capacity of all developed countries. See also Glossary of power generation Cogeneration: the use of a heat engine or power station to generate electricity and useful heat at the same time. Cost of electricity by source Diesel generator Engine-generator Generation expansion planning Steam–electric power station World energy supply and consumption Notes References Power engineering Fossil fuel power stations Infrastructure
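As a consistency check on the capacity figures quoted above, dividing the total global capacity by the per-capita figure should roughly recover the world population. A quick Python sketch, using only numbers stated in this article:

```python
# Global electricity capacity in 2022: ~8.9 TW total, ~1,120 W per capita.
total_capacity_w = 8.9e12
per_capita_w = 1_120

implied_population = total_capacity_w / per_capita_w
print(f"{implied_population:.2e}")   # ~7.9e9, close to the 2022 world population
```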
Electricity generation
[ "Engineering" ]
3,677
[ "Energy engineering", "Construction", "Power engineering", "Electrical engineering", "Infrastructure" ]
9,541
https://en.wikipedia.org/wiki/Design%20of%20experiments
The design of experiments, also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation. In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment. Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity. Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience. History Statistical experiments, following Charles S. Peirce A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics. Randomized experiments Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s. Optimal designs for regression models Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
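Random assignment of the kind Peirce used, which later sections discuss in more detail, is easy to sketch in code. The following is a minimal Python illustration; the function and subject names are hypothetical:

```python
import random

def randomly_assign(units, n_groups=2, seed=42):
    # Shuffle the experimental units, then deal them round-robin into
    # n_groups groups, so each unit is equally likely to land anywhere.
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

subjects = [f"subject_{i:02d}" for i in range(12)]
treatment, control = randomly_assign(subjects)
print(treatment)
print(control)
```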
Sequences of experiments The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952. Fisher's principles A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research. Comparison In some fields of study it is not possible to have independent measurements to a traceable metrology standard. Comparisons between treatments are much more valuable and are usually preferable; treatments are often compared against a scientific control or traditional treatment that acts as a baseline. Randomization Random assignment is the process of assigning individuals at random to groups or to different groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, in which effects due to factors other than the treatment appear to result from the treatment. The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things. Statistical replication Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic.
However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible. Blocking Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study. Orthogonality Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information to the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts. Multifactorial experiments Multifactorial experiments are used instead of the one-factor-at-a-time method; they are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test. Example This example of design experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs. Weights of eight objects are measured using a pan balance and a set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ1, θ2, ..., θ8. We consider two different experiments: Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the object, for i = 1, ..., 8. Do the eight weighings according to the following schedule—a weighing matrix: first weighing, left pan 1 2 3 4 5 6 7 8, right pan empty; second, left pan 1 2 3 8, right pan 4 5 6 7; third, left pan 1 4 5 8, right pan 2 3 6 7; fourth, left pan 1 6 7 8, right pan 2 3 4 5; fifth, left pan 2 4 6 8, right pan 1 3 5 7; sixth, left pan 2 5 7 8, right pan 1 3 4 6; seventh, left pan 3 4 7 8, right pan 1 2 5 6; eighth, left pan 3 5 6 8, right pan 1 2 4 7. Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is θ̂1 = (Y1 + Y2 + Y3 + Y4 − Y5 − Y6 − Y7 − Y8)/8. Similar estimates can be found for the weights of the other items: each θ̂j sums the eight measured differences with a plus or minus sign according to whether object j was in the left or right pan on that weighing, divided by 8. The question of design of experiments is: which experiment is better? The variance of the estimate X1 of θ1 is σ2 if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ2/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other.
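The variance claims in this example are easy to verify by simulation. The sketch below assumes NumPy and SciPy are available, uses the order-8 Hadamard matrix as the weighing matrix, and takes arbitrary true weights and σ; it confirms that the combined design estimates each weight with variance σ2/8:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
theta = np.arange(1.0, 9.0)   # true weights of the eight objects (arbitrary)
sigma = 0.1                   # standard deviation of each weighing error
trials = 100_000

H = hadamard(8)               # +1: object in the left pan, -1: in the right pan

# Experiment 1: weigh each object on its own; each estimate has variance sigma^2.
X = theta + rng.normal(0.0, sigma, size=(trials, 8))

# Experiment 2: eight combined weighings Y = H @ theta + error,
# then recover the weights via theta_hat = H^T Y / 8.
Y = theta @ H.T + rng.normal(0.0, sigma, size=(trials, 8))
theta_hat = Y @ H / 8

print(X[:, 0].var())          # ~0.01000 = sigma^2
print(theta_hat[:, 0].var())  # ~0.00125 = sigma^2 / 8
```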
Many problems of the design of experiments involve combinatorial designs, as in this example and others. Avoiding false positives False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention. Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance. P-hacking can be prevented by preregistering studies, in which researchers have to send their data-analysis plan to the journal they wish to publish their paper in before they even start their data collection, so no data manipulation is possible. Another way to prevent this is taking the double-blind design to the data-analysis phase, making the study triple-blind, where the data are sent to a data analyst unrelated to the research who scrambles the data so there is no way to know which group participants belong to before outliers are potentially removed. Clear and complete documentation of the experimental methodology is also important in order to support replication of results. Discussion topics when setting up an experimental design An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section: How many factors does the design have, and are the levels of these factors fixed or random? Are control conditions needed, and what should they be? Manipulation checks: did the manipulation really work? What are the background variables? What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power? What is the relevance of interactions between factors? What is the influence of delayed effects of substantive factors on outcomes? How do response shifts affect self-report measures? How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests? What about using a proxy pretest? Are there confounding variables? Should the client/patient, researcher or even the analyst of the data be blind to conditions? What is the feasibility of subsequent application of different conditions to the same units? How many of each control and noise factors should be taken into account? The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group but without the interventional element.
Thus, when everything else except for one intervention is held constant, researchers can assert with some confidence that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used. Causal attributions In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal attributions when their design does not allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design. Statistical control It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned. One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time. Experimental designs after Fisher Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K.
Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations. In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards. Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics. As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space. Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn. The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, a field also known as system identification. Human participant constraints Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments. In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...".
(p 393) See also Adversarial collaboration Bayesian experimental design Block design Box–Behnken design Central composite design Clinical trial Clinical study design Computer experiment Control variable Controlling for a variable Experimetrics (econometrics-related experiments) Factor analysis Fractional factorial design Glossary of experimental design Grey box model Industrial engineering Instrument effect Law of large numbers Manipulation checks Multifactor design of experiments software One-factor-at-a-time method Optimal design Plackett–Burman design Probabilistic design Protocol (natural sciences) Quasi-experimental design Randomized block design Randomized controlled trial Research design Robust parameter design Sample size determination Supersaturated design Royal Commission on Animal Magnetism Survey sampling System identification Taguchi methods References Sources Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), Popular Science Monthly, vols. 12–13. Relevant individual papers: (1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604–615. Internet Archive Eprint. (1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705–718. Internet Archive Eprint. (1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203–217. Internet Archive Eprint. (1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470–482. Internet Archive Eprint. (1883), "A Theory of Probable Inference", Studies in Logic, pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company) External links A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST Box–Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST Experiments Industrial engineering Metascience Quantitative research Statistical process control Statistical theory Systems engineering Mathematics in medicine
Design of experiments
[ "Mathematics", "Engineering" ]
4,271
[ "Systems engineering", "Statistical process control", "Applied mathematics", "Industrial engineering", "Engineering statistics", "Mathematics in medicine" ]
9,546
https://en.wikipedia.org/wiki/Engineering%20statistics
Engineering statistics combines engineering and statistics using scientific methods for analyzing data. Engineering statistics involves data concerning manufacturing processes such as: component dimensions, tolerances, type of material, and fabrication process control. There are many methods used in engineering analysis, and they are often displayed as histograms to give a visual representation of the data as opposed to purely numerical summaries. Examples of methods are: Design of Experiments (DOE) is a methodology for formulating scientific and engineering problems using statistical models. The protocol specifies a randomization procedure for the experiment and specifies the primary data-analysis, particularly in hypothesis testing. In a secondary analysis, the statistical analyst further examines the data to suggest other questions and to help plan future experiments. In engineering applications, the goal is often to optimize a process or product, rather than to subject a scientific hypothesis to test of its predictive adequacy. The use of optimal (or near optimal) designs reduces the cost of experimentation. Quality control and process control use statistics as a tool to manage conformance to specifications of manufacturing processes and their products. Time and methods engineering use statistics to study repetitive operations in manufacturing in order to set standards and find optimum (in some sense) manufacturing procedures. Reliability engineering measures the ability of a system to perform its intended function (and time) and has tools for improving performance. Probabilistic design involves the use of probability in product and system design. System identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models. History Engineering statistics dates back to 1000 B.C. when the abacus was developed as a means to calculate numerical data. In the 1600s, the development of information processing to systematically analyze and process data began. In 1654, the slide rule was developed by Robert Bissaker for advanced data calculations. In 1833, a British mathematician named Charles Babbage designed the idea of an automatic computer which inspired developers at Harvard University and IBM to design the first mechanical automatic-sequence-controlled calculator called MARK I. The integration of computers and calculators into the industry brought about a more efficient means of analyzing data and the beginning of engineering statistics. Examples Factorial Experimental Design A factorial experiment is one where, contrary to the standard experimental philosophy of changing only one independent variable and holding everything else constant, multiple independent variables are tested at the same time. With this design, statistical engineers can see both the direct effects of one independent variable (main effects) and potential interaction effects that arise when multiple independent variables provide a different result when together than either would on its own. Six Sigma Six Sigma is a set of techniques to improve the reliability of a manufacturing process. Ideally, all products will have the exact same specifications equivalent to what was desired, but countless imperfections of real-world manufacturing make this impossible.
The as-built specifications of a product are assumed to be centered around a mean, with each individual product deviating some amount away from that mean in a normal distribution. The goal of Six Sigma is to ensure that the acceptable specification limits are six standard deviations away from the mean of the distribution; in other words, that each step of the manufacturing process has at most a 0.00034% chance of producing a defect. Notes References Box, G. E. P., Hunter, W. G., Hunter, J. S., "Statistics for Experimenters: Design, Innovation, and Discovery", 2nd Edition, Wiley, 2005. External links
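Returning to the defect-probability figure quoted in the Six Sigma discussion above, it can be reproduced with a normal-distribution tail calculation. Note that the commonly cited 0.00034% (3.4 defects per million) corresponds to a 4.5σ one-sided tail, reflecting the conventional 1.5σ allowance for long-term drift of the process mean; that allowance is an assumption of Six Sigma practice, not something stated in this article:

```python
from scipy.stats import norm

# Two-sided tail probability outside limits placed k standard
# deviations from the mean of a normal distribution.
def defect_rate(k):
    return 2 * norm.sf(k)

print(defect_rate(6.0))   # ~1.97e-9: literal six-sigma limits, centred process
print(norm.sf(4.5))       # ~3.40e-6 = 0.00034%: six sigma minus the 1.5-sigma shift
```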
Engineering statistics
[ "Engineering" ]
728
[ "Engineering statistics" ]
9,550
https://en.wikipedia.org/wiki/Electricity
Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others. The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts. Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force behind the Second Industrial Revolution, with electricity's versatility driving transformations in both industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society. History Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artefact was electrical in nature. 
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Isaac Newton made early investigations into electricity, with an idea written down in his book Opticks that arguably marked the beginning of the field theory of the electric force. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges. In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862. While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life. In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution.
Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels. The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor. Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. Concepts Electric charge By modern convention, the charge carried by electrons is defined as negative, and that by protons is positive. Before these particles were discovered, Benjamin Franklin had defined a positive charge as being the charge acquired by a glass rod when it is rubbed with a silk cloth. A proton by definition carries a charge of exactly 1.602176634 × 10^-19 coulombs. This value is also defined as the elementary charge. No object can have a charge smaller than the elementary charge, and any amount of charge an object may carry is a multiple of the elementary charge. An electron has an equal negative charge, i.e. −1.602176634 × 10^-19 coulombs. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle. The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract. The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10^42 times that of the gravitational attraction pulling them together.
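The 10^42 ratio quoted above can be reproduced directly from Coulomb's law and Newton's law of gravitation. A minimal Python check follows; the constants are standard CODATA values and are not drawn from this article:

```python
# Ratio of electrostatic repulsion to gravitational attraction between
# two electrons; the separation r cancels, since both forces follow an
# inverse-square law.
k = 8.9875517923e9        # Coulomb constant in N*m^2/C^2
G = 6.67430e-11           # gravitational constant in N*m^2/kg^2
e = 1.602176634e-19       # elementary charge in C
m_e = 9.1093837015e-31    # electron mass in kg

ratio = (k * e**2) / (G * m_e**2)
print(f"{ratio:.2e}")     # ~4.17e42, i.e. on the order of 10^42
```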
Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other. Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which, although still in use for classroom demonstrations, has been superseded by the electronic electrometer. Electric current The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some materials, known as electrical conductors, but will not flow through an electrical insulator. By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation. The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires. Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass.
He had discovered electromagnetism, a fundamental interaction between electricity and magnetism. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment. In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised. Electric field The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker. An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field. The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would follow as it is forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines.
Field lines emanating from stationary charges have several key properties: first, they originate at positive charges and terminate at negative charges; second, they must enter any good conductor at right angles, and third, they may never cross nor close in on themselves. A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body. This is the operating principle of the Faraday cage, a conducting metal shell that isolates its interior from outside electrical effects. The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh. The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect. Electric potential The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, which is the energy required to move a unit charge between two specified points. The electric field is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage. For practical purposes, defining a common reference point to which potentials may be expressed and compared is useful. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge and is therefore electrically uncharged—and unchargeable. Electric potential is a scalar quantity. That is, it has only magnitude and not direction.
It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface. The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together. Electromagnets Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too. Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere. This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained. Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy.
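Stated in modern notation (a standard formulation, not Faraday's own), the law of induction reads:

```latex
% Faraday's law of induction: the electromotive force \mathcal{E}
% induced in a closed circuit equals the negative rate of change
% of the magnetic flux \Phi_B through the loop.
\mathcal{E} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
```

The minus sign expresses Lenz's law: the induced current opposes the change in flux that produces it.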
Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work. Electric circuits An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task. The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli. The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one ampere. The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it. The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current but opposes a rapidly changing one. Electric power Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.
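The three passive elements described above are often summarised by their constitutive relations, given here in standard circuit-theory notation (v is the voltage across the element and i the current through it; the notation is a textbook convention rather than something drawn from the text itself):

```latex
% Voltage-current relations for the three linear passive elements.
v = iR \;(\text{resistor}), \qquad
i = C\frac{\mathrm{d}v}{\mathrm{d}t} \;(\text{capacitor}), \qquad
v = L\frac{\mathrm{d}i}{\mathrm{d}t} \;(\text{inductor})
```

The derivative forms make the behaviour just described immediate: once dv/dt = 0 a capacitor passes no current, and once di/dt = 0 an inductor develops no voltage.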
Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts." The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is P = QV/t = IV, where Q is electric charge in coulombs, t is time in seconds, I is electric current in amperes, and V is electric potential or voltage in volts. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency. Electronics Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system. Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering. Electromagnetic wave Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge, are one of the great milestones of theoretical physics. The work of many researchers enabled the use of electronics to convert signals into high-frequency oscillating currents which, via suitably shaped conductors, permit the transmission and reception of these signals via radio waves over very long distances. Production, storage and uses Generation and transmission In the 6th century BC, the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient.
It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity. Electrical power is usually generated by electro-mechanical generators. These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect. Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China. Environmental concerns with electricity generation, in particular the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels. Transmission and storage The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed. Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage (such as hydrogen), thermal storage, and mechanical storage (such as pumped hydropower). Applications Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector.
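The advantage of transmitting at higher voltage, noted above, can be made concrete with a back-of-the-envelope calculation (the figures below are illustrative assumptions, not values from the text). A line of resistance R delivering power P at voltage V carries current I = P/V, so the resistive loss falls with the square of the voltage:

```latex
% Resistive loss in a transmission line delivering power P at voltage V.
P_{\text{loss}} = I^{2}R = \left(\frac{P}{V}\right)^{2} R
```

For example, sending 1 MW through a line of 1 Ω resistance dissipates 10 kW at 10 kV but only 100 W at 100 kV, a hundredfold reduction for a tenfold increase in voltage.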
The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning and heat pumps representing a growing sector of electricity demand for heating and cooling, whose effects electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps). The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged either to carry along a power source such as a battery or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership. Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process. Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square. Electricity and the natural world Physiological effects A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century. Electrical phenomena in nature Electricity is not a human invention, and may be observed in several forms in nature, notably lightning.
Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press; it was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly. Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; these are electric fish in different orders. The order Gymnotiformes, of which the best-known example is the electric eel, detects or stuns prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants. Cultural perception It is said that in the 1850s, British politician William Ewart Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, "One day sir, you may tax it." However, according to Snopes.com "the anecdote should be considered apocryphal because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death." In the 19th and early 20th centuries, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films. As public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.
With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it has tended to attract particular attention in popular culture only when it stops flowing, an event that usually signals disaster. The people who keep it flowing, such as the nameless hero of Jimmy Webb's song "Wichita Lineman" (1968), are still often cast as heroic, wizard-like figures. See also Ampère's circuital law, relates an electric current to the magnetic field it produces Electric potential energy, the potential energy of a system of charges Electricity market, the sale of electrical energy Etymology of electricity, the origin of the word electricity and its current different usages Hydraulic analogy, an analogy between the flow of water and electric current Notes References External links Basic Concepts of Electricity chapter from Lessons In Electric Circuits Vol 1 DC book and series. "One-Hundred Years of Electricity", May 1931, Popular Mechanics Socket and plug standards Electricity Misconceptions Electricity and Magnetism Understanding Electricity and Electronics in about 10 Minutes
Electricity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
8,199
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
9,559
https://en.wikipedia.org/wiki/Electrical%20network
An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Thus all circuits are networks, but not all networks are circuits (although networks without a closed loop are often imprecisely referred to as "circuits"). A resistive network is a network containing only resistors and ideal current and voltage sources. Analysis of resistive networks is less complicated than analysis of networks containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC network. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties. A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools. Classification By passivity An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source. An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power to the circuit, provide power gain, and control the current flow within the circuit. Passive networks do not contain any sources of electromotive force. They consist of passive elements like resistors and capacitors. By linearity Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response. Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation if driven with a large enough current. In this region, the behaviour of the inductor is very non-linear. By lumpiness Discrete passive components (resistors, capacitors and inductors) are called lumped elements because their resistance, capacitance and inductance are each assumed to be located ("lumped") at one place. This design philosophy is called the lumped-element model and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. A new design model is needed for such cases called the distributed-element model. Networks designed to this model are called distributed-element circuits. A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter. Classification of sources Sources can be classified as independent sources and dependent sources.
Independent An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC). The strength of voltage or current is not changed by any variation in the connected network. Dependent Dependent sources deliver a power, voltage or current that depends upon another element of the circuit, according to the type of source. Applying electrical laws A number of electrical laws apply to all linear resistive networks. These include: Kirchhoff's current law: The sum of all currents entering a node is equal to the sum of all currents leaving the node. Kirchhoff's voltage law: The directed sum of the electrical potential differences around a loop must be zero. Ohm's law: The voltage across a resistor is equal to the product of the resistance and the current flowing through it. Norton's theorem: Any network of voltage or current sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor. Thévenin's theorem: Any network of voltage or current sources and resistors is electrically equivalent to a single voltage source in series with a single resistor. Superposition theorem: In a linear network with several independent sources, the response in a particular branch when all the sources are acting simultaneously is equal to the linear sum of individual responses calculated by taking one independent source at a time. Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically (a small numerical sketch follows below). The laws can generally be extended to networks containing reactances. They cannot be used in networks that contain nonlinear or time-varying components. Design methods To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model. Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and Verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes. Network simulation software More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP, or symbolically using software such as SapWin. Linearization around operating point When faced with a new circuit, the software first tries to find a steady state solution, that is, one where all nodes conform to Kirchhoff's current law and the voltages across and through each element of the circuit conform to the voltage/current equations governing that element. Once the steady state solution is found, the operating points of each element in the circuit are known. For a small signal analysis, every non-linear element can be linearized around its operation point to obtain the small-signal estimate of the voltages and currents. This is an application of Ohm's Law. The resulting linear circuit matrix can be solved with Gaussian elimination. Piecewise-linear approximation Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes.
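As the numerical sketch promised above, the following Python fragment applies Kirchhoff's current law and Ohm's law to a small resistive network and solves the resulting simultaneous equations; the circuit, component values and variable names are hypothetical, chosen only to illustrate the procedure:

```python
import numpy as np

# Toy circuit: a 5 V source feeds node 1 through R1; R2 connects
# node 1 to node 2; R3 connects node 2 to ground.  Kirchhoff's
# current law at each node, with Ohm's law for every branch,
# yields the linear system G @ v = i.
Vs, R1, R2, R3 = 5.0, 1e3, 2e3, 3e3

G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,       1/R2 + 1/R3]])  # conductance matrix
i = np.array([Vs/R1, 0.0])                  # source current injections

v = np.linalg.solve(G, i)                   # node voltages
print(v)  # approximately [4.1667, 2.5] volts, as a voltage divider predicts
```

The same conductance-matrix formulation, extended to handle sources and nonlinear elements, is essentially the (modified) nodal analysis used by simulators such as SPICE.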
Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time. See also Digital circuit Ground (electricity) Impedance Load Memristor Open-circuit voltage Short circuit Voltage drop Representation Circuit diagram Schematic Netlist Design and analysis methodologies Network analysis (electrical circuits) Mathematical methods in electronics Superposition theorem Topology (electronics) Mesh analysis Prototype filter Measurement Network analyzer (electrical) Network analyzer (AC power) Continuity test Analogies Hydraulic analogy Mechanical–electrical analogies Impedance analogy (Maxwell analogy) Mobility analogy (Firestone analogy) Through and across analogy (Trent analogy) Specific topologies Bridge circuit LC circuit RC circuit RL circuit RLC circuit Potential divider Series and parallel circuits References Electricity Electrical engineering
Electrical network
[ "Engineering" ]
1,564
[ "Electrical engineering" ]
9,566
https://en.wikipedia.org/wiki/Empty%20set
In mathematics, the empty set or void set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Any set other than the empty set is called non-empty. In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty). Notation Common notations for the empty set include "{ }" and the symbol "∅" (in two common typographic variants). The ∅ symbol was introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. In the past, "0" (the numeral zero) was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation. The symbol ∅ is available at Unicode point U+2205. It can be coded in HTML as &empty; and as &#8709; or as &#x2205;. It can be coded in LaTeX as \varnothing. The symbol ∅ is coded in LaTeX as \emptyset. When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead. Properties In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements (that is, neither of them has an element not in the other). As a result, there can be only one set with no elements, hence the usage of "the empty set" rather than "an empty set". The only subset of the empty set is the empty set itself; equivalently, the power set of the empty set is the set containing only the empty set. The number of elements of the empty set (i.e., its cardinality) is zero. The empty set is the only set with either of these properties. For any set A: The empty set is a subset of A The union of A with the empty set is A The intersection of A with the empty set is the empty set The Cartesian product of A and the empty set is the empty set For any property P: For every element of ∅, the property P holds (vacuous truth). There is no element of ∅ for which the property P holds. Conversely, if for some property P and some set V, the following two statements hold: For every element of V the property P holds There is no element of V for which the property P holds then V = ∅. By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ belongs to A. Indeed, if it were not true that every element of ∅ is in A, then there would be at least one element of ∅ that is not present in A. Since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim; it is a vacuous truth. This is often paraphrased as "everything is true of the elements of the empty set." In the usual set-theoretic definition of natural numbers, zero is modelled by the empty set. Operations on the empty set When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set (the empty sum) is zero. The reason for this is that zero is the identity element for addition.
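Both this convention and the corresponding one for products, discussed next, are mirrored in many programming languages; a quick Python illustration (the behaviour shown is standard library behaviour):

```python
import math

# The sum of no terms is the additive identity, 0, and the
# product of no factors is the multiplicative identity, 1,
# matching the empty-sum and empty-product conventions.
assert sum([]) == 0
assert math.prod([]) == 1
```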
Similarly, the product of the elements of the empty set (the empty product) should be considered to be one, since one is the identity element for multiplication. A derangement is a permutation of a set without fixed points. The empty set can be considered a derangement of itself, because it has only one permutation (0! = 1), and it is vacuously true that no element (of the empty set) can be found that retains its original position. In other areas of mathematics Extended real numbers Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers (namely negative infinity, denoted −∞, which is defined to be less than every other extended real number, and positive infinity, denoted +∞, which is defined to be greater than every other extended real number), we have that sup ∅ = −∞ and inf ∅ = +∞. That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators. Topology In any topological space X, the empty set is open by definition, as is X. Since the complement of an open set is closed and the empty set and X are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact. The closure of the empty set is empty. This is known as "preservation of nullary unions." Category theory If A is a set, then there exists precisely one function from ∅ to A, the empty function. As a result, the empty set is the unique initial object of the category of sets and functions. The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set. Set theory In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal is defined as S(α) = α ∪ {α}. Thus, we have 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, ℕ, such that the Peano axioms of arithmetic are satisfied. Questioned existence Historical issues In the context of sets of real numbers, Cantor used P ≡ O to denote "P contains no single point". This notation was utilized in definitions; for example, Cantor defined two sets as being disjoint if their intersection has an absence of points; however, it is debatable whether Cantor viewed O as an existent set on its own, or if Cantor merely used ≡ O as an emptiness predicate. Zermelo accepted ∅ itself as a set, but considered it an "improper set".
Axiomatic set theory In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways: Standard first-order logic implies, merely from the logical axioms, that something exists, and in the language of set theory, that thing must be a set. Now the existence of the empty set follows easily from the axiom of separation. Even using free logic (which does not logically imply that something exists), there is already an axiom implying the existence of at least one set, namely the axiom of infinity. Philosophical issues While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians. The empty set is not the same thing as nothing; rather, it is a set with nothing inside it, and a set is always something. This issue can be overcome by viewing a set as a bag—an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king." The popular syllogism Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is ∅" and the latter to "The set {ham sandwich} is better than the set ∅". The first compares elements of sets, while the second compares the sets themselves. Jonathan Lowe argues that while the empty set "was undoubtedly an important landmark in the history of mathematics, … we should not assume that its utility in calculation is dependent upon its actually denoting some object", it is also the case that: "All that we are ever informed about the empty set is that it (1) is a set, (2) has no members, and (3) is unique amongst sets in having no members. However, there are very many things that 'have no members', in the set-theoretical sense—namely, all non-sets. It is perfectly clear why these things have no members, for they are not sets. What is unclear is how there can be, uniquely amongst sets, a set which has no members. We cannot conjure such an entity into existence by mere stipulation." George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members. See also References Further reading Halmos, Paul, Naive Set Theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition). Reprinted by Martino Fine Books, 2011. (paperback edition). External links Basic concepts in set theory 0 (number)
Empty set
[ "Mathematics" ]
2,233
[ "Basic concepts in set theory" ]
9,569
https://en.wikipedia.org/wiki/Endomorphism
In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space V is a linear map f : V → V, and an endomorphism of a group G is a group homomorphism f : G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself. In any category, the composition of any two endomorphisms of X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid, the full transformation monoid, denoted End(X) (or EndC(X) to emphasize the category C). Automorphisms An invertible endomorphism of X is called an automorphism. The set of all automorphisms is a subset of End(X) with a group structure, called the automorphism group of X and denoted Aut(X). Every automorphism is an endomorphism, and an endomorphism is an automorphism exactly when it is also an isomorphism. Endomorphism rings Any two endomorphisms of an abelian group A can be added together by the rule (f + g)(a) = f(a) + g(a). Under this addition, and with multiplication being defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of ℤⁿ is the ring of all n × n matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however there are rings that are not the endomorphism ring of any abelian group. Operator theory In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing the notion of element orbits to be defined, etc. Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details should be found in the article about operator theory. Endofunctions An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism. Let S be an arbitrary set. Among endofunctions on S one finds permutations of S and constant functions associating to every x in S the same element c in S. Every permutation of S has the codomain equal to its domain and is bijective and invertible. If S has more than one element, a constant function on S has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number n the floor of n/2 has its image equal to its codomain and is not invertible. Finite endofunctions are equivalent to directed pseudoforests. For sets of size n there are nⁿ endofunctions on the set. Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses. See also Adjoint endomorphism Epimorphism (surjective homomorphism) Frobenius endomorphism Monomorphism (injective homomorphism) Notes References External links Morphisms
Endomorphism
[ "Mathematics" ]
782
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Category theory", "Mathematical relations", "Morphisms" ]
9,588
https://en.wikipedia.org/wiki/Extraterrestrial%20life
Extraterrestrial life, or alien life (colloquially, alien), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a "plurality of worlds" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds "throughout the boundless immensity of space" in The City of God. Pre-modern writers typically assumed extraterrestrial "worlds" are inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility Jesus could have visited extraterrestrial worlds to redeem their inhabitants. Nicholas of Cusa wrote in 1440 that Earth is "a brilliant star" like other celestial objects visible in space, which would appear similar to the Sun, from an exterior perspective, due to a layer of "fiery brightness" in the outer layer of the atmosphere. He theorised that all extraterrestrial bodies could be inhabited by men, plants, and animals, including the Sun. Descartes wrote that there was no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation. When considering the atmospheric composition and ecosystems hosted by extraterrestrial bodies, extraterrestrial life can seem more like speculation than reality, given the harsh conditions and the disparate chemical composition of their atmospheres compared with the life-abundant Earth. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Hydrothermal vents, acidic hot springs, and volcanic lakes are examples of environments in which life formed under difficult circumstances; they provide parallels to the extreme environments on other planets and support the possibility of extraterrestrial life. Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit communications. The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One shared theme is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue it may be dangerous to actively draw attention to Earth. Context Initially, after the Big Bang, the universe was too hot to allow life.
15 million years later, it cooled to temperate levels, but the elements that make up living things did not exist yet. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. Although Earth was in a molten state after its birth and may have burned any organics that fell into it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread – by meteoroids, for example – between habitable planets in a process called panspermia. During most of their stellar evolution, stars combine hydrogen nuclei into helium nuclei by stellar fusion, and the comparatively lighter weight of the resulting helium allows the star to release the extra energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts combining helium nuclei to form carbon nuclei. More massive stars can further fuse carbon into heavier elements such as oxygen, neon, silicon, and sulfur, and so on up to iron. In the end, the star blows much of its content back into the interstellar medium, where it joins the clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for the study of extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 has left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would reach it in roughly 100,000 years. Under current technology such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter accounts for a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology. 
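A quick back-of-envelope check of the interstellar travel-time figure quoted above; this is a minimal sketch whose only inputs are the article's rounded values and the standard kilometres-per-light-year constant.

```python
# Rough check of the Voyager 2 / Alpha Centauri figure quoted above.
# Inputs are the article's rounded values, not precise mission data.

LIGHT_YEAR_KM = 9.461e12   # kilometres in one light-year
distance_ly = 4.4          # distance to the Alpha Centauri system
speed_kmh = 50_000         # Voyager 2's approximate speed, km/h

hours = distance_ly * LIGHT_YEAR_KM / speed_kmh
years = hours / (24 * 365.25)
print(f"~{years:,.0f} years")  # ~95,000 years, i.e. on the order of 100,000
```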
There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", where water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, or even to actually have such liquid water. Venus is located in the habitable zone of the Solar System but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances of the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits change along with the star's stellar evolution. The Big Bang took place 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or even billions of years ago. The brief existence of Earth's species, when considered from a cosmic perspective, may suggest that extraterrestrial life could be equally fleeting on such a scale. Life is ubiquitous across Earth and has adapted over time to almost all the available environments on the planet; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it is habitable. Likelihood of existence It is unclear if life and intelligent life are ubiquitous in the cosmos or rare. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a similar habitability to Earth, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the chemical elements that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that all such requirements are simultaneously met by another planet. 
The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data. In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilisations in the Milky Way galaxy. The Drake equation is: N = R* · fp · ne · fl · fi · fc · L, where: N = the number of Milky Way galaxy civilisations already capable of communicating across interplanetary space; R* = the average rate of star formation in our galaxy; fp = the fraction of those stars that have planets; ne = the average number of planets that can potentially support life; fl = the fraction of planets that actually support life; fi = the fraction of planets with life that evolves to become intelligent life (civilisations); fc = the fraction of civilisations that develop a technology to broadcast detectable signs of their existence into space; L = the length of time over which such civilisations broadcast detectable signals into space. Drake proposed estimates for each factor, but the numbers on the right side of the equation are agreed to be speculative and open to substitution. The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten per cent of all Sun-like stars have a system of planets, i.e. some 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation of the Fermi paradox. 
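Because none of the factors are established, the equation is best read as a way of organising guesses. The sketch below plugs in purely illustrative numbers (every input is an assumption, not a measured value) to show how a single estimate of N is produced and how sensitive the result is to each factor.

```python
# Drake equation with purely illustrative inputs; none of these
# values are established, and changing any one of them rescales N.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilisations in the Milky Way."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,  # stars formed per year (assumed)
    f_p=0.5,     # fraction of stars with planets (assumed)
    n_e=2.0,     # life-capable planets per such system (assumed)
    f_l=0.5,     # fraction where life actually appears (assumed)
    f_i=0.1,     # fraction where intelligence evolves (assumed)
    f_c=0.1,     # fraction that broadcast detectably (assumed)
    L=10_000,    # years of detectable broadcasting (assumed)
)
print(N)  # 50.0 under these assumptions
```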
Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as with life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: atoms there move either too fast or too slow, making it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust the most abundant of those elements is silicon; in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen. Silicon, however, has disadvantages compared to carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. 
Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even a hypothetical one. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life for developing intelligence. 
It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Harsh environmental conditions on Earth harboring life The conditions on the other planets of the Solar System, and presumably on many worlds beyond the Milky Way, are very harsh and seem too extreme to harbor any life. These environments can combine intense UV radiation with extreme temperatures, a lack of water, and more, conditions that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that seem unlikely to have harbored life, at least at one point in Earth's history. Fossil evidence, as well as theories backed by years of research and study, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth. These environments can be considered extreme when compared to the typical ecosystems that the majority of life on Earth now inhabits, as hydrothermal vents are scorching hot where the magma escaping from the Earth's mantle meets the much colder oceanic water. Even today, a diverse population of bacteria can be found inhabiting the area surrounding these hydrothermal vents, which suggests that some form of life could be supported even in the harshest of environments, such as those on other planets in the Solar System. The aspect of these harsh environments that makes them ideal for the origin of life on Earth, as well as for the possible creation of life on other planets, is that chemical reactions form there spontaneously. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy through reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and therefore these carbon-fixing compounds were necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of other planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds occurring around hydrothermal vents could also occur on those planets' surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. There is no evidence that any intelligent life other than humans exists or has ever existed within the Solar System. 
Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. It has a runaway greenhouse effect, the hottest planetary surface in the Solar System, sulfuric acid clouds, no remaining surface liquid water, and a thick carbon-dioxide atmosphere with huge pressure. Comparing the two planets helps to understand the precise differences that lead to beneficial or harmful conditions for life. Despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground. As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because the water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which helps the chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug into, as it releases water to space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry proceed at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. 
This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. To date, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Search for basic life Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants using photosynthesis. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. 
It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of the planet Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Search for extraterrestrial intelligences Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that may not be caused by natural causes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which could be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels might be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the level of detail required to perceive it. 
The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation, which telescopes might notice. Infrared radiation is typical of young stars, surrounded by the dusty protoplanetary disks that will eventually form planets. An older star such as the Sun would have no natural reason to show excess infrared radiation. The presence of heavy elements in a star's light-spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products. Extrasolar planets Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered. The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, in the southern constellation of Centaurus. The least massive known exoplanet is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs. 
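The "about 1 in 5 Sun-like stars" figure above implies estimates like the 11 billion quoted. The sketch below reproduces the order of magnitude under an explicitly stated assumption: the 25% share of Sun-like stars is an illustrative placeholder chosen for the arithmetic, not a measured value.

```python
# Order-of-magnitude reproduction of the habitable-planet estimate above.
# The Sun-like fraction is an illustrative assumption, not a measurement.

stars_in_milky_way = 200e9      # "Assuming 200 billion stars in the Milky Way"
fraction_sun_like = 0.25        # assumed share of Sun-like stars
fraction_hz_earth = 1 / 5       # "about 1 in 5 Sun-like stars"

estimate = stars_in_milky_way * fraction_sun_like * fraction_hz_earth
print(f"{estimate:.1e} potentially habitable Earth-sized planets")  # 1.0e+10
```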
History and cultural impact Cosmic pluralism The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable, rejecting explanations based on incomprehensible supernatural forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the idea that Earth is round rather than flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, these bodies were not considered worlds. In the Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. Eventually two groups emerged: the atomists, who thought that matter in both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talked about other worlds, they meant places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived classical Greek civilization. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and this knowledge spread through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. 
However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. Early modern period By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, resolved the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just a planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was tried for supporting the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Sir Isaac Newton's theory of gravity. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. 
The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical ways to investigate. The possibility of extraterrestrials remained widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. 19th century Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the solar system still remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which definitively debunked the idea of the existence of Martians and lowered the previous expectations of finding alien life in general. The end of the belief in spontaneous generation forced scientists to investigate the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not so named at the time, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced the popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later advances (such as more powerful telescopes) revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site. Recent history The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, the ESA, the INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. 
For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens. Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of the era failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, weather phenomena, or hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but interest in extraterrestrial life increased regardless. This is a result of the advances in several sciences. The knowledge of planetary habitability allows scientists to consider in scientific terms the likelihood of finding life on each specific celestial body, as it is known which features are beneficial and which are harmful for life. Astronomy and telescopes also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found, and life may still be just a rarity from Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. 
In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that, at the least, other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the event of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. Part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. 
It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possible existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of unidentified aerospace phenomena. The agency maintains a publicly accessible database of such phenomena, with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, an extraterrestrial origin can neither be confirmed nor denied. In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not at first thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do so was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype widely used in works of fiction. See also Notes References Further reading Gribbin, John, "Alone in the Milky Way: Why we are probably the only intelligent life in the galaxy", Scientific American, vol. 319, no. 3 (September 2018), pp. 94–99. External links Astrobiology at NASA European Astrobiology Institute Astrobiology Interstellar messages Search for extraterrestrial intelligence Unsolved problems in biology Unsolved problems in astronomy Astronomical controversies Biological hypotheses Biology controversies Scientific speculation Outer space
Extraterrestrial life
[ "Physics", "Astronomy", "Biology" ]
11,563
[ "Unsolved problems in astronomy", "Origin of life", "Outer space", "History of astronomy", "Concepts in astronomy", "Speculative evolution", "Astrobiology", "Astronomical controversies", "Biological hypotheses", "Astronomical sub-disciplines" ]
9,596
https://en.wikipedia.org/wiki/Ellipsis
The ellipsis (plural ellipses), alternatively described as suspension points/dots, points/periods of ellipsis, or ellipsis points, or colloquially, dot-dot-dot, is a punctuation mark consisting of a series of three dots. An ellipsis can be used in many ways, such as for intentional omission of text or numbers, or to imply a concept without using words. Style guides differ on how to render an ellipsis in printed material. Style Opinions differ on how to render an ellipsis in printed material and are to some extent based on the technology used for rendering. According to The Chicago Manual of Style, it should consist of three periods, each separated from its neighbor by a non-breaking space: . . . According to the AP Stylebook, the periods should be rendered with no space between them: ... A third option available in electronic text is to use the precomposed character U+2026 (…). When text is omitted following a sentence, a period (full stop) terminates the sentence, and a subsequent ellipsis indicates one or more omitted sentences before continuing a longer quotation. Business Insider magazine suggests this style, and it is also used in many academic journals. The Associated Press Stylebook favors this approach. When a sentence ends with an ellipsis, some style guides indicate there should be four dots: three for the ellipsis and one for the period. Chicago advises it, as does the Publication Manual of the American Psychological Association (APA style), while some other style guides do not; the Merriam-Webster Dictionary and related works treat this style as optional, saying that it "may" be used. In writing In her book on the ellipsis, Ellipsis in English Literature: Signs of Omission, Anne Toner suggests that the first use of the punctuation in the English language dates to a 1588 translation of Terence's Andria, by Maurice Kyffin. In this case, however, the ellipsis consists not of dots but of short dashes. "Subpuncting" of medieval manuscripts also denotes omitted meaning and may be related. The popularity of the ellipsis took off after Kyffin's usage: his 1588 translation of Andria contains three examples, while by the 1627 translation of the same play there were 29 examples of its usage. They appear in William Shakespeare's plays in addition to Ben Jonson's. In 1634, John Barton, an English schoolmaster, wrote in The Art of Rhetorick that "eclipsis" is much used in playbooks "where they are noted thus ---". In the first folio edition of Shakespeare's Henry IV, Part 1, Toner writes, "Hotspur dies on a dash", with his last words cut short. Different types of ellipsis faced opposition. In the 18th century, Jonathan Swift rhymed "dash" with "printed trash", while Henry Fielding chose the name 'Dash' for an unlikeable character in his 1730 play The Author's Farce. It has also been championed by writers such as Percy Bysshe Shelley, Jane Austen and Virginia Woolf. According to Toner, an early example of the dot dot dot phrase is in Woolf's short story "An Unwritten Novel" (1920). Occasionally, it would be used in pulp fiction and other works of early 20th-century fiction to denote expletives that would otherwise have been censored. An ellipsis may also imply an unstated alternative indicated by context. For example, "I never drink wine ..." implies that the speaker does drink something else, such as vodka. In reported speech, the ellipsis can be used to represent an intentional silence. 
In poetry, an ellipsis is used as a thought-pause or line break at the caesura; it can also be used to highlight sarcasm or to make the reader think about the last points in the poem. In news reporting, often put inside square brackets, it is used to indicate that a quotation has been condensed for space, brevity or relevance, as in "The President said that [...] he would not be satisfied", where the exact quotation was "The President said that, for as long as this situation continued, he would not be satisfied". Herb Caen, Pulitzer Prize-winning columnist for the San Francisco Chronicle, became famous for his "three-dot journalism". Depending on context, an ellipsis can indicate an unfinished thought, a leading statement, a slight pause, an echoing voice, or a nervous or awkward silence. Aposiopesis is the use of an ellipsis to trail off into silence—for example: "But I thought he was..." When placed at the end of a sentence, an ellipsis may be used to suggest melancholy or longing. In newspaper and magazine columns, ellipses may separate items of a list instead of paragraph breaks. Merriam-Webster's Manual for Writers and Editors uses a line of ellipsis points to indicate the omission of whole lines in a quoted poem. In different languages In English American English The Chicago Manual of Style suggests the use of an ellipsis for any omitted word, phrase, line, or paragraph from within but not at the end of a quoted passage. There are two commonly used methods of using ellipses: one uses three dots for any omission, while the second makes a distinction between omissions within a sentence (using three dots: . . .) and omissions between sentences (using a period and a space followed by three dots: . ...). The Chicago Style Q&A recommends that writers avoid using the precomposed ellipsis character (U+2026) in manuscripts and instead place three periods plus two nonbreaking spaces (. . .), leaving the editor, publisher, or typographer to replace them later. The Modern Language Association (MLA) used to indicate that an ellipsis must include spaces before and after each dot in all uses. If an ellipsis is meant to represent an omission, square brackets must surround the ellipsis to make it clear that there was no pause in the original quote. Currently, the MLA has removed the requirement of brackets in its style handbooks. However, some maintain that the use of brackets is still correct because it avoids confusion. The MLA now indicates that a three-dot, spaced ellipsis should be used for removing material from within one sentence within a quote. When the omission crosses sentences (that is, when the omitted text contains a period, which includes omitting the end of a sentence), a four-dot ellipsis should be used, spaced except before the first dot. When ellipsis points are used in the original text, ellipsis points that are not in the original text should be distinguished by enclosing them in square brackets. According to the Associated Press, the ellipsis should be used to condense quotations. It is less commonly used to indicate a pause in speech or an unfinished thought, or to separate items in material such as show business gossip. The stylebook indicates that if the shortened sentence before the mark can stand as a sentence, it should do so, with an ellipsis placed after the period or other ending punctuation. When material is omitted at the end of a paragraph and also immediately following it, an ellipsis goes both at the end of that paragraph and at the beginning of the next, according to this style. 
According to Robert Bringhurst's Elements of Typographic Style, the details of typesetting ellipses depend on the character and size of the font being set and the typographer's preference. Bringhurst writes that a full space between each dot is "another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide"—he recommends using flush dots (with a normal word space before and after), or thin-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character (U+2026). Bringhurst suggests that normally an ellipsis should be spaced fore-and-aft to separate it from the text, but when it combines with other punctuation, the leading space disappears and the other punctuation follows. This is the usual practice in typesetting. In legal writing in the United States, Rule 5.3 in the Bluebook citation guide governs the use of ellipses and requires a space before the first dot and between the two subsequent dots. If an ellipsis ends the sentence, then there are three dots, each separated by a space, followed by the final punctuation. In some legal writing, an ellipsis is written as three asterisks to make it obvious that text has been omitted or to signal that the omitted text extends beyond the end of the paragraph. British English The Oxford Style Guide recommends setting the ellipsis as a single character or as a series of three (narrow) spaced dots surrounded by spaces. If there is an ellipsis at the end of an incomplete sentence, the final full stop is omitted. However, it is retained if the following ellipsis represents an omission between two complete sentences. The ... fox jumps ... The quick brown fox jumps over the lazy dog. ... And if they have not died, they are still alive today. It is not cold ... it is freezing cold. Contrary to The Oxford Style Guide, the University of Oxford Style Guide requires that an ellipsis not be surrounded by spaces, except when it stands for a pause; then, a space has to be set after the ellipsis (but not before). It also states that an ellipsis should never be preceded or followed by a full stop. The...fox jumps... The quick brown fox jumps over the lazy dog...And if they have not died, they are still alive today. It is not cold... it is freezing cold. In Polish When applied in Polish syntax, the ellipsis is called wielokropek, literally "multidot". The word wielokropek distinguishes the ellipsis of Polish syntax from that of mathematical notation, for which a different term is used. When an ellipsis replaces a fragment omitted from a quotation, the ellipsis is enclosed in parentheses or square brackets. An unbracketed ellipsis indicates an interruption or pause in speech. The syntactic rules for ellipses are standardized by the 1983 Polska Norma document PN-83/P-55366 (Rules for Setting Texts in Polish). In Russian The combination "ellipsis+period" is replaced by the ellipsis. The combinations "ellipsis+exclamation mark" and "ellipsis+question mark" are written in this way: !.. ?.. In Japanese The most common character corresponding to an ellipsis is called 3-ten rīdā ("3-dot leaders"). 2-ten rīdā exists as a character, but it is used less commonly. In writing, the ellipsis consists usually of six dots (two 3-ten rīdā characters). Three dots (one 3-ten rīdā character) may be used where space is limited, such as in a header. However, variations in the number of dots exist. 
In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line), as in the standard Japanese Windows fonts; in vertically written text the dots are always centered horizontally. As the Japanese word for a dot is pronounced "ten", the dots are colloquially called "ten-ten-ten" (akin to the English "dot dot dot"). In text in Japanese media, such as in manga or video games, ellipses are much more frequent than in English, and are often changed to another punctuation sign in translation. The ellipsis by itself represents speechlessness, or a "pregnant pause". Depending on the context, this could be anything from an admission of guilt to an expression of being dumbfounded at another person's words or actions. As a device, the ten-ten-ten is intended to focus the reader on a character while allowing the character to not speak any dialogue. This conveys to the reader a focus of the narrative "camera" on the silent subject, implying an expectation of some motion or action. It is not unheard of to see inanimate objects "speaking" the ellipsis. In Chinese In Chinese, the ellipsis is six dots (in two groups of three dots, occupying the same horizontal or vertical space as two characters). In horizontally written text the dots are commonly vertically centered along the midline (halfway between the Roman descent and Roman ascent, or equivalently halfway between the Roman baseline and the capital height). This is generally true of Traditional Chinese, while Simplified Chinese tends to have the ellipses aligned with the baseline; in vertically written text the dots are always centered horizontally. Also note that Taiwan and China have different punctuation standards. In Spanish In Spanish, the ellipsis is commonly used as a substitute for et cetera at the end of unfinished lists, meaning "and so forth" or "and other things". Another use is the suppression of a part of a text, a paragraph, a phrase or a part of a word because it is obvious, unnecessary or implied. When the ellipsis is placed alone inside parentheses (...) or, less often, between brackets [...], which is what usually happens within a text transcription, it means that the original text had more content at the same position that is not useful for the purpose of the transcription. When the suppressed text is at the beginning or at the end of a text, the ellipsis does not need to be placed in parentheses. The number of dots is three and only three. They should have no space between them or between them and the preceding word, but there should be a space before the following word (except if they are followed by a punctuation sign, such as a comma). In French In French, the ellipsis is commonly used at the end of lists to represent et cetera. In French typography, the ellipsis is written immediately after the preceding word, but has a space after it. If, exceptionally, it begins a sentence, there is a space before and after it. However, any omitted word, phrase or line at the end of a quoted passage would be indicated by an ellipsis between square brackets: [...] (with a space before and after the square brackets but not inside them). In German In German, the ellipsis is generally surrounded by spaces if it stands for one or more omitted words. 
On the other hand, there is no space between a letter or (part of) a word and an ellipsis that stands for one or more omitted letters; the ellipsis sticks to the letter or letters that remain. An example for both cases, using German style: the first el...is stands for omitted letters, the second ... for an omitted word. If the ellipsis is at the end of a sentence, the final full stop is omitted. Example: I think that ... In Italian Italian usage guides suggest the use of an ellipsis to indicate a pause longer than a period and, when placed between brackets, the omission of letters, words or phrases. In mathematical notation An ellipsis is used in mathematics to mean "and so forth", usually indicating the omission of terms that follow an obvious pattern established by the included terms. The whole numbers from 1 to 100 can be shown as 1, 2, 3, ..., 100. The positive whole numbers, an infinite list, can be shown as 1, 2, 3, ... To indicate omitted terms in a repeated operation, an ellipsis is sometimes raised from the baseline, as in 1 + 2 + 3 + ⋯ + 100. But this raised formatting is not standard; for example, Russian mathematical texts use the baseline format. The ellipsis is not a formally defined mathematical symbol. Repeated summations or products may be more formally denoted using capital sigma and capital pi notation, respectively: 1 + 2 + ⋯ + 100 = ∑_(k=1)^(100) k (see termial) and 1 × 2 × ⋯ × 100 = ∏_(k=1)^(100) k = 100! (see factorial). An ellipsis is sometimes used where the pattern is not clear, for example when indicating the indefinite continuation of an irrational number such as π = 3.14159... It can also be useful for displaying an expression compactly. In set notation, the ellipsis is used in horizontal, vertical and diagonal forms for indicating missing matrix terms, such as in the size-n identity matrix, where diagonal, horizontal and vertical dots stand for the repeated entries. In computer programming Some programming languages use an ellipsis to indicate a range or a variable argument list. The CSS text-overflow property can be set to ellipsis, which cuts off text with an ellipsis when it overflows the content area. In computer user interface More An ellipsis is sometimes used as the label for a button that gives access to user interface that has been omitted, usually due to space limitations, particularly in mobile apps running on small-screen devices. This may be described as a "more button". Similar functionality may be accessible via a button with a hamburger icon (≡) or a narrower version called the kebab icon, which is a vertical ellipsis (⋮). More info needed According to some style guides, a menu item or button labeled with a trailing ellipsis requests an operation that cannot be completed without additional information, and selecting it will prompt the user for input. Without an ellipsis, selecting the item or button will perform an action without user input. For example, the menu item "Save" overwrites an existing file, whereas "Save as..." prompts the user for save options before saving. Busy/progress An ellipsis is commonly used to indicate that a longer-lasting operation is in progress, as in "Loading..." or "Saving...". Sometimes progress is animated with an ellipsis-like construct of repeatedly adding dots to a label. In texting In text-based communications, the ellipsis may indicate: Floor holding, a signal that more is to come, for instance when people break up longer turns in chat. Politeness, for instance indicating topic change or hesitation. A turn construction unit to signal silence, for example when indicating disagreement, disapproval or confusion. Although an ellipsis is complete with three periods (...), an ellipsis-like construct with more dots is used to indicate "trailing-off" or "silence". 
The extent of repetition in itself might serve as an additional contextualization or paralinguistic cue; one paper wrote that repeated dots "extend the lexical meaning of the words, add character to the sentences, and allow fine-tuning and personalisation of the message". While composing a text message, some environments show others in the conversation a typing awareness indicator ellipsis to indicate remote activity. Computer representations In computing, several ellipsis characters have been codified. Unicode Unicode defines several ellipsis characters, including U+2026 HORIZONTAL ELLIPSIS (…), U+22EE VERTICAL ELLIPSIS (⋮), U+22EF MIDLINE HORIZONTAL ELLIPSIS (⋯) and the diagonal ellipses U+22F0 and U+22F1 (⋰ and ⋱). Unicode recognizes a series of three period characters (...) as compatibility equivalent (though not canonical) to the horizontal ellipsis character. HTML In HTML, the horizontal ellipsis character may be represented by the entity reference &hellip; (since HTML 4.0), and the vertical ellipsis character by the entity reference &vellip; (since HTML 5.0). Alternatively, in HTML, XML, and SGML, a numeric character reference such as &#x2026; or &#8230; can be used. TeX In the TeX typesetting system, several types of ellipsis are available, including \ldots (baseline dots), \cdots (vertically centered dots), \vdots (vertical dots) and \ddots (diagonal dots). In LaTeX, the reverse orientation of \ddots can be achieved with \reflectbox provided by the graphicx package: \reflectbox{\ddots} yields a mirrored diagonal ellipsis. With the amsmath package from AMS-LaTeX, more specific ellipses are provided for math mode. Other The horizontal ellipsis character also appears in older character maps: at code 85 (hexadecimal) in Windows-1250 through Windows-1258 and in IBM/MS-DOS code page 874; at code C9 (hexadecimal) in Mac-Roman, Mac-CentEuro and several other Macintosh encodings; and at code C1 (hexadecimal) in the Ventura International encoding. Note that the ISO/IEC 8859 encoding series provides no code point for the ellipsis. As with all characters, especially those outside the ASCII range, the author, sender and receiver of an encoded ellipsis must be in agreement upon what bytes are being used to represent the character. Naive text processing software may improperly assume that a particular encoding is being used, resulting in mojibake. Input In Windows, using a suitable code page, the ellipsis can be inserted with Alt+0133 on the numeric keypad. In macOS, it can be inserted with Option+; (on an English language keyboard). In some Linux distributions, it can be inserted with an AltGr combination (which produces an interpunct on other systems) or with a Compose key sequence. In Android, the ellipsis is a long-press key. If Gboard is in alphanumeric layout, change to the numeric and special characters layout, then long-press the period key to insert an ellipsis. This is a single symbol without spaces in between the three dots (…). In Chinese and sometimes in Japanese, ellipsis characters are made by entering two consecutive horizontal ellipses, each with Unicode code point U+2026. In vertical texts, the application should rotate the symbol accordingly. See also Code folding or holophrasting – switching between full text and an ellipsis. Dinkus – a row of three dots (usually widely separated) alone in the middle of a gap between two paragraphs, to indicate a sub-chapter. An em dash is sometimes used instead of an ellipsis, especially in written dialogue. References Further reading Halliday, M. A. K., and Hasan, Ruqaiya (1976), Cohesion in English, London: Longman. External links Mathematical notation Punctuation Typographical symbols Dot patterns
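As an illustrative aside (a minimal sketch, not part of the original article), the compatibility equivalence and the HTML entity described under "Computer representations" above can be checked with Python's standard unicodedata and html modules:

import unicodedata
import html

ellipsis_char = "\u2026"                                  # U+2026
print(unicodedata.name(ellipsis_char))                    # HORIZONTAL ELLIPSIS
print(html.unescape("&hellip;") == ellipsis_char)         # True: the HTML 4.0 entity decodes to U+2026
# Compatibility (NFKC/NFKD) normalization expands U+2026 into three full stops,
# while canonical (NFC/NFD) normalization leaves it unchanged:
print(unicodedata.normalize("NFKC", ellipsis_char))       # ...
print(unicodedata.normalize("NFC", ellipsis_char) == ellipsis_char)  # True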
Ellipsis
[ "Mathematics" ]
4,687
[ "Symbols", "Typographical symbols", "nan" ]
9,598
https://en.wikipedia.org/wiki/Electronvolt
In physics, an electronvolt (symbol eV), also written electron-volt and electron volt, is a measure of the amount of kinetic energy gained by a single electron accelerating through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equal to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 revision of the SI, this sets 1 eV equal to the exact value 1.602176634×10⁻¹⁹ J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Definition and use An electronvolt is the amount of energy gained or lost by a single electron when it moves through an electric potential difference of one volt. Hence, it has a value of one volt, which is 1 J/C, multiplied by the elementary charge e = 1.602176634×10⁻¹⁹ C. Therefore, one electronvolt is equal to 1.602176634×10⁻¹⁹ J. The electronvolt (eV) is a unit of energy, but is not an SI unit. It is a commonly used unit of energy within physics, widely used in solid state, atomic, nuclear and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli- (10⁻³), kilo- (10³), mega- (10⁶), giga- (10⁹), tera- (10¹²), peta- (10¹⁵) or exa- (10¹⁸), the respective symbols being meV, keV, MeV, GeV, TeV, PeV and EeV. The SI unit of energy is the joule (J). In some older documents, and in the name Bevatron, the symbol BeV is used, where the B stands for billion. The symbol BeV is therefore equivalent to GeV, though neither is an SI unit. Relation to other physical properties and units In the fields of physics in which the electronvolt is used, other quantities are typically measured using units derived from the electronvolt as products with fundamental constants of importance in the theory. Mass By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum (from E = mc²). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The kilogram equivalent of 1 eV/c² is approximately 1.783×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c², which makes the GeV/c² a convenient unit of mass for particle physics. The atomic mass constant (mu), one twelfth of the mass of a carbon-12 atom, is close to the mass of a proton; its electronvolt mass-equivalent is approximately 931.494 MeV/c². Momentum By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c. In natural units in which the fundamental velocity constant c is numerically 1, the c may informally be omitted to express momentum using the unit electronvolt. The energy–momentum relation E² = (pc)² + (mc²)², in natural units (with c = 1), is a Pythagorean equation, E² = p² + m². When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as E ≈ pc in high-energy physics, such that an applied energy expressed in the unit eV conveniently results in a numerically approximately equivalent change of momentum when expressed with the unit eV/c. The dimension of momentum is mass times length divided by time (M L T⁻¹). The dimension of energy is mass times length squared divided by time squared (M L² T⁻²). 
Dividing a unit of energy (such as eV) by a fundamental constant that has the dimension of velocity (such as the speed of light c) facilitates the required conversion for using a unit of energy to quantify momentum. For example, if the momentum p of an electron is given in MeV/c, the conversion to the MKS system of units can be achieved using 1 MeV/c ≈ 5.344×10⁻²² kg·m/s. Distance In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented using a unit of inverse particle mass. Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: ħ ≈ 6.582×10⁻¹⁶ eV·s and ħc ≈ 197.327 eV·nm. The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, a meson with a lifetime of 1.530(9) picoseconds has a mean decay length of cτ ≈ 459 μm and a decay width of ħ/τ ≈ 4.3×10⁻⁴ eV. Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds. Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: E (eV) ≈ 1239.84 / λ (nm). Temperature In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: T = E/kB, where kB is the Boltzmann constant; 1 eV corresponds to approximately 11605 K. The kB is assumed when using the electronvolt to express temperature; for example, a typical magnetic confinement fusion plasma is 15 keV (kiloelectronvolts), which is equal to 174 MK (megakelvin). As an approximation: kBT is about 0.025 eV (≈ 1/40 eV) at a temperature of about 290 K (room temperature). Wavelength The energy E, frequency ν, and wavelength λ of a photon are related by E = hν = hc/λ, where h is the Planck constant and c is the speed of light. This reduces to E (eV) ≈ 1239.84 / λ (nm). A photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength about 1240 nm or frequency about 241.8 THz. Scattering experiments In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material. Energy comparisons Molar energy One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy; this corresponds to the Faraday constant (F ≈ 96485 C/mol), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n. See also Orders of magnitude (energy) References External links Fundamental Physical Constants from NIST Particle physics Units of chemical measurement Units of energy Voltage Electron
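To make the conversions above concrete, here is a minimal sketch in Python (not part of the original article), using the exact constants fixed by the 2019 SI revision:

# Exact SI constants (2019 revision)
e   = 1.602176634e-19    # elementary charge, C; also 1 eV in joules
h   = 6.62607015e-34     # Planck constant, J·s
c   = 299792458.0        # speed of light, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K

def ev_to_joules(E_ev):  return E_ev * e
def ev_to_kg(E_ev):      return E_ev * e / c**2           # mass equivalent via E = mc²
def ev_to_kelvin(E_ev):  return E_ev * e / k_B            # temperature equivalent
def ev_to_nm(E_ev):      return h * c / (E_ev * e) * 1e9  # photon wavelength

print(ev_to_kg(0.511e6))     # electron mass: ~9.11e-31 kg
print(ev_to_kelvin(1.0))     # ~11605 K per eV
print(ev_to_nm(2.33))        # ~532 nm, the green-light example above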
Electronvolt
[ "Physics", "Chemistry", "Mathematics" ]
1,515
[ "Electron", "Molecular physics", "Physical quantities", "Electrical systems", "Quantity", "Chemical quantities", "Units of energy", "Physical systems", "Units of chemical measurement", "Particle physics", "Voltage", "Wikipedia categories named after physical quantities", "Units of measuremen...
9,601
https://en.wikipedia.org/wiki/Electrochemistry
Electrochemistry is the branch of physical chemistry concerned with the relationship between electrical potential difference and identifiable chemical change. These reactions involve electrons moving via an electronically conducting phase (typically an external electrical circuit, but not necessarily, as in electroless plating) between electrodes separated by an ionically conducting and electronically insulating electrolyte (or ionic species in a solution). When a chemical reaction is driven by an electrical potential difference, as in electrolysis, or if a potential difference results from a chemical reaction as in an electric battery or fuel cell, it is called an electrochemical reaction. Unlike in other chemical reactions, in electrochemical reactions electrons are not transferred directly between atoms, ions, or molecules, but via the aforementioned electronically conducting circuit. This phenomenon is what distinguishes an electrochemical reaction from a conventional chemical reaction. History 16th–18th century Understanding of electrical matters began in the sixteenth century. During this century, the English scientist William Gilbert spent 17 years experimenting with magnetism and, to a lesser extent, electricity. For his work on magnets, Gilbert became known as the "Father of Magnetism." He discovered various methods for producing and strengthening magnets. In 1663, the German physicist Otto von Guericke created the first electric generator, which produced static electricity by applying friction in the machine. The generator was made of a large sulfur ball cast inside a glass globe, mounted on a shaft. The ball was rotated by means of a crank, and an electric spark was produced when a pad was rubbed against the ball as it rotated. The globe could be removed and used as a source for experiments with electricity. By the mid-18th century the French chemist Charles François de Cisternay du Fay had discovered two types of static electricity, and that like charges repel each other whilst unlike charges attract. Du Fay announced that electricity consisted of two fluids: "vitreous" (from the Latin for "glass"), or positive, electricity; and "resinous," or negative, electricity. This was the two-fluid theory of electricity, which was to be opposed by Benjamin Franklin's one-fluid theory later in the century. In 1785, Charles-Augustin de Coulomb developed the law of electrostatic attraction as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley in England. In the late 18th century the Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between chemical reactions and electricity in his 1791 essay "De Viribus Electricitatis in Motu Musculari Commentarius" (Latin for Commentary on the Effect of Electricity on Muscular Motion), in which he proposed a "nerveo-electrical substance" in biological life forms. In his essay Galvani concluded that animal tissue contained a heretofore neglected innate, vital force, which he termed "animal electricity," which activated nerves and muscles spanned by metal probes. He believed that this new force was a form of electricity in addition to the "natural" form produced by lightning or by the electric eel and torpedo ray as well as the "artificial" form produced by friction (i.e., static electricity). 
Galvani's scientific colleagues generally accepted his views, but Alessandro Volta rejected the idea of an "animal electric fluid," replying that the frog's legs responded to differences in metal temper, composition, and bulk. Galvani refuted this by obtaining muscular action with two pieces of the same material. Nevertheless, Volta's experimentation led him to develop the first practical battery, which took advantage of the relatively high energy (weak bonding) of zinc and could deliver an electrical current for much longer than any other device known at the time. 19th century In 1800, William Nicholson and Johann Wilhelm Ritter succeeded in decomposing water into hydrogen and oxygen by electrolysis using Volta's battery. Soon thereafter Ritter discovered the process of electroplating. He also observed that the amount of metal deposited and the amount of oxygen produced during an electrolytic process depended on the distance between the electrodes. By 1801, Ritter observed thermoelectric currents and anticipated the discovery of thermoelectricity by Thomas Johann Seebeck. By the 1810s, William Hyde Wollaston made improvements to the galvanic cell. Sir Humphry Davy's work with electrolysis led to the conclusion that the production of electricity in simple electrolytic cells resulted from chemical action and that chemical combination occurred between substances of opposite charge. This work led directly to the isolation of metallic sodium and potassium by electrolysis of their molten salts, and of the alkaline earth metals from theirs, in 1808. Hans Christian Ørsted's discovery of the magnetic effect of electric currents in 1820 was immediately recognized as an epoch-making advance, although he left further work on electromagnetism to others. André-Marie Ampère quickly repeated Ørsted's experiment and formulated it mathematically. In 1821, the Estonian-German physicist Thomas Johann Seebeck demonstrated the electrical potential between the juncture points of two dissimilar metals when there is a temperature difference between the joints. In 1827, the German scientist Georg Ohm expressed his law in his famous book "Die galvanische Kette, mathematisch bearbeitet" (The Galvanic Circuit Investigated Mathematically), in which he gave his complete theory of electricity. In 1832, Michael Faraday's experiments led him to state his two laws of electrochemistry. In 1836, John Daniell invented a primary cell which solved the problem of polarization by introducing copper ions into the solution near the positive electrode and thus eliminating hydrogen gas generation. Later results revealed that at the other electrode, amalgamated zinc (i.e., zinc alloyed with mercury) would produce a higher voltage. William Grove produced the first fuel cell in 1839. In 1846, Wilhelm Weber developed the electrodynamometer. In 1868, Georges Leclanché patented a new cell which eventually became the forerunner to the world's first widely used battery, the zinc–carbon cell. Svante Arrhenius published his thesis in 1884 on Recherches sur la conductibilité galvanique des électrolytes (Investigations on the galvanic conductivity of electrolytes). From his results the author concluded that electrolytes, when dissolved in water, become to varying degrees split or dissociated into electrically opposite positive and negative ions. In 1886, Paul Héroult and Charles M. Hall developed an efficient method (the Hall–Héroult process) to obtain aluminium using electrolysis of molten alumina. 
In 1894, Wilhelm Ostwald concluded important studies of the conductivity and electrolytic dissociation of organic acids. Walther Hermann Nernst developed the theory of the electromotive force of the voltaic cell in 1888. In 1889, he showed how the characteristics of the voltage produced could be used to calculate the free energy change in the chemical reaction producing the voltage. He constructed an equation, known as the Nernst equation, which related the voltage of a cell to its properties. In 1898, Fritz Haber showed that definite reduction products can result from electrolytic processes if the potential at the cathode is kept constant. That same year, he explained the reduction of nitrobenzene in stages at the cathode, and this became the model for other similar reduction processes. 20th century In 1902, The Electrochemical Society (ECS) was founded. In 1909, Robert Andrews Millikan began a series of experiments (see oil drop experiment) to determine the electric charge carried by a single electron. In 1911, Harvey Fletcher, working with Millikan, was successful in measuring the charge on the electron, by replacing the water droplets used by Millikan, which quickly evaporated, with oil droplets. Within one day Fletcher measured the charge of an electron to within several decimal places. In 1923, Johannes Nicolaus Brønsted and Martin Lowry published essentially the same theory about how acids and bases behave, using an electrochemical basis. In 1937, Arne Tiselius developed the first sophisticated electrophoretic apparatus. Some years later, he was awarded the 1948 Nobel Prize for his work in protein electrophoresis. A year later, in 1949, the International Society of Electrochemistry (ISE) was founded. In the 1960s and 1970s, quantum electrochemistry was developed by Revaz Dogonadze and his students. Principles Oxidation and reduction The term "redox" stands for reduction-oxidation. It refers to electrochemical processes involving electron transfer to or from a molecule or ion, changing its oxidation state. This reaction can occur through the application of an external voltage or through the release of chemical energy. Oxidation and reduction describe the change of oxidation state that takes place in the atoms, ions or molecules involved in an electrochemical reaction. Formally, oxidation state is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic. An atom or ion that gives up an electron to another atom or ion has its oxidation state increase, and the recipient of the negatively charged electron has its oxidation state decrease. For example, when atomic sodium reacts with atomic chlorine, sodium donates one electron and attains an oxidation state of +1. Chlorine accepts the electron and its oxidation state is reduced to −1. The sign of the oxidation state (positive/negative) actually corresponds to the value of each ion's electronic charge. The attraction of the differently charged sodium and chlorine ions is the reason they then form an ionic bond. The loss of electrons from an atom or molecule is called oxidation, and the gain of electrons is reduction. This can be easily remembered through the use of mnemonic devices. Two of the most popular are "OIL RIG" (Oxidation Is Loss, Reduction Is Gain) and "LEO" the lion says "GER" (Lose Electrons: Oxidation, Gain Electrons: Reduction). Oxidation and reduction always occur in a paired fashion such that one species is oxidized when another is reduced. 
For cases where electrons are shared (covalent bonds) between atoms with large differences in electronegativity, the electron is assigned to the atom with the largest electronegativity in determining the oxidation state. The atom or molecule which loses electrons is known as the reducing agent, or reductant, and the substance which accepts the electrons is called the oxidizing agent, or oxidant. Thus, the oxidizing agent is always being reduced in a reaction; the reducing agent is always being oxidized. Oxygen is a common oxidizing agent, but not the only one. Despite the name, an oxidation reaction does not necessarily need to involve oxygen. In fact, a fire can be fed by an oxidant other than oxygen; fluorine fires are often unquenchable, as fluorine is an even stronger oxidant (it has a weaker bond and higher electronegativity, and thus accepts electrons even better) than oxygen. For reactions involving oxygen, the gain of oxygen implies the oxidation of the atom or molecule to which the oxygen is added (and the oxygen is reduced). In organic compounds, such as butane or ethanol, the loss of hydrogen implies oxidation of the molecule from which it is lost (and the hydrogen is reduced). This follows because the hydrogen donates its electron in covalent bonds with non-metals but it takes the electron along when it is lost. Conversely, loss of oxygen or gain of hydrogen implies reduction. Balancing redox reactions Electrochemical reactions in water are better analyzed by using the ion-electron method, where H+ ions, OH− ions, H2O and electrons (to compensate for the oxidation changes) are added to the cell's half-reactions for oxidation and reduction. Acidic medium In acidic medium, H+ ions and water are added to balance each half-reaction. For example, when manganese(II) reacts with sodium bismuthate: Unbalanced reaction: Mn2+ + NaBiO3 → Bi3+ + MnO4− Oxidation: 4 H2O + Mn2+ → MnO4− + 8 H+ + 5 e− Reduction: 2 e− + 6 H+ + BiO3− → Bi3+ + 3 H2O Finally, the reaction is balanced by multiplying the stoichiometric coefficients so the numbers of electrons in both half-reactions match 8 H2O + 2 Mn2+ → 2 MnO4− + 16 H+ + 10 e− 10 e− + 30 H+ + 5 BiO3− → 5 Bi3+ + 15 H2O and adding the resulting half reactions to give the balanced reaction: 14 H+ + 2 Mn2+ + 5 NaBiO3 → 7 H2O + 2 MnO4− + 5 Bi3+ + 5 Na+ Basic medium In basic medium, OH− ions and water are added to balance each half-reaction. For example, in a reaction between potassium permanganate and sodium sulfite: Unbalanced reaction: KMnO4 + Na2SO3 + H2O → MnO2 + Na2SO4 + KOH Reduction: 3 e− + 2 H2O + MnO4− → MnO2 + 4 OH− Oxidation: 2 OH− + SO32− → SO42− + H2O + 2 e− Here, 'spectator ions' (K+, Na+) were omitted from the half-reactions. 
By multiplying the stoichiometric coefficients so the numbers of electrons in both half-reactions match: 6 e− + 4 H2O + 2 MnO4− → 2 MnO2 + 8 OH− 6 OH− + 3 SO32− → 3 SO42− + 3 H2O + 6 e− the balanced overall reaction is obtained: 2 KMnO4 + 3 Na2SO3 + H2O → 2 MnO2 + 3 Na2SO4 + 2 KOH Neutral medium The same procedure as used in acidic medium can be applied, for example, to balance the complete combustion of propane: Unbalanced reaction: C3H8 + O2 → CO2 + H2O Reduction: 4 H+ + O2 + 4 e− → 2 H2O Oxidation: 6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+ By multiplying the stoichiometric coefficients so the numbers of electrons in both half-reactions match: 20 H+ + 5 O2 + 20 e− → 10 H2O 6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+ the balanced equation is obtained: C3H8 + 5 O2 → 3 CO2 + 4 H2O Electrochemical cells An electrochemical cell is a device that produces an electric current from energy released by a spontaneous redox reaction. This kind of cell includes the galvanic cell or voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted experiments on chemical reactions and electric current during the late 18th century. Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move. The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. This flow of electrons is an electric current that can be used to do work, such as turning a motor or powering a light. A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell. The half reactions in a Daniell cell are as follows: Zinc electrode (anode): Zn → Zn2+ + 2 e− Copper electrode (cathode): Cu2+ + 2 e− → Cu In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and the ions deposit at the copper cathode as an electrodeposit. This cell forms a simple battery as it will spontaneously generate a flow of electric current from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the anode and formation of copper ions at the cathode. To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. 
The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while minimizing electrolyte mixing. To further minimize mixing of the electrolytes, a salt bridge can be used, which consists of an electrolyte-saturated gel in an inverted U-tube. As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte. A voltmeter is capable of measuring the change of electrical potential between the anode and the cathode. The electrochemical cell voltage is also referred to as electromotive force or emf. A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell: Zn | Zn2+ (1 M) || Cu2+ (1 M) | Cu First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the limit between the phases (oxidation changes). The double vertical lines represent the salt bridge of the cell. Finally, the oxidized form of the metal to be reduced at the cathode is written, separated from its reduced form by the vertical line. The electrolyte concentration is given, as it is an important variable in determining the exact cell potential. Standard electrode potential To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). The standard hydrogen electrode undergoes the reaction 2 H+ + 2 e− → H2 which is shown as a reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term "standard" in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H+ activity equal to 1 (usually assumed to be [H+] = 1 mol/liter, i.e. pH = 0). The SHE electrode can be connected to any other electrode by a salt bridge and an external circuit to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, then it is more easily reduced than the SHE, which forces the SHE to be the anode (an example is Cu in aqueous CuSO4 with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode is more easily oxidized than the SHE (such as Zn in ZnSO4, where the standard electrode potential is −0.76 V). Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). 
The one that is smaller will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode: E°cell = E°red (cathode) − E°red (anode) = E°red (cathode) + E°oxi (anode) For example, the standard electrode potential for a copper electrode is: Cell diagram Pt | H2 (1 atm) | H+ (1 M) || Cu2+ (1 M) | Cu E°cell = E°red (cathode) − E°red (anode) At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode, giving Ecell = E°(Cu2+/Cu) − E°(H+/H2), or E°(Cu2+/Cu) = 0.34 V. Changes in the stoichiometric coefficients of a balanced cell equation will not change the E°red value because the standard electrode potential is an intensive property. Spontaneity of redox reaction During operation of an electrochemical cell, chemical energy is transformed into electrical energy. This can be expressed mathematically as the product of the cell's emf Ecell measured in volts (V) and the electric charge Qele,trans transferred through the external circuit. Electrical energy = EcellQele,trans Qele,trans is the cell current integrated over time and measured in coulombs (C); it can also be determined by multiplying the total number ne of electrons transferred (measured in moles) times Faraday's constant (F). The emf of the cell at zero current is the maximum possible emf. It can be used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation: Wmax = Wele = −neFEcell, where work is defined as positive when it increases the energy of the system. Since the free energy is the maximum amount of work that can be extracted from a system, one can write: ΔG = −neFEcell. A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell production of an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis. A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy. Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example. The relation between the equilibrium constant, K, and the Gibbs free energy for an electrochemical cell is expressed as follows: ΔG° = −RT ln K = −neFE°cell. Rearranging to express the relation between standard potential and equilibrium constant yields E°cell = (RT / neF) ln K. At T = 298 K, the previous equation can be rewritten using the Briggsian (base-10) logarithm as follows: E°cell = (0.05916 V / ne) log10 K. Cell EMF dependency on changes in concentration Nernst equation The standard potential of an electrochemical cell requires standard conditions (ΔG°) for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. 
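As a numerical sketch (not part of the original text) of the relations above, the Daniell cell values quoted earlier (+0.34 V for Cu2+/Cu and −0.76 V for Zn2+/Zn) give the standard cell potential, the Gibbs free energy change and the equilibrium constant in a few lines of Python:

import math

E_red_cathode = 0.34      # Cu2+ + 2 e− → Cu, V
E_red_anode   = -0.76     # Zn2+ + 2 e− → Zn, V
n, F, R, T    = 2, 96485.0, 8.3145, 298.0

E_cell = E_red_cathode - E_red_anode          # 1.10 V
dG     = -n * F * E_cell                      # ≈ −212 kJ/mol: spontaneous
K      = math.exp(n * F * E_cell / (R * T))   # ≈ 1e37: reaction goes essentially to completion

print(E_cell, dG, K)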
In the late 19th century, the German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential. Earlier in that century, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy: ΔG = ΔG° + RT ln Q. Here ΔG is the change in Gibbs free energy, ΔG° is the change in Gibbs free energy when Q is equal to 1 (standard conditions), T is absolute temperature (Kelvin), R is the gas constant and Q is the reaction quotient, which can be calculated by dividing concentrations of products by those of reactants, each raised to the power of its stoichiometric coefficient, using only those products and reactants that are aqueous or gaseous. Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity. Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes −neFΔE = −neFΔE° + RT ln Q. Here ne is the number of electrons (in moles), F is the Faraday constant (in coulombs/mole), and ΔE is the cell potential (in volts). Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name: ΔE = ΔE° − (RT / neF) ln Q. Assuming standard conditions (T = 298 K or 25 °C) and R = 8.3145 J/(K·mol), the equation above can be expressed in base-10 logarithm as shown below: ΔE = ΔE° − (0.05916 V / ne) log10 Q. Note that RT/F is also known as the thermal voltage VT and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10. Concentration cells A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes on the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells. An example is an electrochemical cell, where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both electrodes can undergo the same chemistry (although the reaction proceeds in reverse at the anode): Cu2+ + 2 e− → Cu Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu2+ ions increases. Reduction will take place in the cell's compartment where the concentration is higher and oxidation will occur on the more dilute side. The following cell diagram describes the concentration cell mentioned above: Cu | Cu2+ (0.05 M) || Cu2+ (2.0 M) | Cu where the half cell reactions for oxidation and reduction are: Oxidation: Cu → Cu2+ (0.05 M) + 2 e− Reduction: Cu2+ (2.0 M) + 2 e− → Cu Overall reaction: Cu2+ (2.0 M) → Cu2+ (0.05 M) The cell's emf is calculated through the Nernst equation as follows: E = E° − (0.05916 V / 2) log([Cu2+]dilute / [Cu2+]concentrated). The value of E° in this kind of cell is zero, as electrodes and ions are the same in both half-cells. After replacing values from the case mentioned, it is possible to calculate the cell's potential: E = 0 − (0.05916 V / 2) log(0.05 / 2.0) ≈ +0.0474 V. However, this value is only approximate, as the reaction quotient is defined in terms of ion activities, which can be approximated by the concentrations as calculated here. The Nernst equation plays an important role in understanding electrical effects in cells and organelles. 
Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell. Battery Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc-manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery using zinc and mercuric oxide provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells. The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead–acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is that if left uncharged, sulfate will crystallize within the lead plates of the battery, rendering it useless. These batteries last an average of 3 years with daily use, but it is not unheard of for a lead–acid battery to still be functional after 7–10 years. Lead–acid cells continue to be widely used in automobiles. All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low-temperature performance. The lithium metal battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices. The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen and oxygen directly into electrical energy with a much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system. Corrosion Corrosion is an electrochemical process, which reveals itself as rust or tarnish on metals like iron or copper and their respective alloys, steel and brass. Iron corrosion For iron rust to occur, the metal has to be in contact with oxygen and water. The chemical reactions for this process are relatively complex and not all of them are completely understood. It is believed the causes are the following: Electron transfer (reduction-oxidation) One area on the surface of the metal acts as the anode, which is where the oxidation (corrosion) occurs. At the anode, the metal gives up electrons. Fe → Fe2+ + 2 e− Electrons are transferred from iron, reducing oxygen in the atmosphere into water at the cathode, which is located in another region of the metal. O2 + 4 H+ + 4 e− → 2 H2O Global reaction for the process: 2 Fe + O2 + 4 H+ → 2 Fe2+ + 2 H2O Standard emf for iron rusting: E° = E° (cathode) − E° (anode) = 1.23 V − (−0.44 V) = 1.67 V Iron corrosion takes place in an acid medium; H+ ions come from the reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. 
Fe2+ ions oxidize further, following this equation: 4 Fe2+ + O2 + (4+2) H2O → 2 Fe2O3·H2O + 8 H+ Iron(III) oxide hydrate is known as rust. The amount of water associated with the iron oxide varies; thus the chemical formula is represented by Fe2O3·H2O. An electric circuit is formed as the passage of electrons and ions occurs; thus if an electrolyte is present, it will facilitate oxidation, explaining why rusting is quicker in salt water. Corrosion of common metals Coinage metals, such as copper and silver, slowly corrode through use. A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high-sulfur foods such as eggs or the low levels of sulfur species in the air develop a layer of black silver sulfide. Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia. Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface, which bonds with the underlying metal. This thin oxide layer protects the underlying bulk of the metal from the air, preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized. Prevention of corrosion Attempts to save a metal from becoming anodic are of two general types. Anodic regions dissolve and destroy the structural integrity of the metal. While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur. Coating Metals can be coated with paint or other less conductive metals (passivation). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion. The region under the coating adjacent to the scratch acts as the anode of the reaction. Sacrificial anodes A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, thus sparing it from corrosion. It is called "sacrificial" because the anode dissolves and has to be replaced periodically. Zinc bars are attached to various locations on steel ship hulls to render the ship hull cathodic. The zinc bars are replaced periodically. Other metals, such as magnesium, would work very well, but zinc is the least expensive useful metal. To protect pipelines, an ingot of magnesium (or zinc) is buried beside the pipeline and connected electrically to the pipe above ground. The pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed. At intervals, new ingots are buried to replace those dissolved. Electrolysis The spontaneous redox reactions of a conventional battery produce electricity through the different reduction potentials of the cathode and anode in the electrolyte. 
However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell.

Electrolysis of molten sodium chloride

When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine. Industrially, this process takes place in a special cell called a Downs cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell. The reactions that take place in a Downs cell are the following:

Anode (oxidation): 2 Cl− → Cl2 + 2 e−
Cathode (reduction): 2 Na+ + 2 e− → 2 Na
Overall reaction: 2 Na+ + 2 Cl− → 2 Na + Cl2

This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in the mineral dressing and metallurgical industries. The emf for this process is approximately −4 V, indicating a (very) non-spontaneous process. For this reaction to occur, the power supply should provide at least a potential difference of 4 V; in practice, larger voltages must be used for the reaction to proceed at a high rate.

Electrolysis of water

Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. Water does not decompose into hydrogen and oxygen spontaneously, as the Gibbs free energy change for the process at standard conditions is very positive, about 474.4 kJ. The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell, in which a pair of inert electrodes, usually made of platinum and immersed in water, act as anode and cathode. The electrolysis starts with the application of an external voltage between the electrodes. Without an electrolyte such as sodium chloride or sulfuric acid (most often used at 0.1 M), the process will not occur except at extremely high voltages. Bubbles of the gases will be seen near both electrodes. The following half reactions describe the process mentioned above:

Anode (oxidation): 2 H2O → O2 + 4 H+ + 4 e−
Cathode (reduction): 2 H2O + 2 e− → H2 + 2 OH−
Overall reaction: 2 H2O → 2 H2 + O2

Although strong acids may be used in the apparatus, the reaction does not consume the acid on net. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively low voltages (about 2 V, depending on the pH).

Electrolysis of aqueous solutions

Electrolysis in an aqueous solution is similar to the process described for the electrolysis of water. However, it is considered a more complex process, because the species in solution have to be analyzed in terms of half reactions to determine which are reduced and which are oxidized.

Electrolysis of a solution of sodium chloride

The presence of water in a solution of sodium chloride must be examined with respect to its reduction and oxidation at both electrodes. Usually, water is electrolysed as described above, yielding gaseous oxygen at the anode and gaseous hydrogen at the cathode. On the other hand, sodium chloride in water dissociates into Na+ and Cl− ions. The cation, which is the positive ion, is attracted to the cathode (−), where the sodium ion could be reduced. The chloride anion is attracted to the anode (+), where it could be oxidized to chlorine gas.
The following half reactions should be considered in the process mentioned:

1. Cathode: Na+ + e− → Na  (E°red = −2.71 V)
2. Anode: 2 Cl− → Cl2 + 2 e−  (E°red = +1.36 V)
3. Cathode: 2 H2O + 2 e− → H2 + 2 OH−  (E°red = −0.83 V)
4. Anode: 2 H2O → O2 + 4 H+ + 4 e−  (E°red = +1.23 V)

Reaction 1 is discarded, as it has the most negative standard reduction potential, making it the least thermodynamically favorable in the process. Comparing the potentials of reactions 2 and 4, the oxidation of water should, on thermodynamic grounds, be favored over the oxidation of chloride; yet in practice chlorine gas, not oxygen gas, is produced at the anode. This is due to another effect, known as the overvoltage effect: additional voltage is sometimes required beyond the voltage predicted by E°cell, for kinetic rather than thermodynamic reasons. Indeed, the activation energy for the chloride-ion oxidation has been shown to be very low, making it favorable in kinetic terms. In other words, although the applied voltage may be thermodynamically sufficient to drive electrolysis, the rate can be so slow that, to make the process proceed in a reasonable time frame, the voltage of the external source has to be increased (hence, overvoltage). The overall reaction for the process, according to this analysis, is the following:

Anode (oxidation): 2 Cl− → Cl2 + 2 e−
Cathode (reduction): 2 H2O + 2 e− → H2 + 2 OH−
Overall reaction: 2 H2O + 2 Cl− → H2 + Cl2 + 2 OH−

As the overall reaction indicates, the concentration of chloride ions is reduced relative to that of OH− ions (whose concentration increases). The reaction also shows the production of gaseous hydrogen and chlorine and of aqueous sodium hydroxide.

Quantitative electrolysis and Faraday's laws

Quantitative aspects of electrolysis were originally developed by Michael Faraday in 1834. Faraday is also credited with coining the terms electrolyte and electrolysis, among many others, while he studied the quantitative analysis of electrochemical reactions. He was also an advocate of the law of conservation of energy.

First law

After several experiments on electric current in non-spontaneous processes, Faraday concluded that the mass of the products yielded at the electrodes was proportional to the current supplied to the cell, the length of time the current flowed, and the molar mass of the substance analyzed. In other words, the amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell. A simplified statement of Faraday's first law is

m = Q M / (n F),

where m is the mass of the substance produced at the electrode (in grams), Q is the total electric charge that passed through the solution (in coulombs), n is the valence number of the substance as an ion in solution (electrons per ion), M is the molar mass of the substance (in grams per mole), and F is Faraday's constant (96485 coulombs per mole).

Second law

Faraday devised the laws of chemical electrodeposition of metals from solutions in 1857. He formulated the second law of electrolysis, stating that "the amounts of bodies which are equivalent to each other in their ordinary chemical action have equal quantities of electricity naturally associated with them." In other words, the quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights.
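As a hedged illustration of the first law, the short Python sketch below computes a deposited mass from current and time; the copper-plating numbers are hypothetical, chosen only for illustration, and the last line cross-checks the water-electrolysis figure quoted earlier.

# Faraday's first law, m = Q*M / (n*F), using the symbols defined above.
F = 96485.0  # Faraday constant, C/mol

def mass_deposited(current_A, time_s, molar_mass_g_mol, n_electrons):
    """Mass (grams) of substance produced at an electrode."""
    Q = current_A * time_s                       # total charge, coulombs
    return Q * molar_mass_g_mol / (n_electrons * F)

# Hypothetical copper-plating example: Cu2+ + 2 e- -> Cu, M = 63.55 g/mol,
# a 2.0 A current applied for one hour.
print(mass_deposited(2.0, 3600.0, 63.55, 2))     # about 2.37 g of copper

# Cross-check of the water-electrolysis figure quoted earlier:
# E_min = dG / (n*F), with dG about +474.4 kJ and n = 4 electrons.
print(474.4e3 / (4 * F))                         # about 1.23 V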
An important application of the second law of electrolysis is electroplating, which, together with the first law, has a significant number of uses in industry, as when metals are protectively coated to avoid corrosion.

Applications

There are various important electrochemical processes in both nature and industry, like the coating of objects with metals or metal oxides through electrodeposition, the addition (electroplating) or removal (electropolishing) of thin layers of metal from an object's surface, and the detection of alcohol in drunk drivers through the redox reaction of ethanol. The generation of chemical energy through photosynthesis is inherently an electrochemical process, as is the production of metals like aluminum and titanium from their ores. Certain diabetes blood sugar meters measure the amount of glucose in the blood through its redox potential. In addition to established electrochemical technologies (like deep-cycle lead–acid batteries) there is also a wide range of emerging technologies such as fuel cells, large-format lithium-ion batteries, electrochemical reactors and supercapacitors that are becoming increasingly commercial. Electrochemical or coulometric titrations were introduced for the quantitative analysis of minute quantities in 1938 by the Hungarian chemists László Szebellédy and Zoltán Somogyi. Electrochemistry also has important applications in the food industry, like the assessment of food/package interactions, the analysis of milk composition, the characterization and determination of the freezing end-point of ice-cream mixes, and the determination of free acidity in olive oil.

See also

Bioelectromagnetism
Bioelectrochemistry
Bipolar electrochemistry
Contact tension – a historical forerunner of the theory of electrochemistry
Corrosion engineering
Cyclic voltammetry
Electrochemical coloring of metals
Electrochemical impedance spectroscopy
Electroanalytical methods
Electrocatalyst
Electrochemical potential
Electrochemiluminescence
Electrodeionization
Electropolishing
Electroplating
Electrochemical engineering
Electrochemical energy conversion
Electrosynthesis
Frost diagram
Fuel cells
ITIES
List of electrochemists
Important publications in electrochemistry
Magnetoelectrochemistry
Nanoelectrochemistry
Photoelectrochemistry
Plasma electrochemistry
Pourbaix diagram
Protein film voltammetry
Reactivity series
Redox titration
Standard electrode potential (data page)
Voltammetry

References

Bibliography

Ebbing, Darrell D. and Gammon, Steven D., General Chemistry (2007)
Nobel Lectures in Chemistry, Volume 1, World Scientific (1999)
Swaddle, Thomas Wilson, Inorganic Chemistry: An Industrial and Environmental Perspective, Academic Press (1997)
Brett, C. M. A. and Brett, A. M. O., Electrochemistry: Principles, Methods, and Applications, Oxford University Press (1993)
Wiberg, Egon; Wiberg, Nils and Holleman, Arnold Frederick, Inorganic Chemistry, Academic Press (2001)

External links

Physical chemistry
Electrochemistry
[ "Physics", "Chemistry" ]
9,477
[ "Physical chemistry", "Electrochemistry", "Applied and interdisciplinary physics", "nan" ]
9,603
https://en.wikipedia.org/wiki/Ernest%20Rutherford
Ernest Rutherford, 1st Baron Rutherford of Nelson (30 August 1871 – 19 October 1937) was a New Zealand physicist who was a pioneering researcher in both atomic and nuclear physics. He has been described as "the father of nuclear physics" and "the greatest experimentalist since Michael Faraday". In 1908, he was awarded the Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances." He was the first Oceanian Nobel laureate, and the first to perform the awarded work in Canada.

Rutherford's discoveries include the concept of radioactive half-life, the radioactive element radon, and the differentiation and naming of alpha and beta radiation. Together with Thomas Royds, Rutherford is credited with proving that alpha radiation is composed of helium nuclei. In 1911, he theorized that atoms have their charge concentrated in a very small nucleus. He arrived at this theory through his discovery and interpretation of Rutherford scattering during the gold foil experiment performed by Hans Geiger and Ernest Marsden. In 1912 he invited Niels Bohr to join his lab, leading to the Bohr–Rutherford model of the atom. In 1917, he performed the first artificially induced nuclear reaction by conducting experiments in which nitrogen nuclei were bombarded with alpha particles. These experiments led him to discover the emission of a subatomic particle that he initially called the "hydrogen atom" but later (more precisely) renamed the proton. He is also credited with developing the atomic numbering system alongside Henry Moseley. His other achievements include advancing the fields of radio communications and ultrasound technology.

Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. Under his leadership, the neutron was discovered by James Chadwick in 1932. In the same year, the first controlled experiment to split the nucleus was performed by John Cockcroft and Ernest Walton, working under his direction. In honour of his scientific advancements, Rutherford was created a baron of the United Kingdom. After his death in 1937, he was buried in Westminster Abbey near Charles Darwin and Isaac Newton. The chemical element rutherfordium (104Rf) was named after him in 1997.

Early life and education

Ernest Rutherford was born on 30 August 1871 in Brightwater, a town near Nelson, New Zealand. He was the fourth of twelve children of James Rutherford, an immigrant farmer and mechanic from Perth, Scotland, and his wife Martha Thompson, a schoolteacher from Hornchurch, England. His first name was mistakenly written as 'Earnest' on his birth certificate, and he was known by his family as Ern. When Rutherford was five he moved to Foxhill, New Zealand, and attended Foxhill School. In 1883, when he was eleven, the Rutherford family moved to Havelock, a town in the Marlborough Sounds, to be closer to the flax mill that Rutherford's father had developed. Ernest studied at Havelock School. In 1887, on his second attempt, he won a scholarship to study at Nelson College. On his first examination attempt, he had received 75 out of 130 marks for geography, 76 out of 130 for history, 101 out of 140 for English, and 200 out of 200 for arithmetic, totalling 452 out of 600 marks, the highest of anyone from Nelson. When he was awarded the scholarship, he had received 580 out of 600 possible marks.
After being awarded the scholarship, Havelock School presented him with a five-volume set of books titled The Peoples of the World. He studied at Nelson College between 1887 and 1889, and was head boy in 1889. He also played in the school's rugby team. He was offered a cadetship in government service, but declined, as he still had 15 months of college remaining. In 1889, after his second attempt, he won a scholarship to study at Canterbury College, University of New Zealand, between 1890 and 1894. He participated in its debating society and the Science Society. At Canterbury, he was awarded a BA in Latin, English, and Maths in 1892, an MA in Mathematics and Physical Science in 1893, and a BSc in Chemistry and Geology in 1894. Thereafter, he invented a new form of radio receiver, and in 1895 Rutherford was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851 to travel to England for postgraduate study at the Cavendish Laboratory, University of Cambridge. In 1897, he was awarded a BA Research Degree and the Coutts-Trotter Studentship of Trinity College, Cambridge.

Scientific career

When Rutherford began his studies at Cambridge, he was among the first "aliens" (those without a Cambridge degree) allowed to do research at the university, and was additionally honoured to study under J. J. Thomson. With Thomson's encouragement, Rutherford detected radio waves at , and briefly held the world record for the distance over which electromagnetic waves could be detected, although when he presented his results at the British Association meeting in 1896, he discovered he had been outdone by Guglielmo Marconi, whose radio waves had sent a message across nearly .

Work with radioactivity

Again under Thomson's leadership, Rutherford worked on the conductive effects of X-rays on gases, work which led to the discovery of the electron, the results first presented by Thomson in 1897. Hearing of Henri Becquerel's experience with uranium, Rutherford started to explore its radioactivity, discovering two types of radiation that differed from X-rays in their penetrating power. Continuing his research in Canada, in 1899 he coined the terms "alpha ray" and "beta ray" to describe the two distinct types. In 1898, Rutherford was appointed Macdonald Professor of Physics at McGill University in Montreal, Canada, on Thomson's recommendation. From 1900 to 1903, he was joined at McGill by the young chemist Frederick Soddy (Nobel Prize in Chemistry, 1921), for whom he set the problem of identifying the noble gas emitted by the radioactive element thorium, a substance which was itself radioactive and would coat other substances. Once he had eliminated all the normal chemical reactions, Soddy suggested that it must be one of the inert gases, which they named thoron. This substance was later found to be 220Rn, an isotope of radon. They also found another substance they called thorium X, later identified as 224Ra, and continued to find traces of helium. They also worked with samples of "Uranium X" (protactinium), from William Crookes, and radium, from Marie Curie. Rutherford further investigated thoron in conjunction with R. B. Owens and found that a sample of radioactive material of any size invariably took the same amount of time for half the sample to decay (in this case, 11 minutes), a phenomenon for which he coined the term "half-life". Rutherford and Soddy published their paper "Law of Radioactive Change" to account for all their experiments.
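The half-life law Rutherford identified can be stated compactly: if T is the half-life, the fraction of a sample remaining after time t is (1/2)^(t/T), whatever the sample's initial size. A minimal Python sketch, using the 11-minute figure quoted above:

# The decay law behind the half-life: the surviving fraction after time t
# is (1/2)**(t/T) for half-life T, independent of the sample's size.
def remaining_fraction(t_minutes, half_life_minutes=11.0):
    return 0.5 ** (t_minutes / half_life_minutes)

for t in (0, 11, 22, 33):
    print(t, "min:", remaining_fraction(t))   # 1.0, 0.5, 0.25, 0.125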
Until then, atoms were assumed to be the indestructible basis of all matter; and although Curie had suggested that radioactivity was an atomic phenomenon, the idea of the atoms of radioactive substances breaking up was a radically new idea. Rutherford and Soddy demonstrated that radioactivity involved the spontaneous disintegration of atoms into other, as yet, unidentified matter. In 1903, Rutherford considered a type of radiation, discovered (but not named) by French chemist Paul Villard in 1900, as an emission from radium, and realised that this observation must represent something different from his own alpha and beta rays, due to its very much greater penetrating power. Rutherford therefore gave this third type of radiation the name of gamma ray. All three of Rutherford's terms are in standard use today – other types of radioactive decay have since been discovered, but Rutherford's three types are among the most common. In 1904, Rutherford suggested that radioactivity provides a source of energy sufficient to explain the existence of the Sun for the many millions of years required for the slow biological evolution on Earth proposed by biologists such as Charles Darwin. The physicist Lord Kelvin had argued earlier for a much younger Earth, based on the insufficiency of known energy sources, but Rutherford pointed out, at a lecture attended by Kelvin, that radioactivity could solve this problem. Later that year, he was elected as a member to the American Philosophical Society, and in 1907 he returned to Britain to take the chair of physics at the Victoria University of Manchester. In Manchester, Rutherford continued his work with alpha radiation. In conjunction with Hans Geiger, he developed zinc sulfide scintillation screens and ionisation chambers to count alpha particles. By dividing the total charge accumulated on the screen by the number counted, Rutherford determined that the charge on the alpha particle was two. In late 1907, Ernest Rutherford and Thomas Royds allowed alphas to penetrate a very thin window into an evacuated tube. As they sparked the tube into discharge, the spectrum obtained from it changed, as the alphas accumulated in the tube. Eventually, the clear spectrum of helium gas appeared, proving that alphas were at least ionised helium atoms, and probably helium nuclei. In 1910 Rutherford, with Geiger and mathematician Harry Bateman published their classic paper describing the first analysis of the distribution in time of radioactive emission, a distribution now called the Poisson distribution. Ernest Rutherford was awarded the 1908 Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances". Model of the atom Rutherford continued to make ground-breaking discoveries long after receiving the Nobel prize in 1908. Under his direction in 1909, Hans Geiger and Ernest Marsden performed the Geiger–Marsden experiment, which demonstrated the nuclear nature of atoms by measuring the deflection of alpha particles passing through a thin gold foil. Rutherford was inspired to ask Geiger and Marsden in this experiment to look for alpha particles with very high deflection angles, which was not expected according to any theory of matter at that time. Such deflection angles, although rare, were found. Reflecting on these results in one of his last lectures, Rutherford was quoted as saying: "It was quite the most incredible event that has ever happened to me in my life. 
It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you." It was Rutherford's interpretation of this data that led him to propose the nucleus, a very small, charged region containing much of the atom's mass. In 1912, Rutherford was joined by Niels Bohr (who postulated that electrons moved in specific orbits about the compact nucleus). Bohr adapted Rutherford's nuclear structure to be consistent with Max Planck's quantum hypothesis. The resulting Rutherford–Bohr model was the basis for quantum mechanical atomic physics of Heisenberg which remains valid today. Piezoelectricity During World War I, Rutherford worked on a top-secret project to solve the practical problems of submarine detection. Both Rutherford and Paul Langevin suggested the use of piezoelectricity, and Rutherford successfully developed a device which measured its output. The use of piezoelectricity then became essential to the development of ultrasound as it is known today. The claim that Rutherford developed sonar, however, is a misconception, as subaquatic detection technologies utilise Langevin's transducer. Discovery of the proton Together with H.G. Moseley, Rutherford developed the atomic numbering system in 1913. Rutherford and Moseley's experiments used cathode rays to bombard various elements with streams of electrons and observed that each element responded in a consistent and distinct manner. Their research was the first to assert that each element could be defined by the properties of its inner structures – an observation that later led to the discovery of the atomic nucleus. This research led Rutherford to theorize that the hydrogen atom (at the time the least massive entity known to bear a positive charge) was a sort of "positive electron" – a component of every atomic element. It was not until 1919 that Rutherford expanded upon his theory of the "positive electron" with a series of experiments beginning shortly before the end of his time at Manchester. He found that nitrogen, and other light elements, ejected a proton, which he called a "hydrogen atom", when hit with α (alpha) particles. In particular, he showed that particles ejected by alpha particles colliding with hydrogen have unit charge and 1/4 the momentum of alpha particles. Rutherford returned to the Cavendish Laboratory in 1919, succeeding J. J. Thomson as the Cavendish professor and the laboratory's director, posts that he held until his death in 1937. During his tenure, Nobel prizes were awarded to James Chadwick for discovering the neutron (in 1932), John Cockcroft and Ernest Walton for an experiment that was to be known as splitting the atom using a particle accelerator, and Edward Appleton for demonstrating the existence of the ionosphere. Development of proton and neutron theory In 1919–1920, Rutherford continued his research on the "hydrogen atom" to confirm that alpha particles break down nitrogen nuclei and to affirm the nature of the products. This result showed Rutherford that hydrogen nuclei were a part of nitrogen nuclei (and by inference, probably other nuclei as well). Such a construction had been suspected for many years, on the basis of atomic weights that were integral multiples of that of hydrogen; see Prout's hypothesis. Hydrogen was known to be the lightest element, and its nuclei presumably the lightest nuclei. 
Now, because of all these considerations, Rutherford decided that a hydrogen nucleus was possibly a fundamental building block of all nuclei, and also possibly a new fundamental particle as well, since nothing was known to be lighter than that nucleus. Thus, confirming and extending the work of Wilhelm Wien, who in 1898 discovered the proton in streams of ionized gas, in 1920 Rutherford postulated the hydrogen nucleus to be a new particle, which he dubbed the proton. In 1921, while working with Niels Bohr, Rutherford theorized about the existence of neutrons, (which he had christened in his 1920 Bakerian Lecture), which could somehow compensate for the repelling effect of the positive charges of protons by causing an attractive nuclear force and thus keep the nuclei from flying apart, due to the repulsion between protons. The only alternative to neutrons was the existence of "nuclear electrons", which would counteract some of the proton charges in the nucleus, since by then it was known that nuclei had about twice the mass that could be accounted for if they were simply assembled from hydrogen nuclei (protons). But how these nuclear electrons could be trapped in the nucleus, was a mystery. In 1932, Rutherford's theory of neutrons was proved by his associate James Chadwick, who recognised neutrons immediately when they were produced by other scientists and later himself, in bombarding beryllium with alpha particles. In 1935, Chadwick was awarded the Nobel Prize in Physics for this discovery. Induced nuclear reaction and probing the nucleus In Rutherford's four-part article on the "Collision of α-particles with light atoms" he reported two additional fundamental and far reaching discoveries. First, he showed that at high angles the scattering of alpha particles from hydrogen differed from the theoretical results he himself published in 1911. These were the first results to probe the interactions that hold a nucleus together. Second, he showed that α-particles colliding with nitrogen nuclei would react rather than simply bounce off. One product of the reaction was the proton; the other product was shown by Patrick Blackett, Rutherford's colleague and former student, to be oxygen: 14N + α → 17O + p. Rutherford therefore recognised "that the nucleus may increase rather than diminish in mass as the result of collisions in which the proton is expelled". Blackett was awarded the Nobel prize in 1948 for his work in perfecting the high-speed cloud chamber apparatus used to make that discovery and many others. Later years and honours Rutherford received significant recognition in his home country of New Zealand. In 1901, he earned a DSc from the University of New Zealand. In 1916, he was awarded the Hector Memorial Medal. In 1925, Rutherford called for the New Zealand Government to support education and research, which led to the formation of the Department of Scientific and Industrial Research (DSIR) in the following year. In 1933, Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, which was established by the Royal Society of New Zealand as an award for outstanding scientific research. Additionally, Rutherford received a number of awards from the British Crown. He was knighted in 1914. He was appointed to the Order of Merit in the 1925 New Year Honours. Between 1925 and 1930, he served as President of the Royal Society, and later as president of the Academic Assistance Council which helped almost 1,000 university refugees from Germany. 
In 1931 he was raised to the peerage of the United Kingdom under the title Baron Rutherford of Nelson, and decorated his coat of arms with a kiwi and a Māori warrior. The title became extinct upon his unexpected death in 1937. Since 1992 his portrait has appeared on the New Zealand one-hundred-dollar note.

Personal life and death

Around 1888 Rutherford made his grandmother a wooden potato masher, which is now in the collection of the Royal Society. In 1900, Rutherford married Mary Georgina Newton (1876–1954) at St Paul's Anglican Church, Papanui, in Christchurch. (He had become engaged to her before leaving New Zealand.) They had one daughter, Eileen Mary (1901–1930); she married the physicist Ralph Fowler, and died during the birth of her fourth child. Rutherford's hobbies included golf and motoring.

For some time before his death, Rutherford had a small hernia, which he neglected to have repaired; it eventually became strangulated, rendering him violently ill. He had an emergency operation in London, but died in Cambridge four days later, on 19 October 1937, at age 66, of what physicians termed "intestinal paralysis". After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton, Charles Darwin, and other illustrious British scientists.

Legacy

At the opening session of the 1938 Indian Science Congress, which Rutherford had been expected to preside over before his death, the astrophysicist James Jeans spoke in his place and deemed him "one of the greatest scientists of all time".

Nuclear physics

Rutherford is known as "the father of nuclear physics" because his research, and work done under him as laboratory director, established the nuclear structure of the atom and the essential nature of radioactive decay as a nuclear process. Patrick Blackett, a research fellow working under Rutherford, using natural alpha particles, demonstrated induced nuclear transmutation. Later, Rutherford's team, using protons from an accelerator, demonstrated artificially induced nuclear reactions and transmutation. Rutherford died too early to see Leó Szilárd's idea of controlled nuclear chain reactions come into being. However, a speech of Rutherford's about his artificially induced transmutation in lithium, printed in the 12 September 1933 issue of The Times, was reported by Szilárd to have been his inspiration for thinking of the possibility of a controlled energy-producing nuclear chain reaction. Rutherford's speech touched on the 1932 work of his students John Cockcroft and Ernest Walton in "splitting" lithium into alpha particles by bombardment with protons from a particle accelerator they had constructed. Rutherford realised that the energy released from the split lithium atoms was enormous, but he also realised that the energy needed for the accelerator, and its essential inefficiency in splitting atoms in this fashion, made the project an impossibility as a practical source of energy (accelerator-induced fission of light elements remains too inefficient to be used in this way, even today).

The element rutherfordium, Rf, Z = 104, was named in honour of Rutherford in 1997.

Publications

Books

Radio-activity (1904), 2nd ed.
(1905), Radioactive Transformations (1906), Radioactive Substances and their Radiations (1913) The Electrical Structure of Matter (1926) The Artificial Transmutation of the Elements (1933) The Newer Alchemy (1937) Articles "Disintegration of the Radioactive Elements" Harper's Monthly Magazine, January 1904, pages 279 to 284. See also Bateman equation Hydrophone Magnetic detector Neutron generator Royal Society of New Zealand Rutherford (unit) Rutherfordine The Rutherford Journal List of presidents of the Royal Society Footnotes References Further reading Campbell, John. (1999) Rutherford: Scientist Supreme, AAS Publications, Christchurch, Reeves, Richard (2008). A Force of Nature: The Frontier Genius of Ernest Rutherford. New York: W. W. Norton. Rhodes, Richard (1986). The Making of the Atomic Bomb. New York: Simon & Schuster. Wilson, David (1983). Rutherford. Simple Genius, Hodder & Stoughton, External links Biography and web exhibit American Institute of Physics including the Nobel Lecture, 11 December 1908 The Chemical Nature of the Alpha Particles from Radioactive Substances The Rutherford Museum Rutherford Scientist Supreme Well-source site with details on Rutherford's life. 1871 births 1937 deaths Experimental physicists New Zealand nuclear physicists Radio pioneers Nobel laureates in Chemistry Recipients of the Copley Medal Academic staff of McGill University Presidents of the Royal Society New Zealand fellows of the Royal Society Foreign associates of the National Academy of Sciences Members of the Pontifical Academy of Sciences Honorary members of the Russian Academy of Sciences (1917–1925) Honorary members of the USSR Academy of Sciences Fellows of Trinity College, Cambridge University of Canterbury alumni Academics of the Victoria University of Manchester Barons in the Peerage of the United Kingdom Knights Bachelor People from Brightwater New Zealand people of English descent New Zealand people of Scottish descent British Nobel laureates English Nobel laureates New Zealand Nobel laureates Burials at Westminster Abbey Fellows of the Royal Society of New Zealand Persons of National Historic Significance (Canada) People educated at Nelson College Presidents of the Institute of Physics Honorary Fellows of the Royal Society of Edinburgh Corresponding Members of the Russian Academy of Sciences (1917–1925) Barons created by George V New Zealand recipients of a British peerage New Zealand emigrants to the United Kingdom 20th-century British physicists 19th-century British physicists 19th-century New Zealand physicists New Zealand members of the Order of Merit Recipients of the Matteucci Medal Recipients of the Dalton Medal Members of the American Philosophical Society Discoverers of chemical elements Cavendish Professors of Physics Recipients of Franklin Medal
Ernest Rutherford
[ "Physics" ]
4,594
[ "Experimental physics", "Experimental physicists" ]
9,604
https://en.wikipedia.org/wiki/Many-worlds%20interpretation
The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in different "worlds". The evolution of reality as a whole in MWI is rigidly deterministic and local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s. In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics. The many-worlds interpretation implies that there are many parallel, non-interacting worlds. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend, the EPR paradox and Schrödinger's cat, since every possible outcome of a quantum event exists in its own world. Overview of the interpretation The many-worlds interpretation's key idea is that the linear and unitary dynamics of quantum mechanics applies everywhere and at all times and so describes the whole universe. In particular, it models a measurement as a unitary transformation, a correlation-inducing interaction, between observer and object, without using a collapse postulate, and models observers as ordinary quantum-mechanical systems. This stands in contrast to the Copenhagen interpretation, in which a measurement is a "primitive" concept, not describable by unitary quantum mechanics; using the Copenhagen interpretation the universe is divided into a quantum and a classical domain, and the collapse postulate is central. In MWI there is no division between classical and quantum: everything is quantum and there is no collapse. MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of an uncountable or undefinable amount or number of increasingly divergent, non-communicating parallel universes or quantum worlds. Sometimes dubbed Everett worlds, each is an internally consistent and actualized alternative history or timeline. The many-worlds interpretation uses decoherence to explain the measurement process and the emergence of a quasi-classical world. Wojciech H. Zurek, one of decoherence theory's pioneers, said: "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected." Zurek emphasizes that his work does not depend on a particular interpretation. The many-worlds interpretation shares many similarities with the decoherent histories interpretation, which also uses decoherence to explain the process of measurement or wave function collapse. 
MWI treats the other histories or worlds as real, since it regards the universal wave function as the "basic physical entity" or "the fundamental entity, obeying at all times a deterministic wave equation". The decoherent histories interpretation, on the other hand, needs only one of the histories (or worlds) to be real. Several authors, including Everett, John Archibald Wheeler and David Deutsch, call many-worlds a theory or metatheory, rather than just an interpretation. Everett argued that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world." Deutsch dismissed the idea that many-worlds is an "interpretation", saying that to call it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records."

Formulation

In his 1957 doctoral dissertation, Everett proposed that, rather than relying on external observation for analysis of isolated quantum systems, one could mathematically model an object, as well as its observers, as purely physical systems within the mathematical framework developed by Paul Dirac, John von Neumann, and others, discarding altogether the ad hoc mechanism of wave function collapse.

Relative state

Everett's original work introduced the concept of a relative state. Two (or more) subsystems, after a general interaction, become correlated, or as is now said, entangled. Everett noted that such entangled systems can be expressed as a sum of products of states, where the two or more subsystems are each in a state relative to each other. After a measurement or observation, one member of the pair (or triple, ...) is the measured, or observed, object system, and another member is the measuring apparatus (which may include an observer) having recorded the state of the measured system. Each product of subsystem states in the overall superposition evolves over time independently of the other products. Once the subsystems interact, their states have become correlated or entangled and can no longer be considered independent. In Everett's terminology, each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted.

In the example of Schrödinger's cat, after the box is opened, the entangled system is the cat, the poison vial and the observer. One relative triple of states would be the alive cat, the unbroken vial and the observer seeing an alive cat. Another relative triple of states would be the dead cat, the broken vial and the observer seeing a dead cat. In the example of a measurement of a continuous variable (e.g., position q), the object-observer system decomposes into a continuum of pairs of relative states: the object system's relative states become Dirac delta functions, each centered on a particular value of q, with the corresponding observer relative states representing an observer having recorded that value of q. The states of each pair of relative states are, post measurement, correlated with each other.

In Everett's scheme, there is no collapse; instead, the Schrödinger equation, or its relativistic quantum-field-theory analogue, holds all the time, everywhere. An observation or measurement is modeled by applying the wave equation to the entire system, comprising the object being observed and the observer. One consequence is that every observation causes the combined observer–object wavefunction to change into a quantum superposition of two or more non-interacting branches.
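The relative-state bookkeeping can be made concrete with a small numerical sketch. The following Python toy model is an illustration of the idea, not Everett's own formalism: the object and observer are each modeled as two-state systems, and the measurement is a correlating unitary; the amplitudes are arbitrary choices.

import numpy as np

# Toy model (illustrative only): an "object" qubit a|0> + b|1> and an
# "observer" register starting in |ready> = |0>. The measurement is a
# correlating unitary (a CNOT), not a collapse postulate.
a, b = np.sqrt(0.3), np.sqrt(0.7)     # arbitrary illustrative amplitudes
obj = np.array([a, b])                # object state
obs = np.array([1.0, 0.0])            # observer in the "ready" state

state = np.kron(obj, obs)             # joint state in the basis |obj, obs>

# The correlating unitary flips the observer bit exactly when the object
# is |1>, i.e. the observer "records" the object's state.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

after = U @ state                     # -> a|0, saw 0> + b|1, saw 1>

# Two non-interacting branches (relative-state pairs) remain; the weight
# of each is its squared amplitude, the Born-rule probability.
print([round(abs(x) ** 2, 3) for x in after if abs(x) > 1e-12])  # [0.3, 0.7]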
Thus the process of measurement or observation, or any correlation-inducing interaction, splits the system into sets of relative states, where each set of relative states, forming a branch of the universal wave function, is consistent within itself, and all future measurements (including by multiple observers) will confirm this consistency. Renamed many-worlds Everett had referred to the combined observer–object system as split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a branching tree, where each branch is a set of all the states relative to each other. Bryce DeWitt popularized Everett's work with a series of publications calling it the Many Worlds Interpretation. Focusing on the splitting process, DeWitt introduced the term "world" to describe a single branch of that tree, which is a consistent history. All observations or measurements within any branch are consistent within themselves. Since many observation-like events have happened and are constantly happening, Everett's model implies that there are an enormous and growing number of simultaneously existing states or "worlds". Properties MWI removes the observer-dependent role in the quantum measurement process by replacing wave function collapse with the established mechanism of quantum decoherence. As the observer's role lies at the heart of all "quantum paradoxes" such as the EPR paradox and von Neumann's "boundary problem", this provides a clearer and easier approach to their resolution. Since the Copenhagen interpretation requires the existence of a classical domain beyond the one described by quantum mechanics, it has been criticized as inadequate for the study of cosmology. While there is no evidence that Everett was inspired by issues of cosmology, he developed his theory with the explicit goal of allowing quantum mechanics to be applied to the universe as a whole, hoping to stimulate the discovery of new phenomena. This hope has been realized in the later development of quantum cosmology. MWI is a realist, deterministic and local theory. It achieves this by removing wave function collapse, which is indeterministic and nonlocal, from the deterministic and local equations of quantum theory. MWI (like other, broader multiverse theories) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe. MWI depends crucially on the linearity of quantum mechanics, which underpins the superposition principle. If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds is invalid. All quantum field theories are linear and compatible with the MWI, a point Everett emphasized as a motivation for the MWI. While quantum gravity or string theory may be non-linear in this respect, there is as yet no evidence of this. Alternative to wavefunction collapse As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) pass through the double slit, a calculation assuming wavelike behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves. 
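As a rough illustration of the wave calculation just mentioned, the Python sketch below evaluates the standard far-field two-slit interference pattern; the wavelength and slit separation are illustrative assumptions, not values from the text.

import numpy as np

# Far-field two-slit pattern: the detection probability at angle theta is
# proportional to cos^2(pi * d * sin(theta) / wavelength) for slits a
# distance d apart. The numbers below are illustrative only.
wavelength = 500e-9          # m (green light)
d = 10e-6                    # m (slit separation)

for theta in np.linspace(-0.05, 0.05, 11):
    intensity = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2
    print(f"theta = {theta:+.3f} rad   relative intensity = {intensity:.3f}")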
Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wave function collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable. Everett's PhD work provided such an interpretation. He argued that for a composite system—such as a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wave function collapse) the notion of a relativity of states. Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wave function contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wave function collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wave function's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not for the observables to define the theory.) Since the wave function appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wave function collapse from the theory. Testability In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether it is in fact in a macroscopic superposition or has collapsed into a single branch, as predicted by the Copenhagen interpretation. Since then Lockwood, Vaidman, and others have made similar proposals, which require placing macroscopic objects in a coherent superposition and interfering them, a task currently beyond experimental capability. 
Probability and the Born rule

Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule.

Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process. To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wave function should have. His derivation has been criticized as relying on unmotivated assumptions. Several other derivations of the Born rule in the many-worlds framework have since been proposed; there is no consensus on whether any has been successful.

Frequentism

DeWitt and Graham, and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. They try to show that, in the limit of infinitely many measurements, no worlds would have relative frequencies that fail to match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect.

Decision theory

A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace and Saunders. They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, "This work will go down as one of the most important developments in the history of science." In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed.

Symmetries and invariance

In 2005, Zurek produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave. In 2016, Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman does not find it satisfactory.

Branch counting

In 2021, Simon Saunders produced a branch-counting derivation of the Born rule.
The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule. The preferred basis problem As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalized states. The assumption is that the preferred basis to use is the one which assigns a unique measurement outcome to each world. This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. This is known today as the preferred basis problem. The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence into the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics. This approach to deriving the preferred basis has been criticized as creating circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability and probability depends on the ontology derived from decoherence. Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics. History MWI originated in Everett's Princeton University PhD thesis "The Theory of the Universal Wave Function", developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state"; Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for a decade after publication in 1957. Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". 
He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wave function as physical and treating it as information became interchangeable. Leon Cooper and Deborah Van Vechten developed a very similar approach before reading Everett's work. Zeh also came to the same conclusions as Everett before reading his work, then built a new theory of quantum decoherence based on these ideas. According to people who knew him, Everett believed in the literal reality of the other quantum worlds. His son and wife reported that he "never wavered in his belief over his many-worlds theory". In their detailed review of Everett's work, Osnaghi, Freitas, and Freire Jr. note that Everett consistently used quotes around "real" to indicate a meaning within scientific practice. Reception MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett had already left academia in 1957, never to return, and in 1980, Wheeler disavowed the theory. Support One of MWI's strongest longtime advocates is David Deutsch. According to him, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, Deutsch suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". He also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin. Equivocal Philosophers of science James Ladyman and Don Ross say that MWI could be true, but do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy. 
Rejection Some scientists consider some aspects of MWI to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann worked toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find MWI too extreme, though it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse". Roger Penrose argues that the idea is flawed because it is based on an oversimplified version of quantum mechanics that does not account for gravity. In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, "the rules must change when gravity is involved". He further asserts that gravity helps anchor reality and "blurry" events have only one allowable outcome: "electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory". On the other hand, "in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field". Philosopher of science Robert P. Crease says that MWI is "one of the most implausible and unrealistic ideas in the history of science" because it means that everything conceivable happens. Science writer Philip Ball calls MWI's implications fantasies, since "beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'". Theoretical physicist Gerard 't Hooft also dismisses the idea: "I do not believe that we have to live with the many-worlds interpretation. Indeed, it would be a stupendous number of parallel worlds, which are only there because physicists couldn't decide which of them is real." Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated. Polls A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true". Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory", Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... 
Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'" A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing, University of Waterloo, found "Many Worlds (and decoherence)" to be the least favored. A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen; the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll. Speculative implications DeWitt has said that Everett, Wheeler, and Graham "do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down." Tegmark affirmed that absurd or highly unlikely events are rare but inevitable under MWI: "Things inconsistent with the laws of physics will never happen—everything else will... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely." David Deutsch speculates in his book The Beginning of Infinity that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics. According to Ladyman and Ross, many seemingly physically plausible but unrealized possibilities, such as those discussed in other scientific fields, generally have no counterparts in other branches, because they are in fact incompatible with the universal wave function. According to Carroll, human decision-making, contrary to common misconceptions, is best thought of as a classical process, not a quantum one, because it works on the level of neurochemistry rather than fundamental particles. Human decisions do not cause the world to branch into equally realized outcomes; even for subjectively difficult decisions, the "weight" of realized outcomes is almost entirely concentrated in a single branch. Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics that can purportedly distinguish between the Copenhagen interpretation of quantum mechanics and the many-worlds interpretation by a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. Most experts believe the experiment would not work in the real world, because the world with the surviving experimenter has a lower "measure" than the world before the experiment, making it less likely that the experimenter will experience their survival. See also Alternate history Consistent histories Many-minds interpretation "The Garden of Forking Paths" Parallel universes in fiction The Beginning of Infinity Mathematical universe hypothesis Multiverse Notes References Further reading Jeffrey A. Barrett, The Quantum Mechanics of Minds and Worlds, Oxford University Press, Oxford, 1999. Peter Byrne, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family, Oxford University Press, 2010. Jeffrey A. 
Barrett and Peter Byrne, eds., The Everett Interpretation of Quantum Mechanics: Collected Works 1955–1980 with Commentary, Princeton University Press, 2012. Julian Brown, Minds, Machines, and the Multiverse, Simon & Schuster, 2000. Sean M. Carroll, Something Deeply Hidden, Penguin Random House, 2019. Paul C. W. Davies, Other Worlds, 1980. A study of the painful three-way relationship between Hugh Everett, John A. Wheeler and Niels Bohr and how this affected the early development of the many-worlds theory. David Wallace, Worlds in the Everett Interpretation, Studies in History and Philosophy of Modern Physics, 33 (2002), pp. 637–661. John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton University Press, 1983. External links Everett's Relative-State Formulation of Quantum Mechanics – Jeffrey A. Barrett's article on Everett's formulation of quantum mechanics in the Stanford Encyclopedia of Philosophy. Many-Worlds Interpretation of Quantum Mechanics – Lev Vaidman's article on the many-worlds interpretation of quantum mechanics in the Stanford Encyclopedia of Philosophy. Hugh Everett III Manuscript Archive (UC Irvine) – Jeffrey A. Barrett, Peter Byrne, and James O. Weatherall (eds.). Henry Stapp's critique of MWI, focusing on the basis problem, Canadian Journal of Physics 80, 1043–1052 (2002). Scientific American report on Many Worlds and Hugh Everett. Interpretations of quantum mechanics Quantum measurement Multiverse Reality Metaphysical realism 1957 in science 1970s neologisms Metaphysics of science
Many-worlds interpretation
[ "Physics", "Astronomy" ]
6,502
[ "Astronomical hypotheses", "Multiverse", "Quantum mechanics", "Quantum measurement", "Interpretations of quantum mechanics" ]
9,611
https://en.wikipedia.org/wiki/E-commerce
E-commerce (electronic commerce) refers to commercial activities, including the electronic buying or selling of products and services, conducted on online platforms or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is the largest sector of the electronics industry and is in turn driven by the technological advances of the semiconductor industry. Defining e-commerce The term was coined and first employed by Robert Jacobson, Principal Consultant to the California State Assembly's Utilities & Commerce Committee, in the title and text of California's Electronic Commerce Act, carried by the late Committee Chairwoman Gwen Moore (D-L.A.) and enacted in 1984. E-commerce typically uses the web for at least a part of a transaction's life cycle although it may also use other technologies such as e-mail. Typical e-commerce transactions include the purchase of products (such as books from Amazon) or services (such as music downloads in the form of digital distribution such as the iTunes Store). There are three areas of e-commerce: online retailing, electronic markets, and online auctions. E-commerce is supported by electronic business. The value of e-commerce lies in allowing consumers to shop and pay online, saving time and space for customers and enterprises alike and greatly improving transaction efficiency, especially for busy office workers. E-commerce businesses may also employ some or all of the following: Online shopping for retail sales direct to consumers via web sites and mobile apps, conversational commerce via live chat, chatbots, and voice assistants. Providing or participating in online marketplaces, which process third-party business-to-consumer (B2C) or consumer-to-consumer (C2C) sales; Business-to-business (B2B) buying and selling. Gathering and using demographic data through web contacts and social media. B2B electronic data interchange. Marketing to prospective and established customers by e-mail or fax (for example, with newsletters). Engaging in pretail for launching new products and services. Online financial exchanges for currency exchanges or trading purposes. There are five essential categories of e-commerce: Business to Business Business to Consumer Business to Government Consumer to Business Consumer to Consumer Forms Contemporary electronic commerce can be classified into two categories. The first category is business based on types of goods sold (involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce). The second category is based on the nature of the participant (B2B, B2C, C2B and C2C). On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce. Aside from traditional e-commerce, the terms m-commerce (mobile commerce) and (around 2013) t-commerce have also been used. 
Governmental regulation In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, and the more recent California Privacy Rights Act (2020), enacted through a popular election proposition, specifically control how electronic commerce may be conducted in California. In the US as a whole, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC. The Ryan Haight Online Pharmacy Consumer Protection Act, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies. Conflict of laws in cyberspace is a major hurdle for harmonization of the legal framework for e-commerce around the world. In order to give uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996). Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government consumer fair trade organisations. The purpose was stated as being to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal to report complaints about online and related transactions with foreign companies. There is also Asia Pacific Economic Cooperation. APEC was established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group as well as working on common privacy regulations throughout the APEC region. In Australia, trade is covered under Australian Treasury Guidelines for electronic commerce, and the Australian Competition & Consumer Commission regulates and offers advice on how to deal with businesses online, and offers specific advice on what happens if things go wrong. The European Union undertook an extensive enquiry into e-commerce in 2015–16 which observed significant growth in the development of e-commerce, along with some developments which raised concerns, such as increased use of selective distribution systems, which allow manufacturers to control routes to market, and "increased use of contractual restrictions to better control product distribution". 
The European Commission felt that some emerging practices might be justified if they could improve the quality of product distribution, but "others may unduly prevent consumers from benefiting from greater product choice and lower prices in e-commerce and therefore warrant Commission action" in order to promote compliance with EU competition rules. In the United Kingdom, the Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSRs affect firms providing payment services and their customers. These firms include banks, non-bank credit card issuers and non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), who are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012. In India, the Information Technology Act 2000 governs the basic applicability of e-commerce. In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) designated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce. On the same day, the Administrative Measures on Internet Information Services were released, the first administrative regulations to address profit-generating activities conducted through the Internet, and they lay the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted an Electronic Signature Law, which regulates data messages, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation. It was a milestone in the course of improving China's electronic commerce legislation, and also marked the start of China's rapid development stage for electronic commerce legislation. Global trends E-commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them. Cross-border e-commerce is also an essential field for e-commerce businesses. It has responded to the trend of globalization: numerous firms have opened up new businesses, expanded into new markets, and overcome trade barriers, and more and more enterprises have started exploring cross-border cooperation. In addition, compared with traditional cross-border trade, the information involved in cross-border e-commerce is more concealed. In the era of globalization, cross-border e-commerce for inter-firm companies means the activities, interactions, or social relations of two or more e-commerce enterprises. The success of cross-border e-commerce has promoted the development of small and medium-sized firms, and it has finally become a new transaction mode. It has helped companies solve financial problems and realize a more reasonable allocation of resources. SMEs (small and medium-sized enterprises) can also match demand and supply in the market more precisely, optimizing the industrial chain and creating more revenue for companies. 
In 2012, e-commerce sales topped $1 trillion for the first time in history. Mobile devices are playing an increasing role in the mix of e-commerce; this is also commonly called mobile commerce, or m-commerce. In 2014, one estimate saw purchases made on mobile devices making up 25% of the market by 2017. For traditional businesses, one study stated that information technology and cross-border e-commerce are a good opportunity for the rapid development and growth of enterprises. Many companies have invested enormously in mobile applications. The DeLone and McLean Model stated that three perspectives contribute to a successful e-business: information system quality, service quality and users' satisfaction. Free of the limits of time and space, businesses have more opportunities to reach customers around the world and to cut out unnecessary intermediaries, thereby reducing costs; they can also benefit from one-on-one analysis of large volumes of customer data to achieve highly personalized strategic plans and fully enhance the core competitiveness of their products. Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers as a preferable way to promote consumer goods than static photos, and some brands like Sony are already paving the way for augmented reality commerce. Wayfair now lets shoppers inspect a 3D version of its furniture in a home setting before buying. China Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users as of 2014, China's online shopping sales reached $253 billion in the first half of 2015, accounting for 10% of total Chinese consumer retail sales in that period. Chinese retailers have been able to help consumers feel more comfortable shopping online. E-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade. In 2013, Alibaba had an e-commerce market share of 80% in China. In 2014, Alibaba still dominated the B2B marketplace in China with a market share of 44.82%, followed by several other companies including Made-in-China.com at 3.21%, and GlobalSources.com at 2.98%, with the total transaction value of China's B2B market exceeding 4.5 billion yuan. China is also the largest e-commerce market in the world by value of sales as of 2016. It accounted for 42.4% of worldwide retail e-commerce in that year, the most of any country. Research shows that Chinese consumer motivations are different enough from Western audiences to require unique e-commerce app designs instead of simply porting Western apps into the Chinese market. The expansion of e-commerce in China has resulted in the development of Taobao villages, clusters of e-commerce businesses operating in rural areas. Because Taobao villages have increased the incomes of rural people and entrepreneurship in rural China, they have become a component of rural revitalization strategies. In 2015, the State Council promoted the Internet Plus initiative, a five-year plan to integrate traditional manufacturing and service industries with big data, cloud computing, and Internet of things technology. The State Council provided support for Internet Plus through policy support in areas including cross-border e-commerce and rural e-commerce. 
In 2019, the city of Hangzhou established, as a pilot program, an artificial-intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims. Europe In 2010, the United Kingdom had the highest per capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivered the biggest contribution to enterprises' total revenue: almost a quarter (24%) of the country's total turnover was generated via the online channel. Arab states The rate of growth of the number of internet users in the Arab countries has been rapid – 13.1% in 2015. A significant portion of the e-commerce market in the Middle East comprises people in the 30–34 year age group. Egypt has the largest number of internet users in the region, followed by Saudi Arabia and Morocco; together these constitute three-quarters of the region's share. Yet, internet penetration is low: 35% in Egypt and 65% in Saudi Arabia. The Gulf Cooperation Council countries have a rapidly growing market and are characterized by a population that is becoming wealthier (Yuldashev). As such, retailers have launched Arabic-language websites as a means to target this population. Secondly, there are predictions of increased mobile purchases and an expanding internet audience (Yuldashev). The growth and development of these two aspects are making the GCC countries larger players in the electronic commerce market as time progresses. Specifically, research shows that the e-commerce market is expected to grow to over $20 billion by 2020 among these GCC countries (Yuldashev). The e-commerce market has also gained much popularity among western countries, and in particular Europe and the U.S. These countries have been highly characterized by consumer-packaged goods (CPG) (Geisler, 34). However, trends show signs of a future reversal. Similar to the GCC countries, there has been increased purchase of goods and services in online channels rather than offline channels. Activist investors are trying hard to consolidate and slash their overall costs, and the governments in western countries continue to impose more regulation on CPG manufacturers (Geisler, 36). In these senses, CPG investors are being forced to adapt to e-commerce, which is effective as well as a means for them to thrive. The future trends in the GCC countries will be similar to those of the western countries. Despite the forces that push business to adopt e-commerce as a means to sell goods and products, the manner in which customers make purchases is similar in countries from these two regions. For instance, there has been an increased usage of smartphones which comes in conjunction with an increase in the overall internet audience from the regions. Yuldashev writes that consumers are scaling up to more modern technology that allows for mobile marketing. However, the percentage of smartphone and internet users who make online purchases is expected to vary in the first few years. It will depend on the willingness of people to adopt this new trend (The Statistics Portal). For example, the UAE has the greatest smartphone penetration, at 73.8 per cent, and 91.9 per cent of its population has access to the internet. On the other hand, smartphone penetration in Europe has been reported to be at 64.7 per cent (The Statistics Portal). Regardless, the disparity in percentage between these regions is expected to level out in future because e-commerce technology is expected to grow to allow for more users. 
The e-commerce business within these two regions will result in competition. Government bodies at the country level will enhance their measures and strategies to ensure sustainability and consumer protection (Krings, et al.). These increased measures will raise the environmental and social standards in the countries, factors that will determine the success of the e-commerce market in these countries. For example, an adoption of tough sanctions will make it difficult for companies to enter the e-commerce market, while lenient sanctions will allow companies easier entry. As such, the future trends between GCC countries and the Western countries will be independent of these sanctions (Krings, et al.). These countries will need to reach rational conclusions in devising effective sanctions. India India has an Internet user base of about 460 million as of December 2017. Despite being the third largest user base in the world, the penetration of the Internet is low compared to markets like the United States, United Kingdom or France but is growing at a much faster rate, adding around six million new entrants every month. In India, cash on delivery is the most preferred payment method, accumulating 75% of the e-retail activities. E-commerce's share of the Indian retail market is expected to rise from 2.5% in 2016 to 5% in 2020. Brazil In 2013, Brazil's e-commerce was growing quickly, with retail e-commerce sales expected to grow at a double-digit pace through 2014. By 2016, eMarketer expected retail e-commerce sales in Brazil to reach $17.3 billion. Logistics Logistics in e-commerce mainly concerns fulfillment. Online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually control their own logistic operation because they do not have the ability to hire an outside company. Most large companies hire a fulfillment service that takes care of a company's logistic needs. The optimization of logistics processes, involving long-term investment in an efficient storage infrastructure system and the adoption of inventory management strategies, is crucial to prioritizing customer satisfaction throughout the entire process, from order placement to final delivery. Impacts Impact on markets and retailers E-commerce markets are growing at noticeable rates. The online market is expected to grow by 56% in 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars and e-retail revenues are projected to grow to 4.891 trillion US dollars in 2021. Traditional markets are expected to see only 2% growth during the same time. Brick and mortar retailers are struggling because of online retailers' ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence offline and online by linking physical and online offerings. E-commerce allows customers to overcome geographical barriers and allows them to purchase products anytime and from anywhere. Online and traditional markets have different strategies for conducting business. Traditional retailers offer a smaller assortment of products because of limited shelf space, whereas online retailers often hold no inventory but send customer orders directly to the manufacturer. The pricing strategies are also different for traditional and online retailers. Traditional retailers base their prices on store traffic and the cost to keep inventory. Online retailers base prices on the speed of delivery. 
There are two ways for marketers to conduct business through e-commerce: fully online or online along with a brick and mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates. Many customers prefer online markets if the products can be delivered quickly at relatively low prices. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without the physical experience, which may cause customers to experience product or seller uncertainty. Another issue regarding the online market is concerns about the security of online transactions. Many customers remain loyal to well-known retailers because of this issue. Security is a primary problem for e-commerce in developed and developing countries. E-commerce security is protecting businesses' websites and customers from unauthorized access, use, alteration, or destruction. The types of threats include malicious code, unwanted programs (adware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use different tools to avert security threats. These tools include firewalls, encryption software, digital certificates, and passwords. Impact on supply chain management For a long time, companies had been troubled by the gap between the benefits which supply chain technology promises and the solutions available to deliver those benefits. However, the emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies. E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows (physical flow, financial flow and information flow) of the supply chain could also be affected by e-commerce. The effect on physical flows improved the movement of products and inventory for companies. For information flows, e-commerce improved companies' information-processing capacity beyond what they previously had, and for financial flows, e-commerce allows companies to adopt more efficient payment and settlement solutions. In addition, e-commerce has a more sophisticated level of impact on supply chains: firstly, the performance gap will be eliminated since companies can identify gaps between different levels of supply chains by electronic means; secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems, like SAP ERP, Xero, or Megaventory, have helped companies to manage operations with customers and suppliers, although these new capabilities are still not fully exploited; thirdly, technology companies will keep investing in new e-commerce software solutions as they expect a return on investment; fourthly, e-commerce helps to solve many issues that companies may find difficult to cope with, such as political barriers or cross-country changes; finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain. Impact on employment E-commerce helps create new job opportunities due to information-related services, software apps and digital products. It also causes job losses. The areas with the greatest predicted job loss are retail, postal, and travel agencies. The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes. 
In contrast, people with poor technical skills cannot enjoy these wage benefits. On the other hand, because e-commerce requires sufficient stock that can be delivered to customers on time, the warehouse becomes an important element. Warehouses need more staff to manage, supervise and organize them, so the condition of the warehouse environment becomes a concern for employees. Impact on customers E-commerce brings convenience for customers as they do not have to leave home and only need to browse websites online, especially for buying products which are not sold in nearby shops. It could help customers buy a wider range of products and save customers' time. Consumers also gain power through online shopping. They are able to research products and compare prices among retailers. Thanks to the practice of user-generated ratings and reviews from companies like Bazaarvoice, Trustpilot, and Yelp, customers can also see what other people think of a product, and decide before buying if they want to spend money on it. Also, online shopping often provides sales promotions or discount codes, making it more price-effective for customers. Moreover, e-commerce provides detailed product information; even in-store staff cannot offer such detailed explanations. Customers can also review and track the order history online. E-commerce technologies cut transaction costs by allowing both manufacturers and consumers to bypass intermediaries. This is achieved by extending the search area for the best price deals and by group purchasing. The success of e-commerce at urban and regional levels depends on how local firms and consumers have adapted to e-commerce. However, e-commerce lacks human interaction for customers, especially those who prefer face-to-face connection. Customers are also concerned with the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying the wrong sized clothes, although these vary greatly in their fit for purpose. When a customer regrets the purchase of a product, the process of returning the goods and obtaining a refund can be inconvenient, as the customer needs to pack and post the goods. If the products are expensive, large or fragile, safety also becomes a concern. Impact on the environment In 2018, e-commerce generated more container cardboard in North America than in 2017. Only 35 percent of North American cardboard manufacturing capacity is from recycled content. The recycling rate in Europe is 80 percent and in Asia is 93 percent. Amazon, the largest user of boxes, has a strategy to cut back on packing material and has reduced packaging material used by 19 percent by weight since 2016. Amazon is requiring retailers to manufacture their product packaging in a way that does not require additional shipping packaging. Amazon also has an 85-person team researching ways to reduce and improve their packaging and shipping materials. Accelerated movement of packages around the world includes accelerated movement of living things, with all its attendant risks. Weeds, pests, and diseases all sometimes travel in packages of seeds. Some of these packages are part of "brushing" manipulation of e-commerce reviews. Impact on traditional retail E-commerce has been cited as a major force for the failure of major U.S. retailers in a trend frequently referred to as a "retail apocalypse." 
The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and forced companies to change their sales strategies. Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations. The trend has forced some traditional retailers to shutter their brick-and-mortar operations. E-commerce during COVID-19 In March 2020, global retail website traffic hit 14.3 billion visits, signifying an unprecedented growth of e-commerce during the lockdown of 2020. Later studies show that online sales increased by 25% and online grocery shopping increased by over 100% during the crisis in the United States. Meanwhile, as many as 29% of surveyed shoppers state that they will never go back to shopping in person again; in the UK, 43% of consumers state that they expect to keep on shopping the same way even after the lockdown is over. Retail e-commerce sales figures show that COVID-19 has had a significant impact on e-commerce, with sales expected to reach $6.5 trillion by 2023. Business application Some common applications related to electronic commerce are: Timeline A timeline for the development of e-commerce: 1971 or 1972: The ARPANET is used to arrange a cannabis sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said. 1979: Michael Aldrich demonstrates the first online shopping system. 1981: Thomson Holidays UK is the first business-to-business (B2B) online shopping system to be installed. 1982: Minitel was introduced nationwide in France by France Télécom and used for online ordering. 1983: California State Assembly holds first hearing on "electronic commerce" in Volcano, California. Testifying are CPUC, MCI Mail, Prodigy, CompuServe, Volcano Telephone, and Pacific Telesis. (Not permitted to testify is Quantum Technology, later to become AOL.) California's Electronic Commerce Act was passed in 1984. 1983: Karen Earle Lile (AKA Karen Bean) and Kendall Ross Bean create an e-commerce service in the San Francisco Bay Area. Buyers and sellers of pianos connect through a database created by Piano Finders on a Kaypro personal computer using a DOS interface. Pianos for sale are listed on a bulletin board system. Buyers print a list of pianos for sale on a dot matrix printer. Customer service happened through a Piano Advice Hotline listed in the San Francisco Chronicle classified ads, and money was transferred by bank wire transfer when a sale was completed. 1984: Gateshead SIS/Tesco is the first B2C online shopping system, and Mrs Snowball, 72, is the first online home shopper. 1984: In April 1984, CompuServe launches the Electronic Mall in the US and Canada. It is the first comprehensive electronic commerce service. 1989: In May 1989, Sequoia Data Corp. introduced Compumarket, the first internet-based system for e-commerce. Sellers and buyers could post items for sale and buyers could search the database and make purchases with a credit card. 1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer. 1992: Book Stacks Unlimited in Cleveland opens a commercial sales website (www.books.com) selling books online with credit card processing. 1993: Paget Press releases edition No. 3 of the first app store, The Electronic AppWrapper. 1994: Netscape releases the Navigator browser in October under the code name Mozilla. 
Netscape 1.0 is introduced in late 1994 with SSL encryption that made transactions secure. 1994: Ipswitch IMail Server becomes the first software available online for sale and immediate download via a partnership between Ipswitch, Inc. and OpenMarket. 1994: "Ten Summoner's Tales" by Sting becomes the first secure online purchase through NetMarket. 1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet. 1995: On Thursday 27 April 1995, the purchase of a book by Paul Stanfield, product manager for CompuServe UK, from W H Smith's shop within CompuServe's UK Shopping Centre is the UK's first national online shopping service secure transaction. The shopping service at launch featured W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores (GUS), Interflora, Dixons Retail, Past Times, PC World (retailer) and Innovations. 1995: Amazon is launched by Jeff Bezos. 1995: eBay is founded by computer programmer Pierre Omidyar as AuctionWeb. It is the first online auction site supporting person-to-person transactions. 1995: The first commercial-free 24-hour, internet-only radio stations, Radio HK and NetRadio, start broadcasting. 1996: The use of Excalibur BBS with replicated "storefronts" was an early implementation of electronic commerce started by a group of SysOps in Australia and replicated to global partner sites. 1998: Electronic postal stamps can be purchased and downloaded for printing from the Web. 1999: Alibaba Group is established in China. Business.com sold for US$7.5 million to eCompanies, which was purchased in 1997 for US$149,000. The peer-to-peer filesharing software Napster launches. ATG Stores launches to sell decorative items for the home online. 1999: Global e-commerce reaches $150 billion. 2000: The dot-com bust. 2001: eBay has the largest userbase of any e-commerce site. 2001: Alibaba.com achieved profitability in December 2001. 2002: eBay acquires PayPal for $1.5 billion. Niche retail companies Wayfair and NetShops are founded with the concept of selling products through several targeted domains, rather than a central portal. 2003: Amazon posts first yearly profit. 2004: DHgate.com, China's first online B2B transaction platform, is established, forcing other B2B sites to move away from the "yellow pages" model. 2007: Business.com acquired by R.H. Donnelley for $345 million. 2014: US e-commerce and online retail sales projected to reach $294 billion, an increase of 12 percent over 2013 and 9% of all retail sales. Alibaba Group has the largest initial public offering ever, worth $25 billion. 2015: Amazon accounts for more than half of all e-commerce growth, selling almost 500 million SKUs in the US. 2016: The Government of India launches the BHIM UPI digital payment interface. By 2020 it records 2 billion digital payment transactions. 2017: Retail e-commerce sales across the world reach $2.304 trillion, a 24.8 percent increase over the previous year. 2017: Global e-commerce transactions generate revenue across both business-to-business (B2B) transactions and business-to-consumer (B2C) sales. See also Comparison of free software e-commerce web application frameworks Comparison of shopping cart software Customer intelligence Digital economy E-commerce credit card payment system Electronic bill payment Electronic money Non-store retailing Online shopping Payments as a service South Dakota v. Wayfair, Inc. 
Types of e-commerce Timeline of e-commerce References Further reading External links Electronics industry Non-store retailing Retail formats Supply chain management
E-commerce
[ "Technology" ]
6,777
[ "Information and communications technology", "Information technology", "E-commerce", "Electronics industry" ]
9,613
https://en.wikipedia.org/wiki/Euler%27s%20formula
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number $x$, one has $e^{ix} = \cos x + i \sin x$, where $e$ is the base of the natural logarithm, $i$ is the imaginary unit, and $\cos$ and $\sin$ are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted $\operatorname{cis} x$ ("cosine plus i sine"). The formula is still valid if $x$ is a complex number, and is also called Euler's formula in this more general case. Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics". When $x = \pi$, Euler's formula may be rewritten as $e^{i\pi} + 1 = 0$ or $e^{i\pi} = -1$, which is known as Euler's identity. History In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of $\sqrt{-1}$) as $ix = \ln(\cos x + i \sin x)$. Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of $2\pi i$. Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work Introductio in analysin infinitorum. Johann Bernoulli had found that $\frac{1}{1+x^2} = \frac{1}{2}\left(\frac{1}{1-ix} + \frac{1}{1+ix}\right)$. And since $\int \frac{dx}{1+ax} = \frac{1}{a}\ln(1+ax) + C$, the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral. Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values. The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel. Definitions of complex exponentiation The exponential function $e^x$ for real values of $x$ may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of $e^z$ for complex values of $z$ simply by substituting $z$ in place of $x$ and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of $e^x$ to the complex plane. Differential equation definition The exponential function $f(z) = e^z$ is the unique differentiable function of a complex variable for which the derivative equals the function, $\frac{df}{dz} = f$, and $f(0) = 1$. Power series definition For complex $z$, $e^z = 1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{z^n}{n!}$. Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines $e^z$ for all complex $z$. Limit definition For complex $z$, $e^z = \lim_{n \to \infty} \left(1 + \frac{z}{n}\right)^n$. Here, $n$ is restricted to positive integers, so there is no question about what the power with exponent $n$ means. Proofs Various proofs of the formula are possible. Using differentiation This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted). Consider the function $f(\theta) = \frac{\cos\theta + i\sin\theta}{e^{i\theta}} = e^{-i\theta}(\cos\theta + i\sin\theta)$ for real $\theta$. Differentiating gives, by the product rule, $f'(\theta) = 0$. Thus, $f(\theta)$ is a constant. 
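Spelling out the cancellation in the product-rule computation just cited (an elaboration added here for readability, not present in the original text):

$$f'(\theta) = e^{-i\theta}(-\sin\theta + i\cos\theta) - i e^{-i\theta}(\cos\theta + i\sin\theta) = e^{-i\theta}\big[(-\sin\theta + i\cos\theta) + (\sin\theta - i\cos\theta)\big] = 0.$$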
Since $f(0) = 1$, then $f(\theta) = 1$ for all real $\theta$, and thus $\cos\theta + i\sin\theta = e^{i\theta}$. Using power series Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of $i$: $i^2 = -1$, $i^3 = -i$, $i^4 = 1$, and so on. Using now the power-series definition from above, we see that for real values of $x$, $e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right) = \cos x + i\sin x$, where in the last step we recognize the two terms as the Maclaurin series for $\cos x$ and $\sin x$. The rearrangement of terms is justified because each series is absolutely convergent. Using polar coordinates Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, $e^{ix} = r(\cos\theta + i\sin\theta)$ for some $r$ and $\theta$ depending on $x$. No assumptions are being made about $r$ and $\theta$; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of $e^{ix}$ is $ie^{ix}$. Therefore, differentiating both sides gives $ie^{ix} = \frac{dr}{dx}(\cos\theta + i\sin\theta) + r(-\sin\theta + i\cos\theta)\frac{d\theta}{dx}$. Substituting $r(\cos\theta + i\sin\theta)$ for $e^{ix}$ and equating real and imaginary parts in this formula gives $\frac{dr}{dx} = 0$ and $\frac{d\theta}{dx} = 1$. Thus, $r$ is a constant, and $\theta$ is $x + C$ for some constant $C$. The initial values $r(0) = 1$ and $\theta(0) = 0$ come from $e^{i0} = 1$, giving $r = 1$ and $\theta = x$. This proves the formula $e^{ix} = \cos x + i\sin x$. Applications Applications in complex number theory Interpretation of the formula This formula can be interpreted as saying that the function $e^{i\theta}$ is a unit complex number, i.e., it traces out the unit circle in the complex plane as $\theta$ ranges through the real numbers. Here $\theta$ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians. The original proof is based on the Taylor series expansions of the exponential function $e^z$ (where $z$ is a complex number) and of $\sin x$ and $\cos x$ for real numbers $x$ (see above). In fact, the same proof shows that Euler's formula is even valid for all complex numbers $x$. A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number $z = x + iy$, and its complex conjugate, $\bar{z} = x - iy$, can be written as $z = |z|(\cos\varphi + i\sin\varphi) = re^{i\varphi}$ and $\bar{z} = |z|(\cos\varphi - i\sin\varphi) = re^{-i\varphi}$, where $x = \operatorname{Re} z$ is the real part, $y = \operatorname{Im} z$ is the imaginary part, $r = |z| = \sqrt{x^2 + y^2}$ is the magnitude of $z$, and $\varphi = \arg z = \operatorname{atan2}(y, x)$ is the argument of $z$, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of $2\pi$. Many texts write $\varphi = \tan^{-1}\frac{y}{x}$ instead of $\varphi = \operatorname{atan2}(y, x)$, but the first equation needs adjustment when $x \le 0$. This is because for any real $x$ and $y$, not both zero, the angles of the vectors $(x, y)$ and $(-x, -y)$ differ by $\pi$ radians, but have the identical value of $\tan\varphi = \frac{y}{x}$. Use of the formula to define the logarithm of complex numbers Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation): $a = e^{\ln a}$, and that $e^a e^b = e^{a+b}$, both valid for any complex numbers $a$ and $b$. Therefore, one can write $z = |z| e^{i\varphi} = e^{\ln|z|} e^{i\varphi} = e^{\ln|z| + i\varphi}$ for any $z \neq 0$. Taking the logarithm of both sides shows that $\ln z = \ln|z| + i\varphi$, and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because $\varphi$ is multi-valued. Finally, the other exponential law $\left(e^a\right)^k = e^{ak}$, which can be seen to hold for all integers $k$, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula. 
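As a quick numerical sanity check (not a proof, and not part of the original article), the formula and the polar/logarithm relations above can be spot-checked with Python's standard-library cmath module; the sample points and the tolerance are arbitrary choices:

```python
# Spot-check Euler's formula e^{ix} = cos x + i sin x at a few points.
import cmath
import math

for x in (0.0, 1.0, math.pi / 2, math.pi, 2.5):
    lhs = cmath.exp(1j * x)                  # e^{ix}
    rhs = complex(math.cos(x), math.sin(x))  # cos x + i sin x
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)

# The complex logarithm recovers ln|z| + i*arg(z) (principal branch).
z = 3 + 4j
assert abs(cmath.log(z) - complex(math.log(abs(z)), cmath.phase(z))) < 1e-12

# Euler's identity: e^{i*pi} + 1 vanishes up to floating-point error.
print(cmath.exp(1j * math.pi) + 1)  # approximately 1.22e-16j
```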
Relationship to trigonometry Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. Euler's formula provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function: $\cos x = \operatorname{Re}\left(e^{ix}\right) = \frac{e^{ix} + e^{-ix}}{2}$ and $\sin x = \operatorname{Im}\left(e^{ix}\right) = \frac{e^{ix} - e^{-ix}}{2i}$. The two equations above can be derived by adding or subtracting Euler's formulas $e^{ix} = \cos x + i\sin x$ and $e^{-ix} = \cos x - i\sin x$, and solving for either cosine or sine. These formulas can even serve as the definition of the trigonometric functions for complex arguments $x$. For example, letting $x = iy$, we have $\cos(iy) = \frac{e^{-y} + e^{y}}{2} = \cosh y$ and $\sin(iy) = \frac{e^{-y} - e^{y}}{2i} = i\sinh y$. In addition, complex exponentials can simplify trigonometry, because they are mathematically easier to manipulate than their sine and cosine components. One technique is simply to convert sines and cosines into equivalent expressions in terms of exponentials, sometimes called complex sinusoids. After the manipulations, the simplified result is still real-valued. For example (a representative product-to-sum identity): $\cos x \cos 3x = \frac{e^{ix} + e^{-ix}}{2} \cdot \frac{e^{3ix} + e^{-3ix}}{2} = \frac{e^{4ix} + e^{2ix} + e^{-2ix} + e^{-4ix}}{4} = \frac{1}{2}\left(\cos 2x + \cos 4x\right)$. Another technique is to represent sines and cosines in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example: $\cos nx = \operatorname{Re}\left(e^{inx}\right) = \operatorname{Re}\left(e^{i(n-1)x} e^{ix}\right) = \operatorname{Re}\left(e^{i(n-1)x}\left(e^{ix} + e^{-ix} - e^{-ix}\right)\right) = \operatorname{Re}\left(e^{i(n-1)x} \cdot 2\cos x - e^{i(n-2)x}\right) = \cos((n-1)x) \cdot 2\cos x - \cos((n-2)x)$. This formula is used for recursive generation of $\cos nx$ for integer values of $n$ and arbitrary $x$ (in radians). Considering $\cos x$ as a parameter in the equation above yields the recursive formula for Chebyshev polynomials of the first kind. Topological interpretation In the language of topology, Euler's formula states that the imaginary exponential function $t \mapsto e^{it}$ is a (surjective) morphism of topological groups from the real line $\mathbb{R}$ to the unit circle $\mathbb{S}^1$. In fact, this exhibits $\mathbb{R}$ as a covering space of $\mathbb{S}^1$. Similarly, Euler's identity says that the kernel of this map is $\tau\mathbb{Z}$, where $\tau = 2\pi$. These observations may be combined and summarized in a commutative diagram (not reproduced here). Other applications In differential equations, the function $e^{ix}$ is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation. In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor. In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point $r$ on this sphere, and $x$ a real number, Euler's formula applies: $e^{rx} = \cos x + r\sin x$, and the element is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space. Other special cases The special cases that evaluate to units illustrate rotation around the complex unit circle: $e^{i \cdot 0} = 1$, $e^{i\pi/2} = i$, $e^{i\pi} = -1$ and $e^{i \cdot 3\pi/2} = -i$. The special case at $x = \tau$ (where $\tau = 2\pi$, one turn) yields $e^{i\tau} = 1 + 0$. This is also argued to link five fundamental constants with three basic arithmetic operations, but, unlike Euler's identity, without rearranging the addends from the general case: $e^{i\tau} = \cos\tau + i\sin\tau = 1 + 0$. An interpretation of the simplified form $e^{i\tau} = 1$ is that rotating by a full turn is an identity function. See also Complex number Euler's identity Integration using Euler's formula History of Lorentz transformations List of topics named after Leonhard Euler References Further reading External links Elements of Algebra Theorems in complex analysis Articles containing proofs Mathematical analysis E (mathematical constant) Trigonometry Leonhard Euler
Euler's formula
[ "Mathematics" ]
2,139
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in complex analysis", "E (mathematical constant)", "Articles containing proofs" ]
9,616
https://en.wikipedia.org/wiki/Evolutionarily%20stable%20strategy
An evolutionarily stable strategy (ESS) is a strategy (or set of strategies) that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy (or set of strategies) which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science. In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change). History Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper. Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution. The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology. Maynard Smith explains further in his 1982 book Evolution and the Theory of Games. Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it. Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author. The concept was derived from R. H. MacArthur and W. D. Hamilton's work on sex ratios, derived from Fisher's principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour. Uses of ESS: The ESS was a major element used to analyze evolution in Richard Dawkins' bestselling 1976 book The Selfish Gene. The ESS was first used in the social sciences by Robert Axelrod in his 1984 book The Evolution of Cooperation. Since then, it has been widely used in the social sciences, including anthropology, economics, philosophy, and political science. In the social sciences, the primary interest is not in an ESS as the end of biological evolution, but as an end point in cultural evolution or individual learning. In evolutionary psychology, ESS is used primarily as a model for human biological evolution. Motivation The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). 
These assumptions are then used to explain why players choose Nash equilibrium strategies. Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives. Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes.

Nash equilibrium
An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two-player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two-player game if and only if, for both players and for any strategy T:

E(S,S) ≥ E(T,S)

In this definition, a strategy T≠S can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS. Maynard Smith and Price specify two conditions for a strategy S to be an ESS: for all T≠S, either

1. E(S,S) > E(T,S), or
2. E(S,S) = E(T,S) and E(S,T) > E(T,T).

The first condition is sometimes called a strict Nash equilibrium. The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T.

There is also an alternative, stronger definition of ESS, due to Thomas. This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that, for all T≠S,

1. E(S,S) ≥ E(T,S), and
2. E(S,T) > E(T,T).

In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second. In words: the payoff of the first player when both players play strategy S is higher than (or equal to) the payoff the first player would receive by switching to another strategy T while the second player keeps strategy S, and the payoff of the first player when only his opponent switches to strategy T is higher than his payoff when both players switch to T. This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set.
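Both definitions can be checked mechanically on a finite symmetric game. The following Python sketch is illustrative only: the function names and the payoff values of the coordination game are assumptions chosen for the example, not values from Maynard Smith and Price or from Thomas. It confirms the remark above that each pure strategy of a coordination game is an ESS by the first definition but not by the second.

    # E[s][t] is the payoff to a player using strategy s against an
    # opponent using strategy t, as in the E(S,T) notation above.

    def is_ess_maynard_smith(E, s):
        """First definition: for all t != s, either E(s,s) > E(t,s),
        or E(s,s) == E(t,s) and E(s,t) > E(t,t)."""
        return all(
            E[s][s] > E[t][s] or (E[s][s] == E[t][s] and E[s][t] > E[t][t])
            for t in E if t != s
        )

    def is_ess_thomas(E, s):
        """Second definition: for all t != s,
        E(s,s) >= E(t,s) and E(s,t) > E(t,t)."""
        return all(
            E[s][s] >= E[t][s] and E[s][t] > E[t][t]
            for t in E if t != s
        )

    # A coordination game with illustrative payoffs: both pure
    # strategies are strict Nash equilibria.
    coordination = {
        "A": {"A": 2, "B": 0},
        "B": {"A": 0, "B": 1},
    }

    for s in coordination:
        print(s,
              is_ess_maynard_smith(coordination, s),  # True for A and B
              is_ess_thomas(coordination, s))         # False for A and B

Each pure strategy passes the first definition through the strict inequality E(S,S) > E(T,S), but fails Thomas's second condition because E(S,T) > E(T,T) does not hold once the opponent has switched to the other equilibrium strategy.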
Examples of differences between Nash equilibria and ESSes
In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS.

Some games may have Nash equilibria that are not ESSes. For example, in the game harm thy neighbor, both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strict Nash equilibrium). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B).

Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than D does against itself. So here, although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result, C is an ESS.

Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESSes. Consider the game of chicken. There are two pure strategy Nash equilibria in this game, (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay is an ESS. There is a third Nash equilibrium, a mixed strategy, which is an ESS for this game (see Hawk-dove game and Best response for explanation).

This last example points to an important difference between Nash equilibria and ESSes. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), while ESSes are defined in terms of strategies themselves. The equilibria defined by an ESS must always be symmetric, and thus have fewer equilibrium points.

Vs. evolutionarily stable state
In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations. In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade. Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of classical game theory. In an evolutionarily stable state, a population's genetic composition is restored by selection after a disturbance, if the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population that returns to using a strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical systems, or evolutionary game theory. This is now called convergent stability. B. Thomas (1984) applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS. Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic.

Stochastic ESS
In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist.
In an infinite population, an ESS can instead be defined as a strategy which, should it become invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability >p, as illustrated by the evolution of bet-hedging.

Prisoner's dilemma
A common model of altruism and social cooperation is the prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better, each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans. Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round: it responds to Cooperate with Cooperate and to Defect with Defect. If the entire population plays Tit for Tat and a mutant arises who plays Always Defect, Tit for Tat will outperform Always Defect: the mutants are punished from the second round of each pairing onward, so the mutant's share of the population is kept small. Tit for Tat is therefore an ESS, with respect to only these two strategies (a minimal simulation sketch appears below). On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit for Tat players, but not against a large number of them. If we introduce Always Cooperate, a population of Tit for Tat is no longer an ESS. Since a population of Tit for Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit for Tat can coexist, if there is a small percentage of the population that is Always Defect, the selective pressure is against Always Cooperate, and in favour of Tit for Tat. This is because cooperating yields a lower payoff than defecting when the opponent defects. This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives.

Human behavior
The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies. Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain human behaviours that lack any genetic influences.
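The simulation sketch referenced in the prisoner's dilemma section above can be written in a few lines of Python. The payoff values used here (3 for mutual cooperation, 1 for mutual defection, 5 for defecting against a cooperator, 0 for cooperating against a defector) are the conventional illustrative choices and are an assumption of this sketch, not values taken from the works cited in this article.

    # Payoffs for (row move, column move): (row payoff, column payoff).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_defect(opponent_moves):
        return "D"

    def always_cooperate(opponent_moves):
        return "C"

    def tit_for_tat(opponent_moves):
        # Cooperate on the first round, then copy the opponent's last move.
        return opponent_moves[-1] if opponent_moves else "C"

    def play(strategy_a, strategy_b, rounds=200):
        """Total payoffs when two strategies meet repeatedly."""
        seen_by_a, seen_by_b = [], []   # each side's record of the opponent
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))    # (199, 204): loses only round one
    print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
    print(play(always_defect, always_defect))  # (200, 200)

The numbers reproduce the pattern described above: a lone Always Defect mutant gains little against Tit for Tat, while Tit for Tat players paired with each other earn far more, so the mutant's share of a Tit for Tat population stays small.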
See also
Antipredator adaptation
Behavioral ecology
Evolutionary psychology
Fitness landscape
Hawk–dove game
Koinophilia
Sociobiology
War of attrition (game)

References

Further reading
Maynard Smith, John (1982). Evolution and the Theory of Games. Classic reference.
Parker, G. A. (1984). Evolutionary stable strategies. In Behavioural Ecology: an Evolutionary Approach (2nd ed), Krebs, J. R. & Davies, N. B., eds., pp. 30–61. Blackwell, Oxford.

External links
Evolutionarily Stable Strategies at Animal Behavior: An Online Textbook by Michael D. Breed.
Game Theory and Evolutionarily Stable Strategies, Kenneth N. Prestwich's site at College of the Holy Cross.
Evolutionarily stable strategies knol (archived): https://web.archive.org/web/20091005015811/http://knol.google.com/k/klaus-rohde/evolutionarily-stable-strategies-and/xk923bc3gp4/50#

Game theory equilibrium concepts
Evolutionary game theory
Evolutionarily stable strategy
[ "Mathematics" ]
3,274
[ "Game theory", "Game theory equilibrium concepts", "Evolutionary game theory" ]
9,619
https://en.wikipedia.org/wiki/Extremophile
An extremophile is an organism that is able to live (or in some cases thrive) in extreme environments, i.e., environments with conditions approaching or stretching the limits of what known life can adapt to, such as extreme temperature, pressure, radiation, salinity, or pH level. Since the definition of an extreme environment is relative to an arbitrarily defined standard, often an anthropocentric one, these organisms can be considered ecologically dominant in the evolutionary history of the planet. Dating back more than 40 million years, extremophiles have continued to thrive in the most extreme conditions, making them one of the most abundant lifeforms. The study of extremophiles has expanded human knowledge of the limits of life, and informs speculation about extraterrestrial life. Extremophiles are also of interest because of their potential for bioremediation of environments made hazardous to humans due to pollution or contamination.

Characteristics
In the 1980s and 1990s, biologists found that microbial life has great flexibility for surviving in extreme environments (niches that are acidic, extraordinarily hot, or at irregular air pressure, for example) that would be completely inhospitable to complex organisms. Some scientists even concluded that life may have begun on Earth in hydrothermal vents far beneath the ocean's surface. According to astrophysicist Steinn Sigurdsson, "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Some bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica, and in the Marianas Trench, the deepest place in Earth's oceans. Expeditions of the International Ocean Discovery Program found microorganisms in sediment deep below the seafloor in the Nankai Trough subduction zone. Some microorganisms have been found thriving inside rocks far below the sea floor, beneath the deep ocean, off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are." A key to extremophile adaptation is their amino acid composition, affecting their protein folding ability under particular conditions. Studying extreme environments on Earth can help researchers understand the limits of habitability on other worlds. Tom Gheysens from Ghent University in Belgium and some of his colleagues have presented research findings that show spores from a species of Bacillus bacteria survived and were still viable after being heated to high temperatures.

Classifications
There are many classes of extremophiles that range all around the globe, each corresponding to the way its environmental niche differs from mesophilic conditions. These classifications are not exclusive. Many extremophiles fall under multiple categories and are classified as polyextremophiles. For example, organisms living inside hot rocks deep under Earth's surface are both thermophilic and piezophilic, such as Thermococcus barophilus. A polyextremophile living at the summit of a mountain in the Atacama Desert might be a radioresistant xerophile, a psychrophile, and an oligotroph. Polyextremophiles are well known for their ability to tolerate both high and low pH levels.

Terms
Acidophile: An organism with optimal growth at pH levels of 3.0 or below.
Alkaliphile: An organism with optimal growth at pH levels of 9.0 or above.
Capnophile: An organism with optimal growth conditions in high concentrations of carbon dioxide. An example would be Mannheimia succiniciproducens, a bacterium that inhabits a ruminant animal's digestive system.
Halophile: An organism with optimal growth at a concentration of dissolved salts of 50 g/L (= 5% m/v) or above.
Hyperpiezophile: An organism with optimal growth at hydrostatic pressures above 50 MPa (= 493 atm = 7,252 psi).
Hyperthermophile: An organism with optimal growth at temperatures above 80 °C.
Metallotolerant: Capable of tolerating high levels of dissolved heavy metals in solution, such as copper, cadmium, arsenic, and zinc. Examples include Ferroplasma sp., Cupriavidus metallidurans and GFAJ-1.
Oligotroph: An organism with optimal growth in nutritionally limited environments.
Osmophile: An organism with optimal growth in environments with a high sugar concentration.
Piezophile: An organism with optimal growth at hydrostatic pressures above 10 MPa (= 99 atm = 1,450 psi). Also referred to as a barophile.
Polyextremophile: A polyextremophile (faux Ancient Latin/Greek for 'affection for many extremes') is an organism that qualifies as an extremophile under more than one category.
Psychrophile/Cryophile: An organism with optimal growth at temperatures of 15 °C or lower.
Radioresistant: Organisms resistant to high levels of ionizing radiation, most commonly ultraviolet radiation. This category also includes organisms capable of resisting nuclear radiation.
Sulphophile: An organism with optimal growth conditions in high concentrations of sulfur. An example would be Sulfurovum epsilonproteobacteria, a sulfur-oxidizing bacterium that inhabits deep-water sulfur vents.
Thermophile: An organism with optimal growth at temperatures above 45 °C.
Xerophile: An organism with optimal growth at water activity below 0.8.

In astrobiology
Astrobiology is the multidisciplinary field that investigates how life arises, distributes, and evolves in the universe. Astrobiology makes use of physics, chemistry, astronomy, solar physics, biology, molecular biology, ecology, planetary science, geography, and geology to investigate the possibility of life on other worlds and recognize biospheres that might be different from that on Earth. Astrobiologists are interested in extremophiles because studying them allows what is known about the limits of life on Earth to be mapped onto potential extraterrestrial environments. For example, analogous deserts of Antarctica are exposed to harmful UV radiation, low temperature, high salt concentration and low mineral concentration. These conditions are similar to those on Mars. Therefore, finding viable microbes in the subsurface of Antarctica suggests that there may be microbes surviving in endolithic communities and living under the Martian surface. Research indicates it is unlikely that Martian microbes exist on the surface or at shallow depths, but that they may be found at subsurface depths of around 100 meters. Recent research carried out on extremophiles in Japan involved a variety of bacteria, including Escherichia coli and Paracoccus denitrificans, being subjected to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g (i.e. 403,627 times the gravity experienced on Earth).
P. denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration, which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications for the feasibility of panspermia.

On 26 April 2012, scientists reported that lichen survived a 34-day simulation under conditions similar to those on Mars, showing a remarkable capacity to adapt its photosynthetic activity, in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR). On 29 April 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence". On 19 May 2014, scientists announced that some microbes, like Tersicoccus phoenicis, may be resistant to methods usually used in spacecraft assembly clean rooms, giving rise to speculation that such microbes could have withstood space travel and are present on the Curiosity rover now on the planet Mars. On 20 August 2014, scientists confirmed the existence of microorganisms living half a mile below the ice of Antarctica. In September 2015, scientists from the CNR-National Research Council of Italy reported that S. solfataricus survived under Martian radiation at a wavelength that was considered lethal to most bacteria. This discovery is significant because it indicates that not only bacterial spores, but also growing cells, can resist strong UV radiation. In June 2016, scientists from Brigham Young University reported that endospores of Bacillus subtilis were able to survive high-speed impacts up to 299±28 m/s, extreme shock, and extreme deceleration. They pointed out that this feature might allow endospores to survive and to be transferred between planets by traveling within meteorites or by experiencing atmosphere disruption. Moreover, they suggested that the landing of spacecraft may also result in interplanetary spore transfer, given that spores can survive high-velocity impact while ejected from the spacecraft onto the planet surface. This was the first study to report that bacteria can survive impacts at such high velocity. However, the lethal impact speed is unknown, and further experiments should be done by introducing higher-velocity impacts to bacterial endospores.

In August 2020, scientists reported that bacteria that feed on air, discovered in 2017 in Antarctica, are likely not limited to Antarctica, after discovering the two genes previously linked to their "atmospheric chemosynthesis" in soil of two other similar cold desert sites. This provides further information on this carbon sink and further strengthens extremophile-based evidence for the potential existence of microbial life on alien planets. The same month, scientists reported that bacteria from Earth, particularly Deinococcus radiodurans, were found to survive for three years in outer space, based on studies on the International Space Station. These findings support the notion of panspermia.
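Several of the categories in the Terms section above are defined by explicit numeric thresholds, which makes the scheme easy to illustrate in code. The following Python sketch is purely illustrative: the function name and its arguments are hypothetical, only the thresholds actually stated in the Terms section are used, and real classification rests on an organism's optimal growth conditions rather than a single environmental measurement.

    def extremophile_labels(ph=None, salt_g_per_l=None,
                            pressure_mpa=None, water_activity=None):
        """Map growth-condition measurements to category labels from
        the Terms section; a hypothetical helper, not a standard API."""
        labels = []
        if ph is not None and ph <= 3.0:
            labels.append("acidophile")
        if ph is not None and ph >= 9.0:
            labels.append("alkaliphile")
        if salt_g_per_l is not None and salt_g_per_l >= 50:
            labels.append("halophile")          # 50 g/L dissolved salts or above
        if pressure_mpa is not None and pressure_mpa > 10:
            labels.append("piezophile")         # above 10 MPa
        if pressure_mpa is not None and pressure_mpa > 50:
            labels.append("hyperpiezophile")    # above 50 MPa
        if water_activity is not None and water_activity < 0.8:
            labels.append("xerophile")
        if len(labels) > 1:
            labels.append("polyextremophile")   # more than one category
        return labels

    # A hypothetical acidic, briny habitat:
    print(extremophile_labels(ph=2.5, salt_g_per_l=80))
    # ['acidophile', 'halophile', 'polyextremophile']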
Bioremediation
Extremophiles can also be useful players in the bioremediation of contaminated sites, as some species are capable of biodegradation under conditions too extreme for classic bioremediation candidate species. Anthropogenic activity causes the release of pollutants that may potentially settle in extreme environments, as is the case with tailings and sediment released from deep-sea mining activity. While most bacteria would be crushed by the pressure in these environments, piezophiles can tolerate these depths and can metabolize pollutants of concern if they possess bioremediation potential.

Hydrocarbons
There are multiple potential destinations for hydrocarbons after an oil spill has settled, and currents routinely deposit them in extreme environments. Methane bubbles resulting from the Deepwater Horizon oil spill were found 1.1 kilometers below water surface level and at concentrations as high as 183 μmol per kilogram. The combination of low temperatures and high pressures in this environment results in low microbial activity. However, bacteria that are present, including species of Pseudomonas, Aeromonas and Vibrio, were found to be capable of bioremediation, albeit at a tenth of the speed at which they would perform at sea-level pressure. Polycyclic aromatic hydrocarbons increase in solubility and bioavailability with increasing temperature. Thermophilic Thermus and Bacillus species have demonstrated higher gene expression for the alkane mono-oxygenase alkB at elevated temperatures. The expression of this gene is a crucial precursor to the bioremediation process. Fungi that have been genetically modified with cold-adapted enzymes to tolerate differing pH levels and temperatures have been shown to be effective at remediating hydrocarbon contamination in freezing conditions in the Antarctic.

Metals
Acidithiobacillus ferrooxidans has been shown to be effective in remediating mercury in acidic soil due to its merA gene, which makes it mercury resistant. Industrial effluents contain high levels of metals that can be detrimental to both human and ecosystem health. In extremely hot environments, the extremophile Geobacillus thermodenitrificans has been shown to effectively manage the concentration of these metals within twelve hours of introduction. Some acidophilic microorganisms are effective at metal remediation in acidic environments due to proteins found in their periplasm, not present in any mesophilic organisms, which allow them to protect themselves from high proton concentrations. Rice paddies are highly oxidative environments that can produce high levels of lead or cadmium. Deinococcus radiodurans is resistant to the harsh conditions of this environment and is therefore a candidate species for limiting the extent of contamination by these metals. Some bacteria are also known to use rare earth elements in their biological processes. For example, Methylacidiphilum fumariolicum, Methylorubrum extorquens, and Methylobacterium radiotolerans are known to be able to use lanthanides as cofactors to increase their methanol dehydrogenase activity.

Acid mine drainage
Acid mine drainage is a major environmental concern associated with many metal mines, because this highly acidic water can mix with groundwater, streams, and lakes. The drainage lowers the pH of these water sources from near-neutral to below 4, close to the acidity of battery acid or stomach acid. Exposure to the polluted water can greatly affect the health of plants, humans, and animals.
However, a productive method of remediation is to introduce the extremophile Thiobacillus ferrooxidans, which is useful for its bioleaching property: it helps to break down minerals in the wastewater created by the mine. By breaking down these minerals, Thiobacillus ferrooxidans starts to help neutralize the acidity of the wastewater. This is a way to reduce the environmental impact and help remediate the damage caused by acid mine drainage.

Oil-based, hazardous pollutants in Arctic regions
Psychrophilic microbes metabolize hydrocarbons, which assists in the remediation of hazardous, oil-based pollutants in the Arctic and Antarctic regions. These microbes are used in these regions because they can perform their functions at extremely cold temperatures.

Radioactive materials
Any bacterium capable of inhabiting radioactive media can be classified as an extremophile. Radioresistant organisms are therefore critical in the bioremediation of radionuclides. Uranium is particularly challenging to contain when released into an environment and very harmful to both human and ecosystem health. The NANOBINDERS project is equipping bacteria that can survive in uranium-rich environments with gene sequences that enable proteins to bind to uranium in mining effluent, making it more convenient to collect and dispose of. Some examples are Shewanella putrefaciens, Geobacter metallireducens and some strains of Burkholderia fungorum. Radiotrophic fungi, which use radiation as an energy source, have been found inside and around the Chernobyl Nuclear Power Plant. Radioresistance has also been observed in certain species of macroscopic lifeforms. The lethal dose required to kill up to 50% of a tortoise population is 40,000 roentgens, compared to only 800 roentgens needed to kill 50% of a human population. In experiments exposing lepidopteran insects to gamma radiation, significant DNA damage was detected only at 20 Gy and higher doses, in contrast with human cells, which showed similar damage at only 2 Gy.

Examples and recent findings
New sub-types of extremophiles are identified frequently, and the sub-category list for extremophiles is always growing. For example, microbial life lives in Pitch Lake, a liquid asphalt lake; research indicates that extremophiles inhabit it in populations ranging between 10⁶ and 10⁷ cells/gram. Likewise, until recently, boron tolerance was unknown, but a strong borophile was discovered in bacteria. With the recent isolation of Bacillus boroniphilus, borophiles came into discussion. Studying these borophiles may help illuminate the mechanisms of both boron toxicity and boron deficiency. In July 2019, a scientific study of Kidd Mine in Canada discovered organisms living deep below the surface that breathe sulfur in order to survive. These organisms are also remarkable for eating rocks, such as pyrite, as their regular food source.

Biotechnology
The thermoalkaliphilic catalase, which initiates the breakdown of hydrogen peroxide into oxygen and water, was isolated from an organism, Thermus brockianus, found in Yellowstone National Park by Idaho National Laboratory researchers. The catalase operates over a temperature range from 30 °C to over 94 °C and a pH range from 6 to 10. This catalase is extremely stable compared to other catalases at high temperatures and pH.
In a comparative study, the T. brockianus catalase exhibited a half-life of 15 days at 80 °C and pH 10, while a catalase derived from Aspergillus niger had a half-life of 15 seconds under the same conditions. The catalase has potential applications for the removal of hydrogen peroxide in industrial processes such as pulp and paper bleaching, textile bleaching, food pasteurization, and surface decontamination of food packaging. DNA-modifying enzymes such as Taq DNA polymerase and some Bacillus enzymes used in clinical diagnostics and starch liquefaction are produced commercially by several biotechnology companies.

DNA transfer
Over 65 prokaryotic species are known to be naturally competent for genetic transformation, the ability to transfer DNA from one cell to another cell followed by integration of the donor DNA into the recipient cell's chromosome. Several extremophiles are able to carry out species-specific DNA transfer, as described below. However, it is not yet clear how common such a capability is among extremophiles.

The bacterium Deinococcus radiodurans is one of the most radioresistant organisms known. This bacterium can also survive cold, dehydration, vacuum and acid, and is thus known as a polyextremophile. D. radiodurans is competent to perform genetic transformation. Recipient cells are able to repair DNA damage in donor transforming DNA that had been UV-irradiated as efficiently as they repair cellular DNA when the cells themselves are irradiated. The extremely thermophilic bacterium Thermus thermophilus and other related Thermus species are also capable of genetic transformation. Halobacterium volcanii, an extreme halophilic (saline-tolerant) archaeon, is capable of natural genetic transformation. Cytoplasmic bridges are formed between cells that appear to be used for DNA transfer from one cell to another in either direction.

Sulfolobus solfataricus and Sulfolobus acidocaldarius are hyperthermophilic archaea. Exposure of these organisms to the DNA-damaging agents UV irradiation, bleomycin or mitomycin C induces species-specific cellular aggregation. UV-induced cellular aggregation of S. acidocaldarius mediates chromosomal marker exchange with high frequency. Recombination rates exceed those of uninduced cultures by up to three orders of magnitude. Fröls et al. and Ajon et al. hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells in order to repair damaged DNA by means of homologous recombination. Van Wolferen et al. noted that this DNA exchange process may be crucial under DNA-damaging conditions such as high temperatures. It has also been suggested that DNA transfer in Sulfolobus may be an early form of sexual interaction, similar to the more well-studied bacterial transformation systems that involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage (see Transformation (genetics)). Extracellular membrane vesicles (MVs) might be involved in DNA transfer between different hyperthermophilic archaeal species. It has been shown that both plasmids and viral genomes can be transferred via MVs. Notably, a horizontal plasmid transfer has been documented between hyperthermophilic Thermococcus and Methanocaldococcus species, belonging to the orders Thermococcales and Methanococcales, respectively.
See also
Earliest known life forms
Dissimilatory metal-reducing microorganisms
Extremotroph
List of microorganisms tested in outer space
Mesophile, an organism that grows best in moderate temperatures
Neutrophile, an organism that grows best in a neutral pH level
RISE project
Tardigrade

References

External links
Extreme Environments - Science Education Resource Center
Extremophile Research
Eukaryotes in extreme environments
The Research Center of Extremophiles
David Darling's Encyclopedia of Astrobiology, Astronomy, and Spaceflight
The International Society for Extremophiles
Idaho National Laboratory
Polyextremophile on David Darling's Encyclopedia of Astrobiology, Astronomy, and Spaceflight
T-Limit Expedition

Environmental microbiology
Astrobiology
Bacteria
Ecology
Geomicrobiology
Microbial growth and nutrition
Extremophile
[ "Astronomy", "Biology", "Environmental_science" ]
4,442
[ "Origin of life", "Speculative evolution", "Prokaryotes", "Ecology", "Astrobiology", "Organisms by adaptation", "Extremophiles", "Bacteria", "Biological hypotheses", "Environmental microbiology", "Microorganisms", "Astronomical sub-disciplines" ]
9,630
https://en.wikipedia.org/wiki/Ecology
Ecology is the natural science of the relationships among living organisms and their environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history. Ecology is a branch of biology, and is the study of the abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes. Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).

The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory. Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.

Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to planetary-scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations, which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole.
Some ecological principles, however, do exhibit collective properties, where the sum of the components explains the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame. The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.

Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open with regard to broader-scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer-scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales. To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."

Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity, and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services, which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services, and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.

Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal."
For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature, where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevices, where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.

Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."

Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.

Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms.
The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat, whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats." The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.

Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.

Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale; the Gaia hypothesis is an example of such holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat. A primary law of population ecology is the Malthusian growth model, which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.

An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:

dN/dT = bN − dN = (b − d)N = rN

where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change. Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

dN(t)/dt = rN(t)(1 − αN(t))

where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change, commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium (dN(t)/dt = 0) when the rates of increase and crowding are balanced, αN(t) = 1. A common, analogous model fixes the equilibrium as K = 1/α, which is known as the "carrying capacity."

Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."

Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another.
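The patch-occupancy idea behind the 1969 definition quoted above can be sketched numerically. In the classic Levins-style formulation, a fraction p of habitat patches is occupied, occupied patches go extinct at rate e, and empty patches are colonized at rate c by migrants from occupied ones. The following Python fragment is illustrative only; the parameter values are assumptions, not data from any study cited here.

    def patch_occupancy(p0, c, e, dt=0.01, steps=10_000):
        """Integrate dp/dt = c*p*(1 - p) - e*p with simple Euler steps.
        For c > e, the occupied fraction approaches p* = 1 - e/c."""
        p = p0
        for _ in range(steps):
            p += (c * p * (1 - p) - e * p) * dt
        return p

    # Starting from 5% occupancy, with colonization 0.4 and extinction 0.1:
    print(round(patch_occupancy(p0=0.05, c=0.4, e=0.1), 3))  # ~0.75 = 1 - e/c

Even this caricature reproduces the qualitative behaviour described in this section: the metapopulation persists because colonization outpaces local extinction, not because any single patch persists.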
There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population. In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.

Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.

Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g., carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m²) in a wetland in relation to decomposition and consumption rates (g C/m²/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria). The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity.

Food webs
A food web is the archetypal ecological network.
Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called the food chain. Food chains in an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.

Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigations, e.g., into the gut contents of organisms, which can be difficult to decipher; alternatively, stable isotopes can be used to trace the flow of nutrients and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems. Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer, stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and can eventually illustrate a "complete" web of life. The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.

Trophic levels
A trophic level (from Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'. Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing. Trophic levels are part of the holistic or complex systems view of ecosystems.
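One common quantitative treatment of these ideas assigns primary producers a trophic level of 1 and every other species a level of one plus the mean level of its prey, so that omnivores receive fractional levels. The Python sketch below illustrates this on a tiny hypothetical food web; the species and feeding links are assumptions chosen to echo the kelp, urchin, and sea otter example discussed under Keystone species below, not measured data.

    from functools import lru_cache

    EATS = {                     # species -> what it eats (assumed links)
        "kelp": [],
        "urchin": ["kelp"],
        "fish": ["kelp", "urchin"],
        "sea otter": ["urchin", "fish"],
    }

    @lru_cache(maxsize=None)
    def trophic_level(species):
        """1 for primary producers; otherwise 1 + mean level of prey.
        Assumes the feeding network is acyclic."""
        prey = EATS[species]
        if not prey:
            return 1.0
        return 1.0 + sum(trophic_level(p) for p in prey) / len(prey)

    for s in EATS:
        print(s, round(trophic_level(s), 2))
    # kelp 1.0, urchin 2.0, fish 2.5, sea otter 3.25

The fractional level of the omnivorous "fish" (2.5) hints at why, as noted below, food webs above the herbivore level are better characterized as a tangled web of omnivores than as discrete layers.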
Keystone species A keystone species is a species that is connected to a disproportionately large number of other species in the food web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alter trophic dynamics and other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature, as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability. Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied. Complexity Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity also stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small-scale patterns do not necessarily explain large-scale phenomena, otherwise captured in the expression (attributed to Aristotle) 'the whole is greater than the sum of the parts'. "Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." 
From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition between multiple shifting steady states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Research Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960. Holism Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed." Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems; 2) the practical description of patterns in quantitative reductionist terms, where correlations may be identified but nothing is understood about the causal relations without reference to the whole system; and 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism, which has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for the increase in thickness can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells. Relation to evolution Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to its functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. 
While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation. Behavioural ecology All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba. Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness. Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoidance, flight, or defence. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk." Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve the dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of the quality of traits among suitors. 
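The flight initiation distance hypothesis quoted above is, at heart, an optimization problem, and a toy numerical version makes the trade-off concrete. In the sketch below, all functional forms and constants are hypothetical illustrations, not empirical values: the benefit of staying put decays with distance, while expected predation losses rise steeply as the predator is allowed to approach.

```python
# Toy model: choose the flight initiation distance d (metres) that maximizes
# expected post-encounter fitness, following the components named in the
# hypothesis above. All curves and constants are hypothetical.
import math

INITIAL_FITNESS = 10.0

def foraging_benefit(d):
    # Benefit obtainable by not fleeing early; larger when d is small.
    return 2.0 * math.exp(-d / 5.0)

def escape_cost(d):
    # Energetic cost of fleeing, taken as roughly constant here.
    return 0.5

def predation_loss(d):
    # Expected fitness loss to predation; rises steeply at short distances.
    return 8.0 * math.exp(-d / 2.0)

def expected_fitness(d):
    return INITIAL_FITNESS + foraging_benefit(d) - escape_cost(d) - predation_loss(d)

# Coarse numerical search over candidate distances (0 to 30 m in 0.1 m steps):
best = max((step / 10.0 for step in range(301)), key=expected_fitness)
print(f"optimal flight initiation distance ~ {best:.1f} m")  # ~7.7 m here
```

With these illustrative curves the optimum falls where the marginal foraging benefit of waiting equals the marginal increase in predation risk, which is exactly the balance the hypothesis describes.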
Cognitive ecology Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effects that animals' interactions with their habitat have on their cognitive systems, and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...". Social ecology Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members. Coevolution Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots, forming an exchange network of carbohydrates for mineral nutrients. Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, or denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure. 
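A minimal simulation can illustrate the Red Queen dynamic described above. The sketch below uses a matching-alleles model, a standard textbook formalization not taken from this article: each parasite type infects only the matching host type, so whichever host genotype is common gets penalized while rare genotypes recover. All parameter values are illustrative.

```python
# Matching-alleles model of host-parasite coevolution (Red Queen dynamics).
# h, p: frequencies of host type 1 and parasite type 1; a parasite can only
# infect the matching host type. Parameter values are illustrative.

def step(h, p, s=0.6, v=0.9, dt=0.1):
    # s: fitness cost to an infected host; v: payoff to a matched parasite.
    host_fitness_gap = s * ((1.0 - p) - p)      # host type 1 suffers when p is high
    parasite_fitness_gap = v * (h - (1.0 - h))  # parasite type 1 gains when h is high
    h += dt * h * (1.0 - h) * host_fitness_gap      # replicator-style updates
    p += dt * p * (1.0 - p) * parasite_fitness_gap
    return h, p

h, p = 0.7, 0.3
for t in range(401):
    if t % 100 == 0:
        print(f"t={t:3d}  host type 1: {h:.2f}  parasite type 1: {p:.2f}")
    h, p = step(h, p)
# The frequencies cycle indefinitely rather than settle: each side must keep
# "running" (evolving) just to keep up with the other, the Red Queen's point.
```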
Biogeography Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory. Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The natural splitting of lineages within a species is called vicariance, and its study is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming. r/K selection theory A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, the density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection. In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring. 
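The r and K variables named above come from the logistic growth equation, dN/dt = rN(1 − N/K): growth is effectively exponential while N is far below K (the density-independent, r-selection regime) and slows under competition as N approaches K (the density-dependent, K-selection regime). A minimal sketch with illustrative parameter values:

```python
# Logistic growth, the model underlying r/K selection theory:
# dN/dt = r*N*(1 - N/K), where r is the intrinsic rate of increase and
# K the carrying capacity. All parameter values are illustrative.

def logistic_trajectory(n0=10.0, r=0.3, k=1000.0, dt=0.1, steps=400):
    n, trajectory = n0, [n0]
    for _ in range(steps):
        n += r * n * (1.0 - n / k) * dt   # simple Euler integration
        trajectory.append(n)
    return trajectory

traj = logistic_trajectory()
# Early on, growth is near-exponential (r-selection conditions); as the
# population nears K, density dependence dominates (K-selection conditions).
print(round(traj[0]), round(traj[50]), round(traj[-1]))   # ~10, ~43, ~1000
```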
Molecular ecology The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the launch of the journal Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography. Human ecology Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century. The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity that are beneficial to human health (cognitive and physiological) and economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. 
Ecosystems relate importantly to human ecology because they are the ultimate foundation of global economics: every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth. Restoration ecology Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed with industrial investment in restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of its wetland environments by implementing soil amendments that improve groundwater storage and flow and by trimming or removing vegetation that could harm water quality. Ecological science is used in the methods of sustainable harvesting, disease and fire outbreak management, in fisheries stock management, for integrating land use with protected areas and communities, and for conservation in complex geo-political landscapes. Relation to the environment The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate, and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat. The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem. Disturbance and resilience A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. 
Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances. The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades. Metabolism and the early atmosphere The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface, and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gases that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved. Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. This history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior. Radiation: heat, temperature and light The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated by, and dependent on, the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy. There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. 
Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through the inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy, which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds. Physical environments Water Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with an O2 concentration below 2 mg/liter) and eventually completely anoxic, where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Saltwater plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduce the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water. Gravity The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the Earth is uneven and influences the shape and movement of tectonic plates as well as geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as the allocation of biomass in trees during growth, are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra). 
Pressure Climatic and osmotic pressure place physiological constraints on organisms, especially those that fly and respire at high altitudes or dive to deep ocean depths. These constraints influence the vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure, achieved through specialized proteins. Wind and turbulence Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence the heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature, and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountains. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems. Fire Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has long been recognized, Charles Cooper brought attention to forest fire suppression and management in the 1960s. Native North Americans were among the first to influence fire regimes by controlling the spread of fires near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure open new ecological niches for seedling establishment. 
Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems. Soils Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed on and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by, and feed back into, the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from the geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils. Biogeochemistry and climate Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is a global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplifies and ultimately regulates the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry. The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. 
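A bit of arithmetic puts these stocks and the emission flux on a common footing; the calculation below only restates the figures already given.

```python
# Back-of-envelope comparison of the carbon stocks and flux cited above.
ocean_carbon = 40_000     # Gt of carbon held in the oceans
veg_soil_carbon = 2_070   # Gt of carbon in vegetation and soil
fossil_emissions = 6.3    # Gt of carbon emitted per year from fossil fuels

century_emissions = 100 * fossil_emissions
print(century_emissions)                     # 630.0 Gt over a century
print(century_emissions / veg_soil_carbon)   # ~0.30 of the vegetation+soil pool
print(century_emissions / ocean_carbon)      # ~0.016 of the ocean pool
```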
There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, during the early-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm. In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alter the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas with increased rates of soil decomposition activity that raise methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition, and respiration in soils and wetlands, producing significant climate feedbacks and globally altered biogeochemical cycles. History Early beginnings Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static, unchanging things, while varieties were seen as aberrations of an idealized type. This contrasts with the modern understanding of ecological theory, where varieties are viewed as the real phenomena of interest and as having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as balance and regulation in nature, can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene to the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and behavior, giving an early analogue to the modern concept of an ecological niche. 
Ernst Haeckel (left) and Eugenius Warming (right), two founders of ecology. Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy. Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous. From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences. Since 1900 Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892. In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. 
Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development, analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations. The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classic book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology. In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology has also developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s, and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers. Ecology surged in popular and scientific interest during the 1960s–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s. In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. 
Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.

See also
Carrying capacity
Chemical ecology
Climate justice
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological empathy
Ecological overshoot
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Human ecology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Philosophy of ecology
Political ecology
Theoretical ecology
Sensory ecology
Sexecology
Spiritual ecology
Sustainable development

Lists
Glossary of ecology
Index of biology articles
List of ecologists
Outline of biology
Terminology of ecology

Notes

References

External links
The Nature Education Knowledge Project: Ecology
9,632
https://en.wikipedia.org/wiki/Ecosystem
An ecosystem (or ecological system) is a system formed by organisms in interaction with their environment. The biotic and abiotic components are linked together through nutrient cycles and energy flows. Ecosystems are controlled by external and internal factors. External factors, such as climate, the parent material which forms the soil, and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them. Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of an ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil, and the atmosphere. Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes. Ecosystems provide a variety of goods and services upon which people depend, and of which they may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination, and even things like beauty, inspiration, and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced and invasive species. 
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals. Definition An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows. "Ecosystem processes" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked. Origin and development of the term The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope". G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems. Processes External and internal factors Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem. Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, an ecosystem can be quite different if situated in a small depression on the landscape versus on an adjacent steep hillside. Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. 
Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function. Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors. Primary production Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on Earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect. Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis. Energy flow Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system. Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem. Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion. In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. 
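The relationships in this section reduce to simple budget identities: NPP = GPP − plant (autotrophic) respiration, and NEP = GPP − ecosystem respiration. A small worked example, with hypothetical flux values chosen only to make the bookkeeping concrete:

```python
# Carbon budget identities described above. Flux values (g C per m^2 per
# year) are hypothetical illustrations.
gpp = 2000.0                      # gross primary production
plant_respiration = 1000.0        # ~half of GPP, as noted in the text
heterotroph_respiration = 800.0   # animals plus decomposers

npp = gpp - plant_respiration                       # net primary production
ecosystem_respiration = plant_respiration + heterotroph_respiration
nep = gpp - ecosystem_respiration                   # net ecosystem production

print(npp)   # 1000.0
print(nep)   # 200.0: net carbon accumulating, absent disturbance
```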
In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level. The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and on earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains; these webs display a number of common, non-random properties in the topology of their networks. Decomposition The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted. Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components, which include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones. Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material. The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources. Decomposition rates Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself.
Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available. Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in warm, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth. Dynamics and resilience Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere, where we depend on ecosystem services for our survival and must build and maintain the natural capacities of ecosystems to withstand shocks and disturbances. Time plays a central role over a wide range of timescales, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance. Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods and glacial advances to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply." The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times. From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests.
Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene. Nutrient cycling Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust and gases, or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term, making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical. Macronutrients which are required by all plants in large quantities include the primary nutrients (which are most limiting as they are used in largest amounts): nitrogen, phosphorus and potassium. Secondary major nutrients (less often limiting) include calcium, magnesium and sulfur. Micronutrients required by all plants in small quantities include boron, chloride, copper, iron, manganese, molybdenum and zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium and vanadium. Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas that evaporates from agricultural fields to which fertilizers have been applied, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems. When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification. Mycorrhizal fungi, which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function. Phosphorus enters ecosystems through weathering.
As ecosystems age, this supply diminishes, making phosphorus limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter. Function and biodiversity Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case, for example, for exotic species. The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem. An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat. Study approaches Ecosystem ecology Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet. The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, studies that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale.
In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics. Classifications Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests". Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system. Human interactions with ecosystems Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate. Ecosystem goods and services Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted. The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. 
The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition. The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change. Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species. Degradation and decline As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends. Management When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry). Restoration and sustainable development Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past. 
See also Complex system Earth science Ecoregion Ecological resilience Ecosystem-based adaptation Artificialization Types The following articles are types of ecosystems for particular types of regions or zones: Aquatic ecosystem Freshwater ecosystem Lake ecosystem (lentic ecosystem) River ecosystem (lotic ecosystem) Marine ecosystem Large marine ecosystem Tropical salt pond ecosystem Terrestrial ecosystem Boreal ecosystem Groundwater-dependent ecosystems Montane ecosystem Urban ecosystem Ecosystems grouped by condition Agroecosystem Closed ecosystem Depauperate ecosystem Novel ecosystem Reference ecosystem Instances Ecosystem instances in specific regions of the world: Greater Yellowstone Ecosystem Leuser Ecosystem Longleaf pine Ecosystem Tarangire Ecosystem
https://en.wikipedia.org/wiki/E%20%28mathematical%20constant%29
The number e is a mathematical constant approximately equal to 2.71828 that is the base of the natural logarithm and exponential function. It is sometimes called Euler's number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler's constant, a different constant typically denoted γ. Alternatively, e can be called Napier's constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest. The number e is of great importance in mathematics, alongside 0, 1, π, and i. All five appear in one formulation of Euler's identity e^(iπ) + 1 = 0 and play important and recurring roles across mathematics. Like the constant π, e is irrational, meaning that it cannot be represented as a ratio of integers, and moreover it is transcendental, meaning that it is not a root of any non-zero polynomial with rational coefficients. To 30 decimal places, the value of e is 2.718281828459045235360287471352. Definitions The number e is the limit e = lim (1 + 1/n)^n as n → ∞, an expression that arises in the computation of compound interest. It is the sum of the infinite series e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯. It is the unique positive number a such that the graph of the function y = a^x has a slope of 1 at x = 0. One has e = exp(1), where exp is the (natural) exponential function, the unique function that equals its own derivative and satisfies the equation exp(0) = 1. Since the exponential function is commonly denoted as x ↦ e^x, one has also e = e^1. The logarithm of base b can be defined as the inverse function of the function x ↦ b^x. Since b = b^1, one has log_b b = 1. The equation e = e^1 implies therefore that e is the base of the natural logarithm. The number e can also be characterized in terms of an integral: the integral from 1 to e of dx/x equals 1. For other characterizations, see the Representations section below. History The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e. It is assumed that the table was written by William Oughtred. In 1661, Christiaan Huygens studied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm of e, but he did not recognize e itself as a quantity of interest. The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant occurs as the limit of (1 + 1/n)^n, where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 for monthly compounding). The first symbol used for this constant was the letter b, by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691. Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler's Mechanica (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard. Euler proved that e is the sum of the infinite series e = Σ 1/n! (n from 0 to ∞), where n! is the factorial of n. The equivalence of the two characterizations using the limit and the infinite series can be proved via the binomial theorem. Applications Compound interest Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest: an account starts with $1.00 and pays 100 percent interest per year. If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5^2 = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25^4 = $2.4414..., and compounding monthly yields $1.00 × (1 + 1/12)^12 = $2.613035.... If there are n compounding intervals, the interest for each interval will be 100%/n and the value at the end of the year will be $1.00 × (1 + 1/n)^n. Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals. Compounding weekly (n = 52) yields $2.692596..., while compounding daily (n = 365) yields $2.714567... (approximately two cents more). The limit as n grows large is the number that came to be known as e. That is, with continuous compounding, the account value will reach $2.718281828.... More generally, an account that starts at $1 and offers an annual interest rate of R will, after t years, yield e^(Rt) dollars with continuous compounding. Here, R is the decimal equivalent of the rate of interest expressed as a percentage, so for 5% interest, R = 5/100 = 0.05.
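The compounding table above is easy to reproduce numerically. The following sketch (plain Python, added here for illustration; it is not part of the original article) checks both characterizations of e just given: the limit of (1 + 1/n)^n and the factorial series.

```python
import math

# e as the limit of (1 + 1/n)^n: these n values reproduce the
# quarterly, monthly, weekly and daily compounding amounts quoted above.
for n in (1, 2, 4, 12, 52, 365, 1_000_000):
    print(f"n = {n:>9}: (1 + 1/n)^n = {(1 + 1/n) ** n:.9f}")

# e as the sum of the series 1/0! + 1/1! + 1/2! + ...;
# thirteen terms already give about nine correct decimal places.
total, term = 0.0, 1.0
for k in range(13):
    total += term
    term /= k + 1
print("series sum:", total)
print("math.e    :", math.e)
```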
Bernoulli trials The number e itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in n and plays it n times. As n increases, the probability that the gambler will lose all n bets approaches 1/e. For n = 20, this is already approximately 1/2.789509.... This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in n chance of winning. Playing n times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. The probability of winning k times out of n trials is: P(k) = C(n, k) (1/n)^k (1 − 1/n)^(n−k), where C(n, k) is the binomial coefficient. In particular, the probability of winning zero times (k = 0) is P(0) = (1 − 1/n)^n. The limit of the above expression, as n tends to infinity, is precisely 1/e.
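A quick numerical check of the all-loss probability is shown below; a sketch in plain Python, not from the article, with an arbitrarily chosen Monte Carlo trial count.

```python
import random

# Exact all-loss probability (1 - 1/n)^n, which tends to 1/e.
for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:>6}: P(lose all n bets) = {(1 - 1/n) ** n:.6f}")
print(f"1/e = {1 / 2.718281828459045:.6f}")

# Monte Carlo cross-check for n = 100 (the run count is an arbitrary choice).
n, runs = 100, 100_000
losses = sum(
    all(random.random() >= 1 / n for _ in range(n))  # True if no play paid out
    for _ in range(runs)
)
print("simulated:", losses / runs)
```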
Exponential growth and decay Exponential growth is a process that increases a quantity over time at an ever-increasing rate. It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. The law of exponential growth can be written in different but mathematically equivalent forms, by using a different base, for which the number e is a common and convenient choice: x(t) = x₀ e^(kt) = x₀ e^(t/τ). Here, x₀ denotes the initial value of the quantity x, k is the growth constant, and τ is the time it takes the quantity to grow by a factor of e. Standard normal distribution The normal distribution with zero mean and unit standard deviation is known as the standard normal distribution, given by the probability density function φ(x) = (1/√(2π)) e^(−x²/2). The constraint of unit standard deviation (and thus also unit variance) results in the factor 1/2 in the exponent, and the constraint of unit total area under the curve results in the factor 1/√(2π). This function is symmetric around x = 0, where it attains its maximum value 1/√(2π), and has inflection points at x = ±1. Derangements Another application of e, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the hat check problem: n guests are invited to a party and, at the door, the guests all check their hats with the butler, who in turn places the hats into n boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so puts the hats into boxes selected at random. The problem of de Montmort is to find the probability that none of the hats gets put into the right box. This probability, denoted by p_n, is: p_n = Σ (−1)^k / k! (k from 0 to n). As n tends to infinity, p_n approaches 1/e. Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats is in the right box is n!/e, rounded to the nearest integer, for every positive n. Optimal planning problems The maximum value of x^(1/x) occurs at x = e. Equivalently, for any value of the base b > 1, it is the case that the maximum value of x^(−1) log_b x occurs at x = e (Steiner's problem, discussed below). This is useful in the problem of a stick of length L that is broken into n equal parts. The value of n that maximizes the product of the lengths is then either ⌊L/e⌋ or ⌈L/e⌉. The quantity x^(−1) log_b x is also a measure of information gleaned from an event occurring with probability 1/x (approximately 36.8% when x = e), so that essentially the same optimal division appears in optimal planning problems like the secretary problem. Asymptotics The number e occurs naturally in connection with many problems involving asymptotics. An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π appear: n! ~ √(2πn) (n/e)^n. As a consequence, e = lim n/(n!)^(1/n) as n → ∞.
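The hat-check count and Stirling's formula both lend themselves to a short numerical check. The sketch below (illustrative Python, not from the article) computes derangement counts with the standard recurrence D(n) = (n − 1)(D(n − 1) + D(n − 2)) and compares them against round(n!/e), then shows Stirling's ratio approaching 1.

```python
import math

# Derangements: D(n) should equal n!/e rounded to the nearest integer.
D = [1, 0]                       # D(0) = 1, D(1) = 0
for n in range(2, 11):
    D.append((n - 1) * (D[-1] + D[-2]))
for n in range(1, 11):
    print(n, D[n], round(math.factorial(n) / math.e))

# Stirling's formula n! ~ sqrt(2*pi*n) * (n/e)^n: the ratio tends to 1.
for n in (5, 20, 100):
    approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(n, math.factorial(n) / approx)
```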
Properties Calculus The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms. A general exponential function y = a^x has a derivative, given by a limit: d/dx a^x = a^x · lim (a^h − 1)/h as h → 0. The parenthesized limit on the right is independent of the variable x. Its value turns out to be the logarithm of a to base e. Thus, when the value of a is set to e, this limit is equal to 1, and so one arrives at the following simple identity: d/dx e^x = e^x. Consequently, the exponential function with base e is particularly suited to doing calculus. Choosing e (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler. Another motivation comes from considering the derivative of the base-a logarithm log_a x, for x > 0: d/dx log_a x = (1/x) log_a e, where the substitution u = h/x was made in evaluating the defining limit. The base-a logarithm of e is 1 if a equals e. So symbolically, d/dx log_e x = 1/x. The logarithm with this special base is called the natural logarithm, and is usually denoted as ln; it behaves well under differentiation since there is no undetermined limit to carry through the calculations. Thus, there are two ways of selecting such special numbers a. One way is to set the derivative of the exponential function a^x equal to a^x, and solve for a. The other way is to set the derivative of the base-a logarithm to 1/x and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually the same: the number e. The Taylor series for the exponential function can be deduced from the facts that the exponential function is its own derivative and that it equals 1 when evaluated at 0: e^x = Σ x^n/n! (n from 0 to ∞). Setting x = 1 recovers the definition of e as the sum of an infinite series. The natural logarithm function can be defined as the integral from 1 to x of 1/t dt, and the exponential function can then be defined as the inverse function of the natural logarithm. The number e is the value of the exponential function evaluated at x = 1, or equivalently, the number whose natural logarithm is 1. It follows that e is the unique positive real number such that the integral from 1 to e of 1/t dt equals 1. Because e^x is the unique function (up to multiplication by a constant) that is equal to its own derivative, it is therefore its own antiderivative as well: ∫ e^x dx = e^x + C. Equivalently, the family of functions y(x) = K e^x, where K is any real or complex number, is the full solution to the differential equation y′ = y. Inequalities The number e is the unique real number such that (1 + 1/x)^x < e < (1 + 1/x)^(x+1) for all positive x. Also, we have the inequality e^x ≥ x + 1 for all real x, with equality if and only if x = 0. Furthermore, e is the unique base of the exponential for which the inequality a^x ≥ x + 1 holds for all x. This is a limiting case of Bernoulli's inequality. Exponential-like functions Steiner's problem asks to find the global maximum for the function f(x) = x^(1/x). This maximum occurs precisely at x = e. (One can check that the derivative of ln f(x) is zero only for this value of x.) Similarly, x = 1/e is where the global minimum occurs for the function f(x) = x^x. The infinite tetration x^(x^(x^⋯)) converges if and only if e^(−e) ≤ x ≤ e^(1/e), shown by a theorem of Leonhard Euler. Number theory The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion does not terminate. (See also Fourier's proof that e is irrational.) Furthermore, by the Lindemann–Weierstrass theorem, e is transcendental, meaning that it is not a solution of any non-zero polynomial equation with rational coefficients. It was the first number to be proved transcendental without having been specifically constructed for this purpose (compare with Liouville number); the proof was given by Charles Hermite in 1873. The number e is one of only a few transcendental numbers for which the exact irrationality exponent is known (it equals 2). An unsolved problem thus far is the question of whether or not the numbers e and π are algebraically independent. This would be resolved by Schanuel's conjecture—a currently unproven generalization of the Lindemann–Weierstrass theorem. It is conjectured that e is normal, meaning that when e is expressed in any base the possible digits in that base are uniformly distributed (occur with equal probability in any sequence of given length). In algebraic geometry, a period is a number that can be expressed as an integral of an algebraic function over an algebraic domain. The constant π is a period, but it is conjectured that e is not. Complex numbers The exponential function e^x may be written as a Taylor series e^x = Σ x^n/n!. Because this series is convergent for every complex value of x, it is commonly used to extend the definition of e^x to the complex numbers. This, with the Taylor series for sin x and cos x, allows one to derive Euler's formula: e^(ix) = cos x + i sin x, which holds for every complex x. The special case with x = π is Euler's identity: e^(iπ) + 1 = 0, which is considered to be an exemplar of mathematical beauty as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that π is transcendental, which implies the impossibility of squaring the circle. Moreover, the identity implies that, in the principal branch of the logarithm, ln(−1) = iπ. Furthermore, using the laws for exponentiation, (cos x + i sin x)^n = (e^(ix))^n = e^(inx) = cos nx + i sin nx for any integer n, which is de Moivre's formula. The expressions of cos x and sin x in terms of the exponential function can be deduced from the Taylor series: cos x = (e^(ix) + e^(−ix))/2 and sin x = (e^(ix) − e^(−ix))/(2i). The expression cos x + i sin x is sometimes abbreviated as cis(x).
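Two of the facts above are easy to verify numerically: the maximum of x^(1/x) at x = e (Steiner's problem) and Euler's identity. The following sketch (illustrative Python, not from the article) does both; the identity check is exact only up to floating-point rounding.

```python
import cmath
import math

# Steiner's problem: x^(1/x) is largest at x = e.
for x in (2.0, math.e - 0.1, math.e, math.e + 0.1, 3.0):
    print(f"x = {x:.6f}: x^(1/x) = {x ** (1 / x):.10f}")

# Euler's identity: e^(i*pi) + 1 should be 0 (here ~1.2e-16j from rounding).
print(cmath.exp(1j * math.pi) + 1)
```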
Representations The number e can be represented in a variety of ways: as an infinite series, an infinite product, a continued fraction, or a limit of a sequence. In addition to the limit and the series given above, there is also the simple continued fraction e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...], which written out looks like e = 2 + 1/(1 + 1/(2 + 1/(1 + 1/(1 + 1/(4 + ⋯))))). Infinite products evaluating to e are also known. Many other series, sequence, continued fraction, and infinite product representations of e have been proved. Stochastic representations In addition to exact analytical expressions for representation of e, there are stochastic techniques for estimating e. One such approach begins with an infinite sequence of independent random variables X₁, X₂, ..., drawn from the uniform distribution on [0, 1]. Let V be the least number n such that the sum of the first n observations exceeds 1: V = min{n : X₁ + X₂ + ⋯ + Xₙ > 1}. Then the expected value of V is e: E(V) = e. Known digits The number of known digits of e has increased substantially since the introduction of the computer, due both to increasing performance of computers and to algorithmic improvements. Since around 2010, the proliferation of modern high-speed desktop computers has made it feasible for amateurs to compute trillions of digits of e within acceptable amounts of time. On Dec 5, 2020, a record-setting calculation was made, giving e to 31,415,926,535,897 (approximately π × 10^13) digits. Computing the digits One way to compute the digits of e is with the series e = 1 + Σ 1/k! (k from 1 to ∞). A faster method involves two recursive functions p(a, b) and q(a, b). The functions are defined as p(a, b) = 1 and q(a, b) = b if b = a + 1, and otherwise p(a, b) = p(a, m) q(m, b) + p(m, b) and q(a, b) = q(a, m) q(m, b), where m = ⌊(a + b)/2⌋. The expression 1 + p(0, n)/q(0, n) produces the nth partial sum of the series above. This method uses binary splitting to compute e with fewer single-digit arithmetic operations and thus reduced bit complexity. Combining this with fast Fourier transform-based methods of multiplying integers makes computing the digits very fast.
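Both the stochastic representation and the binary-splitting recursion above are small enough to demonstrate directly. The sketch below (illustrative Python, not from the article; the trial count is arbitrary) estimates E(V) by simulation and then evaluates the p/q recursion in exact rational arithmetic.

```python
import random
from fractions import Fraction

# Stochastic estimate: the average number of uniform(0, 1) draws needed
# for a running sum to exceed 1 tends to e.
def draws_to_exceed_one():
    total, n = 0.0, 0
    while total <= 1.0:
        total += random.random()
        n += 1
    return n

trials = 200_000
print(sum(draws_to_exceed_one() for _ in range(trials)) / trials)  # ~2.718

# Binary splitting for the partial sums of e's series, following the
# p/q recursion described above.
def pq(a, b):
    if b == a + 1:
        return 1, b
    m = (a + b) // 2
    p_left, q_left = pq(a, m)
    p_right, q_right = pq(m, b)
    return p_left * q_right + p_right, q_left * q_right

p, q = pq(0, 30)
print(float(1 + Fraction(p, q)))   # 2.718281828459045...
```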
In computer culture During the emergence of internet culture, individuals and organizations sometimes paid homage to the number e. In an early example, the computer scientist Donald Knuth let the version numbers of his program Metafont approach e. The versions are 2, 2.7, 2.71, 2.718, and so forth. In another instance, in the IPO filing for Google in 2004, rather than a typical round-number amount of money, the company announced its intention to raise 2,718,281,828 USD, which is e billion dollars rounded to the nearest dollar. Google was also responsible for a billboard that appeared in the heart of Silicon Valley, and later in Cambridge, Massachusetts; Seattle, Washington; and Austin, Texas. It read "{first 10-digit prime found in consecutive digits of e}.com". The first 10-digit prime in e is 7427466391, which starts at the 99th digit. Solving this problem and visiting the advertised (now defunct) website led to an even more difficult problem to solve, which consisted in finding the fifth term in the sequence 7182818284, 8182845904, 8747135266, 7427466391. It turned out that the sequence consisted of 10-digit numbers found in consecutive digits of e whose digits summed to 49. The fifth term in the sequence is 5966290435, which starts at the 127th digit. Solving this second problem finally led to a Google Labs webpage where the visitor was invited to submit a résumé. The last release of the official Python 2 interpreter has version number 2.7.18, a reference to e. References Further reading Commentary on Endnote 10 of the book Prime Obsession for another stochastic representation External links The number e to 1 million places and NASA.gov 2 and 5 million places e Approximations – Wolfram MathWorld Earliest Uses of Symbols for Constants, Jan. 13, 2008 "The story of e", by Robin Wilson at Gresham College, 28 February 2007 (available for audio and video download) e Search Engine: 2 billion searchable digits of e, π and √2
https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin%20formula
In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence. The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula. The formula If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m, n], then the integral I = ∫ f(x) dx from m to n can be approximated by the sum (or vice versa) S = f(m + 1) + ⋯ + f(n) (see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^(k) evaluated at the endpoints of the interval, that is to say x = m and x = n. Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m, n], we have S − I = Σ (B_k/k!)(f^(k−1)(n) − f^(k−1)(m)) + R_p, where the sum runs over k = 1, ..., p, B_k is the kth Bernoulli number (with B₁ = 1/2) and R_p is an error term which depends on n, m, p, and f and is usually small for suitable values of p. The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B₁. The remainder term The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, ..., n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term. The remainder term has an exact expression in terms of the periodized Bernoulli functions P_k(x). The Bernoulli polynomials may be defined recursively by B₀(x) = 1 and, for k ≥ 1, B_k′(x) = k B_{k−1}(x) together with the normalization ∫ B_k(x) dx = 0 over [0, 1]. The periodized Bernoulli functions are defined as P_k(x) = B_k(x − ⌊x⌋), where ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0, 1). With this notation, the remainder term equals R_p = (−1)^(p+1) ∫ f^(p)(x) (P_p(x)/p!) dx taken from m to n. When k > 0, it can be shown that for 0 ≤ x ≤ 1, |B_k(x)| ≤ (2 · k!/(2π)^k) ζ(k), where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials B_k(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as |R_p| ≤ (2 ζ(p)/(2π)^p) ∫ |f^(p)(x)| dx. Low-order cases The Bernoulli numbers from B₁ to B₇ are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore, the low-order cases of the Euler–Maclaurin formula are: S − I = (f(n) − f(m))/2 + R₁ for p = 1, and S − I = (f(n) − f(m))/2 + (1/12)(f′(n) − f′(m)) + R₂ for p = 2, with further derivative terms appearing for larger p. Applications The Basel problem The Basel problem is to determine the sum 1 + 1/4 + 1/9 + 1/16 + ⋯ = Σ 1/n². Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π²/6, which he proved in the same year. Sums involving a polynomial If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x³, we can choose p = 2 to obtain, after simplification, Σ i³ (i from 0 to n) = (n(n + 1)/2)². Approximation of integrals The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = (b − a)/(N − 1). Set x_i = a + (i − 1)h, so that x₁ = a and x_N = b. Then: I(f) ≈ h(f(x₁)/2 + f(x₂) + ⋯ + f(x_{N−1}) + f(x_N)/2) − (h²/12)(f′(b) − f′(a)) + (h⁴/720)(f‴(b) − f‴(a)) − ⋯. This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms.
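The correction-term view invites a quick experiment. The sketch below (plain Python, added for illustration; the integrand e^x on [0, 1] is an arbitrary choice) applies the first Euler–Maclaurin correction to the trapezoid rule, and then uses the same expansion to estimate the tail of the Basel sum mentioned above. The trapezoid error shrinks like h², the corrected error like h⁴.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * f(a) + inner + 0.5 * f(b))

def em_corrected(f, df, a, b, n):
    """Trapezoid rule plus the correction -(h^2/12)(f'(b) - f'(a))."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h ** 2 / 12 * (df(b) - df(a))

exact = math.e - 1                      # integral of e^x over [0, 1]
for n in (4, 8, 16):
    t = trapezoid(math.exp, 0.0, 1.0, n)
    c = em_corrected(math.exp, math.exp, 0.0, 1.0, n)
    print(f"n = {n:2d}: trapezoid error {abs(t - exact):.2e}, "
          f"corrected error {abs(c - exact):.2e}")

# Euler-Maclaurin tail estimate for the Basel sum: the terms with k > N
# contribute roughly 1/N - 1/(2N^2) + 1/(6N^3).
N = 10
partial = sum(1 / k ** 2 for k in range(1, N + 1))
tail = 1 / N - 1 / (2 * N ** 2) + 1 / (6 * N ** 3)
print(partial + tail, math.pi ** 2 / 6)   # agree to ~3e-7
```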
Note that this asymptotic expansion is usually not convergent; there is some p, depending on f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention. The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation. Asymptotic expansion of sums In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is Σ f(n) (n from a to b) ~ ∫ f(x) dx from a to b + (f(a) + f(b))/2 + Σ (B_{2k}/(2k)!)(f^(2k−1)(b) − f^(2k−1)(a)), where a and b are integers. Often the expansion remains valid even after taking the limits a → −∞ or b → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example, Σ 1/(z + k)² (k from 0 to ∞) ~ 1/z + 1/(2z²) + Σ B_{2k}/z^(2k+1). Here the left-hand side is equal to ψ₁(z), namely the first-order polygamma function defined by ψ₁(z) = d²/dz² ln Γ(z); the gamma function Γ(z) is equal to (z − 1)! when z is a positive integer. This results in an asymptotic expansion for ψ₁(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function. Examples If s is an integer greater than 1 we have: Σ 1/k^s (k from 1 to n) ≈ a constant plus correction terms in n. Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion: Σ 1/k^s ~ ζ(s) − 1/((s − 1)n^(s−1)) + 1/(2n^s) − ⋯. For s equal to 2 this simplifies to Σ 1/k² ~ ζ(2) − 1/n + 1/(2n²) − ⋯, or π²/6 − 1/n + 1/(2n²) − ⋯. When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers: Σ 1/k (k from 1 to n) ~ ln n + γ + 1/(2n) − 1/(12n²) + ⋯, where γ is the Euler–Mascheroni constant. Proofs Derivation by mathematical induction We outline the argument given in Apostol. The Bernoulli polynomials B_n(x) and the periodic Bernoulli functions P_n(x) for n = 0, 1, 2, ... were introduced above. The first several Bernoulli polynomials are B₀(x) = 1, B₁(x) = x − 1/2, B₂(x) = x² − x + 1/6, and so on. The values B_n(0) are the Bernoulli numbers B_n. Notice that for n ≥ 2 we have B_n(0) = B_n(1) = B_n, and for n = 1, B₁(1) = −B₁(0) = 1/2. The functions P_n agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus, P_n(0) = P_n(1) = B_n for n ≠ 1. Let k be an integer, and consider the integral ∫ f(x) dx over [k, k + 1], written as ∫ u dv where u = f(x), du = f′(x) dx, v = P₁(x) and dv = P₀(x) dx = dx. Integrating by parts, we get ∫ f(x) dx = [f(x) P₁(x)] evaluated from k to k + 1, minus ∫ f′(x) P₁(x) dx, which equals (f(k) + f(k + 1))/2 − ∫ f′(x) P₁(x) dx. Using this, and summing the above from k = 0 to k = n − 1, we get ∫ f(x) dx over [0, n] = (f(0) + f(n))/2 + f(1) + ⋯ + f(n − 1) − ∫ f′(x) P₁(x) dx. Adding (f(0) + f(n))/2 to both sides and rearranging, we have Σ f(k) (k from 0 to n) = ∫ f(x) dx + (f(0) + f(n))/2 + ∫ f′(x) P₁(x) dx. This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term: ∫ f′(x) P₁(x) dx over [k, k + 1] is written as ∫ u dv where u = f′(x), du = f″(x) dx, and dv = P₁(x) dx, so that v = P₂(x)/2 (since P₂′ = 2P₁ on the interval). The result of integrating by parts is [f′(x) P₂(x)/2] from k to k + 1 minus (1/2) ∫ f″(x) P₂(x) dx, which equals (B₂/2)(f′(k + 1) − f′(k)) − (1/2) ∫ f″(x) P₂(x) dx. Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula, Σ f(k) = ∫ f(x) dx + (f(0) + f(n))/2 + (B₂/2)(f′(n) − f′(0)) − (1/2) ∫ f″(x) P₂(x) dx. This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions. See also Cesàro summation Euler summation Gauss–Kronrod quadrature formula Darboux's formula Euler–Boole summation
https://en.wikipedia.org/wiki/Engine
An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy. Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form; thus heat engines have special importance. Some natural processes, such as atmospheric convection cells, convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing. Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion. Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine). Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions. Emission/Byproducts All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen resulting in small emissions of NOx (nitrogen oxides). If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, CO2 (carbon dioxide), a greenhouse gas, is emitted. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of NOx, but this is an electrochemical engine not a heat engine. Terminology The word engine derives from Old French engin, from the Latin ingenium—the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the Industrial Revolution were described as engines—the steam engine being a notable example.
However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses. In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets. When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb moto, which means 'to set in motion', or 'maintain motion'. Thus a motor is a device that imparts motion. Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though they consume fuel. A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam). History Antiquity Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times. According to Strabo, a water-powered mill was built in Kaberia of the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed-wheels made of wood and metal to regulate the speed of rotation.
More sophisticated small devices, such as the Antikythera Mechanism used complex trains of gears and dials to act as calendars or predict astronomical events. In a poem by Ausonius in the 4th century AD, he mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind and steam powered machines in the 1st century AD, including the Aeolipile and the vending machine, often these machines were associated with worship, such as animated altars and automated temple doors. Medieval Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour. In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629. In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. After invention, this innovation spread throughout Europe. Industrial Revolution The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation. As for internal combustion piston engines, these were tested in France in 1807 by de Rivaz and independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine. The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir. In 1877, the Otto cycle was capable of giving a far higher power-to-weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft. Automobiles The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the thermally more-efficient Diesel engine is used for trucks and buses. However, in recent years, turbocharged Diesel engines have become increasingly popular in automobiles, especially outside of the United States, even for quite small cars. Horizontally-opposed pistons In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. 
His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. Engines of this design are often referred to as "flat" or "boxer" engines due to their shape and low profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, and many BMW and Honda motorcycles. Opposed four- and six-cylinder engines continue to be used as a power source in small, propeller-driven aircraft. Advancement The continued use of internal combustion engines in automobiles is partly due to the improvement of engine control systems, such as on-board computers providing engine management processes, and electronically controlled fuel injection. Forced air induction by turbocharging and supercharging has increased the power output of smaller displacement engines that are lighter in weight and more fuel-efficient at normal cruise power. Similar changes have been applied to smaller Diesel engines, giving them almost the same performance characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine-propelled cars in Europe. Diesel engines produce lower hydrocarbon and CO2 emissions, but greater particulate and NOx pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines. Increasing power In the first half of the 20th century, a trend of increasing engine power occurred, particularly in U.S. models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. The higher forces and pressures created by these changes created engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements. Combustion efficiency Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around . Engine configuration Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. A majority of the models used four cylinders, with power ratings from 19 to 120 hp (14 to 90 kW). Several three-cylinder, two-stroke-cycle models were built, while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. The Bugatti Veyron 16.4 operates with a W16 engine, meaning that two V8 cylinder layouts are positioned next to each other, sharing the same crankshaft, to create the W shape. The largest internal combustion engine ever built is the Wärtsilä-Sulzer RTA96-C, a 14-cylinder, 2-stroke turbocharged diesel engine that was designed to power the Emma Mærsk, the largest container ship in the world when launched in 2006. 
This engine has a mass of 2,300 tonnes, and when running at 102 rpm (1.7 Hz) produces over 80 MW and can use up to 250 tonnes of fuel per day. Types An engine can be put into a category according to two criteria: the form of energy it accepts in order to create motion, and the type of motion it outputs. Heat engine Combustion engine Combustion engines are heat engines driven by the heat of a combustion process. Internal combustion engine The internal combustion engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and high pressure gases, which are produced by the combustion, directly applies force to components of the engine, such as the pistons, turbine blades, or a nozzle, and by moving them over a distance, generates mechanical work. External combustion engine An external combustion engine (EC engine) is a heat engine where an internal working fluid is heated by combustion in an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine, produces motion and usable work. The fluid is then cooled, compressed and reused (closed cycle), or (less commonly) dumped, and cool fluid pulled in (open cycle air engine). "Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; they are then classed not as external combustion engines but as external thermal engines. The working fluid can be a gas as in a Stirling engine, or steam as in a steam engine, or an organic liquid such as n-pentane in an Organic Rankine cycle. The fluid can be of any composition; gas is by far the most common, although even single-phase liquid is sometimes used. In the case of the steam engine, the fluid changes phases between liquid and gas. Air-breathing combustion engines Air-breathing combustion engines are combustion engines that use the oxygen in atmospheric air to oxidise ('burn') the fuel, rather than carrying an oxidiser, as in a rocket. Theoretically, this should result in a better specific impulse than for rocket engines. A continuous stream of air flows through the air-breathing engine. This air is compressed, mixed with fuel, ignited and expelled as the exhaust gas. In reaction engines, the majority of the combustion energy (heat) exits the engine as exhaust gas, which provides thrust directly. Examples Typical air-breathing engines include: Reciprocating engine Steam engine Gas turbine Airbreathing jet engine Turbo-propeller engine Pulse detonation engine Pulse jet Ramjet Scramjet Liquid air cycle engine/Reaction Engines SABRE. Environmental effects The operation of engines typically has a negative impact upon air quality and ambient sound levels. There has been a growing emphasis on the pollution producing features of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements. Though a few limited-production battery-powered electric vehicles have appeared, they have not proved competitive owing to costs and operating characteristics. In the 21st century the diesel engine has been increasing in popularity with automobile owners. 
However, the gasoline engine and the Diesel engine, with their new emission-control devices to improve emission performance, have not yet been significantly challenged. A number of manufacturers have introduced hybrid engines, mainly involving a small gasoline engine coupled with an electric motor and with a large battery bank. These are becoming a popular option because of environmental awareness. Air quality Exhaust gas from a spark ignition engine consists of the following: nitrogen 70 to 75% (by volume), water vapor 10 to 12%, carbon dioxide 10 to 13.5%, hydrogen 0.5 to 2%, oxygen 0.2 to 2%, carbon monoxide 0.1 to 6%, unburnt hydrocarbons and partial oxidation products (e.g. aldehydes) 0.5 to 1%, nitrogen monoxide 0.01 to 0.4%, nitrous oxide <100 ppm, sulfur dioxide 15 to 60 ppm, traces of other compounds such as fuel additives and lubricants, also halogen and metallic compounds, and other particles. Carbon monoxide is highly toxic, and can cause carbon monoxide poisoning, so it is important to avoid any build-up of the gas in a confined space. Catalytic converters can reduce toxic emissions, but not eliminate them. Also, the resulting greenhouse gas emissions, chiefly carbon dioxide, from the widespread use of engines in the modern industrialized world are contributing to the global greenhouse effect – a primary concern regarding global warming. Non-combusting heat engines Some engines convert heat from noncombustive processes into mechanical work. For example, a nuclear power plant uses the heat from the nuclear reaction to produce steam and drive a steam engine, and a gas turbine in a rocket engine may be driven by decomposing hydrogen peroxide. Apart from the different energy source, the engine is often engineered much the same as an internal or external combustion engine. Another group of noncombustive engines includes thermoacoustic heat engines (sometimes called "TA engines") which are thermoacoustic devices that use high-amplitude sound waves to pump heat from one place to another, or conversely use a heat difference to induce high-amplitude sound waves. In general, thermoacoustic engines can be divided into standing wave and travelling wave devices. Stirling engines are another form of non-combustive heat engine. They use the Stirling thermodynamic cycle to convert heat into work. An example is the alpha-type Stirling engine, in which gas flows, via a recuperator, between a hot cylinder and a cold cylinder, which are attached to reciprocating pistons 90° out of phase. The gas receives heat at the hot cylinder and expands, driving the piston that turns the crankshaft. After expanding and flowing through the recuperator, the gas rejects heat at the cold cylinder and the ensuing pressure drop leads to its compression by the other (displacement) piston, which forces it back to the hot cylinder. Non-thermal chemically powered motor Non-thermal motors usually are powered by a chemical reaction, but are not heat engines. Examples include: Molecular motor – motors found in living things Synthetic molecular motor. Electric motor An electric motor uses electrical energy to produce mechanical energy, usually through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Electric motors can be run as generators and vice versa, although this is not always practical. 
Electric motors are ubiquitous, being found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery-powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the thousands of kilowatts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application. The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks. To reduce the electric energy consumption from motors and their associated carbon footprints, regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher-efficiency electric motors. A well-designed motor can convert over 90% of its input energy into useful power for decades. When the efficiency of a motor is raised by even a few percentage points, the savings, in kilowatt hours (and therefore in cost), are enormous. The electrical energy efficiency of a typical industrial induction motor can be improved by: 1) reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivities, such as copper), 2) reducing the electrical losses in the rotor coil or casting (e.g., by using materials with higher electrical conductivities, such as copper), 3) reducing magnetic losses by using better quality magnetic steel, 4) improving the aerodynamics of motors to reduce mechanical windage losses, 5) improving bearings to reduce friction losses, and 6) minimizing manufacturing tolerances. For further discussion on this subject, see Premium efficiency. By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor. Physically powered motor Some motors are powered by potential or kinetic energy; for example, some funiculars, gravity planes and ropeway conveyors have used the energy from moving water or rocks, and some clocks have a weight that falls under gravity. Other forms of potential energy include compressed gases (such as pneumatic motors), springs (clockwork motors) and elastic bands. Historic military siege engines, including large catapults, trebuchets, and (to some extent) battering rams, were powered by potential energy. Pneumatic motor A pneumatic motor is a machine that converts potential energy in the form of compressed air into mechanical work. Pneumatic motors generally convert the compressed air to mechanical work through either linear or rotary motion. Linear motion can come from either a diaphragm or a piston actuator, while rotary motion is supplied by either a vane type air motor or piston air motor. 
Pneumatic motors have found widespread success in the hand-held tool industry, and continual attempts are being made to expand their use to the transportation industry. However, pneumatic motors must overcome efficiency deficiencies before being seen as a viable option there. Hydraulic motor A hydraulic motor derives its power from a pressurized liquid. This type of engine is used to move heavy loads and drive machinery. Hybrid Some motor units can have multiple sources of energy. For example, a plug-in hybrid electric vehicle's electric motor could source electricity from either a battery or from fossil fuel inputs via an internal combustion engine and a generator. Performance The following are used in the assessment of the performance of an engine. Speed Speed refers to crankshaft rotation in piston engines and the speed of compressor/turbine rotors and electric motor rotors. It is typically measured in revolutions per minute (rpm). Thrust Thrust is the force exerted on an airplane as a consequence of its propeller or jet engine accelerating the air passing through it. It is also the force exerted on a ship as a consequence of its propeller accelerating the water passing through it. Torque Torque is a turning moment on a shaft and is calculated by multiplying the force causing the moment by its distance from the shaft. Power Power is the measure of how fast work is done. Efficiency Efficiency is a proportion of useful energy output compared to total input. Sound levels Vehicle noise is predominantly from the engine at low vehicle speeds and from tires and the air flowing past the vehicle at higher speeds. Electric motors are quieter than internal combustion engines. Thrust-producing engines, such as turbofans, turbojets and rockets, emit the greatest amount of noise due to the way their thrust-producing, high-velocity exhaust streams interact with the surrounding stationary air. Noise reduction technology includes intake and exhaust system mufflers (silencers) on gasoline and diesel engines and noise attenuation liners in turbofan inlets. Engines by use Particularly notable kinds of engines include: Aircraft engine Automobile engine Model engine Motorcycle engine Marine propulsion engines such as Outboard motor Non-road engine is the term used to define engines that are not used by vehicles on roadways. Railway locomotive engine Spacecraft propulsion engines such as Rocket engine Traction engine See also Aircraft engine Automobile engine replacement Electric motor Engine cooling Engine swap Gasoline engine HCCI engine Hesselman engine Hot bulb engine IRIS engine Micromotor Flagella – biological motor used by some microorganisms Nanomotor Molecular motor Synthetic molecular motor Adiabatic quantum motor Multifuel Reaction engine Solid-state engine Timeline of heat engine technology Timeline of motor and engine technology References External links Detailed Engine Animations Working 4-Stroke Engine – Animation Animated illustrations of various engines 5 Ways to Redesign the Internal Combustion Engine Article on Small SI Engines. Article on Compact Diesel Engines. Types Of Engines Motors (1915) by James Slough Zerbe.
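The torque and power measures defined in the Performance section above are linked for rotating machinery by the standard relation P = τω (power equals torque times angular speed). A minimal Python sketch of this relation, using the figures quoted earlier for the Wärtsilä-Sulzer RTA96-C (over 80 MW at 102 rpm) as illustrative inputs; the helper name is hypothetical:

```python
import math

def torque_from_power(power_watts: float, rpm: float) -> float:
    """Shaft torque (N*m) implied by power and rotational speed, via P = tau * omega."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed in rad/s
    return power_watts / omega

# Figures quoted above for the RTA96-C: over 80 MW at 102 rpm (1.7 Hz).
tau = torque_from_power(80e6, 102.0)
print(f"{tau / 1e6:.1f} MN*m")  # roughly 7.5 MN*m of crankshaft torque
```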
Engine
[ "Physics", "Technology" ]
5,424
[ "Physical systems", "Machines", "Engine technology", "Engines" ]
9,644
https://en.wikipedia.org/wiki/European%20Environment%20Agency
The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment. Its goal is to help those involved in developing, implementing and evaluating environmental policy, and to inform the general public. Organization The EEA was established by the European Economic Community (EEC) Regulation 1210/1990 (amended by EEC Regulation 933/1999 and EC Regulation 401/2009) and became operational in 1994, headquartered in Copenhagen, Denmark. The agency is governed by a management board composed of representatives of the governments of its 32 member states, a European Commission representative and two scientists appointed by the European Parliament, assisted by its Scientific Committee. The current Executive Director of the agency is Leena Ylä-Mononen, who has been appointed for a five-year term starting on 1 June 2023; she succeeded Professor Hans Bruyninckx. Member countries The member states of the European Union are members; however, other states may become members by means of agreements concluded between them and the EU. It was the first EU body to open its membership to the 13 candidate countries (pre-2004 enlargement). The EEA has 32 member countries and six cooperating countries. The members are the 27 European Union member states together with Iceland, Liechtenstein, Norway, Switzerland and Turkey. Since Brexit in 2020, the UK is no longer a member of the EU and therefore no longer a member state of the EEA. The six Western Balkan countries are cooperating countries: Albania, Bosnia and Herzegovina, Montenegro, North Macedonia and Serbia, as well as Kosovo under UN Security Council Resolution 1244/99. These cooperation activities are integrated into Eionet and are supported by the EU under the "Instrument for Pre-Accession Assistance". The EEA is an active member of the EPA Network. Reports, data and knowledge The European Environment Agency (EEA) produces assessments based on quality-assured data on a wide range of issues, from biodiversity and air quality to transport and climate change. These assessments are closely linked to the European Union's environment policies and legislation and help monitor progress in some areas and indicate areas where additional efforts are needed. As required in its founding regulation, the EEA publishes its flagship report, the State and Outlook of Europe's Environment (SOER), which is an integrated assessment analysing trends, progress to targets, and the outlook for the mid- to long-term. The agency publishes an annual report on air quality in Europe's most polluted provinces, detailing fine particulate matter (PM2.5). The EEA shares this information, including the datasets used in its assessments, through its main website and a number of thematic information platforms such as the Biodiversity Information System for Europe (BISE), the Water Information System for Europe (WISE) and Climate-ADAPT. The Climate-ADAPT knowledge platform presents information and data on expected climatic changes, the vulnerability of regions and sectors, adaptation case studies, adaptation options, adaptation planning tools, and EU policy. European Nature Information System The European Nature Information System (EUNIS) provides access to the publicly available data in the EUNIS database for species, habitat types and protected sites across Europe. 
It is part of the European Biodiversity data centre (BDC) and is maintained by the EEA. The database contains data on species, habitat types and designated sites from the framework of Natura 2000, from material compiled by the European Topic Centre on Biological Diversity, from relevant international conventions and the IUCN Red Lists, and from the EEA's reporting activities. European environment information and observation network The European Environment Information and Observation Network (Eionet) is a collaboration network between EEA member countries and non-member, cooperating nations. Cooperation is facilitated through different national environmental agencies, ministries, or offices. Eionet encourages the sharing of data and highlights specific topics for discussion and cooperation among participating countries. Eionet currently covers seven European Topic Centres (ETCs): ETC on Biodiversity and Ecosystems (ETC BE) ETC on Climate Change Adaptation and LULUCF (ETC CA) ETC on Climate Change Mitigation (ETC CM) ETC on Data Integration and Digitalisation (ETC DI) ETC on Human Health and the Environment (ETC HE) ETC on Circular Economy and Resource Use (ETC CE) ETC on Sustainability Transitions (ETC ST) The European Environment Agency (EEA) implements the "Shared Environmental Information System" principles and best practices via projects such as the "ENI SEIS II EAST PROJECT" and the "ENI SEIS II SOUTH PROJECT" to support environmental protection within the six eastern partnership countries (ENP) and to contribute to the reduction in marine pollution in the Mediterranean through the shared availability of and access to relevant environmental information. Budget management and discharge As for every EU body and institution, the EEA's budget is subject to a discharge process, consisting of an external examination of its budget execution and financial management, to ensure sound financial management of its budget. Since its establishment, the EEA has been granted discharge for its budget without exception. The EEA provides full access to its administrative and budgetary documents in its public documents register. The discharge process for the 2010 budget required additional clarifications. In February 2012, the European Parliament's Committee on Budgetary Control published a draft report identifying areas of concern in the use of funds for the 2010 budget, such as a 26% budget increase from 2009 to 2010, to €50,600,000, and questioning whether maximum-competition and value-for-money principles were honored in hiring, as well as possible fictitious employees. The EEA's Executive Director refuted allegations of irregularities in a public hearing. On 27 March 2012, Members of the European Parliament (MEPs) voted on the report and commended the cooperation between the Agency and NGOs working in the environmental area. On 23 October 2012, the European Parliament voted and granted the discharge to the European Environment Agency for its 2010 budget. 
Executive directors International cooperation In addition to its 32 members and six Balkan cooperating countries, the EEA also cooperates and fosters partnerships with its neighbours and other countries and regions, mostly in the context of the European Neighbourhood Policy: Eastern Partnership member states: Belarus, Ukraine, Moldova, Armenia, Azerbaijan, Georgia Union for the Mediterranean member states: Algeria, Egypt, Israel, Jordan, Lebanon, Libya, Morocco, Palestinian Authority, Syria, Tunisia Other ENPI states: Russia Central Asian states: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan Additionally, the EEA cooperates with multiple international organizations and the corresponding agencies of the following countries: United States (Environmental Protection Agency) Canada (Environment Canada) Official languages The 26 official languages used by the EEA are: Bulgarian, Czech, Croatian, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Hungarian, Icelandic, Italian, Lithuanian, Latvian, Maltese, Dutch, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovene, Swedish and Turkish. See also Agencies of the European Union Citizen Science – cleanup projects that people can take part in EU environmental policy List of atmospheric dispersion models List of environmental organizations Confederation of European Environmental Engineering Societies Coordination of Information on the Environment European Agency for Safety and Health at Work Environment Agency References External links European Topic Centre on Land Use and Spatial Information (ETC LUSI) European Topic Centre on Air and Climate Change (ETC/ACC) European Topic Centre on Biological Diversity (ETC/BD) Model Documentation System (MDS) The European Environment Agency's near real-time ozone map (ozoneweb) The European Climate Adaptation Platform Climate-ADAPT EUNIS homepage 1990 in the European Economic Community Agencies of the European Union Atmospheric dispersion modeling Environmental agencies in the European Union Government agencies established in 1990 Organizations based in Copenhagen 1994 establishments in Denmark
European Environment Agency
[ "Chemistry", "Engineering", "Environmental_science" ]
1,623
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
9,649
https://en.wikipedia.org/wiki/Energy
Energy is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J). Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive. All living organisms constantly take in and release energy. The Earth's climate and ecosystem processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy. Forms The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples. History The word energy derives from the Ancient Greek energeia, which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy". In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". 
The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. Units of measure In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However, energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce. In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight attached via a string to a paddle immersed in water was equal to the internal energy gained by the water through friction with the paddle. Scientific use Classical mechanics In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept. Work, a function of energy, is force times distance. This says that the work (W) is equal to the line integral of the force F along a path C: W = ∫C F · ds; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. 
It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. Chemistry In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by Boltzmann's population factor e−E/kT; that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy. Biology In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum. 
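The human-equivalent figure is just a ratio of power levels. A minimal sketch in Python, assuming only the 80-watt metabolic baseline stated above (the helper name is hypothetical):

```python
BASELINE_WATTS = 80.0  # average human basal metabolic rate cited above

def human_equivalents(power_watts: float) -> float:
    """Express a power level in human equivalents (H-e)."""
    return power_watts / BASELINE_WATTS

print(human_equivalents(100.0))  # 100 W light bulb -> 1.25 H-e, as in the text
print(human_equivalents(746.0))  # one official horsepower -> about 9.3 H-e
```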
The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy. Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action. All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria: C6H12O6 + 6O2 → 6CO2 + 6H2O and C57H110O6 + 81.5 O2 → 57CO2 + 55H2O. Some of the energy is used to convert ADP into ATP, and the rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work: gain in kinetic energy of a sprinter during a 100 m race: 4 kJ; gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ; daily food intake of a normal adult: 6–8 MJ. It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissues to be highly ordered with regard to the molecules they are built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat. 
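The worked figures above are easy to check. A short sketch verifying the 3 kJ weight-lift figure and the Calorie-to-megajoule conversion for daily intake, assuming only the standard values g ≈ 9.81 m/s² and 1 food Calorie = 4184 J:

```python
G = 9.81           # gravitational acceleration, m/s^2
J_PER_KCAL = 4184  # joules per food Calorie (kcal)

# Gravitational potential energy of a 150 kg weight lifted through 2 m.
pe = 150 * G * 2             # = 2943 J, i.e. about 3 kJ as stated above
print(f"{pe / 1000:.1f} kJ")

# Daily intake of 1500-2000 Calories expressed in megajoules.
low, high = 1500 * J_PER_KCAL / 1e6, 2000 * J_PER_KCAL / 1e6
print(f"{low:.1f}-{high:.1f} MJ")  # ~6.3-8.4 MJ, matching the 6-8 MJ range
```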
Earth sciences In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy. Sunlight is the main input to Earth's energy budget, which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as, for example, when water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, such as those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement. In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms). Cosmology In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Quantum mechanics In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered a definition of the measurement of energy in quantum mechanics. 
The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is the Planck constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons. Relativity When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: E0 = m0c2, where m0 is the rest mass of the body, c is the speed of light in vacuum, and E0 is the rest energy. For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons. In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts). Transformation Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work). 
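The rest-energy relation E0 = m0c2 given above invites a quick numeric check. A minimal sketch, assuming only the standard values c ≈ 2.998×10^8 m/s and an electron rest mass of about 9.109×10^-31 kg:

```python
C = 2.998e8             # speed of light in vacuum, m/s
M_ELECTRON = 9.109e-31  # electron rest mass, kg

def rest_energy(mass_kg: float) -> float:
    """Rest energy E0 = m0 * c^2, in joules."""
    return mass_kg * C**2

print(f"{rest_energy(1.0):.2e} J")         # ~8.99e16 J for one kilogram
print(f"{rest_energy(M_ELECTRON):.2e} J")  # ~8.19e-14 J (~511 keV) per electron
```

The second figure is the energy carried away per particle in the electron–positron annihilation described above.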
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that transformation itself (since it still contains the same total energy even in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever. Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: Ep(initial) + Ek(initial) = Ep(final) + Ek(final). The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = ½mv2 (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek. 
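The pendulum's energy bookkeeping can be made concrete. A minimal sketch with illustrative values for the mass and release height, checking that Ep at the highest point equals Ek at the lowest point for a frictionless swing:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2
m = 2.0   # pendulum bob mass, kg (illustrative)
h = 0.5   # release height above the lowest point, m (illustrative)

ep_top = m * G * h                 # all energy is potential at the highest point
v_bottom = math.sqrt(2 * G * h)    # speed at the lowest point, from mgh = (1/2)mv^2
ek_bottom = 0.5 * m * v_bottom**2  # all energy is kinetic at the lowest point

print(round(ep_top, 2), round(ek_bottom, 2))  # 9.81 and 9.81: total energy is conserved
```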
Conservation of energy and mass in transformation Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc2, derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information). Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c2 is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws. Reversible and non-reversible transformations Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of some other kind of heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomization in a crystal). As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. 
In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease. Conservation of energy The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant. While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system. Richard Feynman emphasized this law during a 1961 lecture. Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured. Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it. In quantum mechanics energy is expressed using the Hamiltonian operator. 
On any time scale, the uncertainty in the energy is given by ΔE Δt ≥ ħ/2, which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics). In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena. Energy transfer Closed systems Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law: ΔE = W + Q, where ΔE is the amount of energy transferred, W represents the work done on or by the system, and Q represents the heat flow into or out of the system. As a simplification, the heat term, Q, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes, ΔE = W. This simplified equation is the one used to define the joule, for example. Open systems Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E, one may write ΔE = W + Q + E. Thermodynamics Internal energy Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone. First law of thermodynamics The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer.
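To make the closed-system bookkeeping ΔE = W + Q concrete, here is a minimal Python sketch. The function name and the numbers are illustrative assumptions, not values from the text; the sign convention follows the paragraph above (W is work done on the system, Q is heat flowing in).

```python
# First-law bookkeeping for a closed system: dE = W + Q.

def energy_change(work_on_system: float, heat_in: float) -> float:
    """Return the change in system energy, in joules."""
    return work_on_system + heat_in

# Illustrative numbers (assumed): 150 J of work done on the system,
# 30 J of heat lost to the surroundings.
dE = energy_change(work_on_system=150.0, heat_in=-30.0)
print(dE)  # 120.0 J gained by the system

# Adiabatic special case (Q = 0), the form used to define the joule:
print(energy_change(work_on_system=1.0, heat_in=0.0))  # 1.0 J
```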
For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as dU = T dS − P dV, where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system). This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by ΔU = Q + W, where Q is the heat supplied to the system and W is the work applied to the system. Equipartition of energy The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, the average energy is equally split between kinetic and potential (a small numerical illustration of this appears after the reference list below). This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average. This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of the evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then the total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still a matter of debate. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production. See also Combustion Efficient energy use Energy democracy Energy crisis Energy recovery Energy recycling Index of energy articles Index of wave articles List of low-energy building techniques Orders of magnitude (energy) Power station Sustainable energy Transfer energy Waste-to-energy Waste-to-energy plant Zero-energy building Notes References Further reading The Biosphere (A Scientific American Book), San Francisco, California, W. H. Freeman and Company, 1970. This book, originally a 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources, population trends, and environmental degradation.
Energy and Power (A Scientific American Book), San Francisco, California, W. H. Freeman and Company, 1971. Santos, Gildo M. "Energy in Brazil: a historical overview," The Journal of Energy History (2018), online. Journals The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE), 2018– External links Differences between Heat and Thermal energy – BioCab Main topic articles Nature Universe Scalar physical quantities
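The numerical illustration of equipartition promised above: a minimal Python sketch, sampling a mass-on-spring oscillator x(t) = A sin(ωt) over one full period and averaging its kinetic and potential energies. All parameter values (mass, spring constant, amplitude) are arbitrary assumptions; each average comes out to half the total energy.

```python
import math

m, k, A = 1.0, 4.0, 0.5            # assumed mass (kg), spring constant, amplitude
w = math.sqrt(k / m)               # angular frequency of the oscillator
E_total = 0.5 * k * A**2           # total mechanical energy

N = 100_000
ke = pe = 0.0
for i in range(N):
    t = i / N * (2 * math.pi / w)  # sample one full period
    x = A * math.sin(w * t)
    v = A * w * math.cos(w * t)
    ke += 0.5 * m * v * v          # kinetic energy at time t
    pe += 0.5 * k * x * x          # potential energy at time t

# Both time-averages approach E_total / 2, the equipartition split.
print(ke / N, pe / N, E_total / 2)
```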
Energy
[ "Physics", "Mathematics" ]
7,696
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Energy (physics)", "Wikipedia categories named after physical quantities" ]
9,653
https://en.wikipedia.org/wiki/Expected%20value
In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would "expect" to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as an italic or blackboard-bold letter. History The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it. Dutch mathematician Christiaan Huygens also considered the problem of points in his book, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise on probability theory, "De ratiociniis in ludo aleæ", in 1657 (see Huygens (1657)), just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability. In the foreword to his treatise, Huygens wrote: In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables. Etymology Neither Pascal nor Huygens used the term "expectation" in its modern sense.
In particular, Huygens writes: More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly: Notations The use of the letter E to denote "expected value" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. In German, E stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique. When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as E (upright), E (italic), or 𝔼 (in blackboard bold), while a variety of bracket notations (such as E(X), E[X], and EX) are all used. Another popular notation is μX, while ⟨X⟩ is commonly used in physics and M(X) in Russian-language literature. Definition As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E[X]ᵢ = E[Xᵢ]. Similarly, one may define the expected value of a random matrix X with components Xᵢⱼ by E[X]ᵢⱼ = E[Xᵢⱼ]. Random variables with finitely many outcomes Consider a random variable X with a finite list x₁, ..., x_k of possible outcomes, each of which (respectively) has probability p₁, ..., p_k of occurring. The expectation of X is defined as E[X] = x₁p₁ + x₂p₂ + ··· + x_k p_k. Since the probabilities must satisfy p₁ + ··· + p_k = 1, it is natural to interpret E[X] as a weighted average of the x_i values, with weights given by their probabilities p_i. In the special case that all possible outcomes are equiprobable (that is, p₁ = ··· = p_k = 1/k), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. Examples Let X represent the outcome of a roll of a fair six-sided die. More specifically, X will be the number of pips showing on the top face of the die after the toss. The possible values for X are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of X is E[X] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. If one rolls the die n times and computes the average (arithmetic mean) of the results, then as n grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable X represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be E[gain from $1 bet] = −$1 · (37/38) + $35 · (1/38) = −$1/19. That is, the expected value to be won from a $1 bet is −$1/19.
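As a numerical check of the two examples just given, the short Python sketch below computes both expectations exactly from the definition, and then illustrates the law-of-large-numbers remark with a Monte Carlo estimate (the sample size is an arbitrary choice, not from the text).

```python
import random
from fractions import Fraction

# Fair six-sided die: E[X] = (1 + 2 + ... + 6) / 6.
die = Fraction(sum(range(1, 7)), 6)
print(die)  # 7/2, i.e. 3.5

# American roulette, $1 straight-up bet:
# win $35 with probability 1/38, lose $1 with probability 37/38.
roulette = Fraction(35, 38) - Fraction(37, 38)
print(roulette)  # -1/19, about -$0.0526 per bet

# Monte Carlo estimate of the die expectation (strong law of large numbers).
n = 100_000
print(sum(random.randint(1, 6) for _ in range(n)) / n)  # close to 3.5
```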
Thus, in 190 bets, the net loss will probably be about $10. Random variables with countably infinitely many outcomes Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that E[X] = Σ xᵢpᵢ (summing over i = 1, 2, 3, ...), where x₁, x₂, ... are the possible outcomes of the random variable X and p₁, p₂, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context. However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation. Examples Suppose xᵢ = i and pᵢ = c/(i·2ⁱ) for i = 1, 2, 3, ..., where c = 1/ln 2 is the scaling factor which makes the probabilities sum to 1. Then we have E[X] = Σ i · c/(i·2ⁱ) = c(1/2 + 1/4 + 1/8 + ···) = c = 1/ln 2. Random variables with density Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral E[X] = ∫ x f(x) dx, taken over the whole real line. A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that f(x) = (x² + π²)⁻¹. It is straightforward to compute in this case that ∫ from −a to b of x f(x) dx equals (1/2) ln[(b² + π²)/(a² + π²)]. The limit of this expression as a → ∞ and b → ∞ does not exist: if the limits are taken so that a = b, then the limit is zero, while if the constraint b = 2a is taken, then the limit is ln 2. To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E[X] left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X. Arbitrary real-valued random variables All definitions of the expected value may be expressed in the language of measure theory.
In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral E[X] = ∫Ω X dP. Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of approximations of X which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be absolutely continuous if any of the following conditions are satisfied: there is a nonnegative measurable function f on the real line such that P(X ∈ A) = ∫A f(x) dx for any Borel set A, in which the integral is Lebesgue. the cumulative distribution function of X is absolutely continuous. for any Borel set A of real numbers with Lebesgue measure equal to zero, the probability of X being valued in A is also equal to zero. for any positive number ε there is a positive number δ such that: if A is a Borel set with Lebesgue measure less than δ, then the probability of X being valued in A is less than ε. These conditions are all equivalent, although this is nontrivial to establish. In this definition, f is called the probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that E[X] = ∫ x f(x) dx for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variable can also be defined on the graph of its cumulative distribution function F by a nearby equality of areas. In fact, E[X] = μ with μ a real number if and only if the two surfaces in the x–y-plane, described by x ≤ μ, 0 ≤ y ≤ F(x) and x ≥ μ, F(x) ≤ y ≤ 1 respectively, have the same finite area, i.e. if ∫ from −∞ to μ of F(x) dx equals ∫ from μ to ∞ of (1 − F(x)) dx, and both improper Riemann integrals converge. Finally, this is equivalent to the representation E[X] = ∫ from 0 to ∞ of (1 − F(x)) dx minus ∫ from −∞ to 0 of F(x) dx, also with convergent integrals. Infinite expected values Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes xᵢ = 2ⁱ, with associated probabilities pᵢ = 2⁻ⁱ, for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has E[X] = Σ xᵢpᵢ = 2·(1/2) + 4·(1/4) + 8·(1/8) + ··· = 1 + 1 + 1 + ··· . It is natural to say that the expected value equals +∞. There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by X⁺ = max(X, 0) and X⁻ = −min(X, 0).
These are nonnegative random variables, and it can be directly checked that X = X⁺ − X⁻. Since E[X⁺] and E[X⁻] are both then defined as either nonnegative numbers or +∞, it is then natural to define E[X] = E[X⁺] − E[X⁻], provided that E[X⁺] and E[X⁻] are not both infinite. According to this definition, E[X] exists and is finite if and only if E[X⁺] and E[X⁻] are both finite. Due to the formula |X| = X⁺ + X⁻, this is the case if and only if E|X| is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. In the case of the St. Petersburg paradox, one has X⁻ = 0 and so E[X] = +∞, as desired. Suppose the random variable X takes values 1, −2, 3, −4, ... with respective probabilities c/1², c/2², c/3², c/4², ..., where c = 6/π² is the normalizing constant. Then it follows that X⁺ takes value 2k−1 with probability c/(2k−1)² for each positive integer k, and takes value 0 with remaining probability. Similarly, X⁻ takes value 2k with probability c/(2k)² for each positive integer k and takes value 0 with remaining probability. Using the definition for non-negative random variables, one can show that both E[X⁺] = ∞ and E[X⁻] = ∞ (see Harmonic series). Hence, in this case the expectation of X is undefined. Similarly, the Cauchy distribution, as discussed above, has undefined expectation. Expected values of common distributions The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references. Properties The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like X ≥ 0 is true almost surely, when the probability measure attributes zero-mass to the complementary event. Non-negativity: If X ≥ 0 (a.s.), then E[X] ≥ 0. Linearity of expectation: The expected value operator (or expectation operator) E is linear in the sense that, for any random variables X and Y and a constant a, E[X + Y] = E[X] + E[Y] and E[aX] = a E[X], whenever the right-hand side is well-defined. By induction, this means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. Symbolically, for N random variables Xᵢ and constants aᵢ (1 ≤ i ≤ N), we have E[Σ aᵢXᵢ] = Σ aᵢE[Xᵢ]. If we think of the set of random variables with finite expected value as forming a vector space, then the linearity of expectation implies that the expected value is a linear form on this vector space. Monotonicity: If X ≤ Y (a.s.), and both E[X] and E[Y] exist, then E[X] ≤ E[Y]. Proof follows from the linearity and the non-negativity property applied to Z = Y − X, since Z ≥ 0 (a.s.). Non-degeneracy: If E[|X|] = 0, then X = 0 (a.s.). If X = Y (a.s.), then E[X] = E[Y]. In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y. If X = c (a.s.) for some real number c, then E[X] = c. In particular, for a random variable X with well-defined expectation, E[E[X]] = E[X]. A well defined expectation implies that there is one number, or rather, one constant that defines the expected value. It thus follows that the expectation of this constant is just the original expected value.
As a consequence of the formula |X| = X⁺ + X⁻ as discussed above, together with the triangle inequality, it follows that for any random variable X with well-defined expectation, one has |E[X]| ≤ E|X|. Let 1A denote the indicator function of an event A; then E[1A] is given by the probability P(A) of A. This is nothing but a different way of stating the expectation of a Bernoulli random variable, as calculated in the table above. Formulas in terms of CDF: If F(x) is the cumulative distribution function of a random variable X, then E[X] = ∫ x dF(x), where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue–Stieltjes. As a consequence of integration by parts as applied to this representation of E[X], it can be proved that E[X] = ∫ from 0 to ∞ of (1 − F(x)) dx minus ∫ from −∞ to 0 of F(x) dx, with the integrals taken in the sense of Lebesgue. As a special case, for any random variable X valued in the nonnegative integers {0, 1, 2, ...}, one has E[X] = Σ P(X > n) summed over n = 0, 1, 2, ..., where P denotes the underlying probability measure. Non-multiplicativity: In general, the expected value is not multiplicative, i.e. E[XY] is not necessarily equal to E[X]·E[Y]. If X and Y are independent, then one can show that E[XY] = E[X]E[Y]. If the random variables are dependent, then generally E[XY] ≠ E[X]E[Y], although in special cases of dependency the equality may hold. Law of the unconscious statistician: The expected value of a measurable function g of X, given that X has a probability density function f(x), is given by the inner product of f and g: E[g(X)] = ∫ g(x) f(x) dx. This formula also holds in the multidimensional case, when g is a function of several random variables, and f is their joint density. Inequalities Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that P(X ≥ a) ≤ E[X]/a. If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable (X − E[X])² to obtain Chebyshev's inequality P(|X − E[X]| ≥ a) ≤ Var[X]/a², where Var is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted die, Chebyshev's inequality says that the odds of rolling between 1 and 6 are at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables. The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory. Jensen's inequality: Let f be a convex function and X a random variable with finite expectation. Then f(E(X)) ≤ E(f(X)). Part of the assertion is that the negative part of f(X) has finite expectation, so that the right-hand side is well-defined (possibly infinite). Convexity of f can be phrased as saying that the output of the weighted average of two inputs under-estimates the same weighted average of the two outputs; Jensen's inequality extends this to the setting of completely general weighted averages, as represented by the expectation. In the special case that f(x) = |x|^(q/p) for positive numbers p < q, one obtains the Lyapunov inequality (E|X|^p)^(1/p) ≤ (E|X|^q)^(1/q). This can also be proved by the Hölder inequality. In measure theory, this is particularly notable for proving the inclusion of Lq in Lp for p < q, in the special case of probability spaces.
Hölder's inequality: if p and q are numbers satisfying 1/p + 1/q = 1, then E|XY| ≤ (E|X|^p)^(1/p) (E|Y|^q)^(1/q) for any random variables X and Y. The special case p = q = 2 is called the Cauchy–Schwarz inequality, and is particularly well-known. Minkowski inequality: given any number p ≥ 1, for any random variables X and Y with E|X|^p and E|Y|^p both finite, it follows that E|X + Y|^p is also finite and (E|X + Y|^p)^(1/p) ≤ (E|X|^p)^(1/p) + (E|Y|^p)^(1/p). The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces. Expectations under convergence of random variables In general, it is not the case that E[Xₙ] → E[X] even if Xₙ → X pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let U be a random variable distributed uniformly on [0, 1]. For n ≥ 1, define a sequence of random variables Xₙ = n·1{U ∈ (0, 1/n)}, with 1{A} being the indicator function of the event A. Then, it follows that Xₙ → 0 pointwise. But, E[Xₙ] = n·P(U ∈ (0, 1/n)) = n·(1/n) = 1 for each n. Hence, lim E[Xₙ] = 1 ≠ 0 = E[lim Xₙ]. Analogously, for a general sequence of random variables {Yₙ}, the expected value operator is not σ-additive, i.e. E[Σ Yₙ] need not equal Σ E[Yₙ]. An example is easily obtained by setting Y₀ = X₁ and Yₙ = Xₙ₊₁ − Xₙ for n ≥ 1, where Xₙ is as in the previous example. A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below. Monotone convergence theorem: Let {Xₙ} be a sequence of random variables, with 0 ≤ Xₙ ≤ Xₙ₊₁ (a.s.) for each n. Furthermore, let Xₙ → X pointwise. Then, the monotone convergence theorem states that lim E[Xₙ] = E[X]. Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let {Xᵢ} be non-negative random variables. It follows from the monotone convergence theorem that E[Σ Xᵢ] = Σ E[Xᵢ]. Fatou's lemma: Let {Xₙ} be a sequence of non-negative random variables. Fatou's lemma states that E[lim inf Xₙ] ≤ lim inf E[Xₙ]. Corollary. Let Xₙ ≥ 0 with E[Xₙ] ≤ C for all n. If Xₙ → X (a.s.), then E[X] ≤ C. Proof is by observing that X = lim inf Xₙ (a.s.) and applying Fatou's lemma. Dominated convergence theorem: Let {Xₙ} be a sequence of random variables. If Xₙ → X pointwise (a.s.), |Xₙ| ≤ Y (a.s.), and E[Y] < ∞, then, according to the dominated convergence theorem, E[Xₙ] → E[X]. Uniform integrability: In some cases, the equality lim E[Xₙ] = E[lim Xₙ] holds when the sequence {Xₙ} is uniformly integrable. Relationship with characteristic function The probability density function fX of a scalar random variable X is related to its characteristic function φX by the inversion formula: fX(x) = (1/2π) ∫ e^(−itx) φX(t) dt. For the expected value of g(X) (where g is a Borel function), we can use this inversion formula to obtain E[g(X)] = ∫ g(x) fX(x) dx. If E[g(X)] is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem, an expression for E[g(X)] in terms of φX and G, where G is the Fourier transform of g. The expression for E[g(X)] also follows directly from the Plancherel theorem. Uses and applications The expectation of a random variable plays an important role in a variety of contexts. In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter. For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise.
This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies. The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller. This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X ∈ A) = E[1{X ∈ A}], where 1{X ∈ A} is the indicator function of the set A. In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X]. Expected values can also be used to compute the variance, by means of the computational formula for the variance Var(X) = E[X²] − (E[X])²; a small numerical check of this identity and of the indicator identity above follows the reference list below. A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator Â operating on a quantum state vector |ψ⟩ is written as ⟨Â⟩ = ⟨ψ|Â|ψ⟩. The uncertainty in Â can be calculated by the formula (ΔA)² = ⟨Â²⟩ − ⟨Â⟩². See also Central tendency Conditional expectation Expectation (epistemic) Expectile – related to expectations in a way analogous to that in which quantiles are related to medians Law of total expectation – the expected value of the conditional expected value of X given Y is the same as the expected value of X Median Nonlinear expectation – a generalization of the expected value Population mean Predicted value Wald's equation – an equation for calculating the expected value of a random number of random variables References Bibliography Theory of probability distributions Gambling terminology Articles containing proofs
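The numerical check promised above: a minimal Python sketch, for a fair six-sided die (an assumed example, not from the sources), of the indicator identity P(A) = E[1A] and the computational formula for the variance.

```python
from fractions import Fraction

outcomes = range(1, 7)
p = Fraction(1, 6)                      # fair die: each face has probability 1/6

E  = sum(p * x for x in outcomes)       # E[X]   = 7/2
E2 = sum(p * x * x for x in outcomes)   # E[X^2] = 91/6

# Computational formula for the variance: Var(X) = E[X^2] - (E[X])^2.
var = E2 - E * E
print(var)                              # 35/12

# Indicator identity: P(X is even) = E[1_{X even}].
print(sum(p * (x % 2 == 0) for x in outcomes))  # 1/2
```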
Expected value
[ "Mathematics" ]
5,361
[ "Articles containing proofs" ]
9,656
https://en.wikipedia.org/wiki/Electric%20light
An electric light, lamp, or light bulb is an electrical component that produces light. It is the most common form of artificial lighting. Lamps usually have a base made of ceramic, metal, glass, or plastic which secures the lamp in the socket of a light fixture, which is often called a "lamp" as well. The electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount. The three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps, and LED lamps, which produce light by a flow of electrons across a band gap in a semiconductor. The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to many applications. Most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles. History Before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires. In 1799–1800, Alessandro Volta created the voltaic pile, the first electric battery. Current from these batteries could heat copper wire to incandescence. Vasily Vladimirovich Petrov developed the first persistent electric arc in 1802, and English chemist Humphry Davy gave a practical demonstration of an arc light in 1806. It took more than a century of continuous and incremental improvement, including numerous designs, patents, and resulting intellectual property disputes, to get from these early experiments to commercially produced incandescent light bulbs in the 1920s. In 1840, Warren de la Rue enclosed a platinum coil in a vacuum tube and passed an electric current through it, thus creating one of the world's first electric light bulbs. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although it was an efficient design, the cost of the platinum made it impractical for commercial use. William Greener, an English inventor, made significant contributions to early electric lighting with his lamp in 1846 (patent specification 11076), laying the groundwork for future innovations such as those by Thomas Edison. The late 1870s and 1880s were marked by intense competition and innovation, with inventors like Joseph Swan in the UK and Thomas Edison in the US independently developing functional incandescent lamps. Swan's bulbs, based on designs by William Staite, were successful, but the filaments were too thick. Edison worked to create bulbs with thinner filaments, leading to a better design. The rivalry between Swan and Edison eventually led to a merger, forming the Edison and Swan Electric Light Company. By the early twentieth century these had completely replaced arc lamps. The turn of the century saw further improvements in bulb longevity and efficiency, notably with the introduction of the tungsten filament by William D. 
Coolidge, who applied for a patent in 1912. This innovation became a standard for incandescent bulbs for many years. In 1910, Georges Claude introduced the first neon light, paving the way for neon signs which would become ubiquitous in advertising. In 1934, Arthur Compton, a renowned physicist and GE consultant, reported to the GE lamp department on successful experiments with fluorescent lighting at General Electric Co., Ltd. in Great Britain (unrelated to General Electric in the United States). Stimulated by this report, and with all of the key elements available, a team led by George E. Inman built a prototype fluorescent lamp in 1934 at General Electric's Nela Park (Ohio) engineering laboratory. This was not a trivial exercise; as noted by Arthur A. Bright, "A great deal of experimentation had to be done on lamp sizes and shapes, cathode construction, gas pressures of both argon and mercury vapor, colors of fluorescent powders, methods of attaching them to the inside of the tube, and other details of the lamp and its auxiliaries before the new device was ready for the public." The first practical LED arrived in 1962. U.S. transition to LED bulbs In the United States, incandescent light bulbs, including halogen bulbs, stopped being sold as of August 1, 2023, because they do not meet minimum lumens-per-watt performance metrics established by the U.S. Department of Energy. Compact fluorescent bulbs are also banned despite their lumens-per-watt performance, because of the toxic mercury that can be released into the home if they break, and because of widespread problems with proper disposal of mercury-containing bulbs. Types Incandescent In its modern form, the incandescent light bulb consists of a coiled filament of tungsten sealed in a globular glass chamber, either a vacuum or full of an inert gas such as argon. When an electric current is connected, the tungsten is heated white-hot and glows, emitting light that approximates a continuous spectrum. Incandescent bulbs are highly inefficient, in that just 2–5% of the energy consumed is emitted as visible, usable light. The remaining 95% is lost as heat. In warmer climates, the emitted heat must then be removed, putting additional pressure on ventilation or air conditioning systems. In colder weather, the heat byproduct has some value, and has been successfully harnessed for warming in devices such as heat lamps. Incandescent bulbs are nonetheless being phased out in favor of technologies like CFLs and LED bulbs in many countries due to their low energy efficiency. The European Commission estimated in 2012 that a complete ban on incandescent bulbs would contribute 5 to 10 billion euros to the economy and save 15 billion metric tonnes of carbon dioxide emissions. Halogen Halogen lamps are usually much smaller than standard incandescent lamps, because for successful operation a bulb temperature over 200 °C is generally necessary. For this reason, most have a bulb of fused silica (quartz) or aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, to reduce ultraviolet emission and to contain hot glass shards should the inner envelope explode during operation. Oily residue from fingerprints may cause a hot quartz envelope to shatter due to excessive heat buildup at the contamination site. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places, unless enclosed by the luminaire.
Those designed for 12- or 24-volt operation have compact filaments, useful for good optical control. Also, they have higher efficacies (lumens per watt) and longer lives than non-halogen types. The light output remains almost constant throughout their life. Fluorescent Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The inside of the tubes are coated with phosphors that give off visible light when struck by ultraviolet photons. They have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output. Fluorescent lamp fixtures are more costly than incandescent lamps, because they require a ballast to regulate the current through the lamp, but the lower energy cost typically offsets the higher initial cost. Compact fluorescent lamps are available in the same popular sizes as incandescent lamps and are used as an energy-saving alternative in homes. Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them. LED The solid-state light-emitting diode (LED) has been popular as an indicator light in consumer electronics and professional audio gear since the 1970s. In the 2000s, efficacy and output have risen to the point where LEDs are now being used in lighting applications such as car headlights and brake lights, in flashlights and bicycle lights, as well as in decorative applications, such as holiday lighting. Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively, and consequently have shorter lives. LED technology is useful for lighting designers, because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture. LED lifetime depends strongly on the temperature of the diode. Operating an LED lamp in conditions that increase the internal temperature can greatly shorten the lamp's life. Some lasers have been adapted as an alternative to LEDs to provide highly focused illumination. Carbon arc Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rod tips then separating them. The ensuing arc produces a white-hot plasma between the rod tips. These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output, they require ventilation when used indoors, and due to their intensity they need protection from direct sight. Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. It was used commercially beginning in the 1870s for large building and street lighting until it was superseded in the early 20th century by the incandescent light. 
Carbon arc lamps operate at high power and produce high intensity white light. They also are a point source of light. They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II. Discharge A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include neon, argon, xenon, sodium, metal halides, and mercury. The core operating principle is much the same as the carbon arc lamp, but the term "arc lamp" normally refers to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps. With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the electrical ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp. Some lamp types contain a small amount of neon, which permits striking at normal running voltage with no external ignition circuitry. Low-pressure sodium lamps operate this way. The simplest ballasts are just an inductor, and are chosen where cost is the deciding factor, such as street lighting. More advanced electronic ballasts may be designed to maintain constant light output over the life of the lamp, may drive the lamp with a square wave to maintain completely flicker-free output, and shut down in the event of certain faults. The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange-yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting applications. Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered, contrary to broadband or continuous spectra. Characteristics Form factor Many lamp units, or light bulbs, are specified in standardized shape codes and socket names. Incandescent bulbs and their retrofit replacements are often specified as "A19/A60 E26/E27", a common size for those kinds of light bulbs. In this example, the "A" parameters describe the bulb size and shape within the A-series light bulb while the "E" parameters describe the Edison screw base size and thread characteristics. Comparison parameters Common comparison parameters include: Luminous flux (in lumens) Energy consumption (in watts) Luminous efficacy (in lumens per watt) Color temperature (in kelvins) Less common parameters include the color rendering index (CRI). Life expectancy Life expectancy for many types of lamp is defined as the number of hours of operation at which 50% of them fail, that is, the median life of the lamps. Production tolerances as low as 1% can create a variance of 25% in lamp life, so in general some lamps will fail well before the rated life expectancy, and some will last much longer. For LEDs, lamp life is defined as the operation time at which 50% of lamps have experienced a 70% decrease in light output. In the 1920s, the Phoebus cartel formed in an attempt to reduce the life of electric light bulbs, an example of planned obsolescence. Some types of lamp are also sensitive to switching cycles.
Rooms with frequent switching, such as bathrooms, can expect much shorter lamp life than what is printed on the box. Compact fluorescent lamps are particularly sensitive to switching cycles. Uses The total amount of artificial light (especially from street light) is sufficient for cities to be easily visible at night from the air, and from space. External lighting grew at a rate of 3–6 percent over the latter half of the 20th century and is the major source of light pollution, which burdens astronomers and others, with 80% of the world's population living in areas with night-time light pollution. Light pollution has been shown to have a negative effect on some wildlife. Electric lamps can be used as heat sources, for example in incubators, as infrared lamps in fast food restaurants, and in toys such as the Kenner Easy-Bake Oven. Lamps can also be used for light therapy to deal with such issues as vitamin D deficiency, skin conditions such as acne and dermatitis, skin cancers, and seasonal affective disorder. Lamps which emit a specific frequency of blue light are also used to treat neonatal jaundice, a treatment initially undertaken in hospitals that can now be conducted at home. Electric lamps can also be used as a grow light to aid in plant growth, especially in indoor hydroponics and for aquatic plants, with recent research into the most effective types of light for plant growth. Due to their nonlinear resistance characteristics, tungsten filament lamps have long been used as fast-acting thermistors in electronic circuits. Popular uses have included: Stabilization of sine wave oscillators Protection of tweeters in loudspeaker enclosures; excess current that is too high for the tweeter illuminates the light rather than destroying the tweeter. Automatic volume control in telephones Cultural symbolism In Western culture, a lightbulb — in particular, the appearance of an illuminated lightbulb above a person's head — signifies sudden inspiration. A stylized depiction of a light bulb features as the logo of the Turkish AK Party. See also Flameless candle Light tube List of light sources References External links "Dark Sacred Night" (2023) is a short science film from the Princeton University Office of Sustainability about lighting obscuring the stars and affecting health and the environment. Light Lighting
Electric light
[ "Technology", "Engineering" ]
3,251
[ "Electrical engineering", "Electrical components", "Components" ]
9,660
https://en.wikipedia.org/wiki/Euler%27s%20sum%20of%20powers%20conjecture
In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k. The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case n = 2: if a₁^k + a₂^k = b^k, then 2 ≥ k. Although the conjecture holds for the case k = 3 (which follows from Fermat's Last Theorem for the third powers), it was disproved for k = 4 and k = 5. It is unknown whether the conjecture fails or holds for any value k ≥ 6. Background Euler was aware of the equality 59⁴ + 158⁴ = 133⁴ + 134⁴ involving sums of four fourth powers; this, however, is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem, as in Plato's number 216 or the taxicab number 1729. The general solution of the equation x₁³ + x₂³ = x₃³ + x₄³ is given by a known parametric family whose parameters range over the rational numbers. Counterexamples Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for k = 5. This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known: 27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵ (Lander & Parkin, 1966), with two further counterexamples found by Scher & Seidl (1996) and Frye (2004). In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the k = 4 case. His smallest counterexample was 2682440⁴ + 15365639⁴ + 18796760⁴ = 20615673⁴. A particular case of Elkies' solutions can be reduced to an identity defining an elliptic curve with a rational point on it. From this initial rational point, one can compute an infinite collection of others. Substituting into the identity and removing common factors gives the numerical example cited above. In 1988, Roger Frye found the smallest possible counterexample for k = 4, namely 95800⁴ + 217519⁴ + 414560⁴ = 422481⁴, by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000. (These counterexamples are verified by the short script following the references below.) Generalizations In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured that if a₁^k + ··· + aₙ^k = b₁^k + ··· + bₘ^k, where the aᵢ and bⱼ are positive integers for all 1 ≤ i ≤ n and 1 ≤ j ≤ m and the two sides are not identical, then m + n ≥ k. In the special case m = 1, the conjecture states that if a₁^k + ··· + aₙ^k = b^k (under the conditions given above) then n ≥ k − 1. The special case may be described as the problem of giving a partition of a perfect power into few like powers. For several small values of k and n, there are many known solutions. Some of these are listed below; see the external links below for more data. 3³ + 4³ + 5³ = 6³ (Plato's number 216). This is the case a = 1, b = 0 of Srinivasa Ramanujan's formula (3a² + 5ab − 5b²)³ + (4a² − 4ab + 6b²)³ + (5a² − 5ab − 3b²)³ = (6a² − 4ab + 4b²)³. A cube as the sum of three cubes can also be parameterized in one of two ways. The number 2,100,000³ can be expressed as the sum of three cubes in nine different ways. 95800⁴ + 217519⁴ + 414560⁴ = 422481⁴ (R. Frye, 1988); 30⁴ + 120⁴ + 272⁴ + 315⁴ = 353⁴ (R. Norrie, smallest, 1911). 27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵ (Lander & Parkin, 1966); 19⁵ + 43⁵ + 46⁵ + 47⁵ + 67⁵ = 72⁵ (Lander, Parkin, Selfridge, smallest, 1967); 21⁵ + 23⁵ + 37⁵ + 79⁵ + 84⁵ = 94⁵ (Lander, Parkin, Selfridge, second smallest, 1967); 7⁵ + 43⁵ + 57⁵ + 80⁵ + 100⁵ = 107⁵ (Sastry, 1934, third smallest). It has been known since 2002 that there are no solutions for k = 6 whose final term is ≤ 730000. (M. Dodrill, 1999). (S. Chase, 2000). See also Jacobi–Madden equation Prouhet–Tarry–Escott problem Beal's conjecture Pythagorean quadruple Generalized taxicab number Sums of powers, a list of related conjectures and theorems References External links Tito Piezas III, A Collection of Algebraic Identities Jaroslaw Wroblewski, Equal Sums of Like Powers Ed Pegg Jr., Math Games, Power Sums James Waldby, A Table of Fifth Powers equal to a Fifth Power (2009) R. Gerbicz, J.-C. Meyrignac, U.
Beckert, All solutions of the Diophantine equation a⁶ + b⁶ = c⁶ + d⁶ + e⁶ + f⁶ + g⁶ for a, b, c, d, e, f, g < 250000, found with a distributed BOINC project EulerNet: Computing Minimal Equal Sums Of Like Powers Euler's Conjecture at library.thinkquest.org A simple explanation of Euler's Conjecture at Maths Is Good For You! Diophantine equations Disproved conjectures Leonhard Euler
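The verification script promised above: exact integer arithmetic in Python confirms the counterexamples to Euler's conjecture quoted in this article (only identities stated above are checked; nothing new is asserted).

```python
# Exact integer checks of counterexamples to Euler's sum of powers conjecture.

# k = 5 (Lander & Parkin, 1966):
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# k = 4, smallest known (R. Frye, 1988):
assert 95800**4 + 217519**4 + 414560**4 == 422481**4

# k = 4 (N. Elkies, 1988):
assert 2682440**4 + 15365639**4 + 18796760**4 == 20615673**4

# k = 5, n = 5 partition of a fifth power (Lander, Parkin, Selfridge, 1967):
assert 19**5 + 43**5 + 46**5 + 47**5 + 67**5 == 72**5

print("all identities verified")
```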
Euler's sum of powers conjecture
[ "Mathematics" ]
983
[ "Diophantine equations", "Mathematical objects", "Equations", "Number theory" ]
9,670
https://en.wikipedia.org/wiki/Evolutionism
Evolutionism is a term used (often derogatorily) to denote the theory of evolution. Its exact meaning has changed over time as the study of evolution has progressed. In the 19th century, it was used to describe the belief that organisms deliberately improved themselves through progressive inherited change (orthogenesis). The teleological belief went on to include cultural evolution and social evolution. In the 1970s, the term "Neo-Evolutionism" was used to describe the idea that "human beings sought to preserve a familiar style of life unless change was forced on them by factors that were beyond their control." The term is most often used by creationists to describe adherence to the scientific consensus on evolution as equivalent to a secular religion. The term is very seldom used within the scientific community, since the scientific position on evolution is accepted by the overwhelming majority of scientists. Because evolutionary biology is the default scientific position, it is assumed that "scientists" or "biologists" are "evolutionists" unless specifically noted otherwise. In the creation–evolution controversy, creationists often call those who accept the validity of the modern evolutionary synthesis "evolutionists" and the theory itself "evolutionism". 19th-century teleological use Before its use to describe biological evolution, the term "evolution" was originally used to refer to any orderly sequence of events with the outcome somehow contained at the start. The first five editions of Darwin's On the Origin of Species used the word "evolved", but the word "evolution" was only used in its sixth edition in 1872. By then, Herbert Spencer had developed (in 1862) the theory that organisms strive to evolve due to an internal "driving force" (orthogenesis). Edward B. Tylor and Lewis H. Morgan brought the term "evolution" to anthropology, though they tended toward the older pre-Spencerian definition, helping to form the concept of unilineal (social) evolution used during the later part of what Trigger calls the Antiquarianism–Imperial Synthesis period (c. 1770 – c. 1900). The term evolutionism subsequently came to be used for the now discredited theory that evolution contained a deliberate component, rather than the selection of beneficial traits from random variation by differential survival. Modern use by creationists The term evolution is widely used, but the term evolutionism is not used in the scientific community to refer to evolutionary biology as it is redundant and anachronistic. However, the term has been used by creationists in discussing the creation–evolution controversy. For example, the Institute for Creation Research, in order to imply placement of evolution in the category of 'religions', including atheism, fascism, humanism and occultism, commonly uses the words evolutionism and evolutionist to describe the consensus of mainstream science and the scientists subscribing to it, thus implying through language that the issue is a matter of religious belief. The BioLogos Foundation, an organization that promotes the idea of theistic evolution, uses the term "evolutionism" to describe "the atheistic worldview that so often accompanies the acceptance of biological evolution in public discourse." It views this as a subset of scientism.
See also Alternatives to evolution by natural selection Darwinism Evolution as fact and theory Evidence of common descent Social Darwinism Notes References Carneiro, Robert, Evolutionism in Cultural Anthropology: A Critical History (on the applicability of this notion to the study of social evolution) Review of Buckland's Bridgewater Treatise, The Times Tuesday, November 15, 1836; pg. 3; Issue 16261; col E. ("annihilates the doctrine of spontaneous and progressive evolution of life, and its impious corollary, chance") Review of Charles Darwin's The Expression of the Emotions in Man and Animals The Times Friday, December 13, 1872; pg. 4; Issue 27559; col A. ("His [Darwin's] thorough-going 'evolutionism' tends to eliminate...") Ruse, Michael. 2003. Is Evolution a Secular Religion? Science 299:1523-1524 (concluding that evolutionary biology is not a religion in any sense but noting that several evolutionary biologists, such as Edward O. Wilson, in their roles as citizens concerned about getting the public to deal with reality, have made statements like "evolution is a myth that is now ready to take over Christianity"). Biological evolution Biology theories
Evolutionism
[ "Biology" ]
904
[ "Biology theories" ]
9,672
https://en.wikipedia.org/wiki/Entscheidungsproblem
In mathematics and computer science, the Entscheidungsproblem (German for "decision problem") is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. It asks for an algorithm that considers an inputted statement and answers "yes" or "no" according to whether it is universally valid, i.e., valid in every structure. Such an algorithm was proven to be impossible by Alonzo Church and Alan Turing in 1936. Completeness theorem By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced using logical rules and axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable using the rules of logic. In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis. History The origin of the Entscheidungsproblem goes back to Gottfried Leibniz, who in the seventeenth century, after having constructed a successful mechanical calculating machine, dreamt of building a machine that could manipulate symbols in order to determine the truth values of mathematical statements. He realized that the first step would have to be a clean formal language, and much of his subsequent work was directed toward that goal. In 1928, David Hilbert and Wilhelm Ackermann posed the question in the form outlined above. In continuation of his "program", Hilbert posed three questions at an international conference in 1928, the third of which became known as "Hilbert's Entscheidungsproblem". In 1929, Moses Schönfinkel published a paper on special cases of the decision problem that was prepared by Paul Bernays. As late as 1930, Hilbert believed that there would be no such thing as an unsolvable problem. Negative answer Before the question could be answered, the notion of "algorithm" had to be formally defined. This was done by Alonzo Church in 1935 with the concept of "effective calculability" based on his λ-calculus, and by Alan Turing the next year with his concept of Turing machines. Turing immediately recognized that these are equivalent models of computation. A negative answer to the Entscheidungsproblem was then given by Alonzo Church in 1935–36 (Church's theorem) and independently shortly thereafter by Alan Turing in 1936 (Turing's proof). Church proved that there is no computable function which decides, for two given λ-calculus expressions, whether they are equivalent or not. He relied heavily on earlier work by Stephen Kleene. Turing reduced the question of the existence of an 'algorithm' or 'general method' able to solve the Entscheidungsproblem to the question of the existence of a 'general method' which decides whether any given Turing machine halts or not (the halting problem). If 'algorithm' is understood as meaning a method that can be represented as a Turing machine, and with the answer to the latter question negative (in general), the question about the existence of an algorithm for the Entscheidungsproblem also must be negative (in general). In his 1936 paper, Turing says: "Corresponding to each computing machine M we construct a formula Un(M) and we show that, if there is a general method for determining whether Un(M) is provable, then there is a general method for determining whether M ever prints 0". 
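Turing's argument can be made concrete with a short sketch. Assume, purely for the sake of contradiction, a total decider halts(f, x) that reports whether f(x) eventually terminates; the diagonal construction below shows that no such function can exist, and by Turing's reduction a solution to the Entscheidungsproblem would yield exactly such a decider. The Python below is illustrative only; all names are ours.

```python
def halts(f, x):
    """Hypothetical total decider: True iff f(x) eventually terminates.
    Assumed only for the sake of contradiction; no such function exists."""
    raise NotImplementedError

def diagonal(f):
    # Do the opposite of whatever the decider predicts about f run on itself.
    if halts(f, f):
        while True:        # loop forever if f(f) is predicted to halt
            pass
    return "halted"        # halt immediately if f(f) is predicted to loop

# Consider diagonal(diagonal). If halts(diagonal, diagonal) returned True,
# the call would loop forever; if it returned False, the call would halt.
# Either answer is wrong, so no correct, total halts can exist, and hence
# no algorithm for the Entscheidungsproblem exists either.
```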
The work of both Church and Turing was heavily influenced by Kurt Gödel's earlier work on his incompleteness theorem, especially by the method of assigning numbers (a Gödel numbering) to logical formulas in order to reduce logic to arithmetic. The Entscheidungsproblem is related to Hilbert's tenth problem, which asks for an algorithm to decide whether Diophantine equations have a solution. The non-existence of such an algorithm, established by the work of Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam, with the final piece of the proof in 1970, also implies a negative answer to the Entscheidungsproblem. Generalizations Using the deduction theorem, the Entscheidungsproblem encompasses the more general problem of deciding whether a given first-order sentence is entailed by a given finite set of sentences, but validity in first-order theories with infinitely many axioms cannot be directly reduced to the Entscheidungsproblem. Such more general decision problems are of practical interest. Some first-order theories are algorithmically decidable; examples of this include Presburger arithmetic, real closed fields, and static type systems of many programming languages. On the other hand, the first-order theory of the natural numbers with addition and multiplication expressed by Peano's axioms cannot be decided with an algorithm. Fragments By default, the citations in this section are from Pratt-Hartmann (2023). The classical Entscheidungsproblem asks, given a first-order formula, whether it is true in all models. The finitary problem asks whether it is true in all finite models; Trakhtenbrot's theorem shows that this, too, is undecidable. Some notation: Sat(Φ) means the problem of deciding whether there exists a model for a set of logical formulas Φ; FinSat(Φ) is the same problem restricted to finite models. The Sat-problem for a logical fragment is called decidable if there exists a program that can decide, for each finite set Φ of logical formulas in the fragment, whether Sat(Φ) holds or not (a brute-force illustration of FinSat appears below). There is a hierarchy of decidabilities: at the top are the undecidable problems, below them the decidable ones, and the decidable problems can be further divided into a complexity hierarchy. Aristotelian and relational Aristotelian logic considers 4 kinds of sentences: "All p are q", "All p are not q", "Some p is q", "Some p is not q". These can be formalized as a fragment of first-order logic: ∀x(p(x) → ±q(x)) and ∃x(p(x) ∧ ±q(x)), where p and q are atomic predicates and ±q stands for either q or ¬q. Given a finite set of Aristotelian logic formulas, it is NLOGSPACE-complete to decide its Sat; deciding Sat for a slight extension is also NLOGSPACE-complete (Theorem 2.7). Relational logic extends Aristotelian logic by allowing a relational predicate; for example, "Everybody loves somebody" can be written as ∀x ∃y love(x, y). Generally there are 8 kinds of such sentences, and it is NLOGSPACE-complete to decide their Sat (Theorem 2.15). Relational logic can be extended to 32 kinds of sentences by allowing further constructs, but that extension is EXPTIME-complete (Theorem 2.24). Arity The first-order logic fragment with only two variable names is NEXPTIME-complete (Theorem 3.18). With three variables, FinSat is RE-complete and Sat is co-RE-complete (Theorem 3.15), thus both are undecidable. The monadic predicate calculus is the fragment in which each formula contains only 1-ary predicates and no function symbols; its Sat is NEXPTIME-complete (Theorem 3.22). Quantifier prefix Any first-order formula has a prenex normal form. For each possible quantifier prefix of the prenex normal form, we have a fragment of first-order logic. 
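As an aside before surveying the prefix classes, the Sat/FinSat notions above can be made concrete with a naive bounded model search. The Python sketch below decides finite satisfiability for the Aristotelian fragment by enumerating all interpretations over small domains; the encoding, function names, and size bound are ours and purely illustrative, and real decision procedures are far more efficient, as the complexity bounds in this section indicate.

```python
from itertools import product

# Sentences of the Aristotelian fragment, encoded as tuples:
#   ("all",  p, q, sign): for all x, p(x) implies [not] q(x)
#   ("some", p, q, sign): there exists x with p(x) and [not] q(x)
# where sign=True stands for q(x) and sign=False for not q(x).

def powerset(xs):
    xs = list(xs)
    for bits in range(1 << len(xs)):
        yield {x for i, x in enumerate(xs) if bits >> i & 1}

def holds(sentence, interp):
    kind, p, q, sign = sentence
    P, Q = interp[p], interp[q]
    if kind == "all":
        return all((x in Q) == sign for x in P)
    return any((x in Q) == sign for x in P)    # "some"

def finsat(sentences, max_size=3):
    """Search for a finite model with at most max_size elements.
    The bound is a demo value; monadic fragments have small-model
    properties that make bounded search complete for suitable bounds."""
    preds = sorted({s[i] for s in sentences for i in (1, 2)})
    for n in range(1, max_size + 1):
        for subsets in product(powerset(range(n)), repeat=len(preds)):
            interp = dict(zip(preds, subsets))
            if all(holds(s, interp) for s in sentences):
                return n, interp               # satisfiable: a model found
    return None                                # no model up to the bound

# "All humans are mortal" plus "some human is not mortal" is unsatisfiable:
print(finsat([("all", "human", "mortal", True),
              ("some", "human", "mortal", False)]))   # -> None
```

The quantifier-prefix classes surveyed next are classified by exactly this kind of satisfiability question.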
For example, the Bernays–Schönfinkel class is the class of first-order formulas with quantifier prefix ∃*∀*, equality symbols, and no function symbols. Turing's 1936 paper (p. 263) observed that, since the halting problem for each Turing machine is equivalent to a first-order logical formula of a particular quantifier-prefix form, satisfiability for the corresponding prefix class is undecidable. The precise boundaries are known sharply: for the minimal undecidable prefix classes, Sat is co-RE-complete and FinSat is RE-complete (Theorems 5.2 and 5.3). The Gödel–Kalmár–Schütte class ∃*∀²∃* (without equality) is decidable, proved independently by Gödel, Schütte, and Kalmár; with equality it is undecidable. Both Sat and FinSat for the Bernays–Schönfinkel class are NEXPTIME-complete (Theorem 5.1), which implies that the class is decidable, a result first published by Bernays and Schönfinkel. For the Ackermann class ∃*∀∃*, the associated decision problems are EXPTIME-complete (Section 5.4.1) and NEXPTIME-complete (Section 5.4.2), so the class is decidable, a result first published by Ackermann. Certain further subclasses have PSPACE-complete Sat and FinSat problems (Section 5.4.3). Börger et al. (2001) describe the level of computational complexity for every possible fragment with every possible combination of quantifier prefix, functional arity, predicate arity, and equality/no-equality. Practical decision procedures Having practical decision procedures for classes of logical formulas is of considerable interest for program verification and circuit verification. Pure Boolean logical formulas are usually decided using SAT-solving techniques based on the DPLL algorithm. For more general decision problems of first-order theories, conjunctive formulas over linear real or rational arithmetic can be decided using the simplex algorithm, and formulas in linear integer arithmetic (Presburger arithmetic) can be decided using Cooper's algorithm or William Pugh's Omega test. Formulas with negations, conjunctions and disjunctions combine the difficulties of satisfiability testing with those of deciding conjunctions; they are generally decided nowadays using SMT-solving techniques, which combine SAT-solving with decision procedures for conjunctions and propagation techniques. Real polynomial arithmetic, also known as the theory of real closed fields, is decidable; this is the Tarski–Seidenberg theorem, which has been implemented in computers by using the cylindrical algebraic decomposition. See also Automated theorem proving Hilbert's second problem Oracle machine Turing's proof Notes References Alonzo Church, "An unsolvable problem of elementary number theory", American Journal of Mathematics, 58 (1936), pp 345–363 Alonzo Church, "A note on the Entscheidungsproblem", Journal of Symbolic Logic, 1 (1936), pp 40–41. Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Series 2, 42 (1936–7), pp 230–265. Online versions: from journal website, from Turing Digital Archive, from abelard.org. Errata appeared in Series 2, 43 (1937), pp 544–546. Davis, Martin, "The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And Computable Functions", Raven Press, New York, 1965. Turing's paper is #3 in this volume. Papers include those by Gödel, Church, Rosser, Kleene, and Post. Biography of Alan M. Turing. Cf. the chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof. Soare, Robert I., "Computability and recursion", Bull. Symbolic Logic 2 (1996), no. 3, 284–321. Toulmin, Stephen, "Fall of a Genius", a book review of "Alan Turing: The Enigma by Andrew Hodges", in The New York Review of Books, 19 January 1984, p. 3ff. 
Whitehead, Alfred North; Russell, Bertrand, Principia Mathematica to *56, Cambridge at the University Press, 1962. Re: the problem of paradoxes, the authors discuss the problem that a set not be an object in any of its "determining functions", in particular "Introduction, Chap. 1 p. 24 "...difficulties which arise in formal logic", and Chap. 2.I. "The Vicious-Circle Principle" p. 37ff, and Chap. 2.VIII. "The Contradictions" p. 60 ff. External links Theory of computation Computability theory Gottfried Wilhelm Leibniz Mathematical logic Metatheorems Undecidable problems
Entscheidungsproblem
[ "Mathematics" ]
2,529
[ "Mathematical logic", "Computational problems", "Undecidable problems", "Computability theory", "Mathematical problems" ]
9,675
https://en.wikipedia.org/wiki/Ester
In chemistry, an ester is a compound derived from an acid (organic or inorganic) in which the hydrogen atom (H) of at least one acidic hydroxyl group (−OH) of that acid is replaced by an organyl group (R). These compounds contain a distinctive functional group. Analogues in which the oxygen is replaced by other chalcogens belong to the ester category as well. According to some authors, organyl derivatives of acidic hydrogen of other acids are esters as well (e.g. amides), but not according to the IUPAC. Glycerides are fatty acid esters of glycerol; they are important in biology, being one of the main classes of lipids and comprising the bulk of animal fats and vegetable oils. Lactones are cyclic carboxylic esters; naturally occurring lactones are mainly 5- and 6-membered ring lactones. Lactones contribute to the aroma of fruits, butter, cheese, vegetables like celery, and other foods. Esters can be formed from oxoacids (e.g. esters of acetic acid, carbonic acid, sulfuric acid, phosphoric acid, nitric acid, xanthic acid), but also from acids that do not contain oxygen (e.g. esters of thiocyanic acid and trithiocarbonic acid). An example of ester formation is the substitution reaction between a carboxylic acid (RCOOH) and an alcohol (R′OH), forming an ester (RCOOR′), where R stands for any group (typically hydrogen or organyl) and R′ stands for an organyl group. Organyl esters of carboxylic acids typically have a pleasant smell; those of low molecular weight are commonly used as fragrances and are found in essential oils and pheromones. They perform as high-grade solvents for a broad array of plastics, plasticizers, resins, and lacquers, and are one of the largest classes of synthetic lubricants on the commercial market. Polyesters are important plastics, with monomers linked by ester moieties. Esters of phosphoric acid form the backbone of DNA molecules. Esters of nitric acid, such as nitroglycerin, are known for their explosive properties. There are compounds in which an acidic hydrogen of the acids mentioned in this article is not replaced by an organyl but by some other group. According to some authors, those compounds are esters as well, especially when the first carbon atom of the organyl group replacing the acidic hydrogen is replaced by another atom from the group 14 elements (Si, Ge, Sn, Pb); for example, according to them, trimethylstannyl acetate (or trimethyltin acetate) is a trimethylstannyl ester of acetic acid, dibutyltin dilaurate is a dibutylstannylene ester of lauric acid, and the Phillips catalyst is a trimethoxysilyl ester of chromic acid. Nomenclature Etymology The word ester was coined in 1848 by the German chemist Leopold Gmelin, probably as a contraction of the German Essigäther, "acetic ether". IUPAC nomenclature The names of esters that are formed from an alcohol and an acid are derived from the parent alcohol and the parent acid, where the latter may be organic or inorganic. Esters derived from the simplest carboxylic acids are commonly named according to the more traditional, so-called "trivial names", e.g. as formate, acetate, propionate, and butyrate, as opposed to the IUPAC nomenclature methanoate, ethanoate, propanoate, and butanoate. Esters derived from more complex carboxylic acids are, on the other hand, more frequently named using the systematic IUPAC name, based on the name for the acid followed by the suffix -oate. For example, the ester hexyl octanoate, also known under the trivial name hexyl caprylate, has the formula CH3(CH2)6CO2(CH2)5CH3. 
The chemical formulas of organic esters formed from carboxylic acids and alcohols usually take the form RCO2R′ or RCOOR′, where R and R′ are the organyl parts of the carboxylic acid and the alcohol, respectively, and R can be a hydrogen in the case of esters of formic acid. For example, butyl acetate (systematically butyl ethanoate), derived from butanol and acetic acid (systematically ethanoic acid), would be written CH3CO2(CH2)3CH3. Alternative presentations are common, including BuOAc. Cyclic esters are called lactones, regardless of whether they are derived from an organic or inorganic acid. One example of an organic lactone is γ-valerolactone. Orthoesters An uncommon class of esters are the orthoesters. Among them are the esters of orthocarboxylic acids, which have the formula RC(OR′)3, where R stands for any group (organic or inorganic) and R′ stands for an organyl group. For example, triethyl orthoformate (HC(OC2H5)3) is derived, in terms of its name (but not its synthesis), from esterification of orthoformic acid (HC(OH)3) with ethanol. Esters of inorganic acids Esters can also be derived from inorganic acids. Perchloric acid forms perchlorate esters, e.g. methyl perchlorate. Sulfuric acid forms sulfate esters, e.g. dimethyl sulfate and methyl bisulfate. Nitric acid forms nitrate esters, e.g. methyl nitrate and nitroglycerin. Phosphoric acid forms phosphate esters, e.g. triphenyl phosphate and methyl dihydrogen phosphate. Pyrophosphoric (diphosphoric) acid forms pyrophosphate esters, e.g. tetraethyl pyrophosphate, ADP, dADP, ADPR, cADPR, CDP, dCDP, GDP, dGDP, UDP, dTDP, MEcPP, HMBPP, DMAPP, IPP, GPP, FPP, GGPP, ThDP, FAD, NAD, NADP. Triphosphoric acid forms triphosphate esters, e.g. ATP, dATP, CTP, dCTP, GTP, dGTP, UTP, dTTP, ITP, XTP, ThTP, AThTP. Carbonic acid forms carbonate esters, e.g. dimethyl carbonate and the 5-membered cyclic ethylene carbonate (if one classifies carbonic acid as an inorganic compound). Trithiocarbonic acid forms trithiocarbonate esters, e.g. dimethyl trithiocarbonate (if one classifies trithiocarbonic acid as an inorganic compound). Chloroformic acid forms chloroformate esters, e.g. methyl chloroformate (if one classifies chloroformic acid as an inorganic compound). Boric acid forms borate esters, e.g. trimethyl borate. Chromic acid forms di-tert-butyl chromate. Inorganic acids that exist as tautomers form two or more types of esters. Thiosulfuric acid forms two types of thiosulfate esters, e.g. O,O-dimethyl thiosulfate and O,S-dimethyl thiosulfate. Thiocyanic acid forms thiocyanate esters, e.g. methyl thiocyanate (if one classifies thiocyanic acid as an inorganic compound), but forms isothiocyanate "esters" as well, e.g. methyl isothiocyanate, although organyl isothiocyanates are not classified as esters by the IUPAC. Phosphorous acid forms two types of esters: phosphite esters, e.g. triethyl phosphite, and phosphonate esters, e.g. diethyl phosphonate. Some inorganic acids that are unstable or elusive form stable esters. Sulfurous acid, which is unstable, forms stable dimethyl sulfite. Dicarbonic acid, which is unstable, forms stable dimethyl dicarbonate. In principle, some metal and metalloid alkoxides, of which many hundreds are known, could be classified as esters of the corresponding acids (e.g. 
aluminium triethoxide (Al(OC2H5)3) could be classified as an ester of aluminic acid, which is aluminium hydroxide; tetraethyl orthosilicate (Si(OC2H5)4) could be classified as an ester of orthosilicic acid; and titanium ethoxide (Ti(OC2H5)4) could be classified as an ester of orthotitanic acid). Structure and bonding Esters derived from carboxylic acids and alcohols contain a carbonyl group C=O, a divalent group at the carbon atom that gives rise to C–C–O and O–C–O angles of about 120°. Unlike amides, carboxylic acid esters are structurally flexible functional groups because rotation about the C–O–C bonds has a low barrier. Their flexibility and low polarity are manifested in their physical properties; they tend to be less rigid (lower melting point) and more volatile (lower boiling point) than the corresponding amides. The pKa of the alpha-hydrogens on esters of carboxylic acids is around 25 (an alpha-hydrogen is a hydrogen bound to the carbon adjacent to the carbonyl group (C=O) of carboxylate esters). Many carboxylic acid esters have the potential for conformational isomerism, but they tend to adopt an s-cis (or Z) conformation rather than the s-trans (or E) alternative, due to a combination of hyperconjugation and dipole minimization effects. The preference for the Z conformation is influenced by the nature of the substituents and solvent, if present. Lactones with small rings are restricted to the s-trans (i.e. E) conformation due to their cyclic structure. Physical properties and characterization Esters derived from carboxylic acids and alcohols are more polar than ethers but less polar than alcohols. They participate in hydrogen bonds as hydrogen-bond acceptors, but cannot act as hydrogen-bond donors, unlike their parent alcohols. This ability to participate in hydrogen bonding confers some water-solubility. Because of their lack of hydrogen-bond-donating ability, esters do not self-associate. Consequently, esters are more volatile than carboxylic acids of similar molecular weight. Characterization and analysis Esters are generally identified by gas chromatography, taking advantage of their volatility. IR spectra for esters feature an intense sharp band in the range 1730–1750 cm−1 assigned to νC=O. This peak changes depending on the functional groups attached to the carbonyl. For example, a benzene ring or double bond in conjugation with the carbonyl will bring the wavenumber down by about 30 cm−1. Applications and occurrence Esters are widespread in nature and are widely used in industry. In nature, fats are, in general, triesters derived from glycerol and fatty acids. Esters are responsible for the aroma of many fruits, including apples, durians, pears, bananas, pineapples, and strawberries. Several billion kilograms of polyesters are produced industrially annually, important products being polyethylene terephthalate, acrylate esters, and cellulose acetate. Preparation Esterification is the general name for a chemical reaction in which two reactants (typically an alcohol and an acid) form an ester as the reaction product. Esters are common in organic chemistry and biological materials, and often have a pleasant characteristic, fruity odor. This leads to their extensive use in the fragrance and flavor industry. Ester bonds are also found in many polymers. 
Esterification of carboxylic acids with alcohols The classic synthesis is the Fischer esterification, which involves treating a carboxylic acid with an alcohol in the presence of a dehydrating agent: RCO2H + R′OH ⇌ RCO2R′ + H2O. The equilibrium constant for such reactions is about 5 for typical esters, e.g., ethyl acetate. The reaction is slow in the absence of a catalyst. Sulfuric acid is a typical catalyst for this reaction. Many other acids are also used, such as polymeric sulfonic acids. Since esterification is highly reversible, the yield of the ester can be improved using Le Chatelier's principle (a numerical illustration follows at the end of this section): Using the alcohol in large excess (i.e., as a solvent). Using a dehydrating agent: sulfuric acid not only catalyzes the reaction but sequesters water (a reaction product). Other drying agents such as molecular sieves are also effective. Removal of water by physical means such as distillation as a low-boiling azeotrope with toluene, in conjunction with a Dean–Stark apparatus. Reagents are known that drive the dehydration of mixtures of alcohols and carboxylic acids. One example is the Steglich esterification, which is a method of forming esters under mild conditions. The method is popular in peptide synthesis, where the substrates are sensitive to harsh conditions like high heat. DCC (dicyclohexylcarbodiimide) is used to activate the carboxylic acid to further reaction. 4-Dimethylaminopyridine (DMAP) is used as an acyl-transfer catalyst. Another method for the dehydration of mixtures of alcohols and carboxylic acids is the Mitsunobu reaction. Carboxylic acids can also be esterified using diazomethane: RCO2H + CH2N2 → RCO2CH3 + N2. With diazomethane, mixtures of carboxylic acids can be converted to their methyl esters in near quantitative yields, e.g., for analysis by gas chromatography. The method is useful in specialized organic synthetic operations but is considered too hazardous and expensive for large-scale applications. Esterification of carboxylic acids with epoxides Carboxylic acids are esterified by treatment with epoxides, giving β-hydroxyesters. This reaction is employed in the production of vinyl ester resin from acrylic acid. Alcoholysis of acyl chlorides and acid anhydrides Alcohols react with acyl chlorides and acid anhydrides to give esters. The reactions are irreversible, simplifying work-up. Since acyl chlorides and acid anhydrides also react with water, anhydrous conditions are preferred. The analogous acylations of amines to give amides are less sensitive because amines are stronger nucleophiles and react more rapidly than does water. This method is employed only for laboratory-scale procedures, as it is expensive. Alkylation of carboxylic acids and their salts Trimethyloxonium tetrafluoroborate can be used for esterification of carboxylic acids under conditions where acid-catalyzed reactions are infeasible. Although rarely employed for esterifications, carboxylate salts (often generated in situ) react with electrophilic alkylating agents, such as alkyl halides, to give esters. Anion availability can inhibit this reaction, which correspondingly benefits from phase transfer catalysts or such highly polar aprotic solvents as DMF. An additional iodide salt may, via the Finkelstein reaction, catalyze the reaction of a recalcitrant alkyl halide. Alternatively, salts of a coordinating metal, such as silver, may improve the reaction rate by easing halide elimination. 
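The Le Chatelier argument above can be made quantitative. For the idealized equilibrium RCO2H + R′OH ⇌ RCO2R′ + H2O with K ≈ 5, starting from 1 mol of acid and b0 mol of alcohol, the equilibrium conversion x satisfies K = x² / ((1 − x)(b0 − x)). The Python sketch below, with purely illustrative numbers and function names of our choosing, solves this by bisection.

```python
# Equilibrium conversion for acid + alcohol <-> ester + water, K ~ 5.
# We solve K = x^2 / ((a0 - x)(b0 - x)) for x by bisection.

def conversion(K, a0, b0):
    lo, hi = 0.0, min(a0, b0)
    for _ in range(100):          # the residual is increasing in x on [lo, hi]
        x = (lo + hi) / 2
        if x * x > K * (a0 - x) * (b0 - x):
            hi = x                # too much ester: the root lies below x
        else:
            lo = x                # equilibrium not yet reached: root above x
    return x / a0                 # fraction of the acid esterified

for b0 in (1, 2, 5, 10):
    print(f"alcohol:acid = {b0:2d}:1 -> {conversion(5.0, 1.0, b0):.0%} conversion")
```

At 1:1 stoichiometry the computed conversion is about 69%; with a tenfold excess of alcohol it rises to roughly 98%, which is the quantitative content of using the alcohol as the solvent.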
Transesterification Transesterification, which involves changing one ester into another, is widely practiced. Like hydrolysis, transesterification is catalyzed by acids and bases. The reaction is widely used for degrading triglycerides, e.g. in the production of fatty acid esters and alcohols. Poly(ethylene terephthalate) is produced by the transesterification of dimethyl terephthalate and ethylene glycol. A subset of transesterification is the alcoholysis of diketene; this reaction affords acetoacetate (β-keto) esters. Carbonylation Alkenes undergo carboalkoxylation in the presence of metal carbonyl catalysts. Esters of propanoic acid are produced commercially by this method; a preparation of methyl propionate is one illustrative example. The carbonylation of methanol yields methyl formate, which is the main commercial source of formic acid. The reaction is catalyzed by sodium methoxide. Addition of carboxylic acids to alkenes and alkynes In hydroesterification, alkenes and alkynes insert into the O−H bond of carboxylic acids. Vinyl acetate is produced industrially by the addition of acetic acid to acetylene in the presence of zinc acetate catalysts. Vinyl acetate can also be produced by palladium-catalyzed reaction of ethylene, acetic acid, and oxygen. Silicotungstic acid is used to manufacture ethyl acetate by the alkylation of acetic acid by ethylene. From aldehydes The Tishchenko reaction involves disproportionation of an aldehyde in the presence of an anhydrous base to give an ester. Catalysts are aluminium alkoxides or sodium alkoxides. Benzaldehyde reacts with sodium benzyloxide (generated from sodium and benzyl alcohol) to generate benzyl benzoate. The method is used in the production of ethyl acetate from acetaldehyde. Other methods Favorskii rearrangement of α-haloketones in presence of base Baeyer–Villiger oxidation of ketones with peroxides Pinner reaction of nitriles with an alcohol Nucleophilic abstraction of a metal–acyl complex Hydrolysis of orthoesters in aqueous acid Cellulolysis via esterification Ozonolysis of alkenes using a work-up in the presence of hydrochloric acid and various alcohols Anodic oxidation of methyl ketones leading to methyl esters Interesterification, which exchanges the fatty acid groups of different esters Reactions Esters are less reactive than acid halides and anhydrides. As with more reactive acyl derivatives, they can react with ammonia and primary and secondary amines to give amides, although this type of reaction is not often used, since acid halides give better yields. Transesterification Esters can be converted to other esters in a process known as transesterification. Transesterification can be either acid- or base-catalyzed, and involves the reaction of an ester with an alcohol. Because the leaving group is also an alcohol, however, the forward and reverse reactions will often occur at similar rates. Using a large excess of the reactant alcohol or removing the leaving-group alcohol (e.g. via distillation) will drive the forward reaction towards completion, in accordance with Le Chatelier's principle. Hydrolysis and saponification Acid-catalyzed hydrolysis of esters is also an equilibrium process – essentially the reverse of the Fischer esterification reaction. Because an alcohol (which acts as the leaving group) and water (which acts as the nucleophile) have similar pKa values, the forward and reverse reactions compete with each other. 
As in transesterification, using a large excess of reactant (water) or removing one of the products (the alcohol) can promote the forward reaction. Basic hydrolysis of esters, known as saponification, is not an equilibrium process; a full equivalent of base is consumed in the reaction, which produces one equivalent of alcohol and one equivalent of a carboxylate salt (a worked stoichiometry example appears at the end of this section). The saponification of esters of fatty acids is an industrially important process, used in the production of soap. Esterification is a reversible reaction. Esters undergo hydrolysis under acidic and basic conditions. Under acidic conditions, the reaction is the reverse of the Fischer esterification. Under basic conditions, hydroxide acts as a nucleophile, while an alkoxide is the leaving group. This reaction, saponification, is the basis of soap making. The alkoxide group may also be displaced by stronger nucleophiles such as ammonia or primary or secondary amines to give amides (ammonolysis reaction). This reaction is not usually reversible. Hydrazines and hydroxylamine can be used in place of amines. Esters can be converted to isocyanates through intermediate hydroxamic acids in the Lossen rearrangement. Sources of carbon nucleophiles, e.g., Grignard reagents and organolithium compounds, add readily to the carbonyl. Reduction Compared to ketones and aldehydes, esters are relatively resistant to reduction. The introduction of catalytic hydrogenation in the early part of the 20th century was a breakthrough; esters of fatty acids are hydrogenated to fatty alcohols. A typical catalyst is copper chromite. Prior to the development of catalytic hydrogenation, esters were reduced on a large scale using the Bouveault–Blanc reduction. This method, which is largely obsolete, uses sodium in the presence of proton sources. Especially for fine chemical syntheses, lithium aluminium hydride is used to reduce esters to two primary alcohols. The related reagent sodium borohydride is slow in this reaction. DIBAH reduces esters to aldehydes. Direct reduction to give the corresponding ether is difficult, as the intermediate hemiacetal tends to decompose to give an alcohol and an aldehyde (which is rapidly reduced to give a second alcohol). The reaction can be achieved using triethylsilane with a variety of Lewis acids. Claisen condensation and related reactions Esters can undergo a variety of reactions with carbon nucleophiles. They react with an excess of a Grignard reagent to give tertiary alcohols. Esters also react readily with enolates. In the Claisen condensation, an enolate of one ester (1) will attack the carbonyl group of another ester (2) to give tetrahedral intermediate 3. The intermediate collapses, forcing out an alkoxide (R'O−) and producing β-keto ester 4. Crossed Claisen condensations, in which the enolate and nucleophile are different esters, are also possible. An intramolecular Claisen condensation is called a Dieckmann condensation or Dieckmann cyclization, since it can be used to form rings. Esters can also undergo condensations with ketone and aldehyde enolates to give β-dicarbonyl compounds. A specific example of this is the Baker–Venkataraman rearrangement, in which an aromatic ortho-acyloxy ketone undergoes an intramolecular nucleophilic acyl substitution and subsequent rearrangement to form an aromatic β-diketone. The Chan rearrangement is another example of a rearrangement resulting from an intramolecular nucleophilic acyl substitution reaction. 
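As a small worked example of the full-equivalent bookkeeping in saponification noted above, one mole of NaOH is consumed per mole of ester group, so the required mass of base follows directly from the molar masses. The sketch below uses ethyl acetate and standard molar masses; the function name is ours.

```python
# Saponification consumes one equivalent of base per ester group:
#   CH3COOC2H5 + NaOH -> CH3COONa + C2H5OH
M_ETHYL_ACETATE = 88.11   # g/mol
M_NAOH = 40.00            # g/mol

def naoh_for_ester(ester_mass_g, ester_molar_mass=M_ETHYL_ACETATE):
    moles_ester = ester_mass_g / ester_molar_mass
    return moles_ester * M_NAOH   # one equivalent of NaOH per mole of ester

print(f"{naoh_for_ester(100.0):.1f} g NaOH per 100 g ethyl acetate")  # -> 45.4 g
```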
Other ester reactivities Esters react with nucleophiles at the carbonyl carbon. The carbonyl is weakly electrophilic but is attacked by strong nucleophiles (amines, alkoxides, hydride sources, organolithium compounds, etc.). The C–H bonds adjacent to the carbonyl are weakly acidic but undergo deprotonation with strong bases. This process is the one that usually initiates condensation reactions. The carbonyl oxygen in esters is weakly basic, less so than the carbonyl oxygen in amides due to resonance donation of an electron pair from nitrogen in amides, but forms adducts. As with aldehydes, the hydrogen atoms on the carbon adjacent ("α to") the carboxyl group in esters are sufficiently acidic to undergo deprotonation, which in turn leads to a variety of useful reactions. Deprotonation requires relatively strong bases, such as alkoxides. Deprotonation gives a nucleophilic enolate, which can further react, e.g., in the Claisen condensation and its intramolecular equivalent, the Dieckmann condensation. This conversion is exploited in the malonic ester synthesis, wherein the diester of malonic acid reacts with an electrophile (e.g., alkyl halide), and is subsequently decarboxylated. Another variation is the Fráter–Seebach alkylation. Other reactions Esters can be directly converted to nitriles. Methyl esters are often susceptible to decarboxylation in the Krapcho decarboxylation. Phenyl esters react to give hydroxyarylketones in the Fries rearrangement. Specific esters are functionalized with an α-hydroxyl group in the Chan rearrangement. Esters with β-hydrogen atoms can be converted to alkenes in ester pyrolysis. Pairs of esters are coupled to give α-hydroxyketones in the acyloin condensation. Protecting groups As a class, esters serve as protecting groups for carboxylic acids. Protecting a carboxylic acid is useful in peptide synthesis, to prevent self-reactions of the bifunctional amino acids. Methyl and ethyl esters are commonly available for many amino acids; the t-butyl ester tends to be more expensive. However, t-butyl esters are particularly useful because, under strongly acidic conditions, the t-butyl esters undergo elimination to give the carboxylic acid and isobutylene, simplifying work-up. List of ester odorants Many esters have distinctive fruit-like odors, and many occur naturally in the essential oils of plants. This has also led to their common use in artificial flavorings and fragrances which aim to mimic those odors. See also List of esters Amide Thioamide Carboximidate Carbamate Xanthate Amidine Cyanate Thiocyanate Selenocyanate Tellurocyanate Polyester, plastics made of polymeric ester Oligoester, a polymeric ester made of a small number of ester monomers Polyolester, an ester that is a synthetic oil used in refrigeration compressors Thioester Transesterification Ether lipid, an ester that is a lipid and an ether Acylal Ortho ester, an ester of an ortho acid (e.g. esters of orthocarboxylic acids, orthocarbonic acid, orthosilicic acid, orthotelluric acid, orthophosphoric acid, orthoboric acid, ...) 
Depside, a polymeric ester, a type of polyphenolic compound composed of two or more monocyclic aromatic units linked by an ester group Depsipeptide, a type of ester that is a peptide in which one or more of its amide groups are replaced by the corresponding ester groups Glyceride, an ester of fatty acids and glycerol Lactone, a cyclic carboxylic ester Lactide, a type of lactone ester Vitamin C (ascorbic acid), a lactone ester, an essential nutrient for humans and other animals Phthalide, a type of lactone ester Coumarin, a type of lactone ester Macrolide, a class of natural esters that consist of a large macrocyclic lactone ring to which one or more deoxy sugars may be attached Formate Chloroformate References External links An introduction to esters Molecule of the month: Ethyl acetate and other esters Functional groups
Ester
[ "Chemistry" ]
6,043
[ "Organic compounds", "Esters", "Functional groups" ]
9,677
https://en.wikipedia.org/wiki/Endosymbiont
An endosymbiont or endobiont is an organism that lives within the body or cells of another organism. Typically the two organisms are in a mutualistic relationship. Examples are nitrogen-fixing bacteria (called rhizobia), which live in the root nodules of legumes, single-cell algae inside reef-building corals, and bacterial endosymbionts that provide essential nutrients to insects. Endosymbiosis played key roles in the development of eukaryotes and plants. Roughly 2.2 billion years ago, an archaeon absorbed a bacterium through phagocytosis; that bacterium eventually became the mitochondria that provide energy to almost all living eukaryotic cells. Approximately 1 billion years ago, some of those cells absorbed cyanobacteria that eventually became chloroplasts, organelles that produce energy from sunlight. Approximately 100 million years ago, a lineage of amoebae in the genus Paulinella independently engulfed a cyanobacterium that evolved to be functionally equivalent to traditional chloroplasts; these organelles are called chromatophores. Some 100 million years ago, UCYN-A, a nitrogen-fixing bacterium, became an endosymbiont of the marine alga Braarudosphaera bigelowii, eventually evolving into a nitroplast, which fixes nitrogen. Similarly, diatoms in the family Rhopalodiaceae have cyanobacterial endosymbionts, called spheroid bodies or diazoplasts, which have been proposed to be in the early stages of organelle evolution. Symbionts are either obligate (require their host to survive) or facultative (can survive independently). The most common examples of obligate endosymbiosis are mitochondria and chloroplasts; however, they do not reproduce via mitosis in tandem with their host cells. Instead, they replicate via binary fission, a replication process uncoupled from the host cells in which they reside. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their intermediate insect hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated by treatments that target their bacterial host. Etymology Endosymbiosis comes from the Greek: ἔνδον endon "within", σύν syn "together" and βίωσις biosis "living". Symbiogenesis Symbiogenesis theory holds that eukaryotes evolved via absorbing prokaryotes. Typically, one organism envelopes a bacterium and the two evolve a mutualistic relationship. The absorbed bacterium (the endosymbiont) eventually lives exclusively within the host's cells. This fits the observed pattern of organelle development. Typically the endosymbiont's genome shrinks, discarding genes whose roles are displaced by the host. For example, the Hodgkinia genome of Magicicada cicadas is much different from that of the prior free-standing bacterium. The cicada life cycle involves years of stasis underground. The symbiont produces many generations during this phase, experiencing little selection pressure, allowing their genomes to diversify. Selection is episodic (when the cicadas reproduce). The original Hodgkinia genome split into three much simpler endosymbionts, each encoding only a few genes, an instance of punctuated equilibrium producing distinct lineages. The host requires all three symbionts. Transmission Symbiont transmission is the process by which the host acquires its symbiont. Since symbionts are not produced by host cells, they must find their own way to reproduce and populate daughter cells as host cells divide. Horizontal, vertical, and mixed-mode (a hybrid of horizontal and vertical) transmission are the three paths for symbiont transfer. 
Horizontal Horizontal symbiont transfer (horizontal transmission) is a process where a host acquires a facultative symbiont from the environment or from another host. The rhizobia–legume symbiosis (a bacteria–plant endosymbiosis) is a prime example of this modality. The rhizobia–legume symbiotic relationship is important for processes such as the formation of root nodules. It starts with flavonoids released by the legume host, which cause the rhizobia species (the endosymbiont) to activate its Nod genes. These Nod genes generate lipooligosaccharide signals that the legume detects, leading to root nodule formation. This process feeds into others, such as nitrogen fixation in plants. The evolutionary advantage of such an interaction is that it allows genetic exchange between the organisms involved, increasing the propensity for novel functions, as seen in the plant–bacterium interaction (holobiont formation). Vertical Vertical transmission takes place when the symbiont moves directly from parent to offspring, as with pea aphid symbionts; in horizontal transmission, each generation acquires symbionts from the environment, an example being the nitrogen-fixing bacteria in certain plant roots. A third type is mixed-mode transmission, where symbionts move horizontally for some generations, after which they are acquired vertically. Wigglesworthia, a tsetse fly symbiont, is vertically transmitted (via mother's milk). In vertical transmission, the symbionts do not need to survive independently, often leading them to have a reduced genome. For instance, pea aphid symbionts have lost genes for essential molecules and rely on the host to supply them. In return, the symbionts synthesize essential amino acids for the aphid host. When a symbiont reaches this stage, it begins to resemble a cellular organelle, similar to mitochondria or chloroplasts. Such dependent hosts and symbionts form a holobiont. In the event of a bottleneck, a decrease in symbiont diversity could compromise host–symbiont interactions, as deleterious mutations accumulate. Hosts Invertebrates The best-studied examples of endosymbiosis are in invertebrates. These symbioses affect organisms with global impact, including Symbiodinium (in corals) and Wolbachia (in insects). Many insect agricultural pests and human disease vectors have intimate relationships with primary endosymbionts. Insects Scientists classify insect endosymbionts as primary or secondary. Primary endosymbionts (P-endosymbionts) have been associated with their insect hosts for millions of years (from ten to several hundred million years). They form obligate associations and display cospeciation with their insect hosts. Secondary endosymbionts are more recently associated with their hosts; they may be horizontally transferred, live in the hemolymph of the insects (not in specialized bacteriocytes, see below), and are not obligate. Primary Among primary endosymbionts of insects, the best-studied are the pea aphid (Acyrthosiphon pisum) and its endosymbiont Buchnera sp. APS, the tsetse fly Glossina morsitans morsitans and its endosymbiont Wigglesworthia glossinidia brevipalpis, and the endosymbiotic protists in lower termites. As with endosymbiosis in other insects, the symbiosis is obligate. Nutritionally-enhanced diets allow symbiont-free specimens to survive, but they are unhealthy, and at best survive only a few generations. 
In some insect groups, these endosymbionts live in specialized insect cells called bacteriocytes (also called mycetocytes), and are maternally transmitted, i.e. the mother transmits her endosymbionts to her offspring. In some cases, the bacteria are transmitted in the egg, as in Buchnera; in others like Wigglesworthia, they are transmitted via milk to the embryo. In termites, the endosymbionts reside within the hindguts and are transmitted through trophallaxis among colony members. Primary endosymbionts are thought to help the host either by providing essential nutrients or by metabolizing insect waste products into safer forms. For example, the putative primary role of Buchnera is to synthesize essential amino acids that the aphid cannot acquire from its diet of plant sap. The primary role of Wigglesworthia is to synthesize vitamins that the tsetse fly does not get from the blood that it eats. In lower termites, the endosymbiotic protists play a major role in the digestion of lignocellulosic materials that constitute the bulk of the termites' diet. Bacteria benefit from the reduced exposure to predators and competition from other bacterial species, the ample supply of nutrients, and the relative environmental stability inside the host. Primary endosymbionts of insects have among the smallest of known bacterial genomes and have lost many genes commonly found in closely related bacteria. One theory holds that some of these genes are not needed in the environment of the host insect cell. A complementary theory suggests that the relatively small numbers of bacteria inside each insect decrease the efficiency of natural selection in 'purging' deleterious mutations and small mutations from the population, resulting in a loss of genes over many millions of years. Research in which a parallel phylogeny of bacteria and insects was inferred supports the assumption that primary endosymbionts are transferred only vertically. Attacking obligate bacterial endosymbionts may present a way to control their hosts, many of which are pests or human disease carriers. For example, aphids are crop pests, and the tsetse fly carries Trypanosoma brucei, the organism that causes African sleeping sickness. Studying insect endosymbionts can aid understanding of the origins of symbioses in general, as a proxy for understanding endosymbiosis in other species. The best-studied ant endosymbionts are Blochmannia bacteria, which are the primary endosymbiont of Camponotus ants. In 2018 a new ant-associated symbiont, Candidatus Westeberhardia Cardiocondylae, was discovered in Cardiocondyla. It is reported to be a primary symbiont. Secondary The pea aphid (Acyrthosiphon pisum) contains at least three secondary endosymbionts, Hamiltonella defensa, Regiella insecticola, and Serratia symbiotica. Hamiltonella defensa defends its aphid host from parasitoid wasps. This symbiosis replaces lost elements of the insect's immune response. One of the best-understood defensive symbionts is the spiral bacterium Spiroplasma poulsonii. Spiroplasma sp. can be reproductive manipulators, but also defensive symbionts of Drosophila flies. In Drosophila neotestacea, S. poulsonii has spread across North America owing to its ability to defend its fly host against nematode parasites. This defence is mediated by toxins called "ribosome-inactivating proteins" that attack the molecular machinery of invading parasites. 
These toxins represent one of the first mechanistically understood examples of a defensive symbiosis between an insect endosymbiont and its host. Sodalis glossinidius is a secondary endosymbiont of tsetse flies that lives inter- and intracellularly in various host tissues, including the midgut and hemolymph. Phylogenetic studies do not report a correlation between the evolution of Sodalis and that of tsetse. Unlike Wigglesworthia, Sodalis has been cultured in vitro. Many other insects have secondary endosymbionts, such as Cardinium. Marine Extracellular endosymbionts are represented in all four extant classes of Echinodermata (Crinoidea, Ophiuroidea, Echinoidea, and Holothuroidea). Little is known of the nature of the association (mode of infection, transmission, metabolic requirements, etc.), but phylogenetic analysis indicates that these symbionts belong to the class Alphaproteobacteria, relating them to Rhizobium and Thiobacillus. Other studies indicate that these subcuticular bacteria may be both abundant within their hosts and widely distributed among the echinoderms. Some marine oligochaetes (e.g., Olavius algarvensis and Inanidrillus spp.) have obligate extracellular endosymbionts that fill the entire body of their host. These marine worms, which lack any digestive or excretory system (no gut, mouth, or nephridia), are nutritionally dependent on their symbiotic chemoautotrophic bacteria. The sea slug Elysia chlorotica's endosymbiont is the alga Vaucheria litorea; jellyfish of the genus Mastigias have a similar relationship with algae. Elysia chlorotica forms this relationship intracellularly with the alga's chloroplasts, which retain their photosynthetic capabilities and structures for several months after entering the slug's cells. Trichoplax has two bacterial endosymbionts: Ruthmannia lives inside the animal's digestive cells, while Grellia lives permanently inside the endoplasmic reticulum (ER), the first known symbiont to do so. Paracatenula is a flatworm that has lived in symbiosis with endosymbiotic bacteria for 500 million years; the bacteria produce numerous small, droplet-like vesicles that provide the host with needed nutrients. Dinoflagellates Dinoflagellate endosymbionts of the genus Symbiodinium, commonly known as zooxanthellae, are found in corals, mollusks (especially giant clams such as Tridacna), sponges, and the unicellular foraminifera. These endosymbionts capture sunlight and provide their hosts with energy, supporting carbonate deposition. Previously thought to be a single species, Symbiodinium has been shown by molecular phylogenetic evidence to be diverse. In some cases, the host requires a specific Symbiodinium clade. More often, however, the distribution is ecological, with symbionts switching among hosts with ease. When reefs become environmentally stressed, this distribution is related to the observed pattern of coral bleaching and recovery. Thus, the distribution of Symbiodinium on coral reefs and its role in coral bleaching is an important problem in coral reef ecology. Phytoplankton In marine environments, endosymbiont relationships are especially prevalent in oligotrophic or nutrient-poor regions of the ocean, like those of the North Atlantic. In such waters, cell growth of larger phytoplankton such as diatoms is limited by insufficient nitrate concentrations. Endosymbiotic bacteria fix nitrogen for their hosts and in turn receive organic carbon from photosynthesis. These symbioses play an important role in global carbon cycling. 
One known symbiosis, between the diatom Hemiaulus spp. and the cyanobacterium Richelia intracellularis, has been reported in North Atlantic, Mediterranean, and Pacific waters. Richelia is found within the diatom frustule of Hemiaulus spp. and has a reduced genome. A 2011 study measured nitrogen fixation by the cyanobacterial symbiont Richelia intracellularis at well above intracellular requirements, and found the cyanobacterium was likely fixing nitrogen for its host. Additionally, both host and symbiont cell growth were much greater than in free-living Richelia intracellularis or symbiont-free Hemiaulus spp. The Hemiaulus–Richelia symbiosis is not obligatory, especially in nitrogen-replete areas. Richelia intracellularis is also found in Rhizosolenia spp., a diatom found in oligotrophic oceans. Compared to the Hemiaulus host, the endosymbiosis with Rhizosolenia is much more consistent, and Richelia intracellularis is generally found in Rhizosolenia. Some Rhizosolenia are asymbiotic (occur without an endosymbiont); however, there appear to be mechanisms limiting the growth of these organisms in low-nutrient conditions. Cell division for both the diatom host and the cyanobacterial symbiont can be uncoupled, and the mechanisms for passing bacterial symbionts to daughter cells during cell division are still relatively unknown. Other endosymbioses with nitrogen fixers in open oceans include Calothrix in Chaetoceros spp. and UCYN-A in a prymnesiophyte microalga. The Chaetoceros–Calothrix endosymbiosis is hypothesized to be more recent, as the Calothrix genome is generally intact, while those of other species, such as the UCYN-A symbiont and Richelia, are reduced. This reduction in genome size occurs within nitrogen metabolism pathways, indicating that endosymbiont species generate nitrogen for their hosts while losing the ability to use this nitrogen independently. This reduction in endosymbiont genome size might be a step that occurred in the evolution of organelles (described above). Protists Mixotricha paradoxa is a protozoan that lacks mitochondria; however, spherical bacteria live inside the cell and serve the function of the mitochondria. Mixotricha also has three other species of symbionts that live on the surface of the cell. Paramecium bursaria, a species of ciliate, has a mutualistic symbiotic relationship with the green alga Zoochlorella, which lives in its cytoplasm. Platyophrya chlorelligera is a freshwater ciliate that harbors Chlorella that perform photosynthesis. Strombidium purpureum is a marine ciliate that uses endosymbiotic, purple, non-sulphur bacteria for anoxygenic photosynthesis. Paulinella chromatophora is a freshwater amoeboid that has a cyanobacterial endosymbiont. Many foraminifera are hosts to several types of algae, such as red algae, diatoms, dinoflagellates and chlorophyta. These endosymbionts can be transmitted vertically to the next generation via asexual reproduction of the host, but because the endosymbionts are larger than the foraminiferal gametes, the foraminifera need to acquire algae horizontally following sexual reproduction. Several species of radiolaria have photosynthetic symbionts. In some species the host digests algae to keep the population at a constant level. Hatena arenicola is a flagellate protist with a complicated feeding apparatus that feeds on other microbes. When it engulfs a green Nephroselmis alga, the feeding apparatus disappears and it becomes photosynthetic. 
During mitosis, the alga is transferred to only one of the daughter cells, while the other cell restarts the cycle. In 1966, biologist Kwang W. Jeon found that a lab strain of Amoeba proteus had been infected by bacteria that lived inside the cytoplasmic vacuoles. This infection killed almost all of the infected protists. After the equivalent of 40 host generations, the two organisms became mutually interdependent, and a genetic exchange between the prokaryotes and protists occurred. Vertebrates The spotted salamander (Ambystoma maculatum) lives in a relationship with the alga Oophila amblystomatis, which grows in its egg cases. Plants All vascular plants harbor endosymbionts, termed endophytes in this context. They include bacteria, fungi, viruses, protozoa and even microalgae. Endophytes aid in processes such as growth and development, nutrient uptake, and defense against biotic and abiotic stresses like drought, salinity, heat, and herbivores. Plant symbionts can be categorized into epiphytic, endophytic, and mycorrhizal. These relations can also be categorized as beneficial, mutualistic, neutral, and pathogenic. Microorganisms living as endosymbionts in plants can enhance their host's primary productivity either by producing or capturing important resources. These endosymbionts can also enhance plant productivity by producing toxic metabolites that aid plant defenses against herbivores. Plants are dependent on plastid or chloroplast organelles. The chloroplast is derived from a cyanobacterial primary endosymbiosis that began over one billion years ago. An oxygenic, photosynthetic free-living cyanobacterium was engulfed and kept by a heterotrophic protist and eventually evolved into the present intracellular organelle. Mycorrhizal endosymbionts, by contrast, are found only among the fungi. Typically, plant endosymbiosis studies focus on a single category or species to better understand the associated biological processes and functions. Fungal endophytes Fungal endophytes can be found in all plant tissues. Fungi living below the ground amidst plant roots are known as mycorrhizae, and are further categorized based on their location inside the root, with prefixes such as ecto, endo, arbuscular, ericoid, etc. Fungal endosymbionts that live in the roots and extend their extraradical hyphae into the outer rhizosphere are known as ectendosymbionts. Arbuscular Mycorrhizal Fungi (AMF) Arbuscular mycorrhizal fungi, or AMF, are the most diverse plant microbial endosymbionts. With exceptions such as the family Ericaceae, almost all vascular plants harbor AMF endosymbionts. AMF endosymbionts systematically colonize plant roots and help the plant host acquire soil nutrients such as nitrogen; in return, the fungus absorbs organic carbon products from the plant. Plant root exudates contain diverse secondary metabolites, especially flavonoids and strigolactones, which act as chemical signals and attract the AMF. The AMF Gigaspora margarita lives as a plant endosymbiont and also harbors intracytoplasmic bacterium-like organisms as endosymbionts of its own. AMF generally promote plant health and growth and alleviate abiotic stresses such as salinity, drought, heat, poor nutrition, and metal toxicity. Individual AMF species have different effects in different hosts – introducing the AMF of one plant to another plant can reduce the latter's growth. Endophytic fungi Endophytic fungi in mutualistic relations directly benefit their host plants and benefit from them in turn. 
They also can help their hosts succeed in polluted environments such as those contaminated with toxic metals. Fungal endophytes are taxonomically diverse and are divided into categories based on mode of transmission, biodiversity, in planta colonization and host plant type. Clavicipitaceous fungi systematically colonize temperate-season grasses. Non-clavicipitaceous fungi colonize higher plants and even roots, and are divided into subcategories. Endophytic fungi of the genera Aureobasidium and Preussia isolated from Boswellia sacra produce the hormone indole acetic acid to promote plant health and development. Aphids can be found on most plants. Carnivorous ladybirds are aphid predators and are used in pest control. The plant endophytic fungus Neotyphodium lolii produces alkaloid mycotoxins in response to aphid invasions. Ladybird predators feeding on such aphids exhibited reduced fertility and abnormal reproduction, suggesting that the mycotoxins are transmitted along the food chain and affect the predators. Endophytic bacteria Endophytic bacteria belong to a diverse group of plant endosymbionts characterized by systematic colonization of plant tissues. The most common genera include Pseudomonas, Bacillus, Acinetobacter, Actinobacteria, and Sphingomonas. Some endophytic bacteria, such as the seed-borne endophyte Bacillus amyloliquefaciens, promote plant growth by producing gibberellins, which are potent plant growth hormones; Bacillus amyloliquefaciens increases the height of transgenic dwarf rice plants. Some endophytic bacterial genera additionally belong to the family Enterobacteriaceae. Endophytic bacteria typically colonize the leaf tissues from plant roots, but can also enter the plant through the leaves via leaf stomata. Generally, endophytic bacteria are isolated from plant tissues by surface sterilization of the tissue in a sterile environment. Passenger endophytic bacteria eventually colonize the inner tissues of the plant through stochastic events, while true endophytes possess adaptive traits that allow them to live strictly in association with plants. The association of in vitro-cultivated endophytic bacteria with plants is considered a more intimate relationship that helps plants acclimatize to conditions and promotes health and growth. Endophytic bacteria are considered essential plant endosymbionts because virtually all plants harbor them, and these endosymbionts play essential roles in host survival. This endosymbiotic relation is important in terms of ecology, evolution and diversity. Endophytic bacteria such as Sphingomonas sp. and Serratia sp. that are isolated from arid-land plants regulate endogenous hormone content and promote growth. Archaea endosymbionts Archaea are members of most microbiomes. While archaea are abundant in extreme environments, they are less abundant and diverse in association with eukaryotic hosts. Nevertheless, archaea are a substantial constituent of plant-associated ecosystems in the aboveground and belowground phytobiome, and play a role in the host plant's health, growth and survival amid biotic and abiotic stresses. However, few studies have investigated the role of archaea in plant health and its symbiotic relationships. Most plant endosymbiosis studies focus on fungi or bacteria using metagenomic approaches. Archaea have been characterized in crop plants such as rice and maize, as well as in aquatic plants. 
The abundance of archaea varies by tissue type; for example, archaea are more abundant in the rhizosphere than in the phyllosphere and endosphere. This archaeal abundance is associated with plant species type, environment and the plant's developmental stage. In a study of plant genotype-specific archaeal and bacterial endophytes, archaeal sequences made up 35% of the overall sequences (detected using amplicon sequencing and verified by real-time PCR). The archaeal sequences belong to the phyla Thaumarchaeota, Crenarchaeota, and Euryarchaeota. Bacteria Some Betaproteobacteria have Gammaproteobacteria endosymbionts. Fungi Fungi host endohyphal bacteria; the effects of the bacteria are not well studied. Many such fungi in turn live within plants. These fungi are otherwise known as fungal endophytes. It is hypothesized that the fungi offer a safe haven for the bacteria, and that the diverse bacteria they attract create a micro-ecosystem. These interactions may impact the way that fungi interact with the environment by modulating their phenotypes. The bacteria do this by altering the fungi's gene expression. For example, Luteibacter sp. has been shown to naturally infect the ascomycetous endophyte Pestalotiopsis sp. isolated from Platycladus orientalis. The Luteibacter sp. influences the auxin and enzyme production within its host, which, in turn, may influence the effect the fungus has on its plant host. Another example of a bacterium living in symbiosis with a fungus involves the fungus Mortierella. This soil-dwelling fungus lives in close association with a toxin-producing bacterium, Mycoavidus, which helps the fungus defend against nematodes. Virus endosymbionts The Human Genome Project found several thousand endogenous retroviruses (endogenous viral elements in the genome that closely resemble, and can be derived from, retroviruses), organized into 24 families. See also References Endosymbiotic events Environmental microbiology Microbial population biology Symbiosis
Endosymbiont
[ "Biology", "Environmental_science" ]
6,093
[ "Behavior", "Symbiosis", "Biological interactions", "Endosymbiotic events", "Environmental microbiology" ]
9,678
https://en.wikipedia.org/wiki/Exponential%20function
In mathematics, the exponential function is the unique real function which maps zero to one and has a derivative equal to its value. The exponential of a variable is denoted or , with the two notations used interchangeably. It is called exponential because its argument can be seen as an exponent to which a constant number , the base, is raised. There are several other definitions of the exponential function, which are all equivalent although of very different natures. The exponential function converts sums to products: it maps the additive identity to the multiplicative identity , and the exponential of a sum is equal to the product of separate exponentials, . Its inverse function, the natural logarithm, or , converts products to sums: . Other functions of the general form , with base , are also commonly called exponential functions, and share the property of converting addition to multiplication, . Where these two meanings might be confused, the exponential function of base is occasionally called the natural exponential function, matching the name natural logarithm. The generalization of the standard exponent notation to arbitrary real numbers as exponents is usually formally defined in terms of the exponential and natural logarithm functions, as . The "natural" base is the unique base satisfying the criterion that the exponential function's derivative equals its value, , which simplifies definitions and eliminates extraneous constants when using exponential functions in calculus. Quantities which change over time in proportion to their value, for example the balance of a bank account bearing compound interest, the size of a bacterial population, the temperature of an object relative to its environment, or the amount of a radioactive substance, can be modeled using functions of the form , also sometimes called exponential functions; these quantities undergo exponential growth if is positive or exponential decay if is negative. The exponential function can be generalized to accept a complex number as its argument. This reveals a relation between the multiplication of complex numbers and rotation in the Euclidean plane, Euler's formula: the exponential of an imaginary number is a point on the complex unit circle at angle from the real axis. The identities of trigonometry can thus be translated into identities involving exponentials of imaginary quantities. The complex function is a conformal map from an infinite strip of the complex plane (which periodically repeats in the imaginary direction) onto the whole complex plane except for . The exponential function can be even further generalized to accept other types of arguments, such as matrices and elements of Lie algebras. Graph The graph of is upward-sloping, and increases faster as increases. The graph always lies above the -axis, but becomes arbitrarily close to it for large negative ; thus, the -axis is a horizontal asymptote. The equation means that the slope of the tangent to the graph at each point is equal to its height (its -coordinate) at that point. Definitions and fundamental properties There are several different definitions of the exponential function, which are all equivalent, although of very different natures. One of the simplest definitions is: The exponential function is the unique differentiable function that equals its derivative, and takes the value for the value of its variable. 
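As a minimal numerical sketch of this defining property (not part of the original article; the function name euler_exp and the step count are arbitrary choices), one can integrate f′ = f with f(0) = 1 by the forward Euler method and compare the result with the library exponential:

```python
import math

def euler_exp(x, steps=100_000):
    """Approximate exp(x) by integrating f'(t) = f(t), f(0) = 1
    with the forward Euler method."""
    h = x / steps          # step size
    f = 1.0                # initial condition f(0) = 1
    for _ in range(steps):
        f += h * f         # f(t + h) ≈ f(t) + h * f'(t) = f(t) + h * f(t)
    return f

print(euler_exp(1.0))      # ≈ 2.71827, approaching e as steps grows
print(math.exp(1.0))       # 2.718281828459045
```

Note that the update f += h * f amounts to computing (1 + x/n)ⁿ, so this sketch is also, in disguise, the limit definition discussed later in the article.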
This "conceptual" definition requires a uniqueness proof and an existence proof, but it allows an easy derivation of the main properties of the exponential function. Uniqueness: If and are two functions satisfying the above definition, then the derivative of is zero everywhere by the quotient rule. It follows that is constant, and this constant is since . The exponential function is the inverse function of the natural logarithm. The inverse function theorem implies that the natural logarithm has an inverse function that satisfies the above definition. This is a first proof of existence. Therefore, one has for every real number and every positive real number . The exponential function is the sum of a power series: where is the factorial of (the product of the first positive integers). This series is absolutely convergent for every per the ratio test. So, the derivative of the sum can be computed by term-by-term differentiation, and this shows that the sum of the series satisfies the above definition. This is a second existence proof, and shows, as a byproduct, that the exponential function is defined for every , and is everywhere the sum of its Maclaurin series. The exponential satisfies the functional equation: This results from the uniqueness and the fact that the function satisfies the above definition. It can be proved that a function that satisfies this functional equation is the exponential function if its derivative at is and the function is either continuous or monotonic. Positivity: For every , one has , since the functional equation implies . It follows that the exponential function is positive: since , if one had for some , the intermediate value theorem would imply the existence of some such that . It follows also that the exponential function is monotonically increasing. Extension of exponentiation to positive real bases: Let be a positive real number. The exponential function and the natural logarithm being inverses of each other, one has If is an integer, the functional equation of the logarithm implies Since the right-most expression is defined if is any real number, this allows defining for every positive real number and every real number : In particular, if is Euler's number one has (inverse function) and thus This shows the equivalence of the two notations for the exponential function. The exponential function is the limit where takes only integer values (otherwise, the exponentiation would require the exponential function to be defined). By continuity of the logarithm, this can be proved by taking logarithms and proving, for example, with Taylor's theorem. General exponential functions The term "exponential function" is sometimes used to refer to any function whose argument appears in an exponent, such as and . However, this name is commonly used for differentiable functions satisfying one of the following equivalent conditions: There exist some constants and such that for every value of . There exist some constants and such that for every value of . For every , the value of is independent of ; that is, for all , and . In words: pairs of arguments with the same difference are mapped into pairs of values with the same ratio. (G. Harnett, "What is the base of an exponential function?", Quora, 2020: "A (general) exponential function changes by the same factor over equal increments of the input. The factor of change over a unit increment is called the base.")
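A tiny sketch of the condition just quoted (illustrative only; the base b = 2 and the increment d are arbitrary choices, not from the article): equal increments of the argument always give the same ratio of values.

```python
# Equal increments of the argument give equal ratios of values, the
# defining property of a general exponential function f(x) = b**x.
f = lambda x: 2.0 ** x       # base b = 2 (arbitrary example)

d = 0.75                     # any fixed increment
for x in (0.0, 1.3, -2.4):
    print(f(x + d) / f(x))   # always 2**0.75 ≈ 1.6817928305074292
```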
A German textbook states the same property (Mathebibel, translated: "If, for an exponential function with base , the -values are each increased by a fixed amount, the function values are multiplied by a constant factor."). The value of is independent of . This constant value is sometimes called the rate constant of and denoted as ; it equals the constant of the second equivalent condition. (G.F. Simmons, Differential Equations and Historical Notes, 1st ed. 1972, p. 15; 3rd ed. 2016, p. 23: "The positive constant is called the rate constant, for its value is clearly a measure of the rate at which the reaction proceeds.") Its reciprocal, the constant value of , is, in some contexts, called the time constant of and denoted as (so, ). The value of is independent of and . This constant value equals the constant of the first equivalent condition and is called the base of the exponential function. Hierarchy of types Exponential functions with quantities as elements of domain and codomain, e.g. the lilies in a pond, growing by the same factor during time intervals of equal length. In applications in empirical sciences, notations with and are commonly used. Exponential growth can be modeled by a function with its doubling time. Exponential decay can be modeled by a function with its half-life. Exponential functions with domain ; see , below. Exponential functions obeying for all , (changing additions into multiplications; the opposite of the main property of logarithmic functions: changing multiplications into additions); equivalent to the condition . Usual form: Sometimes the value of is named the antilog of or the antilogarithm of . Exponential functions obeying (the function is identical with its own derivative). Usual form: The (unique) exponential function obeying as well as is called the exponential function; sometimes the natural exponential function or the natural antilogarithm. Symbol: . Usual form: or Two meanings of 'base' For exponential functions , to , the -independent value of is called the base of the function , while in expressions (...)(...) and (...)^(...) the value of the first element is called the base of the exponentiation. Example: the exponential function has base , while the expression has base (and exponent ). Properties - The Euler number is connected with every exponential function . When the argument increases by , changes by the factor . For . - Let be an arbitrary point on the graph, in Cartesian coordinates, of an exponential function with 'time constant' . Then the constant distance on the asymptote of the graph, between its intersections with the tangent in and the line through perpendicular to the asymptote, equals . - The graph of an exponential function in polar coordinates is a logarithmic spiral or equiangular spiral (Ch.-J. de la Vallée Poussin, Cours d'Analyse Infinitésimale, Tome I, 3rd edition, 1914, p. 363). More precisely: the graph in polar coordinates of an exponential function with rate constant is a logarithmic spiral with constant pitch angle (between the directions of the spiral and the polar circle, at an arbitrary point on the spiral). - In a logarithmic spiral with pitch angle 45°, the length of a radius vector increases by a factor when the polar angle increases by one radian, and by the factor at a 180° turn. See logarithmic spiral, §Properties, 'Rotating, scaling'. - An exponential function is determined by two 'points': with , positive, and , these determine the exponential function . 
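To make the base/rate-constant bookkeeping above concrete, here is a small sketch (not from the article; all numbers and names are arbitrary illustrations) relating a general exponential function f(x) = c·bˣ to its rate constant k = ln b, time constant 1/k, doubling time, and the corresponding half-life form for decay:

```python
import math

# Writing f(x) = c * b**x as c * exp(k * x) with rate constant k = ln(b).
c, b = 3.0, 1.5
k = math.log(b)                  # rate constant
tau = 1.0 / k                    # time constant (reciprocal of k)
doubling_time = math.log(2) / k  # f grows by a factor of 2 over this increment

f = lambda x: c * b ** x
g = lambda x: c * math.exp(k * x)
print(f(4.0), g(4.0))            # agree up to rounding: 15.1875

# Exponential decay with a given half-life, modeled the same way:
half_life = 5.0
decay = lambda t: c * math.exp(-math.log(2) / half_life * t)
print(decay(half_life) / c)      # 0.5, i.e. half the initial amount
```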
Overview The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number now known as . Later, in 1697, Johann Bernoulli studied the calculus of the exponential function. If a principal amount of 1 earns interest at an annual rate of compounded monthly, then the interest earned each month is times the current value, so each month the total value is multiplied by , and the value at the end of the year is . If instead interest is compounded daily, this becomes . Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, first given by Leonhard Euler. This is one of a number of characterizations of the exponential function; others involve series or differential equations. From any of these definitions it can be shown that is the reciprocal of . For example, from the differential equation definition, when and its derivative using the product rule is for all , so for all . From any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity. For example, from the power series definition, expanded by the binomial theorem, This justifies the exponential notation for . The derivative (rate of change) of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself is expressible in terms of the exponential function. This derivative property leads to exponential growth or exponential decay. The exponential function extends to an entire function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra. Derivatives and differential equations The importance of the exponential function in mathematics and the sciences stems mainly from its property as the unique function which is equal to its derivative and is equal to 1 when . That is, Functions of the form for constant are the only functions that are equal to their derivative (by the Picard–Lindelöf theorem). Other ways of saying the same thing include: The slope of the graph at any point is the height of the function at that point. The rate of increase of the function at is equal to the value of the function at . The function solves the differential equation . is a fixed point of the derivative, viewed as a linear operator on function space. If a variable's growth or decay rate is proportional to its size—as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay—then the variable can be written as a constant times an exponential function of time. More generally, for any real constant , a function satisfies if and only if for some constant . The constant k is called the decay constant, disintegration constant, rate constant, or transformation constant. Furthermore, for any differentiable function , we find, by the chain rule: Continued fractions for A continued fraction for can be obtained via an identity of Euler: The following generalized continued fraction for converges more quickly: or, by applying the substitution : with a special case for : This formula also converges, though more slowly, for . 
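Before the article's own numerical example below, the rapidly converging continued fraction mentioned here (its display is elided in this extract) can be evaluated directly. The sketch assumes the commonly cited form e^z = 1 + 2z/(2 − z + z²/(6 + z²/(10 + z²/(14 + ···)))); if the article intended a different expansion, the denominators in the code would need to change accordingly.

```python
import math

def exp_cf(z, depth=12):
    """Evaluate, bottom-up with a fixed truncation depth, the generalized
    continued fraction
    e**z = 1 + 2z / (2 - z + z**2 / (6 + z**2 / (10 + z**2 / (14 + ...)))).
    The partial denominators 6, 10, 14, ... are 4m + 2 for m = 1, 2, 3, ...
    """
    acc = 4.0 * depth + 2.0
    for m in range(depth - 1, 0, -1):
        acc = 4.0 * m + 2.0 + z * z / acc
    return 1.0 + 2.0 * z / (2.0 - z + z * z / acc)

print(exp_cf(1.0))    # ≈ 2.718281828459045 already at modest depth
print(math.exp(1.0))  # 2.718281828459045
```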
For example: Complex exponential As in the real case, the exponential function can be defined on the complex plane in several equivalent forms. The most common definition of the complex exponential function parallels the power series definition for real arguments, where the real variable is replaced by a complex one: Alternatively, the complex exponential function may be defined by modelling the limit definition for real arguments, but with the real variable replaced by a complex one: For the power series definition, term-wise multiplication of two copies of this power series in the Cauchy sense, permitted by Mertens' theorem, shows that the defining multiplicative property of exponential functions continues to hold for all complex arguments: The definition of the complex exponential function in turn leads to the appropriate definitions extending the trigonometric functions to complex arguments. In particular, when ( real), the series definition yields the expansion In this expansion, the rearrangement of the terms into real and imaginary parts is justified by the absolute convergence of the series. The real and imaginary parts of the above expression in fact correspond to the series expansions of and , respectively. This correspondence provides motivation for cosine and sine for all complex arguments in terms of and the equivalent power series: for all The functions , , and so defined have infinite radii of convergence by the ratio test and are therefore entire functions (that is, holomorphic on ). The range of the exponential function is , while the ranges of the complex sine and cosine functions are both in its entirety, in accord with Picard's theorem, which asserts that the range of a nonconstant entire function is either all of , or excluding one lacunary value. These definitions for the exponential and trigonometric functions lead trivially to Euler's formula: We could alternatively define the complex exponential function based on this relationship. If , where and are both real, then we could define its exponential as where , , and on the right-hand side of the definition sign are to be interpreted as functions of a real variable, previously defined by other means. For , the relationship holds, so that for real and maps the real line (mod ) to the unit circle in the complex plane. Moreover, going from to , the curve defined by traces a segment of the unit circle of length starting from in the complex plane and going counterclockwise. Based on these observations and the fact that the measure of an angle in radians is the arc length on the unit circle subtended by the angle, it is easy to see that, restricted to real arguments, the sine and cosine functions as defined above coincide with the sine and cosine functions as introduced in elementary mathematics via geometric notions. The complex exponential function is periodic with period and holds for all . When its domain is extended from the real line to the complex plane, the exponential function retains the following properties: for all Extending the natural logarithm to complex arguments yields the complex logarithm , which is a multivalued function. We can then define a more general exponentiation: for all complex numbers and . This is also a multivalued function, even when is real. This distinction is problematic, as the multivalued functions and are easily confused with their single-valued equivalents when substituting a real number for . 
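Stepping back to Euler's formula and the periodicity just described, here is a short numerical sketch using Python's standard cmath module (illustrative only; the chosen angles and the point w are arbitrary):

```python
import cmath
import math

theta = math.pi / 3
z = cmath.exp(1j * theta)                         # point on the complex unit circle
print(z)                                          # ≈ (0.5 + 0.8660254j)
print(complex(math.cos(theta), math.sin(theta)))  # Euler's formula: same value

w = 0.7 + 1.2j
print(cmath.exp(w))
print(cmath.exp(w + 2j * math.pi))  # periodicity with period 2*pi*i: same value
```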
The rule about multiplying exponents for the case of positive real numbers must be modified in a multivalued context: See failure of power and logarithm identities for more about problems with combining powers. The exponential function maps any line in the complex plane to a logarithmic spiral in the complex plane with the center at the origin. Two special cases exist: when the original line is parallel to the real axis, the resulting spiral never closes in on itself; when the original line is parallel to the imaginary axis, the resulting spiral is a circle of some radius. Considering the complex exponential function as a function involving four real variables: the graph of the exponential function is a two-dimensional surface curving through four dimensions. Starting with a color-coded portion of the domain, the following are depictions of the graph as variously projected into two or three dimensions. The second image shows how the domain complex plane is mapped into the range complex plane: zero is mapped to 1 the real axis is mapped to the positive real axis the imaginary axis is wrapped around the unit circle at a constant angular rate values with negative real parts are mapped inside the unit circle values with positive real parts are mapped outside of the unit circle values with a constant real part are mapped to circles centered at zero values with a constant imaginary part are mapped to rays extending from zero The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image. The third image shows the graph extended along the real axis. It shows the graph is a surface of revolution about the axis of the graph of the real exponential function, producing a horn or funnel shape. The fourth image shows the graph extended along the imaginary axis. It shows that the graph's surface for positive and negative values doesn't really meet along the negative real axis, but instead forms a spiral surface about the axis. Because its values have been extended to , this image also better depicts the 2π periodicity in the imaginary value. Matrices and Banach algebras The power series definition of the exponential function makes sense for square matrices (for which the function is called the matrix exponential) and more generally in any unital Banach algebra . In this setting, , and is invertible with inverse for any in . If , then , but this identity can fail for noncommuting and . Some alternative definitions lead to the same function. For instance, can be defined as Or can be defined as , where is the solution to the differential equation , with initial condition ; it follows that for every in . Lie algebras Given a Lie group and its associated Lie algebra , the exponential map is a map satisfying similar properties. In fact, since is the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie group of invertible matrices has as Lie algebra , the space of all matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map. The identity can fail for Lie algebra elements and that do not commute; the Baker–Campbell–Hausdorff formula supplies the necessary correction terms. Transcendency The function is not in the rational function ring : it is not the quotient of two polynomials with complex coefficients. 
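Returning to the matrix case discussed above, the power series definition can be checked against a library routine. The sketch below assumes numpy and scipy are available and uses a standard rotation generator as the test matrix; the truncation depth is an arbitrary choice adequate for small matrix norms.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])    # A @ A = -I, a generator of plane rotations

def expm_series(M, terms=30):
    """Truncated matrix power series sum_n M**n / n! (fine for small norms)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n      # builds M**n / n! incrementally
        result = result + term
    return result

t = np.pi / 2
print(expm(t * A))         # ≈ [[0, 1], [-1, 0]]: rotation through 90 degrees
print(expm_series(t * A))  # agrees with the library routine
```

For this A, exp(tA) equals cos(t)·I + sin(t)·A, so both routines return a rotation matrix; note that for two matrices X and Y that do not commute, expm(X) @ expm(Y) generally differs from expm(X + Y), as the surrounding text explains.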
If are distinct complex numbers, then are linearly independent over , and hence is transcendental over . Computation The Taylor series definition above is generally efficient for computing (an approximation of) . However, when computing near the argument , the result will be close to 1, and computing the value of the difference with floating-point arithmetic may lead to the loss of (possibly all) significant figures, producing a large relative error, possibly even a meaningless result. Following a proposal by William Kahan, it may thus be useful to have a dedicated routine, often called expm1, which computes directly, bypassing computation of . For example, one may use the Taylor series: This was first implemented in 1979 in the Hewlett-Packard HP-41C calculator, and is provided by several calculators, operating systems (for example Berkeley UNIX 4.3BSD), computer algebra systems, and programming languages (for example C99). In addition to base , the IEEE 754-2008 standard defines similar exponential functions near 0 for bases 2 and 10: and . A similar approach has been used for the logarithm; see log1p. An identity in terms of the hyperbolic tangent gives a high-precision value for small values of on systems that do not implement . See also Carlitz exponential, a characteristic analogue Gaussian function Half-exponential function, a compositional square root of an exponential function - Used for solving exponential equations List of exponential topics List of integrals of exponential functions Mittag-Leffler function, a generalization of the exponential function -adic exponential function Padé table for exponential function – Padé approximation of exponential function by a fraction of polynomial functions Phase factor Notes References External links Elementary special functions Analytic functions Exponentials Special hypergeometric functions E (mathematical constant)
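The loss of significance described above can be demonstrated directly with Python's math module, which exposes both routines (the chosen x is an arbitrary small value):

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0   # catastrophic cancellation near 0
accurate = math.expm1(x)    # dedicated routine computes e**x - 1 directly

print(naive)     # ≈ 1.000088900582341e-12  (only a few correct digits)
print(accurate)  # ≈ 1.0000000000005e-12    (≈ x + x**2/2, nearly full precision)
```

Near 0, e^x − 1 ≈ x + x²/2, so math.expm1 returns essentially x here, while the naive subtraction has lost most of its significant digits.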
Exponential function
[ "Mathematics" ]
4,523
[ "E (mathematical constant)", "Exponentials" ]
9,696
https://en.wikipedia.org/wiki/Erosion
Erosion is the action of surface processes (such as water flow or wind) that removes soil, rock, or dissolved material from one location on the Earth's crust and then transports it to another location where it is deposited. Erosion is distinct from weathering, which involves no movement. Removal of rock or soil as clastic sediment is referred to as physical or mechanical erosion; this contrasts with chemical erosion, where soil or rock material is removed from an area by dissolution. Eroded sediment or solutes may be transported just a few millimetres, or for thousands of kilometres. Agents of erosion include rainfall; bedrock wear in rivers; coastal erosion by the sea and waves; glacial plucking, abrasion, and scour; areal flooding; wind abrasion; groundwater processes; and mass movement processes in steep landscapes like landslides and debris flows. The rates at which such processes act control how fast a surface is eroded. Typically, physical erosion proceeds the fastest on steeply sloping surfaces, and rates may also be sensitive to some climatically controlled properties including amounts of water supplied (e.g., by rain), storminess, wind speed, wave fetch, or atmospheric temperature (especially for some ice-related processes). Feedbacks are also possible between rates of erosion and the amount of eroded material that is already carried by, for example, a river or glacier. The transport of eroded materials from their original location is followed by deposition, which is the arrival and emplacement of material at a new location. While erosion is a natural process, human activities have increased by 10–40 times the rate at which soil erosion is occurring globally. At agricultural sites in the Appalachian Mountains, intensive farming practices have caused erosion at up to 100 times the natural rate for the region. Excessive (or accelerated) erosion causes both "on-site" and "off-site" problems. On-site impacts include decreases in agricultural productivity and (on natural landscapes) ecological collapse, both because of loss of the nutrient-rich upper soil layers. In some cases, this leads to desertification. Off-site effects include sedimentation of waterways and eutrophication of water bodies, as well as sediment-related damage to roads and houses. Water and wind erosion are the two primary causes of land degradation; combined, they are responsible for about 84% of the global extent of degraded land, making excessive erosion one of the most significant environmental problems worldwide. Intensive agriculture, deforestation, roads, anthropogenic climate change and urban sprawl are amongst the most significant human activities in regard to their effect on stimulating erosion. However, there are many prevention and remediation practices that can curtail or limit erosion of vulnerable soils. Physical processes Rainfall and surface runoff Rainfall, and the surface runoff which may result from rainfall, produces four main types of soil erosion: splash erosion, sheet erosion, rill erosion, and gully erosion. Splash erosion is generally seen as the first and least severe stage in the soil erosion process, which is followed by sheet erosion, then rill erosion and finally gully erosion (the most severe of the four). In splash erosion, the impact of a falling raindrop creates a small crater in the soil, ejecting soil particles. The distance these soil particles travel can be as much as vertically and horizontally on level ground. 
If the soil is saturated, or if the rainfall rate is greater than the rate at which water can infiltrate into the soil, surface runoff occurs. If the runoff has sufficient flow energy, it will transport loosened soil particles (sediment) down the slope. Sheet erosion is the transport of loosened soil particles by overland flow. Rill erosion refers to the development of small, ephemeral concentrated flow paths which function as both sediment source and sediment delivery systems for erosion on hillslopes. Generally, where water erosion rates on disturbed upland areas are greatest, rills are active. Flow depths in rills are typically of the order of a few centimetres (about an inch) or less and along-channel slopes may be quite steep. This means that rills exhibit hydraulic physics very different from water flowing through the deeper, wider channels of streams and rivers. Gully erosion occurs when runoff water accumulates and rapidly flows in narrow channels during or immediately after heavy rains or melting snow, removing soil to a considerable depth. A gully is distinguished from a rill based on a critical cross-sectional area of at least one square foot, i.e. the size of a channel that can no longer be erased via normal tillage operations. Extreme gully erosion can progress to formation of badlands. These form under conditions of high relief on easily eroded bedrock in climates favorable to erosion. Conditions or disturbances that limit the growth of protective vegetation (rhexistasy) are a key element of badland formation. Rivers and streams Valley or stream erosion occurs with continued water flow along a linear feature. The erosion is both downward, deepening the valley, and headward, extending the valley into the hillside, creating head cuts and steep banks. In the earliest stage of stream erosion, the erosive activity is dominantly vertical, the valleys have a typical V-shaped cross-section and the stream gradient is relatively steep. When some base level is reached, the erosive activity switches to lateral erosion, which widens the valley floor and creates a narrow floodplain. The stream gradient becomes nearly flat, and lateral deposition of sediments becomes important as the stream meanders across the valley floor. In all stages of stream erosion, by far the most erosion occurs during times of flood when more and faster-moving water is available to carry a larger sediment load. In such processes, it is not the water alone that erodes: suspended abrasive particles, pebbles, and boulders can also act erosively as they traverse a surface, in a process known as traction. Bank erosion is the wearing away of the banks of a stream or river. This is distinguished from changes on the bed of the watercourse, which is referred to as scour. Erosion and changes in the form of river banks may be measured by inserting metal rods into the bank and marking the position of the bank surface along the rods at different times. Thermal erosion is the result of melting and weakening permafrost due to moving water. It can occur both along rivers and at the coast. Rapid river channel migration observed in the Lena River of Siberia is due to thermal erosion, as these portions of the banks are composed of permafrost-cemented non-cohesive materials. Much of this erosion occurs as the weakened banks fail in large slumps. Thermal erosion also affects the Arctic coast, where wave action and near-shore temperatures combine to undercut permafrost bluffs along the shoreline and cause them to fail. 
Annual erosion rates along a segment of the Beaufort Sea shoreline averaged per year from 1955 to 2002. Most river erosion happens nearer to the mouth of a river. On a river bend, the longer, less sharply curved side has slower-moving water, so deposits build up there. On the narrower, more sharply curved side of the bend, the water moves faster, so this side tends to erode. Rapid erosion by a large river can remove enough sediments to produce a river anticline, as isostatic rebound raises rock beds unburdened by erosion of overlying beds. Coastal erosion Shoreline erosion, which occurs on both exposed and sheltered coasts, primarily occurs through the action of currents and waves but sea level (tidal) change can also play a role. Hydraulic action takes place when the air in a joint is suddenly compressed by a wave closing the entrance of the joint. This then cracks it. Wave pounding is when the sheer energy of the wave hitting the cliff or rock breaks pieces off. Abrasion or corrasion is caused by waves launching sea load at the cliff. It is the most effective and rapid form of shoreline erosion (not to be confused with corrosion). Corrosion is the dissolving of rock by carbonic acid in sea water. Limestone cliffs are particularly vulnerable to this kind of erosion. Attrition occurs when particles of sea load carried by the waves are worn down as they hit each other and the cliffs. This then makes the material easier to wash away. The material ends up as shingle and sand. Another significant source of erosion, particularly on carbonate coastlines, is the boring, scraping and grinding of organisms, a process termed bioerosion. Sediment is transported along the coast in the direction of the prevailing current (longshore drift). When the upcurrent supply of sediment is less than the amount being carried away, erosion occurs. When the upcurrent amount of sediment is greater, sand or gravel banks will tend to form as a result of deposition. These banks may slowly migrate along the coast in the direction of the longshore drift, alternately protecting and exposing parts of the coastline. Where there is a bend in the coastline, quite often a buildup of eroded material occurs, forming a long narrow bank (a spit). Armoured beaches and submerged offshore sandbanks may also protect parts of a coastline from erosion. Over the years, as the shoals gradually shift, the erosion may be redirected to attack different parts of the shore. Erosion of a coastal surface, followed by a fall in sea level, can produce a distinctive landform called a raised beach. 
Though the glacier continues to incise vertically, the shape of the channel beneath the ice eventually remains constant, reaching a U-shaped parabolic steady-state shape as we now see in glaciated valleys. Scientists also provide a numerical estimate of the time required for the ultimate formation of a steady-shaped U-shaped valley—approximately 100,000 years. In a weak bedrock (containing material more erodible than the surrounding rocks) erosion pattern, on the contrary, the amount of overdeepening is limited because ice velocities and erosion rates are reduced. Glaciers can also cause pieces of bedrock to crack off in the process of plucking. In ice thrusting, the glacier freezes to its bed, then as it surges forward, it moves large sheets of frozen sediment at the base along with the glacier. This method produced some of the many thousands of lake basins that dot the edge of the Canadian Shield. Differences in the height of mountain ranges are not only the result of tectonic forces, such as rock uplift, but also of local climate variations. Scientists use global analysis of topography to show that glacial erosion controls the maximum height of mountains, as the relief between mountain peaks and the snow line is generally confined to altitudes less than 1500 m. Glaciers worldwide erode mountains so effectively that the term glacial buzzsaw has become widely used, which describes the limiting effect of glaciers on the height of mountain ranges. As mountains grow higher, they generally allow for more glacial activity (especially in the accumulation zone above the glacial equilibrium line altitude), which causes increased rates of erosion of the mountain, decreasing mass faster than isostatic rebound can add to the mountain. This provides a good example of a negative feedback loop. Ongoing research is showing that while glaciers tend to decrease mountain size, in some areas, glaciers can actually reduce the rate of erosion, acting as a glacial armor. Ice can not only erode mountains but also protect them from erosion. Depending on glacier regime, even steep alpine lands can be preserved through time with the help of ice. Scientists have supported this theory by sampling eight summits of northwestern Svalbard using 10Be and 26Al, showing that northwestern Svalbard transformed from a glacier-erosion state under relatively mild glacial-maximum temperatures, to a glacier-armor state occupied by cold-based, protective ice during much colder glacial-maximum temperatures as the Quaternary ice age progressed. These processes, combined with erosion and transport by the water network beneath the glacier, leave behind glacial landforms such as moraines, drumlins, ground moraine (till), glaciokarst, kames, kame deltas, moulins, and glacial erratics in their wake, typically at the terminus or during glacier retreat. The best-developed glacial valley morphology appears to be restricted to landscapes with low rock uplift rates (less than or equal to 2 mm per year) and high relief, leading to long turnover times. Where rock uplift rates exceed 2 mm per year, glacial valley morphology has generally been significantly modified in postglacial time. Interplay of glacial erosion and tectonic forcing governs the morphologic impact of glaciations on active orogens, by both influencing their height, and by altering the patterns of erosion during subsequent glacial periods via a link between rock uplift and valley cross-sectional shape. 
Floods At extremely high flows, kolks, or vortices, are formed by large volumes of rapidly rushing water. Kolks cause extreme local erosion, plucking bedrock and creating pothole-type geographical features called rock-cut basins. Examples can be seen in the flood regions resulting from glacial Lake Missoula, which created the channeled scablands in the Columbia Basin region of eastern Washington. Wind erosion Wind erosion is a major geomorphological force, especially in arid and semi-arid regions. It is also a major source of land degradation, evaporation, desertification, harmful airborne dust, and crop damage—especially after being increased far above natural rates by human activities such as deforestation, urbanization, and agriculture. Wind erosion is of two primary varieties: deflation, where the wind picks up and carries away loose particles; and abrasion, where surfaces are worn down as they are struck by airborne particles carried by wind. Deflation is divided into three categories: (1) surface creep, where larger, heavier particles slide or roll along the ground; (2) saltation, where particles are lifted a short height into the air, and bounce and saltate across the surface of the soil; and (3) suspension, where very small and light particles are lifted into the air by the wind, and are often carried for long distances. Saltation is responsible for the majority (50–70%) of wind erosion, followed by suspension (30–40%), and then surface creep (5–25%). Wind erosion is much more severe in arid areas and during times of drought. For example, in the Great Plains, it is estimated that soil loss due to wind erosion can be as much as 6100 times greater in drought years than in wet years. Mass wasting Mass wasting or mass movement is the downward and outward movement of rock and sediments on a sloped surface, mainly due to the force of gravity. Mass wasting is an important part of the erosional process and is often the first stage in the breakdown and transport of weathered materials in mountainous areas. It moves material from higher elevations to lower elevations where other eroding agents such as streams and glaciers can then pick up the material and move it to even lower elevations. Mass-wasting processes occur continuously on all slopes; some act very slowly, while others occur very suddenly, often with disastrous results. Any perceptible down-slope movement of rock or sediment is often referred to in general terms as a landslide. However, landslides can be classified in a much more detailed way that reflects the mechanisms responsible for the movement and the velocity at which the movement occurs. One of the visible topographical manifestations of a very slow form of such activity is a scree slope. Slumping happens on steep hillsides, occurring along distinct fracture zones, often within materials like clay that, once released, may move quite rapidly downhill. Slumps often show a spoon-shaped isostatic depression, in which the material has begun to slide downhill. In some cases, the slump is caused by water beneath the slope weakening it. In many cases it is simply the result of poor engineering along highways, where it is a regular occurrence. Surface creep is the slow movement of soil and rock debris by gravity, which is usually not perceptible except through extended observation. However, the term can also describe the rolling of dislodged soil particles in diameter by wind along the soil surface. 
Submarine sediment gravity flows On the continental slope, erosion of the ocean floor to create channels and submarine canyons can result from the rapid downslope flow of sediment gravity flows, bodies of sediment-laden water that move rapidly downslope as turbidity currents. Where erosion by turbidity currents creates oversteepened slopes, it can also trigger underwater landslides and debris flows. Turbidity currents can erode channels and canyons into substrates ranging from recently deposited unconsolidated sediments to hard crystalline bedrock. Almost all continental slopes and deep ocean basins display such channels and canyons resulting from sediment gravity flows, and submarine canyons act as conduits for the transfer of sediment from the continents and shallow marine environments to the deep sea. Turbidites, which are the sedimentary deposits resulting from turbidity currents, comprise some of the thickest and largest sedimentary sequences on Earth, indicating that the associated erosional processes must also have played a prominent role in Earth's history. Factors affecting erosion rates Climate The amount and intensity of precipitation is the main climatic factor governing soil erosion by water. The relationship is particularly strong if heavy rainfall occurs at times when, or in locations where, the soil's surface is not well protected by vegetation. This might be during periods when agricultural activities leave the soil bare, or in semi-arid regions where vegetation is naturally sparse. Wind erosion requires strong winds, particularly during times of drought when vegetation is sparse and soil is dry (and so is more erodible). Other climatic factors such as average temperature and temperature range may also affect erosion, via their effects on vegetation and soil properties. In general, given similar vegetation and ecosystems, areas with more precipitation (especially high-intensity rainfall), more wind, or more storms are expected to have more erosion. In some areas of the world (e.g. the mid-western US), rainfall intensity is the primary determinant of erosivity, with higher-intensity rainfall generally resulting in more soil erosion by water. The size and velocity of rain drops are also important factors. Larger and higher-velocity rain drops have greater kinetic energy, and thus their impact will displace soil particles by larger distances than smaller, slower-moving rain drops. In other regions of the world (e.g. western Europe), runoff and erosion result from relatively low intensities of stratiform rainfall falling onto the previously saturated soil. In such situations, rainfall amount rather than intensity is the main factor determining the severity of soil erosion by water. According to climate change projections, erosivity will increase significantly in Europe, and soil erosion may increase by 13–22.5% by 2050. In Taiwan, where typhoon frequency increased significantly in the 21st century, a strong link has been drawn between the increase in storm frequency and an increase in sediment load in rivers and reservoirs, highlighting the impacts climate change can have on erosion. Vegetative cover Vegetation acts as an interface between the atmosphere and the soil. It increases the permeability of the soil to rainwater, thus decreasing runoff. It shelters the soil from winds, which results in decreased wind erosion, as well as advantageous changes in microclimate. 
The roots of the plants bind the soil together, and interweave with other roots, forming a more solid mass that is less susceptible to both water and wind erosion. The removal of vegetation increases the rate of surface erosion. Topography The topography of the land determines the velocity at which surface runoff will flow, which in turn determines the erosivity of the runoff. Longer, steeper slopes (especially those without adequate vegetative cover) are more susceptible to very high rates of erosion during heavy rains than shorter, less steep slopes. Steeper terrain is also more prone to mudslides, landslides, and other forms of gravitational erosion processes. Tectonics Tectonic processes control rates and distributions of erosion at the Earth's surface. If tectonic action causes part of the Earth's surface (e.g., a mountain range) to be raised or lowered relative to surrounding areas, this must necessarily change the gradient of the land surface. Because erosion rates are almost always sensitive to the local slope (see above), this will change the rates of erosion in the uplifted area. Active tectonics also brings fresh, unweathered rock towards the surface, where it is exposed to the action of erosion. However, erosion can also affect tectonic processes. The removal by erosion of large amounts of rock from a particular region, and its deposition elsewhere, can result in a lightening of the load on the lower crust and mantle. Because tectonic processes are driven by gradients in the stress field developed in the crust, this unloading can in turn cause tectonic or isostatic uplift in the region. In some cases, it has been hypothesised that these twin feedbacks can act to localize and enhance zones of very rapid exhumation of deep crustal rocks beneath places on the Earth's surface with extremely high erosion rates, for example, beneath the extremely steep terrain of Nanga Parbat in the western Himalayas. Such a place has been called a "tectonic aneurysm". Development Human land development, in forms including agricultural and urban development, is considered a significant factor in erosion and sediment transport, which aggravate food insecurity. In Taiwan, increases in sediment load in the northern, central, and southern regions of the island can be tracked with the timeline of development for each region throughout the 20th century. The intentional removal of soil and rock by humans is a form of erosion that has been named lisasion. Erosion at various scales Mountain ranges Mountain ranges take millions of years to erode to the degree that they effectively cease to exist. Scholars Pitman and Golovchenko estimate that it probably takes more than 450 million years to erode a mountain mass similar to the Himalaya into an almost-flat peneplain if there are no significant sea-level changes. Erosion of mountain massifs can create a pattern of equally high summits called summit accordance. It has been argued that extension during post-orogenic collapse is a more effective mechanism of lowering the height of orogenic mountains than erosion. Examples of heavily eroded mountain ranges include the Timanides of Northern Russia. Erosion of this orogen has produced sediments that are now found in the East European Platform, including the Cambrian Sablya Formation near Lake Ladoga. Studies of these sediments indicate that it is likely that the erosion of the orogen began in the Cambrian and then intensified in the Ordovician. 
Soils If the erosion rate exceeds the rate of soil formation, erosion destroys the soil. Lower rates of erosion can prevent the formation of soil features that take time to develop. Inceptisols develop on eroded landscapes that, if stable, would have supported the formation of more developed Alfisols. While erosion of soils is a natural process, human activities have increased by 10–40 times the rate at which erosion occurs globally. Excessive (or accelerated) erosion causes both "on-site" and "off-site" problems. On-site impacts include decreases in agricultural productivity and (on natural landscapes) ecological collapse, both because of loss of the nutrient-rich upper soil layers. In some cases, the eventual result is desertification. Off-site effects include sedimentation of waterways and eutrophication of water bodies, as well as sediment-related damage to roads and houses. Water and wind erosion are the two primary causes of land degradation; combined, they are responsible for about 84% of the global extent of degraded land, making excessive erosion one of the most significant environmental problems. In the United States, farmers cultivating highly erodible land must often comply with a conservation plan to be eligible for agricultural assistance. Consequences of human-made soil erosion See also References Further reading External links The Soil Erosion Site International Erosion Control Association Soil Erosion Data in the European Soil Portal USDA National Soil Erosion Laboratory The Soil and Water Conservation Society Soil science Agronomy Intensive farming Soil erosion Desertification
Erosion
[ "Chemistry" ]
5,151
[ "Eutrophication", "Intensive farming" ]
9,697
https://en.wikipedia.org/wiki/Euclidean%20space
Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's Elements, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean spaces of any positive integer dimension n, which are called Euclidean n-spaces when one wants to specify their dimension. For n equal to one or two, they are commonly called respectively Euclidean lines and Euclidean planes. The qualifier "Euclidean" is used to distinguish Euclidean spaces from other spaces that were later considered in physics and modern mathematics. Ancient Greek geometers introduced Euclidean space for modeling the physical space. Their work was collected by the ancient Greek mathematician Euclid in his Elements, with the great innovation of proving all properties of the space as theorems, by starting from a few fundamental properties, called postulates, which were either considered evident (for example, there is exactly one straight line passing through two points), or seemed impossible to prove (parallel postulate). After the introduction at the end of the 19th century of non-Euclidean geometries, the old postulates were re-formalized to define Euclidean spaces through axiomatic theory. Another definition of Euclidean spaces by means of vector spaces and linear algebra has been shown to be equivalent to the axiomatic definition. It is this definition that is more commonly used in modern mathematics, and detailed in this article. In all definitions, Euclidean spaces consist of points, which are defined only by the properties that they must have for forming a Euclidean space. There is essentially only one Euclidean space of each dimension; that is, all Euclidean spaces of a given dimension are isomorphic. Therefore, it is usually possible to work with a specific Euclidean space, denoted or , which can be represented using Cartesian coordinates as the real -space equipped with the standard dot product. Definition History of the definition Euclidean space was introduced by the ancient Greeks as an abstraction of our physical space. Their great innovation, appearing in Euclid's Elements, was to build and prove all geometry by starting from a few very basic properties, which are abstracted from the physical world, and cannot be mathematically proved because of the lack of more basic tools. These properties are called postulates, or axioms in modern language. This way of defining Euclidean space is still in use under the name of synthetic geometry. In 1637, René Descartes introduced Cartesian coordinates, and showed that these allow reducing geometric problems to algebraic computations with numbers. This reduction of geometry to algebra was a major change in point of view, as, until then, the real numbers were defined in terms of lengths and distances. Euclidean geometry was not applied in spaces of dimension more than three until the 19th century. Ludwig Schläfli generalized Euclidean geometry to spaces of dimension , using both synthetic and algebraic methods, and discovered all of the regular polytopes (higher-dimensional analogues of the Platonic solids) that exist in Euclidean spaces of any dimension. Despite the wide use of Descartes' approach, which was called analytic geometry, the definition of Euclidean space remained unchanged until the end of the 19th century. The introduction of abstract vector spaces allowed their use in defining Euclidean spaces with a purely algebraic definition. 
This new definition has been shown to be equivalent to the classical definition in terms of geometric axioms. It is this algebraic definition that is now most often used for introducing Euclidean spaces. Motivation of the modern definition One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angles. For example, there are two fundamental operations (referred to as motions) on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation around a fixed point in the plane, in which all points in the plane turn around that fixed point through the same angle. One of the basic tenets of Euclidean geometry is that two figures (usually considered as subsets) of the plane should be considered equivalent (congruent) if one can be transformed into the other by some sequence of translations, rotations and reflections (see below). In order to make all of this mathematically precise, the theory must clearly define what a Euclidean space is, and the related notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, measurement instruments, and so on. A purely mathematical definition of Euclidean space also ignores questions of units of length and other physical dimensions: the distance in a "mathematical" space is a number, not something expressed in inches or metres. The standard way to mathematically define a Euclidean space, as carried out in the remainder of this article, is as a set of points on which a real vector space acts, the space of translations, which is equipped with an inner product. The action of translations makes the space an affine space, and this allows defining lines, planes, subspaces, dimension, and parallelism. The inner product allows defining distance and angles. The set $\mathbb{R}^n$ of n-tuples of real numbers equipped with the dot product is a Euclidean space of dimension n. Conversely, the choice of a point called the origin and an orthonormal basis of the space of translations is equivalent to defining an isomorphism between a Euclidean space of dimension n and $\mathbb{R}^n$ viewed as a Euclidean space. It follows that everything that can be said about a Euclidean space can also be said about $\mathbb{R}^n$. Therefore, many authors, especially at the elementary level, call $\mathbb{R}^n$ the standard Euclidean space of dimension n, or simply the Euclidean space of dimension n. A reason for introducing such an abstract definition of Euclidean spaces, and for working with $\mathbb{E}^n$ instead of $\mathbb{R}^n$, is that it is often preferable to work in a coordinate-free and origin-free manner (that is, without choosing a preferred basis and a preferred origin). Another reason is that there is no standard origin nor any standard basis in the physical world. Technical definition A Euclidean vector space is a finite-dimensional inner product space over the real numbers. A Euclidean space is an affine space over the reals such that the associated vector space is a Euclidean vector space. Euclidean spaces are sometimes called Euclidean affine spaces to distinguish them from Euclidean vector spaces. If $E$ is a Euclidean space, its associated vector space (a Euclidean vector space) is often denoted $\overrightarrow{E}$. The dimension of a Euclidean space is the dimension of its associated vector space. The elements of $E$ are called points, and are commonly denoted by capital letters.
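To make the technical definition concrete, the following minimal Python sketch models $\mathbb{R}^n$, with the dot product, as a Euclidean vector space; the function names and sample vectors are illustrative choices, not part of any standard library.

    import math

    def dot(x, y):
        # The standard dot product on R^n, the prototypical inner product.
        return sum(xi * yi for xi, yi in zip(x, y))

    def norm(x):
        # The Euclidean norm is derived from the inner product.
        return math.sqrt(dot(x, x))

    u = (1.0, 2.0, 2.0)
    v = (2.0, -2.0, 1.0)
    print(dot(u, v))  # 0.0, so u and v are orthogonal
    print(norm(u))    # 3.0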
The elements of $\overrightarrow{E}$ are called Euclidean vectors or free vectors. They are also called translations, although, properly speaking, a translation is the geometric transformation resulting from the action of a Euclidean vector on the Euclidean space. The action of a translation $v$ on a point $P$ provides a point that is denoted $P + v$. This action satisfies $P + (v + w) = (P + v) + w$. Note: the second $+$ in the left-hand side is a vector addition; each other $+$ denotes an action of a vector on a point. This notation is not ambiguous, as, to distinguish between the two meanings of $+$, it suffices to look at the nature of its left argument. The fact that the action is free and transitive means that, for every pair of points $(P, Q)$, there is exactly one displacement vector $v$ such that $Q = P + v$. This vector is denoted $\overrightarrow{PQ}$ or $Q - P$. As previously explained, some of the basic properties of Euclidean spaces result from the structure of affine space. They are described in the section on affine structure and its subsections. The properties resulting from the inner product are explained in the section on metric structure and its subsections. Prototypical examples For any vector space, the addition acts freely and transitively on the vector space itself. Thus a Euclidean vector space can be viewed as a Euclidean space that has itself as the associated vector space. A typical case of Euclidean vector space is $\mathbb{R}^n$ viewed as a vector space equipped with the dot product as an inner product. The importance of this particular example of Euclidean space lies in the fact that every Euclidean space is isomorphic to it. More precisely, given a Euclidean space $E$ of dimension $n$, the choice of a point, called an origin, and an orthonormal basis of $\overrightarrow{E}$ defines an isomorphism of Euclidean spaces from $E$ to $\mathbb{R}^n$. As every Euclidean space of dimension $n$ is isomorphic to it, the Euclidean space $\mathbb{R}^n$ is sometimes called the standard Euclidean space of dimension $n$. Affine structure Some basic properties of Euclidean spaces depend only on the fact that a Euclidean space is an affine space. They are called affine properties and include the concepts of lines, subspaces, and parallelism, which are detailed in the next subsections. Subspaces Let $E$ be a Euclidean space and $\overrightarrow{E}$ its associated vector space. A flat, Euclidean subspace or affine subspace of $E$ is a subset $F$ of $E$ such that the set of displacement vectors between points of $F$, which is the associated vector space $\overrightarrow{F}$ of $F$, is a linear subspace (vector subspace) of $\overrightarrow{E}$. A Euclidean subspace $F$ is a Euclidean space with $\overrightarrow{F}$ as the associated vector space. This linear subspace $\overrightarrow{F}$ is also called the direction of $F$. If $P$ is a point of $F$ then $F = \{P + v : v \in \overrightarrow{F}\}$. Conversely, if $P$ is a point of $E$ and $V$ is a linear subspace of $\overrightarrow{E}$, then $P + V = \{P + v : v \in V\}$ is a Euclidean subspace of direction $V$. (The associated vector space of this subspace is $V$.) A Euclidean vector space $\overrightarrow{E}$ (that is, a Euclidean space that is equal to $\overrightarrow{E}$) has two sorts of subspaces: its Euclidean subspaces and its linear subspaces. Linear subspaces are Euclidean subspaces and a Euclidean subspace is a linear subspace if and only if it contains the zero vector. Lines and segments In a Euclidean space, a line is a Euclidean subspace of dimension one. Since a vector space of dimension one is spanned by any nonzero vector, a line is a set of the form $\{P + \lambda \overrightarrow{PQ} : \lambda \in \mathbb{R}\},$ where $P$ and $Q$ are two distinct points of the Euclidean space as a part of the line. It follows that there is exactly one line that passes through (contains) two distinct points. This implies that two distinct lines intersect in at most one point. A more symmetric representation of the line passing through $P$ and $Q$ is $\{O + (1 - \lambda)\overrightarrow{OP} + \lambda \overrightarrow{OQ} : \lambda \in \mathbb{R}\},$ where $O$ is an arbitrary point (not necessarily on the line).
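A short Python sketch of the affine structure just described, under the illustrative convention that points and vectors are both tuples of floats: the action $P + v$, and the unique displacement vector $\overrightarrow{PQ}$ guaranteed by the free and transitive action.

    def act(P, v):
        # Action of the translation vector v on the point P, written P + v above.
        return tuple(p + vi for p, vi in zip(P, v))

    def displacement(P, Q):
        # The unique vector PQ with Q = P + PQ; uniqueness reflects that the
        # action of the space of translations is free and transitive.
        return tuple(q - p for p, q in zip(P, Q))

    P, Q = (0.0, 1.0), (3.0, 5.0)
    v = displacement(P, Q)
    print(v)               # (3.0, 4.0)
    print(act(P, v) == Q)  # True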
In a Euclidean vector space, the zero vector is usually chosen for $O$; this allows simplifying the preceding formula into $\{(1 - \lambda) P + \lambda Q : \lambda \in \mathbb{R}\}.$ A standard convention allows using this formula in every Euclidean space; see below. The line segment, or simply segment, joining the points $P$ and $Q$ is the subset of points such that $0 \le \lambda \le 1$ in the preceding formulas. It is denoted $PQ$ or $QP$; that is $PQ = QP = \{P + \lambda \overrightarrow{PQ} : 0 \le \lambda \le 1\}.$ Parallelism Two subspaces $S$ and $T$ of the same dimension in a Euclidean space are parallel if they have the same direction (i.e., the same associated vector space). Equivalently, they are parallel if there is a translation vector $v$ that maps one to the other: $T = S + v.$ Given a point $P$ and a subspace $S$, there exists exactly one subspace that contains $P$ and is parallel to $S$, which is $P + \overrightarrow{S}$. In the case where $S$ is a line (subspace of dimension one), this property is Playfair's axiom. It follows that in a Euclidean plane, two lines either meet in one point or are parallel. The concept of parallel subspaces has been extended to subspaces of different dimensions: two subspaces are parallel if the direction of one of them is contained in the direction of the other. Metric structure The vector space $\overrightarrow{E}$ associated to a Euclidean space $E$ is an inner product space. This implies a symmetric bilinear form $\langle x, y \rangle$ that is positive definite (that is, $\langle x, x \rangle$ is always positive for $x \ne 0$). The inner product of a Euclidean space is often called dot product and denoted $x \cdot y$. This is especially the case when a Cartesian coordinate system has been chosen, as, in this case, the inner product of two vectors is the dot product of their coordinate vectors. For this reason, and for historical reasons, the dot notation is more commonly used than the bracket notation for the inner product of Euclidean spaces. This article will follow this usage; that is, $\langle x, y \rangle$ will be denoted $x \cdot y$ in the remainder of this article. The Euclidean norm of a vector $x$ is $\|x\| = \sqrt{x \cdot x}.$ The inner product and the norm allow expressing and proving metric and topological properties of Euclidean geometry. The next subsections describe the most fundamental ones. In these subsections, $E$ denotes an arbitrary Euclidean space, and $\overrightarrow{E}$ denotes its vector space of translations. Distance and length The distance (more precisely the Euclidean distance) between two points of a Euclidean space is the norm of the translation vector that maps one point to the other; that is $d(P, Q) = \|\overrightarrow{PQ}\|.$ The length of a segment $PQ$ is the distance $d(P, Q)$ between its endpoints $P$ and $Q$. It is often denoted $|PQ|$. The distance is a metric, as it is positive definite, symmetric, and satisfies the triangle inequality $d(P, Q) \le d(P, R) + d(R, Q).$ Moreover, the equality is true if and only if the point $R$ belongs to the segment $PQ$. This inequality means that the length of any edge of a triangle is smaller than the sum of the lengths of the other edges. This is the origin of the term triangle inequality. With the Euclidean distance, every Euclidean space is a complete metric space. Orthogonality Two nonzero vectors $u$ and $v$ of $\overrightarrow{E}$ (the associated vector space of a Euclidean space $E$) are perpendicular or orthogonal if their inner product is zero: $u \cdot v = 0.$ Two linear subspaces of $\overrightarrow{E}$ are orthogonal if every nonzero vector of the first one is perpendicular to every nonzero vector of the second one. This implies that the intersection of the linear subspaces is reduced to the zero vector. Two lines, and more generally two Euclidean subspaces (a line can be considered as one Euclidean subspace), are orthogonal if their directions (the associated vector spaces of the Euclidean subspaces) are orthogonal. Two orthogonal lines that intersect are said to be perpendicular.
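The distance and the triangle inequality can be checked numerically; the sketch below uses illustrative points, with equality holding when the intermediate point lies on the segment, as stated above.

    import math

    def dist(P, Q):
        # Euclidean distance: the norm of the displacement vector PQ.
        return math.sqrt(sum((q - p) ** 2 for p, q in zip(P, Q)))

    P, Q, R = (0.0, 0.0), (4.0, 3.0), (1.0, 2.0)
    # Triangle inequality: d(P, Q) <= d(P, R) + d(R, Q).
    print(dist(P, Q) <= dist(P, R) + dist(R, Q))  # True
    # Equality holds exactly when the intermediate point lies on the segment PQ.
    M = (2.0, 1.5)  # the midpoint of the segment PQ
    print(math.isclose(dist(P, Q), dist(P, M) + dist(M, Q)))  # True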
Two segments $AB$ and $AC$ that share a common endpoint $A$ are perpendicular or form a right angle if the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$ are orthogonal. If $AB$ and $AC$ form a right angle, one has $|BC|^2 = |AB|^2 + |AC|^2.$ This is the Pythagorean theorem. Its proof is easy in this context, as, expressing this in terms of the inner product, one has, using bilinearity and symmetry of the inner product: $|BC|^2 = \overrightarrow{BC} \cdot \overrightarrow{BC} = (\overrightarrow{BA} + \overrightarrow{AC}) \cdot (\overrightarrow{BA} + \overrightarrow{AC}) = \overrightarrow{BA} \cdot \overrightarrow{BA} + \overrightarrow{AC} \cdot \overrightarrow{AC} - 2\,\overrightarrow{AB} \cdot \overrightarrow{AC} = |AB|^2 + |AC|^2.$ Here, $\overrightarrow{AB} \cdot \overrightarrow{AC} = 0$ is used since these two vectors are orthogonal. Angle The (non-oriented) angle between two nonzero vectors $x$ and $y$ in $\overrightarrow{E}$ is $\theta = \arccos\left(\frac{x \cdot y}{\|x\|\,\|y\|}\right),$ where $\arccos$ is the principal value of the arccosine function. By the Cauchy–Schwarz inequality, the argument of the arccosine is in the interval $[-1, 1]$. Therefore $\theta$ is real, and $0 \le \theta \le \pi$ (or $0 \le \theta \le 180$ if angles are measured in degrees). Angles are not useful in a Euclidean line, as they can be only 0 or $\pi$. In an oriented Euclidean plane, one can define the oriented angle of two vectors. The oriented angle of two vectors $x$ and $y$ is then the opposite of the oriented angle of $y$ and $x$. In this case, the angle of two vectors can have any value modulo an integer multiple of $2\pi$. In particular, a reflex angle $\pi < \theta < 2\pi$ equals the negative angle $-\pi < \theta - 2\pi < 0$. The angle of two vectors does not change if they are multiplied by positive numbers. More precisely, if $x$ and $y$ are two vectors, and $\lambda$ and $\mu$ are positive real numbers, then $\operatorname{angle}(\lambda x, \mu y) = \operatorname{angle}(x, y).$ If $A$, $B$, and $C$ are three points in a Euclidean space, the angle of the segments $AB$ and $AC$ is the angle of the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$. As the multiplication of vectors by positive numbers does not change the angle, the angle of two half-lines with initial point $A$ can be defined: it is the angle of the segments $AB$ and $AC$, where $B$ and $C$ are arbitrary points, one on each half-line. Although this is less used, one can define similarly the angle of segments or half-lines that do not share an initial point. The angle of two lines is defined as follows. If $\theta$ is the angle of two segments, one on each line, the angle of any two other segments, one on each line, is either $\theta$ or $\pi - \theta$. One of these angles is in the interval $[0, \pi/2]$, and the other is in $[\pi/2, \pi]$. The non-oriented angle of the two lines is the one in the interval $[0, \pi/2]$. In an oriented Euclidean plane, the oriented angle of two lines belongs to the interval $[-\pi/2, \pi/2]$. Cartesian coordinates Every Euclidean vector space has an orthonormal basis (in fact, infinitely many in dimension higher than one, and two in dimension one), that is, a basis of unit vectors ($\|e_i\| = 1$) that are pairwise orthogonal ($e_i \cdot e_j = 0$ for $i \ne j$). More precisely, given any basis $(b_1, \dots, b_n)$, the Gram–Schmidt process computes an orthonormal basis $(e_1, \dots, e_n)$ such that, for every $i$, the linear spans of $(e_1, \dots, e_i)$ and $(b_1, \dots, b_i)$ are equal. Given a Euclidean space $E$, a Cartesian frame is a set of data consisting of an orthonormal basis of $\overrightarrow{E}$ and a point of $E$, called the origin and often denoted $O$. A Cartesian frame allows defining Cartesian coordinates for both $E$ and $\overrightarrow{E}$ in the following way. The Cartesian coordinates of a vector $v$ of $\overrightarrow{E}$ are the coefficients of $v$ on the orthonormal basis. For example, the Cartesian coordinates of a vector $v$ on an orthonormal basis (that may be named $(e_1, e_2, e_3)$ as a convention) in a 3-dimensional Euclidean space are $(a_1, a_2, a_3)$ if $v = a_1 e_1 + a_2 e_2 + a_3 e_3$. As the basis is orthonormal, the $i$-th coefficient is equal to the dot product $v \cdot e_i$. The Cartesian coordinates of a point $P$ of $E$ are the Cartesian coordinates of the vector $\overrightarrow{OP}$. Other coordinates As a Euclidean space is an affine space, one can consider an affine frame on it, which is the same as a Euclidean frame, except that the basis is not required to be orthonormal. This defines affine coordinates, sometimes called skew coordinates for emphasizing that the basis vectors are not pairwise orthogonal. An affine basis of a Euclidean space of dimension $n$ is a set of $n + 1$ points that are not contained in a hyperplane.
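The Gram–Schmidt process mentioned above admits a compact implementation. This Python sketch (illustrative names, plain lists of floats) orthonormalizes a basis so that the span condition holds at every step.

    import math

    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))

    def gram_schmidt(basis):
        # Returns an orthonormal basis (e_1, ..., e_n) such that, for every i,
        # span(e_1, ..., e_i) = span(b_1, ..., b_i).
        ortho = []
        for b in basis:
            w = list(b)
            for e in ortho:
                c = dot(w, e)  # component of b along the already-built e
                w = [wi - c * ei for wi, ei in zip(w, e)]
            n = math.sqrt(dot(w, w))  # nonzero because the input is a basis
            ortho.append([wi / n for wi in w])
        return ortho

    e1, e2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
    print(abs(dot(e1, e2)) < 1e-12)        # True: e1 and e2 are orthogonal
    print(abs(dot(e1, e1) - 1.0) < 1e-12)  # True: e1 is a unit vector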
An affine basis defines barycentric coordinates for every point. Many other coordinate systems can be defined on a Euclidean space $E$ of dimension $n$, in the following way. Let $\varphi$ be a homeomorphism (or, more often, a diffeomorphism) from a dense open subset of $E$ to an open subset of $\mathbb{R}^n$. The coordinates of a point $x$ of $E$ are the components of $\varphi(x)$. The polar coordinate system (dimension 2) and the spherical and cylindrical coordinate systems (dimension 3) are defined this way. For points that are outside the domain of $\varphi$, coordinates may sometimes be defined as the limit of coordinates of neighbouring points, but these coordinates may not be uniquely defined, and may not be continuous in the neighborhood of the point. For example, for the spherical coordinate system, the longitude is not defined at the pole, and on the antimeridian, the longitude passes discontinuously from –180° to +180°. This way of defining coordinates extends easily to other mathematical structures, and in particular to manifolds. Isometries An isometry between two metric spaces is a bijection preserving the distance, that is $d(f(x), f(y)) = d(x, y).$ In the case of a Euclidean vector space, an isometry that maps the origin to the origin preserves the norm, since the norm of a vector is its distance from the zero vector. It also preserves the inner product, since $x \cdot y = \tfrac{1}{2}\left(\|x + y\|^2 - \|x\|^2 - \|y\|^2\right).$ An isometry of Euclidean vector spaces is a linear isomorphism. An isometry $f$ of Euclidean spaces defines an isometry $\overrightarrow{f}$ of the associated Euclidean vector spaces. This implies that two isometric Euclidean spaces have the same dimension. Conversely, if $E$ and $F$ are Euclidean spaces, $O \in E$, $O' \in F$, and $\overrightarrow{f} : \overrightarrow{E} \to \overrightarrow{F}$ is an isometry, then the map $f : E \to F$ defined by $f(P) = O' + \overrightarrow{f}(\overrightarrow{OP})$ is an isometry of Euclidean spaces. It follows from the preceding results that an isometry of Euclidean spaces maps lines to lines, and, more generally, Euclidean subspaces to Euclidean subspaces of the same dimension, and that the restrictions of the isometry to these subspaces are isometries of these subspaces. Isometry with prototypical examples If $E$ is a Euclidean space, its associated vector space $\overrightarrow{E}$ can be considered as a Euclidean space. Every point $O$ of $E$ defines an isometry of Euclidean spaces $P \mapsto \overrightarrow{OP},$ which maps $O$ to the zero vector and has the identity as associated linear map. The inverse isometry is the map $v \mapsto O + v.$ A Euclidean frame $(O, e_1, \dots, e_n)$ allows defining the map $P \mapsto (e_1 \cdot \overrightarrow{OP}, \dots, e_n \cdot \overrightarrow{OP}),$ which is an isometry of Euclidean spaces. The inverse isometry is $(x_1, \dots, x_n) \mapsto O + x_1 e_1 + \cdots + x_n e_n.$ This means that, up to an isomorphism, there is exactly one Euclidean space of a given dimension. This justifies that many authors talk of $\mathbb{R}^n$ as the Euclidean space of dimension $n$. Euclidean group An isometry from a Euclidean space onto itself is called a Euclidean isometry, Euclidean transformation or rigid transformation. The rigid transformations of a Euclidean space form a group (under composition), called the Euclidean group, often denoted $E(n)$. The simplest Euclidean transformations are translations $P \mapsto P + v.$ They are in bijective correspondence with vectors. This is a reason for calling the vector space associated to a Euclidean space its space of translations. The translations form a normal subgroup of the Euclidean group. A Euclidean isometry $f$ of a Euclidean space $E$ defines a linear isometry $\overrightarrow{f}$ of the associated vector space (by linear isometry, it is meant an isometry that is also a linear map) in the following way: denoting by $Q - P$ the vector $\overrightarrow{PQ}$, if $O$ is an arbitrary point of $E$, one has $\overrightarrow{f}(\overrightarrow{OP}) = f(P) - f(O).$ It is straightforward to prove that this is a linear map that does not depend on the choice of $O$. The map $f \mapsto \overrightarrow{f}$ is a group homomorphism from the Euclidean group onto the group of linear isometries, called the orthogonal group.
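A plane isometry of the form "rotation followed by translation" illustrates both distance preservation and the associated linear map $f(P) - f(O)$. In the sketch below, the angle and translation vector are arbitrary illustrative values.

    import math

    def rigid(P, theta=0.5, t=(1.0, -2.0)):
        # A Euclidean isometry of R^2: rotation by theta, then translation by t.
        c, s = math.cos(theta), math.sin(theta)
        x, y = P
        return (c * x - s * y + t[0], s * x + c * y + t[1])

    def dist(P, Q):
        return math.sqrt(sum((q - p) ** 2 for p, q in zip(P, Q)))

    P, Q = (0.0, 0.0), (3.0, 4.0)
    print(math.isclose(dist(rigid(P), rigid(Q)), dist(P, Q)))  # True

    # The associated linear isometry sends OP to f(P) - f(O); the result
    # does not depend on the choice of O, and here the translation cancels.
    O, P2 = (5.0, 7.0), (6.0, 7.0)  # the vector OP2 is the unit vector (1, 0)
    lin = tuple(a - b for a, b in zip(rigid(P2), rigid(O)))
    print(lin)  # (cos 0.5, sin 0.5): the rotation part, without the translation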
The kernel of this homomorphism is the translation group, showing that it is a normal subgroup of the Euclidean group. The isometries that fix a given point $P$ form the stabilizer subgroup of the Euclidean group with respect to $P$. The restriction to this stabilizer of the above group homomorphism is an isomorphism. So the isometries that fix a given point form a group isomorphic to the orthogonal group. Let $P$ be a point, $f$ an isometry, and $t$ the translation that maps $P$ to $f(P)$. The isometry $g = t^{-1} \circ f$ fixes $P$. So $f = t \circ g$, and the Euclidean group is the semidirect product of the translation group and the orthogonal group. The special orthogonal group is the normal subgroup of the orthogonal group that preserves handedness. It is a subgroup of index two of the orthogonal group. Its inverse image by the group homomorphism $f \mapsto \overrightarrow{f}$ is a normal subgroup of index two of the Euclidean group, which is called the special Euclidean group or the displacement group. Its elements are called rigid motions or displacements. Rigid motions include the identity, translations, rotations (the rigid motions that fix at least a point), and also screw motions. Typical examples of rigid transformations that are not rigid motions are reflections, which are rigid transformations that fix a hyperplane and are not the identity. They are also the transformations consisting of changing the sign of one coordinate over some Euclidean frame. As the special Euclidean group is a subgroup of index two of the Euclidean group, given a reflection $r$, every rigid transformation that is not a rigid motion is the product of $r$ and a rigid motion. A glide reflection is an example of a rigid transformation that is not a rigid motion or a reflection. All groups that have been considered in this section are Lie groups and algebraic groups. Topology The Euclidean distance makes a Euclidean space a metric space, and thus a topological space. This topology is called the Euclidean topology. In the case of $\mathbb{R}^n$, this topology is also the product topology. The open sets are the subsets that contain an open ball around each of their points. In other words, open balls form a base of the topology. The topological dimension of a Euclidean space equals its dimension. This implies that Euclidean spaces of different dimensions are not homeomorphic. Moreover, the theorem of invariance of domain asserts that a subset of a Euclidean space is open (for the subspace topology) if and only if it is homeomorphic to an open subset of a Euclidean space of the same dimension. Euclidean spaces are complete and locally compact. That is, a closed subset of a Euclidean space is compact if it is bounded (that is, contained in a ball). In particular, closed balls are compact. Axiomatic definitions The definition of Euclidean spaces that has been described in this article differs fundamentally from Euclid's. In reality, Euclid did not formally define the space, because it was thought of as a description of the physical world that exists independently of the human mind. The need for a formal definition appeared only at the end of the 19th century, with the introduction of non-Euclidean geometries. Two different approaches have been used. Felix Klein suggested defining geometries through their symmetries. The presentation of Euclidean spaces given in this article is essentially derived from his Erlangen program, with the emphasis given to the groups of translations and isometries. On the other hand, David Hilbert proposed a set of axioms, inspired by Euclid's postulates.
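The index-two split of the Euclidean group can be observed through the determinant of the linear part: determinant +1 for rigid motions (whose linear parts lie in the special orthogonal group), determinant -1 for reflection-type transformations. A small sketch with illustrative matrices:

    import math

    def det2(M):
        # Determinant of a 2x2 matrix given as [[a, b], [c, d]].
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]

    theta = 0.7
    rotation = [[math.cos(theta), -math.sin(theta)],
                [math.sin(theta),  math.cos(theta)]]
    reflection = [[1.0, 0.0],
                  [0.0, -1.0]]  # changes the sign of one coordinate

    # Rigid motions have a linear part of determinant +1; composing with a
    # reflection flips the sign, matching the index-two decomposition.
    print(round(det2(rotation), 12))  # 1.0
    print(det2(reflection))           # -1.0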
They belong to synthetic geometry, as they do not involve any definition of real numbers. Later, G. D. Birkhoff and Alfred Tarski proposed simpler sets of axioms, which use real numbers (see Birkhoff's axioms and Tarski's axioms). In Geometric Algebra, Emil Artin proved that all these definitions of a Euclidean space are equivalent. It is rather easy to prove that all definitions of Euclidean spaces satisfy Hilbert's axioms, and that those involving real numbers (including the definition given above) are equivalent. The difficult part of Artin's proof is the following. In Hilbert's axioms, congruence is an equivalence relation on segments. One can thus define the length of a segment as its equivalence class. One must thus prove that this length satisfies properties that characterize nonnegative real numbers. Artin proved this with axioms equivalent to those of Hilbert. Usage Since the ancient Greeks, Euclidean space has been used for modeling shapes in the physical world. It is thus used in many sciences, such as physics, mechanics, and astronomy. It is also widely used in all technical areas that are concerned with shapes, figures, location and position, such as architecture, geodesy, topography, navigation, industrial design, or technical drawing. Spaces of dimension higher than three occur in several modern theories of physics; see Higher dimension. They occur also in configuration spaces of physical systems. Besides Euclidean geometry, Euclidean spaces are also widely used in other areas of mathematics. Tangent spaces of differentiable manifolds are Euclidean vector spaces. More generally, a manifold is a space that is locally approximated by Euclidean spaces. Most non-Euclidean geometries can be modeled by a manifold, and embedded in a Euclidean space of higher dimension. For example, an elliptic space can be modeled by an ellipsoid. It is common to represent in a Euclidean space mathematical objects that are a priori not of a geometrical nature. An example among many is the usual representation of graphs. Other geometric spaces Since the introduction, at the end of the 19th century, of non-Euclidean geometries, many sorts of spaces have been considered, about which one can do geometric reasoning in the same way as with Euclidean spaces. In general, they share some properties with Euclidean spaces, but may also have properties that could appear as rather strange. Some of these spaces use Euclidean geometry for their definition, or can be modeled as subspaces of a Euclidean space of higher dimension. When such a space is defined by geometrical axioms, embedding the space in a Euclidean space is a standard way for proving the consistency of its definition, or, more precisely, for proving that its theory is consistent if Euclidean geometry is consistent (which cannot be proved). Affine space A Euclidean space is an affine space equipped with a metric. Affine spaces have many other uses in mathematics. In particular, as they are defined over any field, they allow doing geometry in other contexts. As soon as non-linear questions are considered, it is generally useful to consider affine spaces over the complex numbers as an extension of Euclidean spaces. For example, a circle and a line always have two intersection points (possibly not distinct) in the complex affine space. Therefore, most of algebraic geometry is built in complex affine spaces and affine spaces over algebraically closed fields.
The shapes that are studied in algebraic geometry in these affine spaces are therefore called affine algebraic varieties. Affine spaces over the rational numbers and more generally over algebraic number fields provide a link between (algebraic) geometry and number theory. For example, Fermat's Last Theorem can be stated "a Fermat curve of degree higher than two has no nontrivial point in the affine plane over the rationals." Geometry in affine spaces over finite fields has also been widely studied. For example, elliptic curves over finite fields are widely used in cryptography. Projective space Originally, projective spaces were introduced by adding "points at infinity" to Euclidean spaces, and, more generally, to affine spaces, in order to make true the assertion "two coplanar lines meet in exactly one point". Projective spaces share with Euclidean and affine spaces the property of being isotropic, that is, there is no property of the space that allows distinguishing between two points or two lines. Therefore, a more isotropic definition is commonly used, which consists of defining a projective space as the set of the vector lines in a vector space of dimension one more. As for affine spaces, projective spaces are defined over any field, and are fundamental spaces of algebraic geometry. Non-Euclidean geometries Non-Euclidean geometry usually refers to geometrical spaces where the parallel postulate is false. They include elliptic geometry, where the sum of the angles of a triangle is more than 180°, and hyperbolic geometry, where this sum is less than 180°. Their introduction in the second half of the 19th century, and the proof that their theory is consistent (if Euclidean geometry is not contradictory), is one of the paradoxes that are at the origin of the foundational crisis in mathematics of the beginning of the 20th century, and motivated the systematization of axiomatic theories in mathematics. Curved spaces A manifold is a space that in the neighborhood of each point resembles a Euclidean space. In technical terms, a manifold is a topological space, such that each point has a neighborhood that is homeomorphic to an open subset of a Euclidean space. Manifolds can be classified by increasing degree of this "resemblance" into topological manifolds, differentiable manifolds, smooth manifolds, and analytic manifolds. However, none of these types of "resemblance" respect distances and angles, even approximately. Distances and angles can be defined on a smooth manifold by providing a smoothly varying Euclidean metric on the tangent spaces at the points of the manifold (these tangent spaces are thus Euclidean vector spaces). This results in a Riemannian manifold. Generally, straight lines do not exist in a Riemannian manifold, but their role is played by geodesics, which are the "shortest paths" between two points. This allows defining distances, which are measured along geodesics, and angles between geodesics, which are the angles of their tangents in the tangent space at their intersection. So, Riemannian manifolds behave locally like a Euclidean space that has been bent. Euclidean spaces are trivially Riemannian manifolds. An example illustrating this well is the surface of a sphere. In this case, geodesics are arcs of great circles, which are called orthodromes in the context of navigation. More generally, the spaces of non-Euclidean geometries can be realized as Riemannian manifolds.
Pseudo-Euclidean space An inner product of a real vector space is a positive definite bilinear form, and so is characterized by a positive definite quadratic form. A pseudo-Euclidean space is an affine space with an associated real vector space equipped with a non-degenerate quadratic form (that may be indefinite). A fundamental example of such a space is the Minkowski space, which is the space-time of Einstein's special relativity. It is a four-dimensional space, where the metric is defined by the quadratic form $x^2 + y^2 + z^2 - t^2,$ where the last coordinate (t) is temporal, and the other three (x, y, z) are spatial. To take gravity into account, general relativity uses a pseudo-Riemannian manifold that has Minkowski spaces as tangent spaces. The curvature of this manifold at a point is a function of the value of the gravitational field at this point. See also Hilbert space, a generalization to infinite dimension, used in functional analysis Position space, an application in physics
Euclidean space
[ "Physics", "Mathematics" ]
6,482
[ "Mathematical analysis", "Group actions", "Homogeneous spaces", "Algebra", "Space (mathematics)", "Topological spaces", "Norms (mathematics)", "Geometry", "Linear algebra", "Symmetry" ]
9,703
https://en.wikipedia.org/wiki/Evolutionary%20psychology
Evolutionary psychology is a theoretical approach in psychology that examines cognition and behavior from a modern evolutionary perspective. It seeks to identify human psychological adaptations with regard to the ancestral problems they evolved to solve. In this framework, psychological traits and mechanisms are either functional products of natural and sexual selection or non-adaptive by-products of other adaptive traits. Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and liver, is common in evolutionary biology. Evolutionary psychologists apply the same thinking in psychology, arguing that just as the heart evolved to pump blood, the liver evolved to detoxify poisons, and the kidneys evolved to filter turbid fluids, there is modularity of mind in that different psychological mechanisms evolved to solve different adaptive problems. These evolutionary psychologists argue that much of human behavior is the output of psychological adaptations that evolved to solve recurrent problems in human ancestral environments. Some evolutionary psychologists argue that evolutionary theory can provide a foundational, metatheoretical framework that integrates the entire field of psychology in the same way evolutionary biology has for biology. Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations, including the abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, and cooperate with others. Findings have been made regarding human social behaviour related to infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price, and parental investment. The theories and findings of evolutionary psychology have applications in many fields, including economics, environment, health, law, management, psychiatry, politics, and literature. Criticism of evolutionary psychology involves questions of testability, cognitive and evolutionary assumptions (such as modular functioning of the brain, and large uncertainty about the ancestral environment), the importance of non-genetic and non-adaptive explanations, as well as political and ethical issues due to interpretations of research results. Evolutionary psychologists frequently engage with and respond to such criticisms. Scope Principles The field's central assumption is that the human brain is composed of a large number of specialized mechanisms that were shaped by natural selection over a vast period of time to solve the recurrent information-processing problems faced by our ancestors. These problems involve food choices, social hierarchies, distributing resources to offspring, and selecting mates. Proponents suggest that evolutionary psychology seeks to integrate psychology into the other natural sciences, rooting it in the organizing theory of biology (evolutionary theory), and thus understanding psychology as a branch of biology. Anthropologist John Tooby and psychologist Leda Cosmides note: Just as human physiology and evolutionary physiology have worked to identify physical adaptations of the body that represent "human physiological nature," the purpose of evolutionary psychology is to identify evolved emotional and cognitive adaptations that represent "human psychological nature."
According to Steven Pinker, it is "not a single theory but a large set of hypotheses" and a term that "has also come to refer to a particular way of applying evolutionary theory to the mind, with an emphasis on adaptation, gene-level selection, and modularity." Evolutionary psychology adopts an understanding of the mind that is based on the computational theory of mind. It describes mental processes as computational operations, so that, for example, a fear response is described as arising from a neurological computation that inputs perceptual data, e.g. a visual image of a spider, and outputs the appropriate reaction, e.g. fear of possibly dangerous animals. Under this view, any domain-general learning is impossible because of the combinatorial explosion. Evolutionary psychology specifies the domain as the problems of survival and reproduction. While philosophers have generally considered the human mind to include broad faculties, such as reason and lust, evolutionary psychologists describe evolved psychological mechanisms as narrowly focused to deal with specific issues, such as catching cheaters or choosing mates. The discipline sees the human brain as having evolved specialized functions, called cognitive modules, or psychological adaptations, which are shaped by natural selection. Examples include language-acquisition modules, incest-avoidance mechanisms, cheater-detection mechanisms, intelligence and sex-specific mating preferences, foraging mechanisms, alliance-tracking mechanisms, agent-detection mechanisms, and others. Some mechanisms, termed domain-specific, deal with recurrent adaptive problems over the course of human evolutionary history. Domain-general mechanisms, on the other hand, are proposed to deal with evolutionary novelty. Evolutionary psychology has roots in cognitive psychology and evolutionary biology but also draws on behavioral ecology, artificial intelligence, genetics, ethology, anthropology, archaeology, biology, ecopsychology and zoology. It is closely linked to sociobiology, but there are key differences between them, including the emphasis on domain-specific rather than domain-general mechanisms, the relevance of measures of current fitness, the importance of mismatch theory, and psychology rather than behavior. Nikolaas Tinbergen's four categories of questions can help to clarify the distinctions between several different, but complementary, types of explanations. Evolutionary psychology focuses primarily on the "why?" questions, while traditional psychology focuses on the "how?" questions. Premises Evolutionary psychology is founded on several core premises. The brain is an information processing device, and it produces behavior in response to external and internal inputs. The brain's adaptive mechanisms were shaped by natural and sexual selection. Different neural mechanisms are specialized for solving problems in humanity's evolutionary past. The brain has evolved specialized neural mechanisms that were designed for solving problems that recurred over deep evolutionary time, giving modern humans stone-age minds. Most contents and processes of the brain are unconscious; and most mental problems that seem easy to solve are actually extremely difficult problems that are solved unconsciously by complicated neural mechanisms. Human psychology consists of many specialized mechanisms, each sensitive to different classes of information or inputs. These mechanisms combine to manifest behavior.
History Evolutionary psychology has its historical roots in Charles Darwin's theory of natural selection. In The Origin of Species, Darwin predicted that psychology would develop an evolutionary basis. Two of his later books were devoted to the study of animal emotions and psychology: The Descent of Man, and Selection in Relation to Sex in 1871 and The Expression of the Emotions in Man and Animals in 1872. Darwin's work inspired William James's functionalist approach to psychology. Darwin's theories of evolution, adaptation, and natural selection have provided insight into why brains function the way they do. The content of evolutionary psychology has derived from, on the one hand, the biological sciences (especially evolutionary theory as it relates to ancient human environments, the study of paleoanthropology and animal behavior) and, on the other, the human sciences, especially psychology. Evolutionary biology as an academic discipline emerged with the modern synthesis in the 1930s and 1940s. In the 1930s the study of animal behavior (ethology) emerged with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch. W. D. Hamilton's (1964) papers on inclusive fitness and Robert Trivers's (1972) theories on reciprocity and parental investment helped to establish evolutionary thinking in psychology and the other social sciences. In 1975, Edward O. Wilson combined evolutionary theory with studies of animal and social behavior, building on the works of Lorenz and Tinbergen, in his book Sociobiology: The New Synthesis. In the 1970s, two major branches developed from ethology. Firstly, the study of animal social behavior (including that of humans) generated sociobiology, defined by its pre-eminent proponent Edward O. Wilson in 1975 as "the systematic study of the biological basis of all social behavior" and in 1978 as "the extension of population biology and evolutionary theory to social organization." Secondly, there was behavioral ecology, which placed less emphasis on social behavior; it focused on the ecological and evolutionary basis of animal and human behavior. In the 1970s and 1980s university departments began to include the term evolutionary biology in their titles. The modern era of evolutionary psychology was ushered in, in particular, by Donald Symons' 1979 book The Evolution of Human Sexuality and Leda Cosmides and John Tooby's 1992 book The Adapted Mind. David Buller observed that the term "evolutionary psychology" is sometimes seen as denoting research based on the specific methodological and theoretical commitments of certain researchers from the Santa Barbara school (University of California), thus some evolutionary psychologists prefer to term their work "human ecology", "human behavioural ecology" or "evolutionary anthropology" instead. From psychology, the primary streams are developmental, social, and cognitive psychology. Establishing some measure of the relative influence of genetics and environment on behavior has been at the core of behavioral genetics and its variants, notably studies at the molecular level that examine the relationship between genes, neurotransmitters, and behavior. Dual inheritance theory (DIT), developed in the late 1970s and early 1980s, has a slightly different perspective by trying to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution.
DIT is seen by some as a "middle-ground" between views that emphasize human universals versus those that emphasize cultural variation. Theoretical foundations The theories on which evolutionary psychology is based originated with Charles Darwin's work, including his speculations about the evolutionary origins of social instincts in humans. Modern evolutionary psychology, however, is possible only because of advances in evolutionary theory in the 20th century. Evolutionary psychologists say that natural selection has provided humans with many psychological adaptations, in much the same way that it generated humans' anatomical and physiological adaptations. As with adaptations in general, psychological adaptations are said to be specialized for the environment in which an organism evolved, the environment of evolutionary adaptedness. Sexual selection provides organisms with adaptations related to mating. For male mammals, which have a relatively high maximal potential reproduction rate, sexual selection leads to adaptations that help them compete for females. For female mammals, with a relatively low maximal potential reproduction rate, sexual selection leads to choosiness, which helps females select higher quality mates. Charles Darwin described both natural selection and sexual selection, and he relied on group selection to explain the evolution of altruistic (self-sacrificing) behavior. But group selection was considered a weak explanation, because in any group the less altruistic individuals will be more likely to survive, and the group will become less self-sacrificing as a whole. In 1964, the evolutionary biologist William D. Hamilton proposed inclusive fitness theory, emphasizing a gene-centered view of evolution. Hamilton noted that genes can increase the replication of copies of themselves into the next generation by influencing the organism's social traits in such a way that (statistically) results in helping the survival and reproduction of other copies of the same genes (most simply, identical copies in the organism's close relatives). According to Hamilton's rule, self-sacrificing behaviors (and the genes influencing them) can evolve if they typically help the organism's close relatives so much that it more than compensates for the individual animal's sacrifice. Inclusive fitness theory resolved the issue of how altruism can evolve. Other theories also help explain the evolution of altruistic behavior, including evolutionary game theory, tit-for-tat reciprocity, and generalized reciprocity. These theories help to explain the development of altruistic behavior, and account for hostility toward cheaters (individuals that take advantage of others' altruism). Several mid-level evolutionary theories inform evolutionary psychology. The r/K selection theory proposes that some species prosper by having many offspring, while others follow the strategy of having fewer offspring but investing much more in each one. Humans follow the second strategy. Parental investment theory explains how parents invest more or less in individual offspring based on how successful those offspring are likely to be, and thus how much they might improve the parents' inclusive fitness. According to the Trivers–Willard hypothesis, parents in good conditions tend to invest more in sons (who are best able to take advantage of good conditions), while parents in poor conditions tend to invest more in daughters (who are best able to have successful offspring even in poor conditions). 
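Hamilton's rule, described in words above, is often written $rb > c$: an allele for a self-sacrificing behavior can spread when the relatedness $r$ times the benefit $b$ to the relative exceeds the cost $c$ to the actor. A minimal Python sketch with made-up numbers, purely for illustration:

    def hamilton_favors_altruism(r, b, c):
        # Hamilton's rule: altruism can evolve when r * b > c, where r is the
        # genetic relatedness, b the benefit to the recipient, c the actor's cost.
        return r * b > c

    # Illustrative values only: helping a full sibling (r = 0.5).
    print(hamilton_favors_altruism(0.5, 3.0, 1.0))  # True:  1.5 > 1.0
    print(hamilton_favors_altruism(0.5, 1.5, 1.0))  # False: 0.75 < 1.0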
According to life history theory, animals evolve life histories to match their environments, determining details such as age at first reproduction and number of offspring. Dual inheritance theory posits that genes and human culture have interacted, with genes affecting the development of culture, and culture, in turn, affecting human evolution on a genetic level, in a similar way to the Baldwin effect. Evolved psychological mechanisms Evolutionary psychology is based on the hypothesis that, just like hearts, lungs, livers, kidneys, and immune systems, cognition has a functional structure that has a genetic basis, and therefore has evolved by natural selection. Like other organs and tissues, this functional structure should be universally shared amongst a species and should solve important problems of survival and reproduction. Evolutionary psychologists seek to understand psychological mechanisms by understanding the survival and reproductive functions they might have served over the course of evolutionary history. These might include abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, cooperate with others and follow leaders. Consistent with the theory of natural selection, evolutionary psychology sees humans as often in conflict with others, including mates and relatives. For instance, a mother may wish to wean her offspring from breastfeeding earlier than does her infant, which frees up the mother to invest in additional offspring. Evolutionary psychology also recognizes the role of kin selection and reciprocity in evolving prosocial traits such as altruism. Like chimpanzees and bonobos, humans have subtle and flexible social instincts, allowing them to form extended families, lifelong friendships, and political alliances. In studies testing theoretical predictions, evolutionary psychologists have made modest findings on topics such as infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price and parental investment. Another example is the proposed evolved mechanism underlying depression: although clinical depression is maladaptive, evolutionary approaches ask whether the mechanisms that produce it were once adaptive. Over evolutionary time, animals and humans have endured harsh circumstances in order to stay alive, which strongly shaped the fight-or-flight response. For instance, young mammals separated from their guardians experience separation anxiety, which causes distress, activates the hypothalamic-pituitary-adrenal axis, and produces emotional and behavioral changes; going through such circumstances helps mammals cope with separation. Historical topics Proponents of evolutionary psychology in the 1990s made some explorations in historical events, but the response from historical experts was highly negative and there has been little effort to continue that line of research. Historian Lynn Hunt says that the historians complained about the researchers' approach. Hunt states that "the few attempts to build up a subfield of psychohistory collapsed under the weight of its presuppositions." She concludes that, as of 2014, the "'iron curtain' between historians and psychology...remains standing." Products of evolution: adaptations, exaptations, byproducts, and random variation Not all traits of organisms are evolutionary adaptations. As noted in the table below, traits may also be exaptations, byproducts of adaptations (sometimes called "spandrels"), or random variation between individuals.
Psychological adaptations are hypothesized to be innate or relatively easy to learn and to manifest in cultures worldwide. For example, the ability of toddlers to learn a language with virtually no training is likely to be a psychological adaptation. On the other hand, ancestral humans did not read or write, thus today, learning to read and write requires extensive training, and presumably involves the repurposing of cognitive capacities that evolved in response to selection pressures unrelated to written language. However, variations in manifest behavior can result from universal mechanisms interacting with different local environments. For example, Caucasians who move from a northern climate to the equator will have darker skin. The mechanisms regulating their pigmentation do not change; rather, the input to those mechanisms changes, resulting in different outputs. One of the tasks of evolutionary psychology is to identify which psychological traits are likely to be adaptations, byproducts or random variation. George C. Williams suggested that an "adaptation is a special and onerous concept that should only be used where it is really necessary." As noted by Williams and others, adaptations can be identified by their improbable complexity, species universality, and adaptive functionality. Obligate and facultative adaptations A question that may be asked about an adaptation is whether it is generally obligate (relatively robust in the face of typical environmental variation) or facultative (sensitive to typical environmental variation). The sweet taste of sugar and the pain of hitting one's knee against concrete are the result of fairly obligate psychological adaptations; typical environmental variability during development does not much affect their operation. By contrast, facultative adaptations are somewhat like "if-then" statements. For example, the adaptation for skin to tan is conditional on exposure to sunlight. When a psychological adaptation is facultative, evolutionary psychologists concern themselves with how developmental and environmental inputs influence the expression of the adaptation. Cultural universals Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations. Cultural universals include behaviors related to language, cognition, social roles, gender roles, and technology. Evolved psychological adaptations (such as the ability to learn a language) interact with cultural inputs to produce specific behaviors (e.g., the specific language learned). Basic gender differences, such as greater eagerness for sex among men and greater coyness among women, are explained as sexually dimorphic psychological adaptations that reflect the different reproductive strategies of males and females. It has been found that both male and female personality traits differ on a large spectrum. Males had a higher rate of traits relating to dominance, tension, and directness. Females had higher rates of organizational behavior and more emotion-based characteristics. Evolutionary psychologists contrast their approach to what they term the "standard social science model," according to which the mind is a general-purpose cognition device shaped almost entirely by culture.
Environment of evolutionary adaptedness Evolutionary psychology argues that to properly understand the functions of the brain, one must understand the properties of the environment in which the brain evolved. That environment is often referred to as the "environment of evolutionary adaptedness". The idea of an environment of evolutionary adaptedness was first explored as a part of attachment theory by John Bowlby. This is the environment to which a particular evolved mechanism is adapted. More specifically, the environment of evolutionary adaptedness is defined as the set of historically recurring selection pressures that formed a given adaptation, as well as those aspects of the environment that were necessary for the proper development and functioning of the adaptation. Humans, the genus Homo, appeared between 1.5 and 2.5 million years ago, a time that roughly coincides with the start of the Pleistocene 2.6 million years ago. Because the Pleistocene ended a mere 12,000 years ago, most human adaptations either newly evolved during the Pleistocene, or were maintained by stabilizing selection during the Pleistocene. Evolutionary psychology, therefore, proposes that the majority of human psychological mechanisms are adapted to reproductive problems frequently encountered in Pleistocene environments. In broad terms, these problems include those of growth, development, differentiation, maintenance, mating, parenting, and social relationships. The environment of evolutionary adaptedness is significantly different from modern society. The ancestors of modern humans lived in smaller groups, had more cohesive cultures, and had more stable and rich contexts for identity and meaning. Researchers look to existing hunter-gatherer societies for clues as to how hunter-gatherers lived in the environment of evolutionary adaptedness. Unfortunately, the few surviving hunter-gatherer societies are different from each other, and they have been pushed out of the best land and into harsh environments, so it is not clear how closely they reflect ancestral culture. However, all around the world small-band hunter-gatherers offer a similar developmental system for the young ("hunter-gatherer childhood model," Konner, 2005; "evolved developmental niche" or "evolved nest;" Narvaez et al., 2013). The characteristics of the niche are largely the same as for social mammals, which evolved over 30 million years ago: soothing perinatal experience, several years of on-request breastfeeding, nearly constant affection or physical proximity, responsiveness to need (mitigating offspring distress), self-directed play, and for humans, multiple responsive caregivers. Initial studies show the importance of these components in early life for positive child outcomes. Evolutionary psychologists sometimes look to chimpanzees, bonobos, and other great apes for insight into human ancestral behavior. Mismatches Since an organism's adaptations were suited to its ancestral environment, a new and different environment can create a mismatch. Because humans are mostly adapted to Pleistocene environments, psychological mechanisms sometimes exhibit "mismatches" to the modern environment. One example is the fact that although over 20,000 people are murdered by guns in the US annually while spiders and snakes kill only a handful, people nonetheless learn to fear spiders and snakes about as easily as they do a pointed gun, and more easily than an unpointed gun, rabbits or flowers.
A potential explanation is that spiders and snakes were a threat to human ancestors throughout the Pleistocene, whereas guns (and rabbits and flowers) were not. There is thus a mismatch between humans' evolved fear-learning psychology and the modern environment. This mismatch also shows up in the phenomenon of the supernormal stimulus, a stimulus that elicits a response more strongly than the stimulus for which the response evolved. The term was coined by Niko Tinbergen to refer to non-human animal behavior, but psychologist Deirdre Barrett said that supernormal stimulation governs the behavior of humans as powerfully as that of other animals. She explained junk food as an exaggerated stimulus to cravings for salt, sugar, and fats, and she says that television is an exaggeration of social cues of laughter, smiling faces and attention-grabbing action. Magazine centerfolds and double cheeseburgers pull instincts intended for an environment of evolutionary adaptedness where breast development was a sign of health, youth and fertility in a prospective mate, and fat was a rare and vital nutrient. The psychologist Mark van Vugt recently argued that modern organizational leadership is a mismatch. His argument is that humans are not adapted to work in large, anonymous bureaucratic structures with formal hierarchies. The human mind still responds to personalized, charismatic leadership primarily in the context of informal, egalitarian settings. Hence the dissatisfaction and alienation that many employees experience. Salaries, bonuses and other privileges exploit instincts for relative status, which particularly attract males to senior executive positions. Research methods Evolutionary theory is heuristic in that it may generate hypotheses that might not be developed from other theoretical approaches. One of the main goals of adaptationist research is to identify which organismic traits are likely to be adaptations, and which are byproducts or random variations. As noted earlier, adaptations are expected to show evidence of complexity, functionality, and species universality, while byproducts or random variation will not. In addition, adaptations are expected to be presented as proximate mechanisms that interact with the environment in either a generally obligate or facultative fashion (see above). Evolutionary psychologists are also interested in identifying these proximate mechanisms (sometimes termed "mental mechanisms" or "psychological adaptations") and what type of information they take as input, how they process that information, and their outputs. Evolutionary developmental psychology, or "evo-devo," focuses on how adaptations may be activated at certain developmental times (e.g., losing baby teeth, adolescence, etc.) or how events during the development of an individual may alter life-history trajectories. Evolutionary psychologists use several strategies to develop and test hypotheses about whether a psychological trait is likely to be an evolved adaptation; Buss (2011) describes several such methods. Evolutionary psychologists also use various sources of data for testing, including experiments, archaeological records, data from hunter-gatherer societies, observational studies, neuroscience data, self-reports and surveys, public records, and human products. Recently, additional methods and tools have been introduced based on fictional scenarios, mathematical models, and multi-agent computer simulations.
Main areas of research Foundational areas of research in evolutionary psychology can be divided into broad categories of adaptive problems that arise from evolutionary theory itself: survival, mating, parenting, family and kinship, interactions with non-kin, and cultural evolution. Survival and individual-level psychological adaptations Problems of survival are clear targets for the evolution of physical and psychological adaptations. Major problems the ancestors of present-day humans faced included food selection and acquisition; territory selection and physical shelter; and avoiding predators and other environmental threats. Consciousness Consciousness meets George Williams' criteria of species universality, complexity, and functionality, and it is a trait that apparently increases fitness. In his paper "Evolution of consciousness," John Eccles argues that special anatomical and physical adaptations of the mammalian cerebral cortex gave rise to consciousness. In contrast, others have argued that the recursive circuitry underwriting consciousness is much more primitive, having evolved initially in pre-mammalian species because it improves the capacity for interaction with both social and natural environments by providing an energy-saving "neutral" gear in an otherwise energy-expensive motor output machine. Once in place, this recursive circuitry may well have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms, as outlined by Bernard J. Baars. Richard Dawkins suggested that humans evolved consciousness in order to make themselves the subjects of thought. Daniel Povinelli suggests that large, tree-climbing apes evolved consciousness to take into account one's own mass when moving safely among tree branches. Consistent with this hypothesis, Gordon Gallup found that chimpanzees and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests. The concept of consciousness can refer to voluntary action, awareness, or wakefulness. However, even voluntary behavior involves unconscious mechanisms. Many cognitive processes take place in the cognitive unconscious, unavailable to conscious awareness. Some behaviors are conscious when learned but then become unconscious, seemingly automatic. Learning, especially implicitly learning a skill, can take place seemingly outside of consciousness. For example, plenty of people know how to turn right when they ride a bike, but very few can accurately explain how they actually do so. Evolutionary psychology approaches self-deception as an adaptation that can improve one's results in social exchanges. Sleep may have evolved to conserve energy when activity would be less fruitful or more dangerous, such as at night, and especially during the winter season. Sensation and perception Many experts, such as Jerry Fodor, write that the purpose of perception is knowledge, but evolutionary psychologists hold that its primary purpose is to guide action. For example, they say, depth perception seems to have evolved not to help us know the distances to other objects but rather to help us move around in space. Evolutionary psychologists say that animals from fiddler crabs to humans use eyesight for collision avoidance, suggesting that vision is basically for directing action, not providing knowledge. Building and maintaining sense organs is metabolically expensive, so these organs evolve only when they improve an organism's fitness. 
More than half the brain is devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources, so the senses must provide exceptional benefits to fitness. Perception accurately mirrors the world; animals get useful, accurate information through their senses. Scientists who study perception and sensation have long understood the human senses as adaptations to their surrounding worlds. Depth perception consists of processing over half a dozen visual cues, each of which is based on a regularity of the physical world. Vision evolved to respond to the narrow range of electromagnetic energy that is plentiful and that does not pass through objects. Sound waves go around corners and interact with obstacles, creating a complex pattern that includes useful information about the sources of and distances to objects. Larger animals naturally make lower-pitched sounds as a consequence of their size. The range over which an animal hears, on the other hand, is determined by adaptation. Homing pigeons, for example, can hear the very low-pitched sound (infrasound) that carries great distances, even though most smaller animals detect higher-pitched sounds. Taste and smell respond to chemicals in the environment that are thought to have been significant for fitness in the environment of evolutionary adaptedness. For example, salt and sugar were apparently both valuable to the human or pre-human inhabitants of the environment of evolutionary adaptedness, so present-day humans have an intrinsic hunger for salty and sweet tastes. The sense of touch is actually many senses, including pressure, heat, cold, tickle, and pain. Pain, while unpleasant, is adaptive. An important adaptation for senses is range shifting, by which the organism becomes temporarily more or less sensitive to sensation. For example, one's eyes automatically adjust to dim or bright ambient light. Sensory abilities of different organisms often coevolve, as is the case with the hearing of echolocating bats and that of the moths that have evolved to respond to the sounds that the bats make. Evolutionary psychologists contend that perception demonstrates the principle of modularity, with specialized mechanisms handling particular perception tasks. For example, people with damage to a particular part of the brain have the specific defect of not being able to recognize faces (prosopagnosia). Evolutionary psychology suggests that this indicates a so-called face-reading module. Learning and facultative adaptations In evolutionary psychology, learning is said to be accomplished through evolved capacities, specifically facultative adaptations. Facultative adaptations express themselves differently depending on input from the environment. Sometimes the input comes during development and helps shape that development. For example, migrating birds learn to orient themselves by the stars during a critical period in their maturation. Evolutionary psychologists believe that humans also learn language along an evolved program, also with critical periods. The input can also come during daily tasks, helping the organism cope with changing environmental conditions. For example, animals evolved Pavlovian conditioning in order to solve problems about causal relationships. Animals accomplish learning tasks most easily when those tasks resemble problems that they faced in their evolutionary past, such as a rat learning where to find food or water. 
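The claim that Pavlovian conditioning evolved to track causal relationships is often made concrete with simple associative-learning models. The article names no particular model, so the following Rescorla–Wagner sketch, a standard textbook formalization with an arbitrary learning rate, is illustrative only: associative strength grows while the cue reliably predicts the outcome and decays once the prediction is broken.

```python
def rescorla_wagner(trials, alpha=0.3):
    """Track associative strength V between a cue and an outcome.

    trials: booleans, True if the outcome followed the cue on that trial.
    alpha:  learning rate (combined salience of cue and outcome).
    """
    V, history = 0.0, []
    for outcome in trials:
        lam = 1.0 if outcome else 0.0   # maximum supportable association
        V += alpha * (lam - V)          # prediction error drives learning
        history.append(V)
    return history

# Cue predicts food for 20 trials (acquisition), then never does (extinction).
strengths = rescorla_wagner([True] * 20 + [False] * 20)
print(f"after acquisition: {strengths[19]:.2f}, after extinction: {strengths[-1]:.2f}")
```

The error-driven update expresses the idea in the text: learning of this kind tracks the statistical, and hence causal, structure of the environment in which it evolved.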
Learning capacities sometimes demonstrate differences between the sexes. In many animal species, for example, males can solve spatial problems faster and more accurately than females, due to the effects of male hormones during development. The same might be true of humans. Emotion and motivation Motivations direct and energize behavior, while emotions provide the affective component to motivation, positive or negative. In the early 1970s, Paul Ekman and colleagues began a line of research that suggests that many emotions are universal. He found evidence that humans share at least five basic emotions: fear, sadness, happiness, anger, and disgust. Social emotions evidently evolved to motivate social behaviors that were adaptive in the environment of evolutionary adaptedness. For example, spite seems to work against the individual, but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status. Motivation has a neurobiological basis in the reward system of the brain. Recently, it has been suggested that reward systems may evolve in such a way that there may be an inherent or unavoidable trade-off in the motivational system for activities of short versus long duration. Cognition Cognition refers to internal representations of the world and internal information processing. From an evolutionary psychology perspective, cognition is not "general purpose". Cognition uses heuristics, or strategies, that generally increase the likelihood of solving problems that the ancestors of present-day humans routinely faced in their lives. For example, present-day humans are far more likely to solve logic problems that involve detecting cheating (a common problem given humans' social nature) than the same logic problem put in purely abstract terms. Since the ancestors of present-day humans seldom encountered truly random events and lived under simpler conditions, present-day humans may be cognitively predisposed to incorrectly identify patterns in random sequences. The gambler's fallacy is one example of this. Gamblers may falsely believe that they have hit a "lucky streak" even when each outcome is actually random and independent of previous trials. Most people believe that if a fair coin has been flipped nine times and heads has appeared each time, then on the tenth flip there is a greater than 50% chance of getting tails. Humans find it far easier to make diagnoses or predictions using frequency data than when the same information is presented as probabilities or percentages. This could be because the ancestors of present-day humans lived in relatively small tribes (usually with fewer than 150 people), where frequency information was more readily available, and experienced fewer random occurrences in their lives. Personality Evolutionary psychology is primarily interested in finding commonalities between people, or basic human psychological nature. From an evolutionary perspective, the fact that people have fundamental differences in personality traits initially presents something of a puzzle. (Note: The field of behavioral genetics is concerned with statistically partitioning differences between people into genetic and environmental sources of variance.
However, understanding the concept of heritability can be tricky – heritability refers only to the differences between people, never the degree to which the traits of an individual are due to environmental or genetic factors, since traits are always a complex interweaving of both.) Personality traits are conceptualized by evolutionary psychologists as normal variation around an optimum, as behavioral polymorphisms maintained by frequency-dependent selection, or as facultative adaptations. Like variability in height, some personality traits may simply reflect inter-individual variability around a general optimum. Or, personality traits may represent different genetically predisposed "behavioral morphs" – alternate behavioral strategies that depend on the frequency of competing behavioral strategies in the population. For example, if most of the population is generally trusting and gullible, the behavioral morph of being a "cheater" (or, in the extreme case, a sociopath) may be advantageous. Finally, like many other psychological adaptations, personality traits may be facultative – sensitive to typical variations in the social environment, especially during early development. For example, later-born children are more likely than firstborns to be rebellious, less conscientious, and more open to new experiences, which may be advantageous to them given their particular niche in family structure. Shared environmental influences do play a role in personality and are not always of less importance than genetic factors. However, shared environmental influences often decrease to near zero after adolescence but do not completely disappear. Language According to Steven Pinker, who builds on the work of Noam Chomsky, the universal human ability to learn to talk between the ages of 1 and 4, basically without training, suggests that language acquisition is a distinctly human psychological adaptation (see, in particular, Pinker's The Language Instinct). Pinker and Bloom (1990) argue that language as a mental faculty shares many likenesses with the complex organs of the body, which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop. Pinker follows Chomsky in arguing that the fact that children can learn any human language with no explicit instruction suggests that language, including most of grammar, is basically innate and that it only needs to be activated by interaction. Chomsky himself does not believe language to have evolved as an adaptation, but suggests that it likely evolved as a byproduct of some other adaptation, a so-called spandrel. But Pinker and Bloom argue that the organic nature of language strongly suggests that it has an adaptational origin. Evolutionary psychologists hold that the FOXP2 gene may well be associated with the evolution of human language. In the 1980s, psycholinguist Myrna Gopnik identified a dominant gene that causes language impairment in the KE family of Britain. The impairment turned out to be caused by a mutation of the FOXP2 gene. Humans have a unique allele of this gene, which has otherwise been closely conserved through most of mammalian evolutionary history. This unique allele seems to have first appeared between 100 and 200 thousand years ago, and it is now all but universal in humans. However, the once-popular idea that FOXP2 is a 'grammar gene' or that it triggered the emergence of language in Homo sapiens is now widely discredited.
Currently, several competing theories about the evolutionary origin of language coexist, none of which has achieved a general consensus. Researchers of language acquisition in primates and humans, such as Michael Tomasello and Talmy Givón, argue that the innatist framework has understated the role of imitation in learning and that it is not at all necessary to posit the existence of an innate grammar module to explain human language acquisition. Tomasello argues that studies of how children and primates actually acquire communicative skills suggest that humans learn complex behavior through experience, so that instead of a module specifically dedicated to language acquisition, language is acquired by the same cognitive mechanisms that are used to acquire all other kinds of socially transmitted behavior. On the issue of whether language is best seen as having evolved as an adaptation or as a spandrel, evolutionary biologist W. Tecumseh Fitch, following Stephen Jay Gould, argues that it is unwarranted to assume that every aspect of language is an adaptation, or that language as a whole is an adaptation. He criticizes some strands of evolutionary psychology for suggesting a pan-adaptationist view of evolution, and dismisses Pinker and Bloom's question of whether "language has evolved as an adaptation" as misleading. He argues instead that, from a biological viewpoint, the evolutionary origin of language is best conceptualized as the probable result of a convergence of many separate adaptations into a complex system. A similar argument is made by Terrence Deacon, who in The Symbolic Species argues that the different features of language have co-evolved with the evolution of the mind and that the ability to use symbolic communication is integrated with all other cognitive processes. If the theory that language could have evolved as a single adaptation is accepted, the question becomes which of its many functions has been the basis of adaptation. Several evolutionary hypotheses have been posited: that language evolved for the purpose of social grooming, that it evolved as a way to show mating potential, or that it evolved to form social contracts. Evolutionary psychologists recognize that these theories are all speculative and that much more evidence is required to understand how language might have been selectively adapted. Mating Given that sexual reproduction is the means by which genes are propagated into future generations, sexual selection plays a large role in human evolution. Human mating, then, is of interest to evolutionary psychologists, who aim to investigate evolved mechanisms to attract and secure mates. Several lines of research have stemmed from this interest, such as studies of mate selection, mate poaching, mate retention, mating preferences, and conflict between the sexes. In 1972 Robert Trivers published an influential paper on sex differences that is now referred to as parental investment theory. The size difference between gametes (anisogamy) is the fundamental, defining difference between males (small gametes – sperm) and females (large gametes – ova). Trivers noted that anisogamy typically results in different levels of parental investment between the sexes, with females initially investing more. Trivers proposed that this difference in parental investment leads to the sexual selection of different reproductive strategies between the sexes and to sexual conflict.
For example, he suggested that the sex that invests less in offspring will generally compete for access to the higher-investing sex to increase its inclusive fitness. Trivers posited that differential parental investment led to the evolution of sexual dimorphisms in mate choice, intra- and intersexual reproductive competition, and courtship displays. In mammals, including humans, females make a much larger parental investment than males (i.e. gestation followed by childbirth and lactation). Parental investment theory is a branch of life history theory. Buss and Schmitt's (1993) sexual strategies theory proposed that, due to differential parental investment, humans have evolved sexually dimorphic adaptations related to "sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment." Their strategic interference theory suggested that conflict between the sexes occurs when the preferred reproductive strategies of one sex interfere with those of the other sex, resulting in the activation of emotional responses such as anger or jealousy. Women are generally more selective when choosing mates, especially under long-term mating conditions. However, under some circumstances, short-term mating can provide benefits to women as well, such as fertility insurance, trading up to better genes, reducing the risk of inbreeding, and insurance protection for their offspring. Due to male paternity uncertainty, sex differences have been found in the domain of sexual jealousy: females generally react more adversely to emotional infidelity, while males react more to sexual infidelity. This particular pattern is predicted because the costs involved in mating differ for each sex. Women, on average, should prefer a mate who can offer resources (e.g., financial support, commitment); thus, a woman risks losing such resources to a mate who commits emotional infidelity. Men, on the other hand, are never certain of the genetic paternity of their children because they do not bear the offspring themselves. This suggests that for men sexual infidelity would generally be more aversive than emotional infidelity, because investing resources in another man's offspring does not lead to the propagation of their own genes. Another line of research examines women's mate preferences across the ovulatory cycle. The theoretical underpinning of this research is that ancestral women would have evolved mechanisms to select mates with certain traits depending on their hormonal status. Known as the ovulatory shift hypothesis, the theory posits that, during the ovulatory phase of a woman's cycle (approximately days 10–15), a woman who mated with a male of high genetic quality would have been more likely, on average, to produce and bear healthy offspring than a woman who mated with a male of low genetic quality. These putative preferences are predicted to be especially apparent in short-term mating domains, because a potential male mate would only be offering genes to a potential offspring. This hypothesis allows researchers to examine whether women select mates who have characteristics that indicate high genetic quality during the high-fertility phase of their ovulatory cycles. Indeed, studies have shown that women's preferences vary across the ovulatory cycle.
In particular, Haselton and Miller (2006) showed that highly fertile women prefer creative but poor men as short-term mates. Creativity may be a proxy for good genes. Research by Gangestad et al. (2004) indicates that highly fertile women prefer men who display social presence and intrasexual competition; these traits may act as cues that would help women predict which men may have, or would be able to acquire, resources. Parenting Reproduction is always costly for women, and can also be for men. Individuals are limited in the degree to which they can devote time and resources to producing and raising their young, and such expenditure may also be detrimental to their future condition, survival, and further reproductive output. Parental investment is any parental expenditure (time, energy, etc.) that benefits one offspring at a cost to parents' ability to invest in other components of fitness (Clutton-Brock 1991: 9; Trivers 1972). Components of fitness (Beatty 1992) include the well-being of existing offspring, parents' future reproduction, and inclusive fitness through aid to kin (Hamilton, 1964). Parental investment theory is a branch of life history theory. The benefits of parental investment to the offspring are large and are associated with effects on condition, growth, survival, and, ultimately, the reproductive success of the offspring. However, these benefits can come at the cost of the parent's ability to reproduce in the future, e.g., through the increased risk of injury when defending offspring against predators, the loss of mating opportunities whilst rearing offspring, and an increase in the time to the next reproduction. Overall, parents are selected to maximize the difference between the benefits and the costs, and parental care will likely evolve when the benefits exceed the costs. The Cinderella effect is the alleged tendency of stepchildren to be physically, emotionally, or sexually abused, neglected, murdered, or otherwise mistreated by their stepparents at significantly higher rates than by their genetic parents. It takes its name from the fairy tale character Cinderella, who in the story was cruelly mistreated by her stepmother and stepsisters. Daly and Wilson (1996) noted: "Evolutionary thinking led to the discovery of the most important risk factor for child homicide – the presence of a stepparent. Parental efforts and investments are valuable resources, and selection favors those parental psyches that allocate effort effectively to promote fitness. The adaptive problems that challenge parental decision-making include both the accurate identification of one's offspring and the allocation of one's resources among them with sensitivity to their needs and abilities to convert parental investment into fitness increments…. Stepchildren were seldom or never so valuable to one's expected fitness as one's own offspring would be, and those parental psyches that were easily parasitized by just any appealing youngster must always have incurred a selective disadvantage" (Daly & Wilson, 1996, pp. 64–65). However, they note that not all stepparents will "want" to abuse their partner's children, nor is genetic parenthood any insurance against abuse. They see stepparental care as primarily "mating effort" towards the genetic parent.
Family and kin Inclusive fitness is the sum of an organism's classical fitness (how many of its own offspring it produces and supports) and the number of equivalents of its own offspring it can add to the population by supporting others. The first component is called classical fitness by Hamilton (1964). From the gene's point of view, evolutionary success ultimately depends on leaving behind the maximum number of copies of itself in the population. Until 1964, it was generally believed that genes only achieved this by causing the individual to leave the maximum number of viable offspring. However, in 1964 W. D. Hamilton proved mathematically that, because close relatives of an organism share some identical genes, a gene can also increase its evolutionary success by promoting the reproduction and survival of these related or otherwise similar individuals. Hamilton concluded that this leads natural selection to favor organisms that would behave in ways that maximize their inclusive fitness. It is also true that natural selection favors behavior that maximizes personal fitness. Hamilton's rule describes mathematically whether or not a gene for altruistic behavior will spread in a population: the gene is favored when rb > c, where c is the reproductive cost to the altruist, b is the reproductive benefit to the recipient of the altruistic behavior, and r is the probability, above the population average, of the individuals sharing an altruistic gene – commonly viewed as "degree of relatedness". The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behavior to be helpful and protective of relatives and their offspring, this behavior also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. Altruists may also have some way to recognize altruistic behavior in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene (Chapter 6) and The Extended Phenotype, this must be distinguished from the green-beard effect. Although it is generally true that humans tend to be more altruistic toward their kin than toward non-kin, the relevant proximate mechanisms that mediate this cooperation have been debated (see kin recognition), with some arguing that kin status is determined primarily via social and cultural factors (such as co-residence, maternal association of sibs, etc.), while others have argued that kin recognition can also be mediated by biological factors such as facial resemblance and immunogenetic similarity of the major histocompatibility complex (MHC). For a discussion of the interaction of these social and biological kin recognition factors see Lieberman, Tooby, and Cosmides (2007). Whatever the proximate mechanisms of kin recognition, there is substantial evidence that humans generally act more altruistically toward close genetic kin than toward genetic non-kin. Interactions with non-kin / reciprocity Although interactions with non-kin are generally less altruistic than those with kin, cooperation can be maintained with non-kin via mutually beneficial reciprocity, as was proposed by Robert Trivers. If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favored even if it pays each player, in the short term, to defect when the other cooperates.
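A minimal sketch of that repeated-encounter logic follows; the benefit b, cost c, and continuation probability w are illustrative assumptions, not values from the literature. It pits tit-for-tat against always-defect and estimates average payoffs. With these numbers, w exceeds c/b, so mutual cooperation out-earns the one-time gain from exploiting a cooperator, anticipating the threshold stated next.

```python
import random

b, c = 3.0, 1.0   # benefit received from a cooperator, cost of cooperating
w = 0.9           # chance the same pair meets again (note w > c/b here)

def match(s1, s2):
    """Payoffs to two strategies over one randomly terminated repeated game."""
    p1 = p2 = 0.0
    m1 = "D" if s1 == "ALLD" else "C"   # tit-for-tat opens by cooperating
    m2 = "D" if s2 == "ALLD" else "C"
    while True:
        p1 += (b if m2 == "C" else 0.0) - (c if m1 == "C" else 0.0)
        p2 += (b if m1 == "C" else 0.0) - (c if m2 == "C" else 0.0)
        if random.random() > w:          # no further encounter
            return p1, p2
        # Tit-for-tat copies the partner's last move; ALLD keeps defecting.
        m1, m2 = (m2 if s1 == "TFT" else "D"), (m1 if s2 == "TFT" else "D")

random.seed(1)
n = 10_000
tft = sum(match("TFT", "TFT")[0] for _ in range(n)) / n
alld = sum(match("ALLD", "TFT")[0] for _ in range(n)) / n
print(f"TFT vs TFT:  {tft:.2f}")   # about (b - c) / (1 - w) = 20
print(f"ALLD vs TFT: {alld:.2f}")  # about b = 3: one exploitation, then nothing
```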
Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act: w > c/b. Reciprocity can also be indirect if information about previous interactions is shared. Reputation allows the evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help. The calculations of indirect reciprocity are complicated, and only a tiny fraction of the possibilities has been explored, but again a simple rule has emerged: indirect reciprocity can only promote cooperation if the probability, q, of knowing someone's reputation exceeds the cost-to-benefit ratio of the altruistic act: q > c/b. One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known. Trivers argues that friendship and various social emotions evolved in order to manage reciprocity. Liking and disliking, he says, evolved to help present-day humans' ancestors form coalitions with others who reciprocated and to exclude those who did not reciprocate. Moral indignation may have evolved to prevent one's altruism from being exploited by cheaters, and gratitude may have motivated present-day humans' ancestors to reciprocate appropriately after benefiting from others' altruism. Likewise, present-day humans feel guilty when they fail to reciprocate. These social motivations match what evolutionary psychologists expect to see in adaptations that evolved to maximize the benefits and minimize the drawbacks of reciprocity. Evolutionary psychologists say that humans have psychological adaptations that evolved specifically to help us identify nonreciprocators, commonly referred to as "cheaters." In 1993, Robert Frank and his associates found that participants in a prisoner's dilemma scenario were often able to predict whether their partners would "cheat", based on a half-hour of unstructured social interaction. In a 1996 experiment, for example, Linda Mealey and her colleagues found that people were better at remembering faces when those faces were associated with stories about those individuals cheating (such as embezzling money from a church). Strong reciprocity (or "tribal reciprocity") Humans may have an evolved set of psychological adaptations that predispose them to be more cooperative than would otherwise be expected with members of their tribal in-group, and nastier toward members of tribal out-groups. These adaptations may have been a consequence of tribal warfare. Humans may also have predispositions for "altruistic punishment" – to punish in-group members who violate in-group rules, even when this altruistic behavior cannot be justified in terms of helping those you are related to (kin selection), cooperating with those with whom you will interact again (direct reciprocity), or cooperating to better your reputation with others (indirect reciprocity). Evolutionary psychology and culture Though evolutionary psychology has traditionally focused on individual-level behaviors, determined by species-typical psychological adaptations, considerable work has been done on how these adaptations shape and, ultimately, govern culture (Tooby and Cosmides, 1989).
Tooby and Cosmides (1989) argued that the mind consists of many domain-specific psychological adaptations, some of which may constrain what cultural material is learned or taught. As opposed to a domain-general cultural acquisition program, where an individual passively receives culturally transmitted material from the group, Tooby and Cosmides (1989), among others, argue that "the psyche evolved to generate adaptive rather than repetitive behavior, and hence critically analyzes the behavior of those surrounding it in highly structured and patterned ways, to be used as a rich (but by no means the only) source of information out of which to construct a 'private culture' or individually tailored adaptive system; in consequence, this system may or may not mirror the behavior of others in any given respect." Biological explanations of human culture have also brought criticism to evolutionary psychology: evolutionary psychologists see the human psyche and physiology as a genetic product and assume that genes contain the information for the development and control of the organism and that this information is transmitted from one generation to the next via genes. Evolutionary psychologists thereby see physical and psychological characteristics of humans as genetically programmed. Even when evolutionary psychologists acknowledge the influence of the environment on human development, they understand the environment only as an activator or trigger for the programmed developmental instructions encoded in genes. Evolutionary psychologists, for example, believe that the human brain is made up of innate modules, each of which is specialised only for very specific tasks, e.g., an anxiety module. According to evolutionary psychologists, these modules are given before the organism actually develops and are then activated by some environmental event. Critics object that this view is reductionist and that cognitive specialisation only comes about through the interaction of humans with their real environment, rather than the environment of distant ancestors. Interdisciplinary approaches are increasingly striving to mediate between these opposing points of view and to highlight that biological and cultural causes need not be antithetical in explaining human behaviour and even complex cultural achievements. In psychology sub-fields Developmental psychology According to Paul Baltes, the benefits granted by evolutionary selection decrease with age. Natural selection has not eliminated many harmful conditions and nonadaptive characteristics that appear among older adults, such as Alzheimer's disease. If Alzheimer's killed 20-year-olds instead of 70-year-olds, natural selection might have eliminated it ages ago. Thus, unaided by evolutionary pressures against nonadaptive conditions, modern humans suffer the aches, pains, and infirmities of aging, and as the benefits of evolutionary selection decrease with age, the need for modern technological remedies against nonadaptive conditions increases. Social psychology As humans are a highly social species, there are many adaptive problems associated with navigating the social world (e.g., maintaining allies, managing status hierarchies, interacting with outgroup members, coordinating social activities, collective decision-making).
Researchers in the emerging field of evolutionary social psychology have made many discoveries pertaining to topics traditionally studied by social psychologists, including person perception, social cognition, attitudes, altruism, emotions, group dynamics, leadership, motivation, prejudice, intergroup relations, and cross-cultural differences. When endeavouring to solve a problem, humans show a determined facial expression from an early age, while chimpanzees have no comparable expression. Researchers suspect that this determined expression evolved because, when a human is determinedly working on a problem, other people will frequently help. Abnormal psychology Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies between physiological and psychological dysfunctions. Prominent theorists and evolutionary psychiatrists include Michael T. McGuire, Anthony Stevens, and Randolph M. Nesse. They, and others, suggest that mental disorders are due to the interactive effects of both nature and nurture, and often have multiple contributing causes. Evolutionary psychologists have suggested that schizophrenia and bipolar disorder may reflect a side-effect of genes with fitness benefits, such as increased creativity. (Some individuals with bipolar disorder are especially creative during their manic phases, and the close relatives of people with schizophrenia have been found to be more likely to have creative professions.) A 1994 report by the American Psychiatric Association found that schizophrenia occurs at roughly the same rate in Western and non-Western cultures, and in industrialized and pastoral societies, suggesting that schizophrenia is neither a disease of civilization nor an arbitrary social invention. Sociopathy may represent an evolutionarily stable strategy, by which a small number of people who cheat on social contracts benefit in a society consisting mostly of non-sociopaths. Mild depression may be an adaptive response to withdraw from, and re-evaluate, situations that have led to disadvantageous outcomes (the "analytical rumination hypothesis") (see Evolutionary approaches to depression). Trofimova reviewed the most consistent sex differences in psychological abilities and disabilities and linked them to Geodakyan's evolutionary theory of sex (ETS). She pointed out that the pattern of consistent sex differences in physical, verbal, and social dis/abilities corresponds to the ETS's view of sexual dimorphism as a functional specialization within a species. Sex differentiation, according to the ETS, creates two partitions within a species: (1) conservational (females) and (2) variational (males). In females, superiority in verbal abilities, higher rule obedience, socialisation, empathy, and agreeableness can be seen as a reflection of the systemic conservation function of the female sex. Male superiority is mostly noted in exploratory abilities – in risk- and sensation-seeking, spatial orientation, and physical strength – and in higher rates of physical aggression. In combination with higher birth and accidental death rates, this pattern might be a reflection of the systemic variational function (testing the boundaries of beneficial characteristics) of the male sex. As a result, psychological sex differences might be influenced by a global tendency within a species to expand its norm of reaction, but at the same time to preserve the beneficial properties of the species.
Moreover, Trofimova suggested a "redundancy pruning" hypothesis as an upgrade of the ETS theory. She pointed to higher rates of psychopathy, dyslexia, autism, and schizophrenia in males than in females. She suggested that the variational function of the "male partition" might also provide irrelevance/redundancy pruning of an excess in the species' bank of beneficial characteristics, against the continuing resistance to change of the norm-driven conservational partition of the species. This might explain the seemingly contradictory pairing in the male sex of a high drive for social status and power with the weaker (of the two sexes) abilities for social interaction. The high rates of communicative disorders and psychopathy in males might facilitate their higher rates of disengagement from normative expectations and their insensitivity to social disapproval when they deliberately do not follow social norms. Some of these speculations have yet to be developed into fully testable hypotheses, and a great deal of research is required to confirm their validity. Antisocial and criminal behavior Evolutionary psychology has been applied to explain criminal or otherwise immoral behavior as being adaptive or related to adaptive behaviors. Males are generally more aggressive than females, who are more selective of their partners because of the far greater effort they have to contribute to pregnancy and child-rearing. Greater male aggression is hypothesized to stem from the more intense reproductive competition males face. Males of low status may be especially vulnerable to being childless. It may have been evolutionarily advantageous for them to engage in highly risky and violently aggressive behavior to increase their status and therefore their reproductive success. This may explain why males are generally involved in more crimes, and why low status and being unmarried are associated with criminality. Furthermore, competition over females is argued to have been particularly intense in late adolescence and young adulthood, which is theorized to explain why crime rates are particularly high during this period. Some sociologists have emphasized differential exposure to androgens as the cause of these behaviors, notably Lee Ellis in his evolutionary neuroandrogenic (ENA) theory. Many conflicts that result in harm and death involve status, reputation, and seemingly trivial insults. Steven Pinker, in his book The Better Angels of Our Nature, argues that in non-state societies without a police force it was very important to have a credible deterrent against aggression. Therefore, it was important to be perceived as having a credible reputation for retaliation, resulting in humans developing instincts for revenge as well as for protecting reputation ("honor"). Pinker argues that the development of the state and the police has dramatically reduced the level of violence compared to the ancestral environment. Whenever the state breaks down, which can happen very locally, such as in poor areas of a city, humans again organize in groups for protection and aggression, and concepts such as violent revenge and protecting honor again become extremely important. Rape is theorized to be a reproductive strategy that facilitates the propagation of the rapist's progeny.
Such a strategy may be adopted by men who otherwise are unlikely to be appealing to women and therefore cannot form legitimate relationships, or by high-status men against socially vulnerable women who are unlikely to retaliate, increasing those men's reproductive success even further. The sociobiological theories of rape are highly controversial, as traditional theories typically do not consider rape to be a behavioral adaptation, and objections to this theory are made on ethical, religious, and political, as well as scientific, grounds. Psychology of religion Adaptationist perspectives on religious belief suggest that, like all behavior, religious behaviors are a product of the human brain. As with all other organ functions, cognition's functional structure has been argued to have a genetic foundation, and is therefore subject to the effects of natural selection and sexual selection. Like other organs and tissues, this functional structure should be universally shared amongst humans and should have solved important problems of survival and reproduction in ancestral environments. However, evolutionary psychologists remain divided on whether religious belief is more likely a consequence of evolved psychological adaptations, or a byproduct of other cognitive adaptations. Coalitional psychology Coalitional psychology is an approach to explaining political behaviors between different coalitions, and the conditionality of these behaviors, from an evolutionary psychological perspective. This approach assumes that since human beings appeared on the earth, they have evolved to live in groups instead of living as individuals, to achieve benefits such as more mating opportunities and increased status. Human beings thus naturally think and act in ways that manage and negotiate group dynamics. Coalitional psychology offers falsifiable ex ante predictions by positing five hypotheses on how these psychological adaptations operate: humans represent groups as a special category of individual, unstable and with a short shadow of the future; political entrepreneurs strategically manipulate the coalitional environment, often appealing to emotional devices such as "outrage" to inspire collective action; relative gains dominate relations with enemies, whereas absolute gains characterize relations with allies; coalitional size and male physical strength positively predict individual support for aggressive foreign policies; and individuals with children, particularly women, will differ from those without progeny in their support for aggressive foreign policies. Reception and criticism Critics of evolutionary psychology accuse it of promoting genetic determinism, pan-adaptationism (the idea that all behaviors and anatomical features are adaptations), unfalsifiable hypotheses, distal or ultimate explanations of behavior when proximate explanations are superior, and malevolent political or moral ideas. Ethical implications Critics have argued that evolutionary psychology might be used to justify existing social hierarchies and reactionary policies. It has also been suggested by critics that evolutionary psychologists' theories and interpretations of empirical data rely heavily on ideological assumptions about race and gender. In response to such criticism, evolutionary psychologists often caution against committing the naturalistic fallacy – the assumption that "what is natural" is necessarily a moral good. However, their caution against committing the naturalistic fallacy has itself been criticized as a means to stifle legitimate ethical discussions.
Contradictions in models Some criticisms of evolutionary psychology point to contradictions between different aspects of the adaptive scenarios it posits. One example is the model in which extended social groups selected for modern human brains. The contradiction is that the synaptic function of modern human brains requires large amounts of many specific essential nutrients, so a transition in which all individuals in a population shared these higher requirements for the same essential nutrients would have decreased, not increased, the possibility of forming large groups, because bottleneck foods containing the rare essential nutrients would cap group sizes. As additional arguments against big brains promoting social networking, it is mentioned that some insects have societies with different ranks for each individual and that monkeys remain socially functional after the removal of most of the brain. The model of males as both providers and protectors is criticized for the impossibility of being in two places at once: the male cannot both protect his family at home and be out hunting at the same time. Against the claim that a provider male could buy protection for his family from other males by bartering food that he had hunted, critics point out that the most valuable food (the food containing the rarest essential nutrients) would differ between ecologies, being plant-based in some geographical areas and animal-based in others. This would make it impossible for hunting styles relying on physical strength or risk-taking to be of universally similar value in bartered food, and would instead make it inevitable that in some parts of Africa food gathered with no need for major physical strength would be the most valuable to barter for protection. Critics also point to a contradiction between two claims: that men needed to be more sexually visual than women, so that they could assess a woman's fertility faster than a woman needed to assess a man's genes, and that male sexual jealousy guards against infidelity. If both held, a male's speed at assessing female fertility would be pointless, since he would still need to assess the risk that a jealous male mate was present and, in that case, his chances of defeating that male before mating; there is no point in assessing one necessary condition faster than another necessary condition can possibly be assessed. Standard social science model Evolutionary psychology has been entangled in the larger philosophical and social science controversies related to the debate on nature versus nurture. Evolutionary psychologists typically contrast evolutionary psychology with what they call the standard social science model (SSSM). They characterize the SSSM as the "blank slate", "relativist", "social constructionist", and "cultural determinist" perspective that they say dominated the social sciences throughout the 20th century and assumed that the mind was shaped almost entirely by culture. Critics have argued that evolutionary psychologists created a false dichotomy between their own view and the caricature of the SSSM. Other critics regard the SSSM as a rhetorical device or a straw man and suggest that the scientists whom evolutionary psychologists associate with the SSSM did not believe that the mind was a blank slate devoid of any natural predispositions.
Reductionism and determinism Some critics view evolutionary psychology as a form of genetic reductionism and genetic determinism, a common critique being that evolutionary psychology does not address the complexity of individual development and experience and fails to explain the influence of genes on behavior in individual cases. Evolutionary psychologists respond that they are working within a nature-nurture interactionist framework that acknowledges that many psychological adaptations are facultative (sensitive to environmental variations during individual development). The discipline is generally focused not on proximate analyses of behavior but on the study of distal/ultimate causality (the evolution of psychological adaptations). The field of behavioral genetics is focused on the study of the proximate influence of genes on behavior. Testability of hypotheses A frequent critique of the discipline is that its hypotheses are often arbitrary and difficult or impossible to test adequately, calling into question its status as an actual scientific discipline, for example because many current traits probably evolved to serve different functions than they do now. Because there is a potentially infinite number of alternative explanations for why a trait evolved, critics contend that it is impossible to determine the exact explanation. While evolutionary psychology hypotheses are difficult to test, evolutionary psychologists assert that testing them is not impossible. Part of the critique of the scientific base of evolutionary psychology includes a critique of the concept of the Environment of Evolutionary Adaptation (EEA). Some critics have argued that researchers know so little about the environment in which Homo sapiens evolved that explaining specific traits as adaptations to that environment becomes highly speculative. Evolutionary psychologists respond that they do know many things about this environment, including the facts that present-day humans' ancestors were hunter-gatherers, that they generally lived in small tribes, etc. Edward Hagen argues that humans' past environments were not radically different from the present in the way the Carboniferous or Jurassic periods were, and that the animal and plant taxa of the era were similar to those of the modern world, as were the geology and ecology. Hagen argues that few would deny that other organs evolved in the EEA (for example, lungs evolving in an oxygen-rich atmosphere), yet critics question whether or not the brain's EEA is truly knowable, which he argues constitutes selective scepticism. Hagen also argues that most evolutionary psychology research is based on the fact that females can get pregnant and males cannot, which he observes was also true in the EEA. John Alcock describes this as the "No Time Machine Argument", as critics argue that since it is not possible to travel back in time to the EEA, it cannot be determined what was going on there and thus what was adaptive. Alcock argues that present-day evidence allows researchers to be reasonably confident about the conditions of the EEA and that the fact that so many human behaviours are adaptive in the current environment is evidence that the ancestral environment of humans had much in common with the present one, as these behaviours would have evolved in the ancestral environment. Thus Alcock concludes that researchers can make predictions about the adaptive value of traits.
Similarly, Dominic Murphy argues that alternative explanations cannot just be put forward but instead need their own evidence and predictions – if one explanation makes predictions that the others cannot, it is reasonable to have confidence in that explanation. In addition, Murphy argues that other historical sciences also make predictions about modern phenomena to come up with explanations about past phenomena: for example, cosmologists look for the evidence we would expect to see in the modern day if the Big Bang theory were true, while geologists make predictions about modern phenomena to determine whether an asteroid wiped out the dinosaurs. Murphy argues that if other historical disciplines can conduct tests without a time machine, then the onus is on the critics to show why evolutionary psychology is untestable if other historical disciplines are not, as "methods should be judged across the board, not singled out for ridicule in one context." Modularity of mind Evolutionary psychologists generally presume that, like the body, the mind is made up of many evolved modular adaptations, although there is some disagreement within the discipline regarding the degree of general plasticity, or "generality," of some modules. It has been suggested that modularity evolves because, compared to non-modular networks, it would have conferred an advantage in terms of fitness and because connection costs are lower. In contrast, some academics argue that it is unnecessary to posit the existence of highly domain-specific modules, and suggest that the neural anatomy of the brain supports a model based on more domain-general faculties and processes. Moreover, empirical support for the domain-specific theory stems almost entirely from performance on variations of the Wason selection task, which is extremely limited in scope, as it tests only one subtype of deductive reasoning. Cultural rather than genetic development of cognitive tools Psychologist Cecilia Heyes has argued that the picture presented by some evolutionary psychologists of the human mind as a collection of cognitive instincts (organs of thought shaped by genetic evolution over very long time periods) does not fit research results. She posits instead that humans have "cognitive gadgets": special-purpose organs of thought built in the course of development through social interaction. Similar criticisms are articulated by Subrena E. Smith of the University of New Hampshire. Response by evolutionary psychologists Evolutionary psychologists have addressed many of their critics (e.g. in books by Segerstråle (2000), Barkow (2005), and Alcock (2001)). Among their rebuttals are that some criticisms are straw men, are based on an incorrect nature-versus-nurture dichotomy, or are based on basic misunderstandings of the discipline. Robert Kurzban suggested that "...critics of the field, when they err, are not slightly missing the mark. Their confusion is deep and profound. It's not like they are marksmen who can't quite hit the center of the target; they're holding the gun backwards." Many have written specifically to correct basic misconceptions.
See also Affective neuroscience Behavioural genetics Biocultural evolution Biosocial criminology Collective unconscious Cognitive neuroscience Cultural neuroscience Darwinian Happiness Darwinian literary studies Deep social mind Dunbar's number Evolution of the brain List of evolutionary psychologists Evolutionary origin of religions Evolutionary psychology and culture Molecular evolution Primate cognition Hominid intelligence Human ethology Great ape language Chimpanzee intelligence Cooperative eye hypothesis Id, ego, and superego Intersubjectivity Mirror neuron Origin of language Origin of speech Ovulatory shift hypothesis Primate empathy Shadow (psychology) Simulation theory of empathy Theory of mind Neuroethology Paleolithic diet Paleolithic lifestyle r/K selection theory Social neuroscience Sociobiology Universal Darwinism Further reading Heylighen F. (2012). "Evolutionary Psychology", in: A. Michalos (ed.): Encyclopedia of Quality of Life Research (Springer, Berlin). Gerhard Medicus (2017). Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB Oikkonen, Venla: Gender, Sexuality and Reproduction in Evolutionary Narratives. London: Routledge, 2013. External links PsychTable.org Collaborative effort to catalog human psychological adaptations What Is Evolutionary Psychology? by Clinical Evolutionary Psychologist Dale Glaebach. Evolutionary Psychology – Approaches in Psychology Academic societies Human Behavior and Evolution Society; international society dedicated to using evolutionary theory to study human nature The International Society for Human Ethology; promotes ethological perspectives on the study of humans worldwide European Human Behaviour and Evolution Association an interdisciplinary society that supports the activities of European researchers with an interest in evolutionary accounts of human cognition, behavior and society The Association for Politics and the Life Sciences; an international and interdisciplinary association of scholars, scientists, and policymakers concerned with evolutionary, genetic, and ecological knowledge and its bearing on political behavior, public policy and ethics.
Society for Evolutionary Analysis in Law a scholarly association dedicated to fostering interdisciplinary exploration of issues at the intersection of law, biology, and evolutionary theory The New England Institute for Cognitive Science and Evolutionary Psychology aims to foster research and education into the interdisciplinary nexus of cognitive science and evolutionary studies The NorthEastern Evolutionary Psychology Society; regional society dedicated to encouraging scholarship and dialogue on the topic of evolutionary psychology Feminist Evolutionary Psychology Society researchers that investigate the active role that females have had in human evolution Journals Evolutionary Psychology – free access online scientific journal Evolution and Human Behavior – journal of the Human Behavior and Evolution Society Evolutionary Psychological Science – an international, interdisciplinary forum for original research papers that address evolved psychology. Spans social and life sciences, anthropology, philosophy, criminology, law and the humanities. Politics and the Life Sciences – an interdisciplinary peer-reviewed journal published by the Association for Politics and the Life Sciences Human Nature: An Interdisciplinary Biosocial Perspective – advances the interdisciplinary investigation of the biological, social, and environmental factors that underlie human behavior. It focuses primarily on the functional unity in which these factors are continuously and mutually interactive. These include the evolutionary, biological, and sociological processes as they interact with human social behavior. Biological Theory: Integrating Development, Evolution and Cognition – devoted to theoretical advances in the fields of biology and cognition, with an emphasis on the conceptual integration afforded by evolutionary and developmental approaches. Evolutionary Anthropology Behavioral and Brain Sciences – interdisciplinary articles in psychology, neuroscience, behavioral biology, cognitive science, artificial intelligence, linguistics and philosophy. About 30% of the articles have focused on evolutionary analyses of behavior. Evolution and Development – research relevant to the interface of evolutionary and developmental biology The Evolutionary Review – Art, Science, and Culture Videos Brief video clip from the "Evolution" PBS Series TED talk by Steven Pinker about his book The Blank Slate: The Modern Denial of Human Nature RSA talk by evolutionary psychologist Robert Kurzban on modularity of mind, based on his book Why Everyone (Else) is a Hypocrite Richard Dawkins' lecture on natural selection and evolutionary psychology Evolutionary Psychology – Steven Pinker & Frans de Waal Audio recording Stone Age Minds: A conversation with evolutionary psychologists Leda Cosmides and John Tooby Margaret Mead and Samoa. Review of the nature versus nurture debate triggered by Mead's book "Coming of Age in Samoa." "Evolutionary Psychology", In Our Time, BBC Radio 4 discussion with Janet Radcliffe Richards, Nicholas Humphrey and Steven Rose (November 2, 2000)
Evolutionary psychology
[ "Biology" ]
16,133
[ "Evolutionary biology" ]
9,707
https://en.wikipedia.org/wiki/Electronegativity
Electronegativity, symbolized as χ, is the tendency for an atom of a given chemical element to attract shared electrons (or electron density) when forming a chemical bond. An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, and the sign and magnitude of a bond's chemical polarity, which characterizes a bond along the continuous scale from covalent to ionic bonding. The loosely defined term electropositivity is the opposite of electronegativity: it characterizes an element's tendency to donate valence electrons. On the most basic level, electronegativity is determined by factors like the nuclear charge (the more protons an atom has, the more "pull" it will have on electrons) and the number and location of other electrons in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result, the less positive charge they will experience—both because of their increased distance from the nucleus and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus). The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811, though the concept was known before that and was studied by many chemists including Avogadro. In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements. The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale (χr), on a relative scale running from 0.79 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in Pauling units. As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Even so, the electronegativity of an atom is strongly correlated with the first ionization energy. The electronegativity is slightly negatively correlated (for smaller electronegativity values) and rather strongly positively correlated (for most and larger electronegativity values) with the electron affinity. It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations. Caesium is the least electronegative element (0.79); fluorine is the most (3.98). 
Methods of calculation Pauling electronegativity Pauling first proposed the concept of electronegativity in 1932 to explain why the covalent bond between two different atoms (A–B) is stronger than the average of the A–A and the B–B bonds. According to valence bond theory, of which Pauling was a notable proponent, this "additional stabilization" of the heteronuclear bond is due to the contribution of ionic canonical forms to the bonding. The difference in electronegativity between atoms A and B is given by:

\[ |\chi_{\rm A} - \chi_{\rm B}| = ({\rm eV})^{-1/2} \sqrt{E_{\rm d}({\rm AB}) - \tfrac{1}{2}\left[E_{\rm d}({\rm AA}) + E_{\rm d}({\rm BB})\right]} \]

where the dissociation energies, Ed, of the A–B, A–A and B–B bonds are expressed in electronvolts, the factor (eV)−1/2 being included to ensure a dimensionless result. Hence, the difference in Pauling electronegativity between hydrogen and bromine is 0.73 (dissociation energies: H–Br, 3.79 eV; H–H, 4.52 eV; Br–Br, 2.00 eV). As only differences in electronegativity are defined, it is necessary to choose an arbitrary reference point in order to construct a scale. Hydrogen was chosen as the reference, as it forms covalent bonds with a large variety of elements: its electronegativity was fixed first at 2.1, later revised to 2.20. It is also necessary to decide which of the two elements is the more electronegative (equivalent to choosing one of the two possible signs for the square root). This is usually done using "chemical intuition": in the above example, hydrogen bromide dissolves in water to form H+ and Br− ions, so it may be assumed that bromine is more electronegative than hydrogen. However, in principle, since the same electronegativities should be obtained for any two bonding compounds, the data are in fact overdetermined, and the signs are unique once a reference point has been fixed (usually, for H or F). To calculate Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. A. L. Allred updated Pauling's original values in 1961 to take account of the greater availability of thermodynamic data, and it is these "revised Pauling" values of the electronegativity that are most often used. The essential point of Pauling electronegativity is that there is an underlying, quite accurate, semi-empirical formula for dissociation energies, namely:

\[ E_{\rm d}({\rm AB}) = \tfrac{1}{2}\left[E_{\rm d}({\rm AA}) + E_{\rm d}({\rm BB})\right] + (\chi_{\rm A} - \chi_{\rm B})^2\,{\rm eV} \]

or sometimes, a more accurate fit

\[ E_{\rm d}({\rm AB}) = \sqrt{E_{\rm d}({\rm AA})\,E_{\rm d}({\rm BB})} + 1.3\,(\chi_{\rm A} - \chi_{\rm B})^2\,{\rm eV} \]

These are approximate equations but they hold with good accuracy. Pauling obtained the first equation by noting that a bond can be approximately represented as a quantum mechanical superposition of a covalent bond and two ionic bond-states. The covalent energy of a bond is, by quantum mechanical calculations, approximately the geometric mean of the two energies of covalent bonds of the same molecules, and there is additional energy that comes from ionic factors, i.e. the polar character of the bond. The geometric mean is approximately equal to the arithmetic mean—which is applied in the first formula above—when the energies are of a similar value; for the highly electropositive elements, where the two dissociation energies differ more, the geometric mean is more accurate and almost always gives positive excess energy, due to ionic bonding. The square root of this excess energy, Pauling notes, is approximately additive, and hence one can introduce the electronegativity. Thus, it is these semi-empirical formulas for bond energy that underlie the concept of Pauling electronegativity. 
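As a numerical check, the hydrogen–bromine example can be reproduced directly. A minimal Python sketch (the function name pauling_difference is ours, not standard nomenclature), using the arithmetic-mean form of the bond-energy formula:

```python
import math

def pauling_difference(e_ab, e_aa, e_bb):
    """|chi_A - chi_B| from bond dissociation energies in electronvolts,
    using the arithmetic-mean ("extra ionic stabilization") form."""
    excess = e_ab - (e_aa + e_bb) / 2   # ionic stabilization of A-B, in eV
    return math.sqrt(excess)            # the (eV)^(-1/2) factor makes this dimensionless

# H-Br example from the text: Ed(H-Br) = 3.79 eV, Ed(H-H) = 4.52 eV, Ed(Br-Br) = 2.00 eV
print(round(pauling_difference(3.79, 4.52, 2.00), 2))   # 0.73
```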
These formulas are approximate, but this rough approximation is in fact relatively good and gives the right intuition, with the notion of the polarity of the bond and some theoretical grounding in quantum mechanics. The electronegativities are then determined to best fit the data. In more complex compounds, there is an additional error since electronegativity depends on the molecular environment of an atom. Also, the energy estimate can only be used for single, not for multiple bonds. The enthalpy of formation of a molecule containing only single bonds can subsequently be estimated based on an electronegativity table, and it depends on the constituents and the sum of squares of differences of electronegativities of all pairs of bonded atoms. Such a formula for estimating energy typically has a relative error on the order of 10% but can be used to get a rough qualitative idea and understanding of a molecule. Mulliken electronegativity Robert S. Mulliken proposed that the arithmetic mean of the first ionization energy (Ei) and the electron affinity (Eea) should be a measure of the tendency of an atom to attract electrons:

\[ \chi = \frac{E_{\rm i} + E_{\rm ea}}{2} \]

As this definition is not dependent on an arbitrary relative scale, it has also been termed absolute electronegativity, with the units of kilojoules per mole or electronvolts. However, it is more usual to use a linear transformation to transform these absolute values into values that resemble the more familiar Pauling values. For ionization energies and electron affinities in electronvolts,

\[ \chi = 0.187\,(E_{\rm i} + E_{\rm ea}) + 0.17 \]

and for energies in kilojoules per mole,

\[ \chi = (1.97\times 10^{-3})\,(E_{\rm i} + E_{\rm ea}) + 0.19 \]

The Mulliken electronegativity can only be calculated for an element whose electron affinity is known. Measured values are available for 72 elements, while approximate values have been estimated or calculated for the remaining elements. The Mulliken electronegativity of an atom is sometimes said to be the negative of the chemical potential. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is possible to show that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons, i.e.,

\[ \mu_{\rm Mulliken} = -\chi_{\rm Mulliken} = -\frac{E_{\rm i} + E_{\rm ea}}{2} \]

Allred–Rochow electronegativity A. Louis Allred and Eugene G. Rochow considered that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom: The higher the charge per unit area of atomic surface the greater the tendency of that atom to attract electrons. The effective nuclear charge, Zeff, experienced by valence electrons can be estimated using Slater's rules, while the surface area of an atom in a molecule can be taken to be proportional to the square of the covalent radius, rcov. When rcov is expressed in picometres,

\[ \chi = 3590\,\frac{Z_{\rm eff}}{r_{\rm cov}^2} + 0.744 \]

Sanderson electronegativity equalization R.T. Sanderson has also noted the relationship between Mulliken electronegativity and atomic size, and has proposed a method of calculation based on the reciprocal of the atomic volume. With a knowledge of bond lengths, Sanderson's model allows the estimation of bond energies in a wide range of compounds. Sanderson's model has also been used to calculate molecular geometry, s-electron energy, NMR spin-spin coupling constants and other parameters for organic compounds. This work underlies the concept of electronegativity equalization, which suggests that electrons distribute themselves around a molecule to minimize or to equalize the Mulliken electronegativity. 
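The Mulliken definition and its conversion to Pauling-like units are equally easy to evaluate. A minimal Python sketch, using the electronvolt linear fit quoted above; the fluorine test values are approximate literature figures, used here only for illustration:

```python
def mulliken_absolute(ei, eea):
    """Absolute Mulliken electronegativity: the arithmetic mean of the
    first ionization energy and the electron affinity (both in eV)."""
    return (ei + eea) / 2

def mulliken_in_pauling_units(ei, eea):
    """Linear transformation onto a Pauling-like scale (energies in eV)."""
    return 0.187 * (ei + eea) + 0.17

# Fluorine: Ei ~ 17.42 eV, Eea ~ 3.40 eV (approximate literature values)
print(round(mulliken_absolute(17.42, 3.40), 2))          # 10.41 eV
print(round(mulliken_in_pauling_units(17.42, 3.40), 2))  # 4.06, near the Pauling-scale 3.98
```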
This equalization behavior is analogous to the equalization of chemical potential in macroscopic thermodynamics. Allen electronegativity Perhaps the simplest definition of electronegativity is that of Leland C. Allen, who has proposed that it is related to the average energy of the valence electrons in a free atom,

\[ \chi = \frac{n_s \varepsilon_s + n_p \varepsilon_p}{n_s + n_p} \]

where εs,p are the one-electron energies of s- and p-electrons in the free atom and ns,p are the number of s- and p-electrons in the valence shell. The one-electron energies can be determined directly from spectroscopic data, and so electronegativities calculated by this method are sometimes referred to as spectroscopic electronegativities. The necessary data are available for almost all elements, and this method allows the estimation of electronegativities for elements that cannot be treated by the other methods, e.g. francium, which has an Allen electronegativity of 0.67. However, it is not clear what should be considered to be valence electrons for the d- and f-block elements, which leads to an ambiguity for their electronegativities calculated by the Allen method. On this scale, neon has the highest electronegativity of all elements, followed by fluorine, helium, and oxygen. Correlation of electronegativity with other properties The wide variety of methods of calculation of electronegativities, which all give results that correlate well with one another, is one indication of the number of chemical properties that might be affected by electronegativity. The most obvious application of electronegativities is in the discussion of bond polarity, for which the concept was introduced by Pauling. In general, the greater the difference in electronegativity between two atoms the more polar the bond that will be formed between them, with the atom having the higher electronegativity being at the negative end of the dipole. Pauling proposed an equation to relate the "ionic character" of a bond to the difference in electronegativity of the two atoms, although this has fallen somewhat into disuse. Several correlations have been shown between infrared stretching frequencies of certain bonds and the electronegativities of the atoms involved: however, this is not surprising as such stretching frequencies depend in part on bond strength, which enters into the calculation of Pauling electronegativities. More convincing are the correlations between electronegativity and chemical shifts in NMR spectroscopy or isomer shifts in Mössbauer spectroscopy (see figure). Both these measurements depend on the s-electron density at the nucleus, and so are a good indication that the different measures of electronegativity really are describing "the ability of an atom in a molecule to attract electrons to itself". Trends in electronegativity Periodic trends In general, electronegativity increases on passing from left to right along a period and decreases on descending a group. Hence, fluorine is the most electronegative of the elements (not counting noble gases), whereas caesium is the least electronegative, at least of those elements for which substantial data is available. There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon, respectively, because of the d-block contraction. 
Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity (see Allred-Rochow electronegativity and Sanderson electronegativity above). The anomalously high electronegativity of lead, in particular when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state with a Pauling value of 1.87 instead of the +4 state. Variation of electronegativity with oxidation number In inorganic chemistry, it is common to consider a single value of electronegativity to be valid for most "normal" situations. While this approach has the advantage of simplicity, it is clear that the electronegativity of an element is not an invariable atomic property and, in particular, increases with the oxidation state of the element. Allred used the Pauling method to calculate separate electronegativities for different oxidation states of the handful of elements (including tin and lead) for which sufficient data were available. However, for most elements, there are not enough different covalent compounds for which bond dissociation energies are known to make this approach feasible. The chemical effects of this increase in electronegativity can be seen both in the structures of oxides and halides and in the acidity of oxides and oxoacids. Hence CrO3 and Mn2O7 are acidic oxides with low melting points, while Cr2O3 is amphoteric and Mn2O3 is a completely basic oxide. The effect can also be clearly seen in the dissociation constants pKa of the oxoacids of chlorine. The effect is much larger than could be explained by the negative charge being shared among a larger number of oxygen atoms, which would lead to a difference in pKa of log10(1/4) ≈ –0.6 between hypochlorous acid and perchloric acid. As the oxidation state of the central chlorine atom increases, more electron density is drawn from the oxygen atoms onto the chlorine, diminishing the partial negative charge of individual oxygen atoms. At the same time, the positive partial charge on the hydrogen increases with a higher oxidation state. This explains the observed increased acidity with an increasing oxidation state in the oxoacids of chlorine. Electronegativity and hybridization scheme The electronegativity of an atom changes depending on the hybridization of the orbital employed in bonding. Electrons in s orbitals are held more tightly than electrons in p orbitals. Hence, a bond to an atom that employs an spx hybrid orbital for bonding will be more heavily polarized to that atom when the hybrid orbital has more s character. That is, when electronegativities are compared for different hybridization schemes of a given element, the order χ(sp3) < χ(sp2) < χ(sp) holds (the trend should apply to non-integer hybridization indices as well). Group electronegativity In organic chemistry, electronegativity is associated more with different functional groups than with individual atoms. The terms group electronegativity and substituent electronegativity are used synonymously. However, it is common to distinguish between the inductive effect and the resonance effect, which might be described as σ- and π-electronegativities, respectively. 
There are a number of linear free-energy relationships that have been used to quantify these effects, of which the Hammett equation is the best known. Kabachnik parameters are group electronegativities for use in organophosphorus chemistry. Electropositivity Electropositivity is a measure of an element's ability to donate electrons, and therefore form positive ions; thus, it is the antipode of electronegativity. Mainly, this is an attribute of metals, meaning that, in general, the greater the metallic character of an element, the greater the electropositivity. Therefore, the alkali metals are the most electropositive of all. This is because they have a single electron in their outer shell and, as this is relatively far from the nucleus of the atom, it is easily lost; in other words, these metals have low ionization energies. While electronegativity increases along periods in the periodic table, and decreases down groups, electropositivity decreases along periods (from left to right) and increases down groups. This means that elements in the upper right of the periodic table of elements (oxygen, sulfur, chlorine, etc.) will have the greatest electronegativity, and those in the lower-left (rubidium, caesium, and francium) the greatest electropositivity. See also Chemical polarity Electron affinity Electronegativities of the elements (data page) Ionization energy Metallic bonding Miedema's model Orbital hybridization Oxidation state Periodic table References Bibliography External links WebElements, lists values of electronegativities by a number of different methods of calculation Video explaining electronegativity Electronegativity Chart, a summary listing of the electronegativity of each element along with an interactive periodic table Chemical properties Chemical bonding Dimensionless numbers of chemistry
Electronegativity
[ "Physics", "Chemistry", "Materials_science" ]
3,977
[ "Dimensionless numbers of chemistry", "Chemical bonding", "Condensed matter physics", "nan" ]
9,710
https://en.wikipedia.org/wiki/Elementary%20algebra
Elementary algebra, also known as high school algebra or college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values). This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations. Algebraic operations Algebraic notation Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression 3x² − 2xy + c has the following components: A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables. They are usually printed in italics. Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y may be written 2xy. Usually terms with the highest power (exponent) are written on the left, for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise when the exponent (power) is one (e.g. 3x¹ is written 3x). When the exponent is zero, the result is always 1 (e.g. x⁰ is always rewritten to 1). However 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents. Alternative notation Other types of notation are used in algebraic expressions when the required formatting is not available, or can not be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., x², in plain text, and in the TeX mark-up language, the caret symbol represents exponentiation, so x² is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, 3x is written "3*x". 
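For instance, the caret and double-asterisk conventions just described can be compared directly. In Python the double asterisk is the exponentiation operator and multiplication must be written explicitly:

```python
x = 5
print(x**2)  # 25: "x**2" is the plain-text form of x squared
print(3*x)   # 15: the multiplication operator cannot be omitted as in "3x"
```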
Concepts Variables Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons. Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20. Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m, where m is the number of minutes. Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by π = c/d. Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as (a + b) = (b + a). Simplifying expressions Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example, Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is a numerical coefficient). Multiplied terms are simplified using exponents. For example, x × x × x is represented as x³. Like terms are added together, for example, 2x² + 3ab − x² + ab is written as x² + 4ab, because the terms containing x² are added together, and the terms containing ab are added together. Brackets can be "multiplied out", using the distributive property. For example, x(2x + 3) can be written as (x × 2x) + (x × 3), which can be written as 2x² + 3x. Expressions can be factored. For example, 6x⁵ + 3x², by dividing both terms by the common factor 3x², can be written as 3x²(2x³ + 1). Equations An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' law relating the length of the sides of a right angle triangle: c² = a² + b². This equation states that c², representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b. An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x² − 1 = 8 is true only for x = 3 and x = −3. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving. Another type of equation is inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: a > b where > represents 'greater than', and a < b where < represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped. 
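The difference between an identity and a conditional equation noted above can be checked numerically. A small Python sketch (the test ranges are arbitrary choices made here):

```python
# a + b == b + a is an identity: it holds for every pair tested
print(all(a + b == b + a for a in range(-20, 21) for b in range(-20, 21)))  # True

# x**2 - 1 == 8 is conditional: it holds only for particular values of x
print([x for x in range(-10, 11) if x**2 - 1 == 8])  # [-3, 3]
```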
Properties of equality By definition, equality is an equivalence relation, meaning it is reflexive (i.e. b = b), symmetric (i.e. if a = b then b = a), and transitive (i.e. if a = b and b = c then a = c). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties: if a = b and c = d then a + c = b + d and ac = bd; if a = b then a + c = b + c and ac = bc; more generally, for any function f, if a = b then f(a) = f(b). Properties of inequality The relations less than and greater than have the property of transitivity: If a < b and b < c, then a < c; If a < b and c < d, then a + c < b + d; If a < b and c > 0, then ac < bc; If a < b and c < 0, then bc < ac. By reversing the inequation, < and > can be swapped, for example: a < b is equivalent to b > a. Substitution Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a × 5 makes a new expression 3 × 5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if a² := a × a is meant as the definition of a² as the product of a with itself, substituting 3 for a informs the reader of this statement that 3² means 3 × 3 = 9. Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if x + 1 = 0 then x cannot be 1. If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Consider abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0. If the original fact were stated as "ab = 0 implies a = 0 or b = 0", then when saying "consider abc = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, one substitutes a for a and bc for b (and with bc = 0, substituting b for a and c for b). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it's clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0". Solving algebraic equations The following sections lay out examples of some of the types of algebraic equations that may be encountered. Linear equations with one variable Linear equations are so-called, because when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider: Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child? Equivalent equation: 2x + 4 = 12, where x represents the child's age. To solve this kind of equation, the technique is add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows: 2x + 4 = 12; subtracting 4 from both sides gives 2x = 8; dividing both sides by 2 gives x = 4. In words: the child is 4 years old. The general form of a linear equation with one variable can be written as ax + b = c. 
Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = (c − b)/a. Linear equations with two variables A linear equation with two variables has many (i.e. an infinite number of) solutions. For example: Problem in words: A father is 22 years older than his son. How old are they? Equivalent equation: y = x + 22, where y is the father's age, x is the son's age. That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, that can be solved as described above. To solve a linear equation with two variables (unknowns), requires two related equations. For example, if it was also revealed that: Problem in words In 10 years, the father will be twice as old as his son. Equivalent equation y + 10 = 2 × (x + 10). Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method): subtracting y = x + 22 from y + 10 = 2x + 20 gives 10 = x − 2, so x = 12 and y = 34. In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations. For other ways to solve this kind of equations, see below, System of linear equations. Quadratic equations A quadratic equation is one which includes a term with an exponent of 2, for example, x², and no term with higher exponent. The name derives from the Latin quadrus, meaning square. In general, a quadratic equation can be expressed in the form ax² + bx + c = 0, where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this a quadratic equation must contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form x² + px + q = 0, where p = b/a and q = c/a. Solving this, by a process known as completing the square, leads to the quadratic formula x = (−b ± √(b² − 4ac)) / (2a), where the symbol "±" indicates that both x = (−b + √(b² − 4ac)) / (2a) and x = (−b − √(b² − 4ac)) / (2a) are solutions of the quadratic equation. Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring: x² + 3x − 10 = 0, which is the same thing as (x + 5)(x − 2) = 0. It follows from the zero-product property that either x = −5 or x = 2 are the solutions, since precisely one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example, x² = −1 has no real number solution since no real number squared equals −1. Sometimes a quadratic equation has a root of multiplicity 2, such as: x² + 2x + 1 = 0. For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as (x + 1)(x + 1) = 0. Complex numbers All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation x² + x + 1 = 0 has solutions x = (−1 + √(−3)) / 2 and x = (−1 − √(−3)) / 2. Since √(−3) is not any real number, both of these solutions for x are complex numbers. 
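The quadratic formula translates directly into code. A short Python sketch (the function name is ours) covering both the real factoring example and the complex case above; cmath.sqrt remains valid when the discriminant is negative:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both solutions of a*x**2 + b*x + c = 0 by the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles b^2 - 4ac < 0
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, 3, -10))  # ((2+0j), (-5+0j)): the factoring example
print(quadratic_roots(1, 1, 1))    # the complex pair (-1 ± sqrt(-3))/2
```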
Exponential and logarithmic equations An exponential equation is one which has the form aˣ = b for a > 0, which has solution x = logₐ b = (ln b)/(ln a) when b > 0. Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if 3 · 2ˣ + 1 = 10 then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3, we obtain 2ˣ = 3, whence x = log₂ 3 or x = (ln 3)/(ln 2). A logarithmic equation is an equation of the form logₐ(x) = b for a > 0, which has solution x = aᵇ. For example, if 4 log₅(x − 3) − 2 = 6 then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get log₅(x − 3) = 2, whence x − 3 = 5² = 25, from which we obtain x = 28. Radical equations A radical equation is one that includes a radical sign, which includes square roots, √, cube roots, ∛, and nth roots, ⁿ√. Recall that an nth root can be rewritten in exponential format, so that ⁿ√x is equivalent to x^(1/n). Combined with regular exponents (powers), then √(x³) (the square root of x cubed), can be rewritten as x^(3/2). So a common form of a radical equation is ⁿ√(xᵐ) = a (equivalent to x^(m/n) = a) where m and n are integers. It has real solution(s) x = a^(n/m) or x = ±a^(n/m), or no real solution, depending on whether m and n are odd or even and on the sign of a. For example, if: (x + 5)^(2/3) = 4 then x + 5 = ±(√4)³ = ±8 and thus x = 3 or x = −13. System of linear equations There are different methods to solve a system of linear equations with two variables. Elimination method An example of solving a system of linear equations is by using the elimination method: 4x + 2y = 14, 2x − y = 1. Multiplying the terms in the second equation by 2: 4x − 2y = 2. Adding the two equations together to get: 8x = 16, which simplifies to x = 2. Since the fact that x = 2 is known, it is then possible to deduce that y = 3 by either of the original two equations (by using 2 instead of x). The full solution to this problem is then x = 2, y = 3. This is not the only way to solve this specific system; y could have been resolved before x. Substitution method Another way of solving the same system of linear equations is by substitution. An equivalent for y can be deduced by using one of the two equations. Using the second equation: 2x − y = 1. Subtracting 2x from each side of the equation: −y = 1 − 2x, and multiplying by −1: y = 2x − 1. Using this value in the first equation in the original system: 4x + 2(2x − 1) = 14, which gives 8x − 2 = 14. Adding 2 on each side of the equation: 8x = 16, which simplifies to x = 2. Using this value in one of the equations, the same solution as in the previous method is obtained: x = 2, y = 3. This is not the only way to solve this specific system; in this case as well, y could have been solved before x. Other types of systems of linear equations Inconsistent systems In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is x + y = 1, 0x + 0y = 2. As 0 ≠ 2, the second equation in the system has no solution. Therefore, the system has no solution. However, not all inconsistent systems are recognized at first sight. As an example, consider the system 4x + 2y = 12, −2x − y = −4. Multiplying by 2 both sides of the second equation, and adding it to the first one results in 0x + 0y = 4, which clearly has no solution. Undetermined systems There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning, a unique pair of values for x and y). For example: 4x + 2y = 12, −2x − y = −6. Isolating y in the second equation: y = −2x + 6. And using this value in the first equation in the system: 4x + 2(−2x + 6) = 12, which reduces to 12 = 12. The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y = −2x + 6. There is an infinite number of solutions for this system. Over- and underdetermined systems Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is x + 2y = 10, y − z = 2. When trying to solve it, one is led to express some variables as functions of the other ones if any solutions exist, but cannot express all solutions numerically because there are an infinite number of them if there are any. 
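The elimination method for two equations in two unknowns can be packaged as a short function. In this Python sketch (ours, not part of the original text), the determinant test is what separates a unique solution from the inconsistent and undetermined cases just described:

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        # either inconsistent (no solution) or undetermined (infinitely many)
        raise ValueError("no unique solution: degenerate system")
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# The worked example: 4x + 2y = 14 and 2x - y = 1
print(solve_2x2(4, 2, 14, 2, -1, 1))  # (Fraction(2, 1), Fraction(3, 1))
```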
A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others. See also History of algebra Binary operation Gaussian elimination Mathematics education Number line Polynomial Cancelling out Tarski's high school algebra problem References Leonhard Euler, Elements of Algebra, 1770. English translation Tarquin Press, 2007; also online digitized editions 2006, 1822. Charles Smith, A Treatise on Algebra, in Cornell University Library Historical Math Monographs. Redden, John. Elementary Algebra. Flat World Knowledge, 2011 External links Algebra
Elementary algebra
[ "Mathematics" ]
3,758
[ "Elementary mathematics", "Algebra", "Elementary algebra" ]
9,723
https://en.wikipedia.org/wiki/Edward%20Waring
Edward Waring (c. 1736 – 15 August 1798) was a British mathematician. He entered Magdalene College, Cambridge as a sizar and became Senior wrangler in 1757. He was elected a Fellow of Magdalene and in 1760 Lucasian Professor of Mathematics, holding the chair until his death. He made the assertion known as Waring's problem without proof in his writings Meditationes Algebraicae. Waring was elected a Fellow of the Royal Society in 1763 and awarded the Copley Medal in 1784. Early years Waring was the eldest son of John and Elizabeth Waring, a prosperous farming couple. He received his early education in Shrewsbury School under a Mr Hotchkin and was admitted as a sizar at Magdalene College, Cambridge, on 24 March 1753, being also Millington exhibitioner. His extraordinary talent for mathematics was recognised from his early years in Cambridge. In 1757 he graduated BA as senior wrangler and on 24 April 1758 was elected to a fellowship at Magdalene. He belonged to the Hyson Club, whose members included William Paley. Career At the end of 1759 Waring published the first chapter of Miscellanea Analytica. On 28 January the next year he was appointed Lucasian professor of mathematics, one of the highest positions in Cambridge. William Samuel Powell, then tutor in St John's College, Cambridge opposed Waring's election and instead supported the candidacy of William Ludlam. In the polemic with Powell, Waring was backed by John Wilson. In fact Waring was very young and did not hold the MA, necessary for qualifying for the Lucasian chair, but this was granted him in 1760 by royal mandate. In 1762 he published the full Miscellanea Analytica, mainly devoted to the theory of numbers and algebraic equations. In 1763 he was elected to the Royal Society. He was awarded its Copley Medal in 1784 but withdrew from the society in 1795, after he had reached sixty, 'on account of [his] age'. Waring was also a member of the academies of sciences of Göttingen and Bologna. In 1767 he took an MD degree, but his activity in medicine was quite limited. He carried out dissections with Richard Watson, professor of chemistry and later bishop of Llandaff. From about 1770 he was physician at Addenbrooke's Hospital at Cambridge, and he also practised at St Ives, Huntingdonshire, where he lived for some years after 1767. His career as a physician was not very successful since he was seriously short-sighted and a very shy man. Personal life Waring had a younger brother, Humphrey, who obtained a fellowship at Magdalene in 1775. In 1776 Waring married Mary Oswell, sister of a draper in Shrewsbury; they moved to Shrewsbury and then retired to Plealey, 8 miles out of the town, where Waring owned an estate of 215 acres in 1797. Work Waring wrote a number of papers in the Philosophical Transactions of the Royal Society, dealing with the resolution of algebraic equations, number theory, series, approximation of roots, interpolation, the geometry of conic sections, and dynamics. The Meditationes Algebraicae (1770), where many of the results published in Miscellanea Analytica were reworked and expanded, was described by Joseph-Louis Lagrange as 'a work full of excellent researches'. In this work Waring published many theorems concerning the solution of algebraic equations which attracted the attention of continental mathematicians, but his best results are in number theory. 
Included in this work was the so-called Goldbach conjecture (every even integer is the sum of two primes), and also the following conjecture: every odd integer is a prime or the sum of three primes. Lagrange had proved that every positive integer is the sum of not more than four squares; Waring suggested that every positive integer is either a cube or the sum of not more than nine cubes. He also advanced the hypothesis that every positive integer is either a biquadrate (fourth power) or the sum of not more than nineteen biquadrates. These hypotheses form what is known as Waring's problem. He also published a theorem, due to his friend John Wilson, concerning prime numbers; it was later proven rigorously by Lagrange. In Proprietates Algebraicarum Curvarum (1772) Waring reissued in a much revised form the first four chapters of the second part of Miscellanea Analytica. He devoted himself to the classification of higher plane curves, improving results obtained by Isaac Newton, James Stirling, Leonhard Euler, and Gabriel Cramer. In 1794 he published a few copies of a philosophical work entitled An Essay on the Principles of Human Knowledge, which were circulated among his friends. Waring's mathematical style is highly analytical. In fact he criticised those British mathematicians who adhered too strictly to geometry. It is indicative that he was one of the subscribers of John Landen's Residual Analysis (1764), one of the works in which the tradition of the Newtonian fluxional calculus was more severely criticised. In the preface of Meditationes Analyticae Waring showed a good knowledge of continental mathematicians such as Alexis Clairaut, Jean le Rond d'Alembert, and Euler. He lamented the fact that in Great Britain mathematics was cultivated with less interest than on the continent, and clearly desired to be considered as highly as the great names in continental mathematics—there is no doubt that he was reading their work at a level never reached by any other eighteenth-century British mathematician. Most notably, at the end of chapter three of Meditationes Analyticae Waring presents some partial fluxional equations (partial differential equations in Leibnizian terminology); such equations are a mathematical instrument of great importance in the study of continuous bodies which was almost completely neglected in Britain before Waring's researches. One of the most interesting results in Meditationes Analyticae is a test for the convergence of series generally attributed to d'Alembert (the 'ratio test'). The theory of convergence of series (the object of which is to establish when the summation of an infinite number of terms can be said to have a finite 'sum') was not much advanced in the eighteenth century. Waring's work was known both in Britain and on the continent, but it is difficult to evaluate his impact on the development of mathematics. His work on algebraic equations contained in Miscellanea Analytica was translated into Italian by Vincenzo Riccati in 1770. Waring's style is not systematic and his exposition is often obscure. It seems that he never lectured and did not habitually correspond with other mathematicians. After Jérôme Lalande in 1796 observed, in Notice sur la vie de Condorcet, that in 1764 there was not a single first-rate analyst in England, Waring's reply, published after his death as 'Original letter of Dr Waring' in the Monthly Magazine, stated that he had given 'somewhere between three and four hundred new propositions of one kind or another'. 
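The cube conjecture above lends itself to a quick numerical probe. A small dynamic-programming sketch in Python (the function name and the bound are ours) computes the fewest k-th powers of positive integers summing to each integer up to a limit; it confirms that no integer up to 1000 needs more than nine cubes, with 23 and 239 needing exactly nine:

```python
def fewest_kth_powers(limit, k):
    """fewest[n] = minimum number of k-th powers of positive integers summing to n."""
    fewest = [0] + [float("inf")] * limit
    powers = [i**k for i in range(1, limit + 1) if i**k <= limit]  # ascending
    for n in range(1, limit + 1):
        for p in powers:
            if p > n:
                break
            fewest[n] = min(fewest[n], fewest[n - p] + 1)
    return fewest

cubes = fewest_kth_powers(1000, 3)
print(max(cubes[1:]))                                # 9
print([n for n in range(1, 1001) if cubes[n] == 9])  # [23, 239]
```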
Death During his last years he sank into a deep religious melancholy, and a violent cold caused his death, in Plealey, on 15 August 1798. He was buried in the churchyard at Fitz, Shropshire. See also Lagrange polynomial References External links 1730s births 1798 deaths 18th-century English mathematicians Alumni of Magdalene College, Cambridge Fellows of Magdalene College, Cambridge Fellows of the Royal Society Lucasian Professors of Mathematics Number theorists Scientists from Shrewsbury Recipients of the Copley Medal Senior Wranglers Date of birth unknown
Edward Waring
[ "Mathematics" ]
1,537
[ "Number theorists", "Number theory" ]
9,730
https://en.wikipedia.org/wiki/Electron%20microscope
An electron microscope is a microscope that uses a beam of electrons as a source of illumination. They use electron optics that are analogous to the glass lenses of an optical light microscope to control the electron beam, for instance focusing them to produce magnified images or electron diffraction patterns. As the wavelength of an electron can be up to 100,000 times smaller than that of visible light, electron microscopes have a much higher resolution of about 0.1 nm, which compares to about 200 nm for light microscopes. Electron microscope may refer to: Transmission electron microscopy (TEM) where swift electrons go through a thin sample Scanning transmission electron microscopy (STEM) which is similar to TEM with a scanned electron probe Scanning electron microscope (SEM) which is similar to STEM, but with thick samples Electron microprobe similar to a SEM, but more for chemical analysis Low-energy electron microscopy (LEEM), used to image surfaces Photoemission electron microscopy (PEEM) which is similar to LEEM using electrons emitted from surfaces by photons Additional details can be found in the above links. This article contains some general information mainly about transmission electron microscopes. History Many developments laid the groundwork of the electron optics used in microscopes. One significant step was the work of Hertz in 1883 who made a cathode-ray tube with electrostatic and magnetic deflection, demonstrating manipulation of the direction of an electron beam. Others were focusing of the electrons by an axial magnetic field by Emil Wiechert in 1899, improved oxide-coated cathodes which produced more electrons by Arthur Wehnelt in 1905 and the development of the electromagnetic lens in 1926 by Hans Busch. According to Dennis Gabor, the physicist Leó Szilárd tried in 1928 to convince him to build an electron microscope, for which Szilárd had filed a patent. To this day the issue of who invented the transmission electron microscope is controversial. In 1928, at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), Adolf Matthias (Professor of High Voltage Technology and Electrical Installations) appointed Max Knoll to lead a team of researchers to advance research on electron beams and cathode-ray oscilloscopes. The team consisted of several PhD students including Ernst Ruska. In 1931, Max Knoll and Ernst Ruska successfully generated magnified images of mesh grids placed over an anode aperture. The device, a replica of which is shown in the figure, used two magnetic lenses to achieve higher magnifications, the first electron microscope. (Max Knoll died in 1969, so did not receive a share of the 1986 Nobel prize for the invention of electron microscopes.) Apparently independent of this effort was work at Siemens-Schuckert by Reinhold Rüdenberg. According to patent law (U.S. Patent No. 2058914 and 2070318, both filed in 1932), he is the inventor of the electron microscope, but it is not clear when he had a working instrument. He stated in a very brief article in 1932 that Siemens had been working on this for some years before the patents were filed in 1932, claiming that his effort was parallel to the university development. He died in 1961, so similar to Max Knoll, was not eligible for a share of the 1986 Nobel prize. In the following year, 1933, Ruska and Knoll built the first electron microscope that exceeded the resolution of an optical (light) microscope. 
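The resolution figures quoted at the start of this article follow from the electron's de Broglie wavelength. A short Python sketch (constants rounded from CODATA values; the function name is illustrative) evaluates the relativistically corrected wavelength for typical TEM accelerating voltages:

```python
import math

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron rest mass, kg
Q_E = 1.602e-19  # elementary charge, C
C = 2.998e8      # speed of light, m/s

def electron_wavelength_m(volts):
    """Relativistic de Broglie wavelength (metres) of an electron
    accelerated from rest through the given potential."""
    e_kin = Q_E * volts  # kinetic energy in joules
    return H / math.sqrt(2 * M_E * e_kin * (1 + e_kin / (2 * M_E * C**2)))

for kv in (100, 300):
    print(f"{kv} kV: {electron_wavelength_m(kv * 1e3) * 1e12:.2f} pm")
# about 3.70 pm at 100 kV and 1.97 pm at 300 kV, versus ~400-700 nm for visible light
```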
In 1937, Siemens financed the work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska, Ernst's brother, to develop applications for the microscope, especially with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. Siemens produced the first commercial electron microscope in 1938. The first North American electron microscopes were constructed in the 1930s, at the Washington State University by Anderson and Fitzsimmons and at the University of Toronto by Eli Franklin Burton and students Cecil Hall, James Hillier, and Albert Prebus. Siemens produced a transmission electron microscope (TEM) in 1939. Although current transmission electron microscopes are capable of two million times magnification, as scientific instruments they remain similar in design to the earliest instruments, but with improved optics. In the 1940s, high-resolution electron microscopes were developed, enabling greater magnification and resolution. By 1965, Albert Crewe at the University of Chicago introduced the scanning transmission electron microscope using a field emission source, enabling scanning microscopes at high resolution. By the early 1980s improvements in mechanical stability as well as the use of higher accelerating voltages enabled imaging of materials at the atomic scale. In the 1980s, the field emission gun became common for electron microscopes, improving the image quality due to the additional coherence and lower chromatic aberrations. The 2000s were marked by advancements in aberration-corrected electron microscopy, allowing for significant improvements in resolution and clarity of images. Types Transmission electron microscope (TEM) The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. An electron beam is produced by an electron gun, with the electrons typically having energies in the range 20 to 400 keV, focused by electromagnetic lenses, and transmitted through the specimen. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by lenses of the microscope. The spatial variation in this information (the "image") may be viewed by projecting the magnified electron image onto a detector. For example, the image may be viewed directly by an operator using a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. A high-resolution phosphor may also be coupled by means of a lens optical system or a fibre optic light-guide to the sensor of a digital camera. Direct electron detectors have no scintillator and are directly exposed to the electron beam, which addresses some of the limitations of scintillator-coupled cameras. The resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom (50 picometres), enabling magnifications above 50 million times. The ability of HRTEM to determine the positions of atoms within materials is useful for nano-technologies research and development. Scanning transmission electron microscope (STEM) The STEM rasters a focused incident probe across a specimen. The high resolution of the TEM is thus possible in STEM. 
The focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but afterward in the TEM. The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging, and other analytical techniques, but also means that image data is acquired in serial rather than in parallel fashion. Scanning electron microscope (SEM) The SEM produces images by probing the specimen with a focused electron beam that is scanned across the specimen (raster scanning). When the electron beam interacts with the specimen, it loses energy by a variety of mechanisms. These interactions lead to, among other events, emission of low-energy secondary electrons and high-energy backscattered electrons, light emission (cathodoluminescence) or X-ray emission, all of which provide signals carrying information about the properties of the specimen surface, such as its topography and composition. The image displayed by SEM represents the varying intensity of any of these signals into the image in a position corresponding to the position of the beam on the specimen when the signal was generated. SEMs are different from TEMs in that they use electrons with much lower energy, generally below 20 keV, while TEMs generally use electrons with energies in the range of 80-300 keV. Thus, the electron sources and optics of the two microscopes have different designs, and they are normally separate instruments. Main operating modes Diffraction contrast imaging Diffraction contrast uses the variation in either or both the direction of diffracted electrons or their amplitude as the contrast mechanism. Phase contrast imaging Phase contrast imaging involves generating contrast, for instance around edges, by defocusing the microscope. High resolution imaging Chemical analysis Electron diffraction Transmission electron microscopes can be used in electron diffraction mode where a map of the angles of the electrons leaving the sample is produced. The advantages of electron diffraction over X-ray crystallography are primarily in the size of the crystals. In X-ray crystallography, crystals are commonly visible by the naked eye and are generally in the hundreds of micrometers in length. In comparison, crystals for electron diffraction must be less than a few hundred nanometers in thickness, and have no lower boundary of size. Additionally, electron diffraction is done on a TEM, which can also be used to obtain many other types of information, rather than requiring a separate instrument. Sample preparation Samples for electron microscopes mostly cannot be observed directly. The samples need to be prepared to stabilize the sample and enhance contrast. Preparation techniques differ vastly in respect to the sample and its specific qualities to be observed as well as the specific microscope used. Scanning Electron Microscope (SEM) To prevent charging and enhance the signal in SEM, non-conductive samples (e.g. biological samples as in figure) can be sputter-coated in a thin film of metal. Transmission electron microscope Materials to be viewed in a transmission electron microscope (TEM) may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required: Chemical fixation – for biological specimens this aims to stabilize the specimen's mobile macromolecular structure by chemical crosslinking of proteins with aldehydes such as formaldehyde and glutaraldehyde, and lipids with osmium tetroxide. 
Cryofixation – freezing a specimen so that the water forms vitreous (non-crystalline) ice. This preserves the specimen in a snapshot of its native state. Methods to achieve this vitrification include plunge freezing rapidly in liquid ethane, and high pressure freezing. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS) and cryo-focused ion beam milling of lamellae, it is now possible to observe samples from virtually any biological specimen close to its native state. Dehydration – replacement of water with organic solvents such as ethanol or acetone, followed by critical point drying or infiltration with embedding resins. See also freeze drying. Embedding, biological specimens – after dehydration, tissue for observation in the transmission electron microscope is embedded so it can be sectioned ready for viewing. To do this the tissue is passed through a 'transition solvent' such as propylene oxide (epoxypropane) or acetone and then infiltrated with an epoxy resin such as Araldite, Epon, or Durcupan; tissues may also be embedded directly in water-miscible acrylic resin. After the resin has been polymerized (hardened) the sample is sectioned by ultramicrotomy and stained. Embedding, materials – after embedding in resin, the specimen is usually ground and polished to a mirror-like finish using ultra-fine abrasives. Freeze-fracture or freeze-etch – a preparation method particularly useful for examining lipid membranes and their incorporated proteins in "face on" view. The fresh tissue or cell suspension is frozen rapidly (cryofixation), then fractured by breaking (or by using a microtome) while maintained at liquid nitrogen temperature. The cold fractured surface (sometimes "etched" by increasing the temperature to about −100 °C for several minutes to let some ice sublime) is then shadowed with evaporated platinum or gold at an average angle of 45° in a high vacuum evaporator. A second coat of carbon, evaporated perpendicular to the average surface plane, is often applied to improve the stability of the replica coating. The specimen is returned to room temperature and pressure, then the extremely fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying biological material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The still-floating replica is thoroughly washed free from residual chemicals, carefully fished up on fine grids, dried then viewed in the TEM. Freeze-fracture replica immunogold labeling (FRIL) – the freeze-fracture method has been modified to allow the identification of the components of the fracture face by immunogold labeling. Instead of removing all the underlying tissue of the thawed replica as the final step before viewing in the microscope the tissue thickness is minimized during or after the fracture process. The thin layer of tissue remains bound to the metal replica so it can be immunogold labeled with antibodies to the structures of choice. The thin layer of the original specimen on the replica with gold attached allows the identification of structures in the fracture plane. There are also related methods which label the surface of etched cells and other replica labeling variations. Ion beam milling – thins samples until they are transparent to electrons by firing ions (typically argon) at the surface from an angle and sputtering material from the surface. 
A subclass of this is focused ion beam milling, where gallium ions are used to produce an electron transparent membrane or 'lamella' in a specific region of the sample, for example through a device within a microprocessor, typically in a focused ion beam SEM. Ion beam milling may also be used for cross-section polishing prior to analysis of materials that are difficult to prepare using mechanical polishing. Negative stain – suspensions containing nanoparticles or fine biological material (such as viruses and bacteria) are briefly mixed with a dilute solution of an electron-opaque compound such as ammonium molybdate, uranyl acetate (or formate), or phosphotungstic acid. This mixture is applied to an EM grid, pre-coated with a plastic film such as Formvar, blotted, then allowed to dry. Viewing of this preparation in the TEM should be carried out without delay for best results. The method is important in microbiology for fast but crude morphological identification, but can also be used as the basis for high-resolution 3D reconstruction using EM tomography methodology when carbon films are used for support. Sectioning – produces thin slices of the specimen, semitransparent to electrons. These can be cut using ultramicrotomy on an ultramicrotome with a glass or diamond knife to produce ultra-thin sections about 60–90 nm thick. Disposable glass knives are also used because they can be made in the lab and are much cheaper. Sections can also be created in situ by milling in a focused ion beam SEM, where the section is known as a lamella. Staining – uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give contrast between different structures, since many (especially biological) materials are nearly "transparent" to electrons (weak phase objects). In biology, specimens can be stained "en bloc" before embedding and also later after sectioning. Typically, thin sections are stained for several minutes with an aqueous or alcoholic solution of uranyl acetate followed by aqueous lead citrate. EM workflows In their most common configurations, electron microscopes produce images with a single brightness value per pixel, with the results usually rendered in greyscale. However, these images are often then colourized through the use of feature-detection software, or simply by hand-editing using a graphics editor. This may be done to clarify structure or for aesthetic effect and generally does not add new information about the specimen. Electron microscopes are now frequently used in more complex workflows, with each workflow typically using multiple technologies to enable more complex and/or more quantitative analyses of a sample. A few examples are outlined below, but this should not be considered an exhaustive list. The choice of workflow will be highly dependent on the application and the requirements of the corresponding scientific questions, such as resolution, volume, nature of the target molecule, etc. For example, images from light and electron microscopy of the same region of a sample can be overlaid to correlate the data from the two modalities. This is commonly used to provide higher-resolution contextual EM information about a fluorescently labelled structure. This correlative light and electron microscopy (CLEM) is one of a range of correlative workflows now available.
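To make the overlay step concrete, here is a minimal sketch of blending a registered fluorescence channel onto a greyscale EM image. This is an illustrative toy, not part of any CLEM software package; it assumes the two images are already registered to one another and normalized to the range 0–1:

    import numpy as np

    def overlay_clem(em, fluo, alpha=0.5):
        """Blend a fluorescence channel (shown in green) onto a greyscale
        EM image. Both inputs are 2D arrays of the same shape with values
        in [0, 1]; registration is assumed to have been done already."""
        rgb = np.stack([em, em, em], axis=-1)                    # grey EM base layer
        rgb[..., 1] = (1 - alpha) * rgb[..., 1] + alpha * fluo   # mix in green signal
        return np.clip(rgb, 0.0, 1.0)

    # Toy stand-ins for a registered image pair:
    em = np.random.rand(128, 128)
    fluo = np.zeros((128, 128))
    fluo[40:60, 40:60] = 1.0          # a hypothetical fluorescently labelled region
    combined = overlay_clem(em, fluo)
    print(combined.shape)             # (128, 128, 3)

Real correlative workflows perform registration first (using fiducial markers or feature matching); that step is deliberately omitted here.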
Another example is high resolution mass spectrometry (ion microscopy), which has been used to provide correlative information about subcellular antibiotic localisation, data that would be difficult to obtain by other means. The initial role of electron microscopes in imaging two-dimensional slices (TEM) or a specimen surface (SEM with secondary electrons) has also increasingly expanded into the depth of samples. An early example of these 'volume EM' workflows was simply to stack TEM images of serial sections cut through a sample. The next development was virtual reconstruction of a thick section (200–500 nm) volume by backprojection of a set of images taken at different tilt angles: TEM tomography. Serial imaging for volume EM To acquire volume EM datasets of larger depths than TEM tomography (micrometers or millimeters in the z axis), a series of images taken through the sample depth can be used. For example, ribbons of serial sections can be imaged in a TEM as described above, and when thicker sections are used, serial TEM tomography can be used to increase the z-resolution. More recently, backscattered electron (BSE) images can be acquired from a larger series of sections collected on silicon wafers, an approach known as SEM array tomography. An alternative approach is to use BSE SEM to image the block surface instead of the section, after each section has been removed. By this method, an ultramicrotome installed in an SEM chamber can increase automation of the workflow; the specimen block is loaded in the chamber and the system programmed to continuously cut and image through the sample. This is known as serial block face SEM. A related method uses focused ion beam milling instead of an ultramicrotome to remove sections. In these serial imaging methods, the output is essentially a sequence of images through a specimen block that can be digitally aligned in sequence and thus reconstructed into a volume EM dataset. The increased volume available in these methods has expanded the capability of electron microscopy to address new questions, such as mapping neural connectivity in the brain, and membrane contact sites between organelles. Disadvantages Electron microscopes are expensive to build and maintain. Microscopes designed to achieve high resolutions must be housed in stable buildings (sometimes underground) with special services such as magnetic field canceling systems. The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the electrons. An exception is liquid-phase electron microscopy using either a closed liquid cell or an environmental chamber, for example, in the environmental scanning electron microscope, which allows hydrated samples to be viewed in a low-pressure wet environment. Various techniques for in situ electron microscopy of gaseous samples have been developed. Scanning electron microscopes operating in conventional high-vacuum mode usually image conductive specimens; therefore non-conductive materials require conductive coating (gold/palladium alloy, carbon, osmium, etc.). The low-voltage mode of modern microscopes makes possible the observation of non-conductive specimens without coating. Non-conductive materials can also be imaged by a variable pressure (or environmental) scanning electron microscope. Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres, for example) require no special treatment before being examined in the electron microscope.
Samples of hydrated materials, including almost all biological specimens, have to be prepared in various ways to stabilize them, reduce their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These processes may result in artifacts, but these can usually be identified by comparing the results obtained by using radically different specimen preparation methods. Since the 1980s, analysis of cryofixed, vitrified specimens has also become increasingly used by scientists, further confirming the validity of this technique. See also List of materials analysis methods Electron diffraction Electron energy loss spectroscopy (EELS) Electron microscope images Energy filtered transmission electron microscopy (EFTEM) Environmental scanning electron microscope (ESEM) Immune electron microscopy In situ electron microscopy Low-energy electron microscopy Microscope image processing Microscopy Nanotechnology Scanning confocal electron microscopy Scanning electron microscope (SEM) Thin section Transmission Electron Aberration-Corrected Microscope References External links An Introduction to Microscopy: resources for teachers and students Cell Centered Database – Electron microscopy data Science Aid: Electron Microscopy Microscopes Accelerator physics Anatomical pathology Pathology German inventions Protein imaging 20th-century inventions
Electron microscope
[ "Physics", "Chemistry", "Technology", "Engineering", "Biology" ]
4,391
[ "Electron", "Biochemistry methods", "Electron microscopy", "Applied and interdisciplinary physics", "Pathology", "Measuring instruments", "Microscopes", "Experimental physics", "Microscopy", "Accelerator physics", "Protein imaging" ]
9,735
https://en.wikipedia.org/wiki/Electromagnetic%20field
An electromagnetic field (also EM field) is a physical field, described by mathematical functions of position and time, representing the influences on and due to electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field. Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an electromagnetic wave. The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion. The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, both experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics. History The empirical investigation of electromagnetism is at least as old as the ancient Greek philosopher, mathematician and scientist Thales of Miletus, who around 600 BCE described experiments in which rubbing animal fur on materials such as amber created static electricity. By the 18th century, it was understood that objects can carry positive or negative electric charge, that two objects carrying charge of the same sign repel each other, that two objects carrying charges of opposite sign attract one another, and that the strength of this force falls off as the square of the distance between them. Michael Faraday visualized this in terms of the charges interacting via the electric field. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field are produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole—the electromagnetic field. In 1820, Hans Christian Ørsted showed that an electric current can deflect a nearby compass needle, establishing that electricity and magnetism are closely related phenomena. Faraday then made the seminal observation that time-varying magnetic fields could induce electric currents in 1831. In 1861, James Clerk Maxwell synthesized all the work to date on electrical and magnetic phenomena into a single mathematical theory, from which he then deduced that light is an electromagnetic wave. Maxwell's continuous field theory was very successful until evidence supporting the atomic model of matter emerged. Beginning in 1877, Hendrik Lorentz developed an atomic model of electromagnetism and in 1897 J. J.
Thomson completed experiments that defined the electron. The Lorentz theory works for free charges in electromagnetic fields, but fails to predict the energy spectrum for bound charges in atoms and molecules. For that problem, quantum mechanics is needed, ultimately leading to the theory of quantum electrodynamics. Practical applications of the new understanding of electromagnetic fields emerged in the late 1800s. The electrical generator and motor were invented using only empirical findings such as Faraday's and Ampère's laws, combined with practical experience. Mathematical description There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as \mathbf{E}(x, y, z, t) (electric field) and \mathbf{B}(x, y, z, t) (magnetic field). If only the electric field (\mathbf{E}) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (\mathbf{B}) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations. With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws. The behavior of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:

    Gauss's law:                \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
    Gauss's law for magnetism:  \nabla \cdot \mathbf{B} = 0
    Faraday's law:              \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
    Ampère–Maxwell law:         \nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)

where \rho is the charge density, which is a function of time and position, \varepsilon_0 is the vacuum permittivity, \mu_0 is the vacuum permeability, and \mathbf{J} is the current density vector, also a function of time and position. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors. The Lorentz force law governs the interaction of the electromagnetic field with charged matter. When a field travels across different media, the behavior of the field changes according to the properties of the media. Properties of the field Electrostatics and magnetostatics The Maxwell equations simplify when the charge density at each point in space does not change over time and all electric currents likewise remain constant. All of the time derivatives vanish from the equations, leaving two expressions that involve the electric field,

    \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \quad \text{and} \quad \nabla \times \mathbf{E} = 0,

along with two formulae that involve the magnetic field:

    \nabla \cdot \mathbf{B} = 0 \quad \text{and} \quad \nabla \times \mathbf{B} = \mu_0 \mathbf{J}.

These expressions are the basic equations of electrostatics, which focuses on situations where electrical charges do not move, and magnetostatics, the corresponding area of magnetic phenomena. Transformations of electromagnetic fields Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise.
For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a magnetic field must be present. In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields. Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely a consequence of different frames of measurement. The fact that the two field variations can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved which is simply being observed differently. Reciprocal behavior of electric and magnetic fields The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator. Ampere's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor. Behavior of the fields in the absence of charges or currents Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. In a volume of space not containing charges or currents (free space) – that is, where \rho and \mathbf{J} are zero – the electric and magnetic fields satisfy these electromagnetic wave equations:

    \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \mathbf{E} = 0
    \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \mathbf{B} = 0

James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum.
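As a quick symbolic check of the wave equations above, the following sketch verifies that a one-dimensional plane wave obeying the free-space dispersion relation ω = ck solves the wave equation. The trial solution and the use of sympy are illustrative choices, not anything prescribed by the text:

    import sympy as sp

    x, t, c, k = sp.symbols('x t c k', positive=True)
    omega = c * k                      # free-space dispersion relation
    E = sp.sin(k * x - omega * t)      # one Cartesian component of a plane wave

    # 1D wave equation: d^2 E/dx^2 - (1/c^2) d^2 E/dt^2 = 0
    residual = sp.diff(E, x, 2) - sp.diff(E, t, 2) / c**2
    print(sp.simplify(residual))       # prints 0: the plane wave is a solution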
Time-varying EM fields in Maxwell's equations An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles. A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen. A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field. Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances. Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies. Health and safety The potential effects of electromagnetic fields on human health vary widely depending on the frequency, intensity of the fields, and the length of the exposure. Low frequency, low intensity, and short duration exposure to electromagnetic radiation is generally considered safe. On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, is known to cause significant harm in some circumstances. See also Classification of electromagnetic fields Electric field Electromagnetism Electromagnetic propagation Electromagnetic radiation Electromagnetic spectrum Electromagnetic field measurements Magnetic field Maxwell's equations Photoelectric effect Photon Quantization of the electromagnetic field Quantum electrodynamics References Citations Sources Further reading (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) External links Electromagnetism
Electromagnetic field
[ "Physics" ]
2,445
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
9,737
https://en.wikipedia.org/wiki/Eugenics
Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have attempted to alter the frequency of various human phenotypes by inhibiting the fertility of people and groups they considered inferior, or promoting that of those considered superior. The contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g. Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock. Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, British-Indian scientist J. B. S. Haldane wrote in 1940 that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today. Early eugenicists were mostly concerned with factors of measured intelligence that often correlated strongly with social class. Although it originated as a progressive social movement in the 19th century, in contemporary usage in the 21st century, the term is closely associated with scientific racism. New, liberal eugenics seeks to dissociate itself from old, authoritarian eugenics by rejecting coercive state programs and relying on parental choice. Common distinctions Eugenic programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. In other words, positive eugenics is aimed at encouraging reproduction among the genetically advantaged, for example, the eminently intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit. As opposed to "euthenics" Historical eugenics Ancient and medieval origins Academic origins The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, directly drawing on the recent work delineating natural selection by his half-cousin Charles Darwin. He published his observations and conclusions chiefly in his influential book Inquiries into Human Faculty and Its Development. Galton himself defined it as "the study of all agencies under human control which can improve or impair the racial quality of future generations". The first to systematically apply Darwinian theory to human relations, Galton believed that various desirable human qualities were also hereditary ones, although Darwin strongly disagreed with this elaboration of his theory.
Eugenics became an academic discipline at many colleges and universities and received funding from various sources. Organizations were formed to win public support for and to sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals. In 1909, the Anglican clergymen William Inge and James Peile both wrote for the Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes. Three International Eugenics Conferences presented a global venue for eugenicists, with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies in the United States were first implemented by state-level legislators in the early 1900s. Eugenic policies also took root in France, Germany, and Great Britain. Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium, Brazil, Canada, Japan and Sweden. Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy" framed eugenics as a social philosophy—a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits ("positive eugenics") or reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits ("negative eugenics"). In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races. Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty. As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide. 
By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called new eugenics, also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics"; which focuses on individual freedom and allegedly pulls away from racism, sexism or a focus on intelligence. Early opposition Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Franz Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Other biologists who were themselves eugenicists, such as J. B. S. Haldane and R. A. Fisher, however, also expressed skepticism about the belief that sterilization of "defectives" (i.e. a purely negative eugenics) would lead to the disappearance of undesirable genetic traits. Among institutions, the Catholic Church was an opponent of state-enforced sterilizations, but accepted isolating people with hereditary diseases so as not to let them reproduce. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii. In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason." In fact, more generally, "[m]uch of the opposition to eugenics during that era, at least in Europe, came from the right." The eugenicists' political successes in Germany and Scandinavia were not at all matched in such countries as Poland and Czechoslovakia, even though measures had been proposed there, largely because of the Catholic church's moderating influence. Concerns over human devolution Dysgenics Compulsory sterilization Eugenic feminism North American eugenics Eugenics in Mexico Nazism and the decline of eugenics The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and emulated eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as "degenerate" or "unfit", and therefore led to segregation, institutionalization, sterilization, and even mass murder.
The Nazi policy of identifying German citizens deemed mentally or physically unfit and then systematically killing them with poison gas, referred to as the Aktion T4 campaign, is understood by historians to have paved the way for the Holocaust. By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book The Rights of Man: Or What Are We Fighting For? that among the human rights, which he believed should be available to all people, was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons". In Singapore Lee Kuan Yew, the founding father of Singapore, actively promoted eugenics as late as 1983. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. For this purpose, the government introduced the "Graduate Mother Scheme", which incentivized graduate women to marry at the same rate as the rest of the populace. The incentives were extremely unpopular and regarded as eugenic, and were seen as discriminatory towards Singapore's non-Chinese ethnic population. In 1985, the incentives were partly abandoned as ineffective, while the government matchmaking agency, the Social Development Network, remains active. Modern eugenics Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, sparking renewed interest in the topic. Liberal eugenics, also called new eugenics, aims to make genetic interventions morally acceptable by rejecting coercive state programs and relying on parental choice. Bioethicist Nicholas Agar, who coined the term, argues for example that the state should only intervene to forbid interventions that excessively limit a child's ability to shape their own future. Unlike "authoritarian" or "old" eugenics, liberal eugenics draws on modern scientific knowledge of genomics to enable informed choices aimed at improving well-being. Julian Savulescu further argues that some eugenic practices like prenatal screening for Down syndrome are already widely practiced, without being labeled "eugenics", as they are seen as enhancing freedom rather than restricting it. Some critics, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a "back door to eugenics". This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products".
The United Nations' International Bioethics Committee also noted that while human genetic engineering should not be confused with the 20th century eugenics movements, it nonetheless challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want or cannot afford the technology. In 2025, geneticist Peter Visscher published a paper in Nature, arguing genome editing of human embryos and germ cells may become feasible in the 21st century, and raising ethical considerations in the context of previous eugenics movements. A response argued that human embryo genetic editing is "unsafe and unproven". Nature also published an editorial, stating: "The fear that polygenic gene editing could be used for eugenics looms large among them, and is, in part, why no country currently allows genome editing in a human embryo, even for single variants". Contested scientific status One general concern is that the reduced genetic diversity that some argue would likely result from long-term, species-wide eugenics plans could eventually cause inbreeding depression, increased spread of infectious disease, and decreased resilience to changes in the environment. Arguments for scientific validity In his original lecture "Darwinism, Medical Progress and Eugenics", Karl Pearson claimed that everything concerning eugenics fell into the field of medicine. Anthropologist Aleš Hrdlička said in 1918 that "[t]he growing science of eugenics will essentially become applied anthropology." The economist John Maynard Keynes was a lifelong proponent of eugenics and described it as a branch of sociology. In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction. Objections to scientific validity Amanda Caleb, Professor of Medical Humanities at Geisinger Commonwealth School of Medicine, says "Eugenic laws and policies are now understood as part of a specious devotion to a pseudoscience that actively dehumanizes to support political agendas and not true science or medicine." The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. He demonstrated that genetic mutation could occur outside of inheritance, through his discovery of a fruit fly (Drosophila melanogaster) hatching with white eyes in a family with red eyes, showing that major genetic changes occurred outside of inheritance. Additionally, Morgan criticized the view that certain traits, such as intelligence and criminality, were hereditary, on the grounds that these traits were subjective. Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.
Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects a pleiotropic gene that could possibly be associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together. While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, at this point there is no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some conditions such as sickle-cell disease and cystic fibrosis respectively confer immunity to malaria and resistance to cholera when a single copy of the recessive allele is contained within the genotype of the individual, so eliminating these genes is undesirable in places where such diseases are common. Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. This aspect of eugenics is often considered to be tainted with scientific racism and pseudoscience. Contested ethical status Contemporary ethical opposition In a book directly addressed to socialist eugenicist J. B. S. Haldane and his once-influential Daedalus, Bertrand Russell had one serious objection of his own: eugenic policies might simply end up being used to reproduce existing power relations "rather than to make men happy." Environmental ethicist Bill McKibben argued against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, he argues, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using Ming China, Tokugawa Japan and the contemporary Amish as examples. Contemporary ethical advocacy Bioethicist Stephen Wilkinson has said that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. Historian Nathaniel C. Comfort has claimed that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making process from the state to patients and their families.
In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements. In science fiction Brave New World (1931), by the English author Aldous Huxley, is a dystopian social science fiction novel set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy. Various works by the author Robert A. Heinlein mention the Howard Foundation, a group which attempts to improve human longevity through selective breeding. Frank Herbert's Dune series, starting with the eponymous 1965 novel, describes selective breeding by a powerful sisterhood, the Bene Gesserit, to produce a supernormal male being, the Kwisatz Haderach. The Star Trek franchise features a race of genetically engineered humans known as "Augments", the most notable of whom is Khan Noonien Singh. These "supermen" were the cause of the Eugenics Wars, a dark period in Earth's fictional history, before they were deposed and exiled. They appear in many of the franchise's story arcs, most frequently as villains. The film Gattaca (1997) provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. The title alludes to the letters G, A, T and C, the four nucleobases of DNA, and depicts the possible consequences of genetic discrimination in the present societal framework. Relegated to the role of a cleaner owing to his genetically projected death at age 32 due to a heart condition (being told: "The only way you'll see the inside of a spaceship is if you were cleaning it"), the protagonist observes enhanced astronauts as they are demonstrating their superhuman athleticism. Although it was not a box office success, it was critically acclaimed and influenced the debate over human genetic engineering in the public consciousness. As to its accuracy, its production company, Sony Pictures, consulted with a gene therapy researcher and prominent critic of eugenics known to have stated that "[w]e should not step over the line that delineates treatment from enhancement", W. French Anderson, to ensure that the portrayal of science was realistic. Disputing their success in this mission, Philip Yam of Scientific American called the film "science bashing" and Nature's Kevin Davies called it a "surprisingly pedestrian affair", while molecular biologist Lee Silver described its extreme determinism as "a straw man". In his 2018 book Blueprint, the behavioral geneticist Robert Plomin writes that while Gattaca warned of the dangers of genetic information being used by a totalitarian state, genetic testing could also favor a better meritocracy in democratic societies which already administer a variety of standardized tests to select people for education and employment. He suggests that polygenic scores might supplement testing in a manner that is essentially free of biases.
See also Ableism Bioconservatism Culling Dor Yeshorim Dysgenics Eugenic feminism Genetic engineering Genetic enhancement Hereditarianism Heritability of IQ New eugenics Mendelian traits in humans Simple Mendelian genetics in humans Moral enhancement Project Prevention Social Darwinism Wrongful life Eugenics in France References Notes Further reading Paul, Diane B.; Spencer, Hamish G. (1998). "Did Eugenics Rest on an Elementary Mistake?" (PDF). In: The Politics of Heredity: Essays on Eugenics, Biomedicine, and the Nature-Nurture Debate, SUNY Press (pp. 102–118) Gantsho, Luvuyo (2022). "The principle of procreative beneficence and its implications for genetic engineering." Theoretical Medicine and Bioethics 43 (5): 307–328. Harris, John (2009). "Enhancements are a Moral Obligation." In J. Savulescu & N. Bostrom (Eds.), Human Enhancement, Oxford University Press, pp. 131–154 Kamm, Frances (2010). "What Is And Is Not Wrong With Enhancement?" In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. Kamm, Frances (2005). "Is There a Problem with Enhancement?", The American Journal of Bioethics, 5(3), 5–14. PMID 16006376. Ranisch, Robert (2022). "Procreative Beneficence and Genome Editing", The American Journal of Bioethics, 22(9), 20–22. Robertson, John (2021). Children of Choice: Freedom and the New Reproductive Technologies. Princeton University Press. Saunders, Ben (2015). "Why Procreative Preferences May be Moral – And Why it May not Matter if They Aren't." Bioethics, 29(7), 499–506. Savulescu, Julian (2001). Procreative beneficence: why we should select the best children. Bioethics. 15(5–6): pp. 413–26 Singer, Peter (2010). "Parental Choice and Human Improvement." In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. South, David (1993). Award-winning research on history of eugenics reaps honours. Hannah Institute for the History of Medicine Number 19 Fall 1993, p. 3 Wikler, Daniel (1999). "Can we learn from eugenics?" (PDF). J Med Ethics. 25(2): 183–94. PMID 10226926; PMCID: PMC479205. External links Embryo Editing for Intelligence: A cost-benefit analysis of CRISPR-based editing for intelligence with 2015-2016 state-of-the-art Embryo Selection For Intelligence: A cost-benefit analysis of the marginal cost of IVF-based embryo selection for intelligence and other traits with 2016-2017 state-of-the-art Eugenics: Its Origin and Development (1883–Present) by the National Human Genome Research Institute (30 November 2021) Eugenics and Scientific Racism Fact Sheet by the National Human Genome Research Institute (3 November 2021) Ableism Applied genetics Bioethics Nazism Pseudo-scholarship Pseudoscience Racism Technological utopianism White supremacy
Eugenics
[ "Technology" ]
5,337
[ "Bioethics", "Ethics of science and technology" ]
9,738
https://en.wikipedia.org/wiki/Email
Electronic mail (usually shortened to email; alternatively hyphenated e-mail) is a method of transmitting and receiving digital messages using electronic devices over a computer network. It was conceived in the late 20th century as the digital version of, or counterpart to, mail (hence e- + mail). Email is a ubiquitous and very widely used communication medium; in current use, an email address is often treated as a basic and necessary part of many processes in business, commerce, government, education, entertainment, and other spheres of daily life in most countries. Email operates across computer networks, primarily the Internet, and also local area networks. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need to connect, typically to a mail server or a webmail interface, to send or receive messages or download them. Originally a text-only ASCII communications medium, Internet email was extended by MIME to carry text in expanded character sets and multimedia content such as images. International email, with internationalized email addresses using UTF-8, is standardized but not widely adopted. Terminology The term electronic mail has been in use with its modern meaning since 1975, and variations of the shorter E-mail have been in use since 1979: email is now the common form, and recommended by style guides. It is the form required by IETF Requests for Comments (RFC) and working groups. This spelling also appears in most dictionaries. e-mail was originally the form favored in edited, published American English and British English writing, and was formerly preferred by some style guides. E-mail is sometimes used. The original usage in June 1979 occurred in the journal Electronics in reference to the United States Postal Service initiative called E-COM, which was developed in the late 1970s and operated in the early 1980s. EMAIL was used by CompuServe starting in April 1981, which popularized the term. EMail is a traditional form used in RFCs for the "Author's Address". The service is often simply referred to as mail, and a single piece of electronic mail is called a message. The conventions for fields within emails—the "To", "From", "CC", "BCC" etc.—began with RFC-680 in 1975. An Internet email consists of an envelope and content; the content consists of a header and a body. History Computer-based messaging between users of the same system became possible after the advent of time-sharing in the early 1960s, with a notable implementation by MIT's CTSS project in 1965. Most developers of early mainframes and minicomputers developed similar, but generally incompatible, mail applications. In 1971 the first ARPANET network mail was sent, introducing the now-familiar address syntax with the '@' symbol designating the user's system address. Over a series of RFCs, conventions were refined for sending mail messages over the File Transfer Protocol. Proprietary electronic mail systems soon began to emerge. IBM, CompuServe and Xerox used in-house mail systems in the 1970s; CompuServe sold a commercial intraoffice mail product in 1978 to IBM and to Xerox from 1981. DEC's ALL-IN-1 and Hewlett-Packard's HPMAIL (later HP DeskManager) were released in 1982; development work on the former began in the late 1970s and the latter became the world's largest selling email system. The Simple Mail Transfer Protocol (SMTP) was implemented on the ARPANET in 1983.
LAN email systems emerged in the mid-1980s. For a time in the late 1980s and early 1990s, it seemed likely that either a proprietary commercial system or the X.400 email system, part of the Government Open Systems Interconnection Profile (GOSIP), would predominate. However, once the final restrictions on carrying commercial traffic over the Internet ended in 1995, a combination of factors made the current Internet suite of SMTP, POP3 and IMAP email protocols the standard (see Protocol Wars). Operation The following is a typical sequence of events that takes place when sender Alice transmits a message using a mail user agent (MUA) addressed to the email address of the recipient.

1. The MUA formats the message in email format and uses the submission protocol, a profile of the Simple Mail Transfer Protocol (SMTP), to send the message content to the local mail submission agent (MSA), in this case smtp.a.org.
2. The MSA determines the destination address provided in the SMTP protocol (not from the message header)—in this case, bob@b.org—which is a fully qualified domain address (FQDA). The part before the @ sign is the local part of the address, often the username of the recipient, and the part after the @ sign is a domain name.
3. The MSA resolves a domain name to determine the fully qualified domain name of the mail server in the Domain Name System (DNS).
4. The DNS server for the domain b.org (ns.b.org) responds with any MX records listing the mail exchange servers for that domain, in this case mx.b.org, a message transfer agent (MTA) server run by the recipient's ISP.
5. smtp.a.org sends the message to mx.b.org using SMTP. This server may need to forward the message to other MTAs before the message reaches the final message delivery agent (MDA).
6. The MDA delivers it to the mailbox of user bob.
7. Bob's MUA picks up the message using either the Post Office Protocol (POP3) or the Internet Message Access Protocol (IMAP).

In addition to this example, alternatives and complications exist in the email system:

- Alice or Bob may use a client connected to a corporate email system, such as IBM Lotus Notes or Microsoft Exchange. These systems often have their own internal email format and their clients typically communicate with the email server using a vendor-specific, proprietary protocol. The server sends or receives email via the Internet through the product's Internet mail gateway which also does any necessary reformatting. If Alice and Bob work for the same company, the entire transaction may happen completely within a single corporate email system.
- Alice may not have an MUA on her computer but instead may connect to a webmail service.
- Alice's computer may run its own MTA, so avoiding the transfer at step 1.
- Bob may pick up his email in many ways, for example logging into mx.b.org and reading it directly, or by using a webmail service.
- Domains usually have several mail exchange servers so that they can continue to accept mail even if the primary is not available.

Many MTAs used to accept messages for any recipient on the Internet and do their best to deliver them. Such MTAs are called open mail relays. This was very important in the early days of the Internet when network connections were unreliable. However, this mechanism proved to be exploitable by originators of unsolicited bulk email and as a consequence open mail relays have become rare, and many MTAs do not accept messages from open mail relays.
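A minimal sketch of step 1, message submission, using Python's standard library. The host smtp.a.org, port, and credentials are placeholders carried over from the running example above, not real servers or accounts:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@a.org"
    msg["To"] = "bob@b.org"
    msg["Subject"] = "Hello"
    msg.set_content("Hi Bob,\n\nJust testing the submission path.\n\n-- Alice")

    # Submit to the local MSA over the submission port (587); STARTTLS and
    # authentication are typical requirements of an MSA.
    with smtplib.SMTP("smtp.a.org", 587) as server:
        server.starttls()
        server.login("alice", "app-password")   # placeholder credentials
        server.send_message(msg)

From here, the DNS lookup and MTA-to-MTA relay in steps 2–5 proceed without further involvement from the sending client.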
Message format The basic Internet message format used for email is defined by RFC 5322, with encoding of non-ASCII data and multimedia content attachments defined in RFC 2045 through RFC 2049, collectively called Multipurpose Internet Mail Extensions or MIME. The extensions in International email apply only to email. RFC 5322 replaced RFC 2822 in 2008. Earlier, in 2001, RFC 2822 had in turn replaced RFC 822, which had been the standard for Internet email for decades. Published in 1982, RFC 822 was based on the earlier RFC 733 for the ARPANET. Internet email messages consist of two sections, "header" and "body". These are known as "content". The header is structured into fields such as From, To, CC, Subject, Date, and other information about the email. In the process of transporting email messages between systems, SMTP communicates delivery parameters and information using message header fields. The body contains the message, as unstructured text, sometimes containing a signature block at the end. The header is separated from the body by a blank line. Message header RFC 5322 specifies the syntax of the email header. Each email message has a header (the "header section" of the message, according to the specification), comprising a number of fields ("header fields"). Each field has a name ("field name" or "header field name"), followed by the separator character ":", and a value ("field body" or "header field body"). Each field name begins in the first character of a new line in the header section, and begins with a non-whitespace printable character. It ends with the separator character ":". The separator is followed by the field value (the "field body"). The value can continue onto subsequent lines if those lines have space or tab as their first character. Field names and, without SMTPUTF8, field bodies are restricted to 7-bit ASCII characters. Some non-ASCII values may be represented using MIME encoded words. Header fields Email header fields can be multi-line, with each line recommended to be no more than 78 characters, although the limit is 998 characters. Header fields defined by RFC 5322 contain only US-ASCII characters; for encoding characters in other sets, a syntax specified in RFC 2047 may be used. The IETF EAI working group has defined standards-track extensions, replacing previous experimental extensions, so UTF-8 encoded Unicode characters may be used within the header. In particular, this allows email addresses to use non-ASCII characters. Such addresses are supported by Google and Microsoft products, and promoted by some government agents. The message header must include at least the following fields:

From: The email address, and, optionally, the name of the author(s). Many email clients set this automatically; it can be changed through account settings.
Date: The local time and date the message was written. Like the From: field, many email clients fill this in automatically before sending. The recipient's client may display the time in the format and time zone local to them.

RFC 3864 describes registration procedures for message header fields at the IANA; it provides for permanent and provisional field names, including also fields defined for MIME, netnews, and HTTP, and referencing relevant RFCs. Common header fields for email include:

To: The email address(es), and optionally name(s) of the message's recipient(s). Indicates primary recipients (multiple allowed); for secondary recipients see Cc: and Bcc: below.
Subject: A brief summary of the topic of the message.
Header fields
Email header fields can be multi-line, with each line recommended to be no more than 78 characters, although the limit is 998 characters. Header fields defined by RFC 5322 contain only US-ASCII characters; for encoding characters in other sets, a syntax specified in RFC 2047 may be used. The IETF EAI working group has defined standards-track extensions, replacing previous experimental extensions, so that UTF-8 encoded Unicode characters may be used within the header. In particular, this allows email addresses to use non-ASCII characters. Such addresses are supported by Google and Microsoft products, and promoted by some governments.
The message header must include at least the following fields:
- From: The email address, and, optionally, the name of the author(s). In some email clients this is changeable through account settings.
- Date: The local time and date the message was written. Like the From: field, many email clients fill this in automatically before sending. The recipient's client may display the time in the format and time zone local to them.
RFC 3864 describes registration procedures for message header fields at the IANA; it provides for permanent and provisional field names, including also fields defined for MIME, netnews, and HTTP, and referencing relevant RFCs. Common header fields for email include:
- To: The email address(es), and optionally name(s), of the message's recipient(s). Indicates primary recipients (multiple allowed); for secondary recipients see Cc: and Bcc: below.
- Subject: A brief summary of the topic of the message. Certain abbreviations are commonly used in the subject, including "RE:" and "FW:".
- Cc: Carbon copy; many email clients mark email in one's inbox differently depending on whether the recipient is in the To: or Cc: list.
- Bcc: Blind carbon copy; addresses are usually only specified during SMTP delivery, and not usually listed in the message header.
- Content-Type: Information about how the message is to be displayed, usually a MIME type.
- Precedence: Commonly has the values "bulk", "junk", or "list"; used to indicate that automated "vacation" or "out of office" responses should not be returned for this mail, e.g. to prevent vacation notices from being sent to all other subscribers of a mailing list. Sendmail uses this field to affect prioritization of queued email, with "Precedence: special-delivery" messages delivered sooner. With modern high-bandwidth networks, delivery priority is less of an issue than it once was. Microsoft Exchange respects a fine-grained automatic response suppression mechanism, the X-Auto-Response-Suppress field.
- Message-ID: An automatically generated field used to prevent multiple deliveries and for reference in In-Reply-To: (see below).
- In-Reply-To: Message-ID of the message this is a reply to. Used to link related messages together. This field only applies to reply messages.
- List-Unsubscribe: HTTP link to unsubscribe from a mailing list.
- References: Message-ID of the message this is a reply to, and the Message-ID of the message the previous reply was a reply to, etc.
- Reply-To: Address that should be used to reply to the message.
- Sender: Address of the sender acting on behalf of the author listed in the From: field (secretary, list manager, etc.).
- Archived-At: A direct link to the archived form of an individual email message.
The To: field may be unrelated to the addresses to which the message is delivered. The delivery list is supplied separately to the transport protocol, SMTP, and may or may not originally have been extracted from the header content. The "To:" field is similar to the addressing at the top of a conventional letter delivered according to the address on the outer envelope. In the same way, the "From:" field may not be the sender. Some mail servers apply email authentication systems to messages relayed. Data pertaining to the server's activity is also part of the header, as defined below.
SMTP defines the trace information of a message, saved in the header using the following two fields:
- Received: After an SMTP server accepts a message, it inserts this trace record at the top of the header (last to first).
- Return-Path: After the delivery SMTP server makes the final delivery of a message, it inserts this field at the top of the header.
Other fields added on top of the header by the receiving server may be called trace fields:
- Authentication-Results: After a server verifies authentication, it can save the results in this field for consumption by downstream agents.
- Received-SPF: Stores results of SPF checks in more detail than Authentication-Results.
- DKIM-Signature: Stores a DomainKeys Identified Mail (DKIM) signature, which lets receivers verify that the message was not changed after it was signed.
- Auto-Submitted: Used to mark automatically generated messages.
- VBR-Info: Claims VBR whitelisting.
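As a sketch of how several of these fields fit together, the following builds a header section with Python's standard email.message and email.utils modules; the addresses are placeholders.

```python
from email.message import EmailMessage
from email.utils import formatdate, make_msgid

msg = EmailMessage()
msg["From"] = "Alice Example <alice@a.org>"
msg["To"] = "bob@b.org"
msg["Cc"] = "carol@c.org"
msg["Reply-To"] = "alice+replies@a.org"    # where replies should be directed
msg["Subject"] = "RE: Meeting notes"
msg["Date"] = formatdate(localtime=True)   # RFC 5322 date syntax
msg["Message-ID"] = make_msgid(domain="a.org")
msg.set_content("Notes to follow in the next revision.")
print(msg)    # header fields, a blank line, then the body
```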
Message body
Content encoding
Internet email was designed for 7-bit ASCII. Most email software is 8-bit clean but must assume it will communicate with 7-bit servers and mail readers. The MIME standard introduced character set specifiers and two content transfer encodings to enable transmission of non-ASCII data: quoted printable for mostly 7-bit content with a few characters outside that range, and base64 for arbitrary binary data. The 8BITMIME and BINARY extensions were introduced to allow transmission of mail without the need for these encodings, but many mail transport agents may not support them. In some countries, e-mail software violates the standard by sending raw non-ASCII text, and several encoding schemes co-exist; as a result, by default, a message in a non-Latin-alphabet language appears in unreadable form (the only exception is the coincidence of the sender and receiver using the same encoding scheme). Therefore, for international character sets, Unicode is growing in popularity.
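The two MIME content transfer encodings can be compared directly with Python's standard quopri and base64 modules; the sample string is arbitrary.

```python
import base64
import quopri

data = "Café résumé naïve".encode("utf-8")

qp = quopri.encodestring(data)       # quoted-printable: readable where most bytes are ASCII
print(qp.decode("ascii"))            # non-ASCII bytes appear as =XX escapes, e.g. Caf=C3=A9

b64 = base64.b64encode(data)         # base64: suited to arbitrary binary, ~33% larger
print(b64.decode("ascii"))

assert quopri.decodestring(qp) == data
assert base64.b64decode(b64) == data  # both round-trip to the original bytes
```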
Plain text and HTML
Most modern graphic email clients allow the use of either plain text or HTML for the message body, at the option of the user. HTML email messages often include an automatically generated plain text copy for compatibility. Advantages of HTML include the ability to include in-line links and images, set apart previous messages in block quotes, wrap naturally on any display, use emphasis such as underlines and italics, and change font styles. Disadvantages include the increased size of the email, privacy concerns about web bugs, abuse of HTML email as a vector for phishing attacks, and the spread of malicious software. Some e-mail clients interpret the body as HTML even in the absence of a Content-Type: html header field; this may cause various problems.
Some web-based mailing lists recommend all posts be made in plain text, with 72 or 80 characters per line, for all the above reasons and because they have a significant number of readers using text-based email clients such as Mutt. Various informal conventions evolved for marking up plain text in email and Usenet posts, which later led to the development of formal languages like setext (c. 1992) and many others, the most popular of them being Markdown. Some Microsoft email clients may allow rich formatting using their proprietary Rich Text Format (RTF), but this should be avoided unless the recipient is guaranteed to have a compatible email client.

Servers and client applications
Messages are exchanged between hosts using the Simple Mail Transfer Protocol with software programs called mail transfer agents (MTAs), and delivered to a mail store by programs called mail delivery agents (MDAs, also sometimes called local delivery agents, LDAs). Accepting a message obliges an MTA to deliver it, and when a message cannot be delivered, that MTA must send a bounce message back to the sender, indicating the problem.
Users can retrieve their messages from servers using standard protocols such as POP or IMAP, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Novell GroupWise, Lotus Notes or Microsoft Exchange servers. Programs used by users for retrieving, reading, and managing email are called mail user agents (MUAs). When an email is opened, it is marked as "read", which typically visibly distinguishes it from "unread" messages in the client's user interface. Email clients may allow hiding read emails from the inbox so the user can focus on the unread.
Mail can be stored on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent email clients use their own proprietary format and require conversion software to transfer email between them. Server-side storage is often in a proprietary format, but since access is through a standard protocol such as IMAP, moving email from one server to another can be done with any MUA supporting the protocol.
Many current email users do not run MTA, MDA or MUA programs themselves, but use a web-based email platform, such as Gmail or Yahoo! Mail, that performs the same tasks. Such webmail interfaces allow users to access their mail with any standard web browser, from any computer, rather than relying on a local email client.

Filename extensions
Upon reception of email messages, email client applications save messages in operating system files in the file system. Some clients save individual messages as separate files, while others use various database formats, often proprietary, for collective storage. A historical standard of storage is the mbox format. The specific format used is often indicated by special filename extensions:
- eml: Used by many email clients including Novell GroupWise, Microsoft Outlook Express, Lotus Notes, Windows Mail, Mozilla Thunderbird, and Postbox. The files contain the email contents as plain text in MIME format, containing the email header and body, including attachments in one or more of several formats.
- emlx: Used by Apple Mail.
- msg: Used by Microsoft Office Outlook and OfficeLogic Groupware.
- mbx: Used by Opera Mail, KMail, and Apple Mail, based on the mbox format.
Some applications (like Apple Mail) leave attachments encoded in messages for searching while also saving separate copies of the attachments. Others separate attachments from messages and save them in a specific directory.

URI scheme mailto
The mailto URI scheme, as registered with the IANA, defines the mailto: scheme for SMTP email addresses. Though its use is not strictly defined, URLs of this form are intended to be used to open the new message window of the user's mail client when the URL is activated, with the address as defined by the URL in the To: field. Many clients also support query string parameters for the other email fields, such as its subject line or carbon copy recipients.

Types
Web-based email
Many email providers have a web-based email client. This allows users to log into the email account by using any compatible web browser to send and receive their email. Mail is typically not downloaded to the web client, so it cannot be read without a current Internet connection.

POP3 email servers
The Post Office Protocol 3 (POP3) is a mail access protocol used by a client application to read messages from the mail server. Received messages are often deleted from the server. POP supports simple download-and-delete requirements for access to remote mailboxes (termed maildrop in the POP RFCs). POP3 allows downloading messages to a local computer and reading them even when offline.

IMAP email servers
The Internet Message Access Protocol (IMAP) provides features to manage a mailbox from multiple devices. Small portable devices like smartphones are increasingly used to check email while traveling and to make brief replies, with larger devices with better keyboard access used to reply at greater length. IMAP shows the headers of messages, the sender and the subject, and the device needs to request to download specific messages. Usually, the mail is left in folders on the mail server.
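A minimal IMAP session along these lines can be sketched with Python's standard imaplib; the server name and credentials are placeholders, and only header sections are fetched, leaving the mail itself on the server.

```python
import imaplib
from email import message_from_bytes
from email.policy import default

with imaplib.IMAP4_SSL("imap.b.org") as conn:          # hypothetical server
    conn.login("bob", "app-password")                  # hypothetical credentials
    conn.select("INBOX", readonly=True)                # don't mark anything as read
    status, data = conn.search(None, "UNSEEN")         # IDs of unread messages
    for num in data[0].split():
        # Fetch only the header section; the message body stays on the server.
        status, fetched = conn.fetch(num, "(BODY.PEEK[HEADER])")
        header = message_from_bytes(fetched[0][1], policy=default)
        print(header["From"], "-", header["Subject"])
```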
MAPI email servers
The Messaging Application Programming Interface (MAPI) is used by Microsoft Outlook to communicate with Microsoft Exchange Server—and with a range of other email server products such as Axigen Mail Server, Kerio Connect, Scalix, Zimbra, HP OpenMail, IBM Lotus Notes, Zarafa, and Bynari, where vendors have added MAPI support to allow their products to be accessed directly via Outlook.

Uses
Business and organizational use
Email has been widely accepted by businesses, governments and non-governmental organizations in the developed world, and it is one of the key parts of an 'e-revolution' in workplace communication (with the other key plank being widespread adoption of high-speed Internet). A sponsored 2010 study on workplace communication found 83% of U.S. knowledge workers felt email was critical to their success and productivity at work.
It has some key benefits to business and other organizations, including:
- Facilitating logistics: Much of the business world relies on communications between people who are not physically in the same building, area, or even country; setting up and attending an in-person meeting, telephone call, or conference call can be inconvenient, time-consuming, and costly. Email provides a method of exchanging information between two or more people with no set-up costs that is generally far less expensive than a physical meeting or phone call.
- Helping with synchronization: With real-time communication by meetings or phone calls, participants must work on the same schedule, and each participant must spend the same amount of time in the meeting or call. Email allows asynchrony: each participant may control their schedule independently. Batch processing of incoming emails can improve workflow compared to interrupting calls.
- Reducing cost: Sending an email is much less expensive than sending postal mail, making long-distance telephone calls, or sending telexes or telegrams.
- Increasing speed: Email is much faster than most of the alternatives.
- Creating a "written" record: Unlike a telephone or in-person conversation, email by its nature creates a detailed written record of the communication, the identity of the sender(s) and recipient(s), and the date and time the message was sent. In the event of a contract or legal dispute, saved emails can be used to prove that an individual was advised of certain issues, as each email has the date and time recorded on it.
- Possibility of auto-processing and improved distribution: Tasks such as pre-processing of customers' orders or routing a message to the person in charge can be realized by automated procedures.

Email marketing
Email marketing via "opt-in" is often successfully used to send special sales offerings and new product information. Depending on the recipient's culture, email sent without permission—that is, without an "opt-in"—is likely to be viewed as unwelcome "email spam".

Personal use
Personal computer
Many users access their personal emails from friends and family members using a personal computer in their house or apartment.

Mobile
Email has come to be used on smartphones and on all types of computers. Mobile "apps" for email increase accessibility to the medium for users who are out of their homes. While in the earliest years of email, users could only access email on desktop computers, in the 2010s it became possible for users to check their email when they are away from home, whether across town or across the world. Alerts can also be sent to the smartphone or other devices to notify users immediately of new messages.
This has given email the ability to be used for more frequent communication between users and allowed them to check their email and write messages throughout the day. At one point, there were approximately 1.4 billion email users worldwide and 50 billion non-spam emails sent daily.
Individuals often check emails on smartphones for both personal and work-related messages. One study found that US adults check their email more than they browse the web or check their Facebook accounts, making email the most popular activity for users to do on their smartphones; 78% of the respondents in the study revealed that they check their email on their phone. It was also found that 30% of consumers use only their smartphone to check their email, and 91% were likely to check their email at least once per day on their smartphone. However, the percentage of consumers using email on a smartphone varies dramatically across countries. For example, in comparison to 75% of those consumers in the US who used it, only 17% in India did.

Declining use among young people
The number of Americans visiting email web sites fell 6 percent after peaking in November 2009. For persons 12 to 17, the number was down 18 percent. Young people preferred instant messaging, texting and social media. Technology writer Matt Richtel said in The New York Times that email was like the VCR, vinyl records and film cameras—no longer cool and something older people do. A 2015 survey of Android users showed that persons 13 to 24 used messaging apps 3.5 times as much as those over 45, and were far less likely to use email.

Issues
Attachment size limitation
Email messages may have one or more attachments, which are additional files appended to the email. Typical attachments include Microsoft Word documents, PDF documents, and scanned images of paper documents. In principle, there is no technical restriction on the size or number of attachments. However, in practice, email clients, servers, and Internet service providers implement various limitations on the size of files, or of the complete email, typically to 25 MB or less. Furthermore, due to technical reasons, attachment sizes as seen by these transport systems can differ from what the user sees, which can be confusing to senders when trying to assess whether they can safely send a file by email. Where larger files need to be shared, various file hosting services are available and commonly used.
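One concrete source of the size mismatch mentioned above is base64 encoding of attachments, which adds roughly a third to a file's size in transit. A back-of-the-envelope sketch, using the 25 MB cap above as the illustrative limit:

```python
import math

def base64_transport_size(raw_bytes: int, line_length: int = 76) -> int:
    """Approximate on-the-wire size of a base64-encoded attachment,
    including the CRLF line breaks MIME inserts every 76 characters."""
    encoded = 4 * math.ceil(raw_bytes / 3)      # 4 output chars per 3 input bytes
    lines = math.ceil(encoded / line_length)
    return encoded + 2 * lines                  # + CRLF per line

limit = 25 * 1024 * 1024                        # a typical 25 MB message cap
raw = 20 * 1024 * 1024                          # a "20 MB" file as the user sees it
print(base64_transport_size(raw) / raw)         # ~1.37: about 37% larger in transit
print(base64_transport_size(raw) > limit)       # True: the encoded form busts the cap
```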
Information overload
The ubiquity of email for knowledge workers and "white collar" employees has led to concerns that recipients face an "information overload" in dealing with increasing volumes of email. With the growth in mobile devices, by default employees may also receive work-related emails outside of their working day. This can lead to increased stress and decreased satisfaction with work. Some observers even argue it could have a significant negative economic effect, as efforts to read the many emails could reduce productivity.

Spam
Email "spam" is unsolicited bulk email. The low cost of sending such email meant that, by 2003, up to 30% of total email traffic was spam, threatening the usefulness of email as a practical tool. The US CAN-SPAM Act of 2003 and similar laws elsewhere had some impact, and a number of effective anti-spam techniques now largely mitigate the impact of spam by filtering or rejecting it for most users, but the volume sent is still very high—and increasingly consists not of advertisements for products, but of malicious content or links. In September 2017, for example, the proportion of spam to legitimate email rose to 59.56%. The percentage of spam email in 2021 is estimated to be 85%.

Malware
Emails are a major vector for the distribution of malware. This is often achieved by attaching malicious programs to the message and persuading potential victims to open the file. Types of malware distributed via email include computer worms and ransomware.

Email spoofing
Email spoofing occurs when the email message header is designed to make the message appear to come from a known or trusted source. Email spam and phishing methods typically use spoofing to mislead the recipient about the true message origin. Email spoofing may be done as a prank, or as part of a criminal effort to defraud an individual or organization. An example of potentially fraudulent email spoofing is an individual creating an email that appears to be an invoice from a major company and then sending it to one or more recipients. In some cases, these fraudulent emails incorporate the logo of the purported organization, and even the email address may appear legitimate.

Email bombing
Email bombing is the intentional sending of large volumes of messages to a target address. The overloading of the target email address can render it unusable and can even cause the mail server to crash.

Privacy concerns
Today it can be important to distinguish between the Internet and internal email systems. Internet email may travel and be stored on networks and computers without the sender's or the recipient's control. During the transit time it is possible that third parties read or even modify the content. Internal mail systems, in which the information never leaves the organizational network, may be more secure, although information technology personnel and others whose function may involve monitoring or managing may be accessing the email of other employees.
Email privacy, without some security precautions, can be compromised because:
- Email messages are generally not encrypted.
- Email messages have to go through intermediate computers before reaching their destination, meaning it is relatively easy for others to intercept and read messages.
- Many Internet Service Providers (ISPs) store copies of email messages on their mail servers before they are delivered. The backups of these can remain for up to several months on their servers, despite deletion from the mailbox.
- The "Received:" fields and other information in the email can often identify the sender, preventing anonymous communication.
- Web bugs invisibly embedded in HTML content can alert the sender whenever an email is rendered as HTML (some e-mail clients do this when the user reads, or re-reads, the e-mail) and from which IP address. They can also reveal whether an email was read on a smartphone, a PC, or an Apple Mac device via the user agent string.
There are cryptography applications that can serve as a remedy to one or more of the above. For example, Virtual Private Networks or the Tor network can be used to encrypt traffic from the user machine to a safer network, while GPG, PGP, SMEmail, or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server. Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this.
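A sketch of single-hop encryption with Python's smtplib, showing both styles mentioned above: STARTTLS upgrading a plaintext session, and SMTP over TLS/SSL from the first byte. Hostnames and credentials are placeholders, and neither approach protects the message beyond this one hop.

```python
import smtplib
import ssl

context = ssl.create_default_context()        # verifies the server certificate

# Style 1: STARTTLS, upgrading a plaintext session on the submission port.
with smtplib.SMTP("smtp.a.org", 587) as client:
    client.starttls(context=context)
    client.login("alice", "app-password")
    # ... message submission would follow here

# Style 2: SMTP over TLS/SSL, encrypted from the start (port 465).
with smtplib.SMTP_SSL("smtp.a.org", 465, context=context) as client:
    client.login("alice", "app-password")
    # ... message submission would follow here
```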
Finally, the attached files share many of the same hazards as those found in peer-to-peer filesharing. Attached files may contain trojans or viruses.

Legal contracts
It is possible for an exchange of emails to form a binding contract, so users must be careful about what they send through email correspondence. A signature block on an email may be interpreted as satisfying a signature requirement for a contract.

Flaming
Flaming occurs when a person sends a message (or many messages) with angry or antagonistic content. The term is derived from the use of the word incendiary to describe particularly heated email discussions. The ease and impersonality of email communications mean that the social norms that encourage civility in person or via telephone do not exist, and civility may be forgotten.

Email bankruptcy
Also known as "email fatigue", email bankruptcy is when a user ignores a large number of email messages after falling behind in reading and answering them. The reason for falling behind is often information overload and a general sense that there is so much information that it is not possible to read it all. As a solution, people occasionally send a "boilerplate" message explaining that their email inbox is full and that they are in the process of clearing out all the messages. Harvard University law professor Lawrence Lessig is credited with coining this term, but he may only have popularized it.

Internationalization
Originally Internet email was completely ASCII text-based. MIME now allows body content text and some header content text in international character sets, but other headers and email addresses using UTF-8, while standardized, have yet to be widely adopted.

Tracking of sent mail
The original SMTP mail service provides limited mechanisms for tracking a transmitted message, and none for verifying that it has been delivered or read. It requires that each mail server either deliver the message onward or return a failure notice (bounce message), but both software bugs and system failures can cause messages to be lost. To remedy this, the IETF introduced Delivery Status Notifications (delivery receipts) and Message Disposition Notifications (return receipts); however, these are not universally deployed in production.
Many ISPs now deliberately disable non-delivery reports (NDRs) and delivery receipts due to the activities of spammers:
- Delivery reports can be used to verify whether an address exists; if so, this indicates to a spammer that it is available to be spammed.
- If the spammer uses a forged sender email address (email spoofing), then the innocent email address that was used can be flooded with NDRs from the many invalid email addresses the spammer may have attempted to mail. These NDRs then constitute spam from the ISP to the innocent user (backscatter).
In the absence of standard methods, a range of systems based on the use of web bugs have been developed. However, these are often seen as underhanded or as raising privacy concerns, and they only work with email clients that support rendering of HTML. Many mail clients now default to not showing "web content". Webmail providers can also disrupt web bugs by pre-caching images.
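A return receipt is requested by adding a header to the outgoing message. The sketch below uses the Disposition-Notification-To field standardized for Message Disposition Notifications; whether anything comes back depends entirely on the recipient's client, which may honor, ignore, or ask the user about the request.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@a.org"
msg["To"] = "bob@b.org"
msg["Subject"] = "Please confirm receipt"
# Ask the receiving MUA to send a Message Disposition Notification here:
msg["Disposition-Notification-To"] = "alice@a.org"
msg.set_content("Let me know this arrived.")
```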
See also
- Anonymous remailer
- Anti-spam techniques
- biff
- Bounce message
- Comparison of email clients
- Dark Mail Alliance
- Disposable email address
- E-card
- Electronic mailing list
- Email art
- Email authentication
- Email digest
- Email encryption
- Email hosting service
- Email hub
- Email storm
- Email tracking
- HTML email
- Information overload
- Internet fax
- List of email subject abbreviations
- MCI Mail
- Netiquette
- Posting style
- Privacy-enhanced Electronic Mail
- Push email
- RSS
- Telegraphy
- Unicode and email
- Usenet quoting
- Webmail, Comparison of webmail providers
- X-Originating-IP
- X.400

Notes

References

Further reading
- Cemil Betanov, Introduction to X.400, Artech House.
- Marsha Egan, Inbox Detox and The Habit of Email Excellence, Acanthus Publishing.
- Lawrence Hughes, Internet e-mail Protocols, Standards and Implementation, Artech House Publishers.
- Kevin Johnson, Internet Email Protocols: A Developer's Guide, Addison-Wesley Professional.
- Pete Loshin, Essential Email Standards: RFCs and Protocols Made Practical, John Wiley & Sons.
- Sara Radicati, Electronic Mail: An Introduction to the X.400 Message Handling Standards, McGraw-Hill.
- John Rhoton, Programmer's Guide to Internet Mail: SMTP, POP, IMAP, and LDAP, Elsevier.
- John Rhoton, X.400 and SMTP: Battle of the E-mail Protocols, Elsevier.
- David Wood, Programming Internet Mail, O'Reilly.

External links
- IANA's list of standard header fields
- The History of Email is Dave Crocker's attempt at capturing the sequence of 'significant' occurrences in the evolution of email; a collaborative effort that also cites this page.
- The History of Electronic Mail is a personal memoir by the implementer of an early email system.
- A Look at the Origins of Network Email is a short, yet vivid recap of the key historical facts.
- Business E-Mail Compromise - An Emerging Global Threat, FBI.
- Explained from first principles, a 2021 article attempting to summarize more than 100 RFCs.

Internet terminology
Mail
History of the Internet
Computer-related introductions in 1971
Fediverse
Email
[ "Technology" ]
7,595
[ "Computing terminology", "Internet terminology" ]
9,739
https://en.wikipedia.org/wiki/Emoticon
An emoticon, short for emotion icon, is a pictorial representation of a facial expression using characters—usually punctuation marks, numbers and letters—to express a person's feelings, mood or reaction, without needing to describe it in detail.
The first ASCII emoticons are generally credited to computer scientist Scott Fahlman, who proposed what came to be known as "smileys"—:-) and :-(—in a message on the bulletin board system (BBS) of Carnegie Mellon University in 1982. In Western countries, emoticons are usually written at a right angle to the direction of the text. Users from Japan popularized a kind of emoticon called kaomoji, using the larger character sets available for Japanese. This style arose on ASCII NET of Japan in 1986. They are also known as verticons (from vertical emoticon) due to their readability without rotation.
As SMS mobile text messaging and the Internet became widespread in the late 1990s, emoticons became increasingly popular and were commonly used in texting, Internet forums and emails. Emoticons have played a significant role in communication through technology, and some devices and applications have provided stylized pictures that do not use text punctuation. They offer another range of "tone" in texting through facial gestures. Emoticons were the precursors to modern emojis.

History
Different uses of text characters (pre-1981)
In 1648, poet Robert Herrick wrote, "Tumble me down, and I will sit Upon my ruins, (smiling yet:)." Herrick's work predated any other recorded use of brackets as a smiling face by around 200 years. However, experts doubted whether the inclusion of the colon in the poem was deliberate and whether it was meant to represent a smiling face. English professor Alan Jacobs argued that "punctuation, in general, was unsettled in the seventeenth century ... Herrick was unlikely to have consistent punctuational practices himself, and even if he did he couldn't expect either his printers or his readers to share them." 17th-century typographic practice often placed colons and semicolons within parentheses, including 14 instances of a colon directly before a closing parenthesis in Richard Baxter's 1653 Plain Scripture Proof of Infants Church-membership and Baptism.
Precursors to modern emoticons have existed since the 19th century. The National Telegraphic Review and Operators Guide in April 1857 documented the use of the number 73 in Morse code to express "love and kisses" (later reduced to the more formal "best regards"). Dodge's Manual in 1908 documented the reintroduction of "love and kisses" as the number 88. New Zealand academics Joan Gajadhar and John Green comment that both Morse code abbreviations are more succinct than modern abbreviations such as LOL. The transcript of one of Abraham Lincoln's speeches in 1862 recorded the audience's reaction as: "(applause and laughter ;)". There has been some debate whether the glyph in Lincoln's speech was a typo, a legitimate punctuation construct or the first emoticon. Linguist Philip Seargeant argues that it was a simple typesetting error. Before March 1881, examples of "typographical art" appeared in at least three newspaper articles, including Kurjer warszawski (published in Warsaw) from March 5, 1881, using punctuation to represent the emotions of joy, melancholy, indifference and astonishment.
In a 1912 essay titled "For Brevity and Clarity", American author Ambrose Bierce suggested facetiously that a bracket could be used to represent a smiling face, proposing "an improvement in punctuation" with which writers could convey cachinnation, loud or immoderate laughter: "it is written thus ‿ and presents a smiling mouth. It is to be appended, with the full stop, to every jocular or ironical sentence".
In a 1936 Harvard Lampoon article, writer Alan Gregg proposed combining brackets with various other punctuation marks to represent various moods. Brackets were used for the sides of the mouth or cheeks, with other punctuation used between the brackets to display various emotions: (-) for a smile, (--) (showing more "teeth") for laughter, (#) for a frown and (*) for a wink.
An instance of text characters representing a sideways smiling and frowning face could be found in the New York Herald Tribune on March 10, 1953, promoting the film Lili starring Leslie Caron. The September 1962 issue of MAD magazine included an article titled "Typewri-toons". The piece, featuring typewriter-generated artwork credited to "Royal Portable", was entirely made up of repurposed typography, including a capital letter P having a bigger 'bust' than a capital I, a lowercase b and d discussing their pregnancies, an asterisk on top of a letter to indicate the letter had just come inside from snowfall, and a classroom of lowercase n's interrupted by a lowercase h "raising its hand". A further example, attributed to a Baltimore Sunday Sun columnist, appeared in a 1967 article in Reader's Digest, using a dash and right bracket to represent a tongue in one's cheek: —).
Prefiguring the modern "smiley" emoticon, writer Vladimir Nabokov told an interviewer from The New York Times in 1969, "I often think there should exist a special typographical sign for a smile—some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question."
In the 1970s, the PLATO IV computer system was launched. It was one of the first computers used throughout educational and professional institutions, but it was rarely used in a residential setting. On the system, a student at the University of Illinois developed pictograms that resembled different smiling faces. Mary Kalantzis and Bill Cope stated this likely took place in 1972, and they claimed these to be the first emoticons.

ASCII emoticon use in digital communication (1982–mid-1990s)
Carnegie Mellon computer scientist Scott Fahlman is generally credited with the invention of the digital text-based emoticon in 1982. The use of ASCII symbols, a standard set of codes representing typographical marks, was essential to allow the symbols to be displayed on any computer. On Carnegie Mellon's bulletin board system, Fahlman proposed the colon–hyphen–right bracket sequence :-) as a label for "attempted humor", to try to solve the difficulty of conveying humor or sarcasm in plain text. Fahlman sent the following message after an incident where a humorous warning about a mercury spill in an elevator was misunderstood as serious:

19-Sep-82 11:44 Scott E Fahlman :-)
From: Scott E Fahlman <Fahlman at Cmu-20c>

I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use

:-(

Within a few months, the smiley had spread to the ARPANET and Usenet.
Other suggestions on the forum included an asterisk (*) and an ampersand (&), the latter meant to represent a person doubled over in laughter, as well as a percent sign (%) and a pound sign (#). Scott Fahlman suggested that his emoticon could not only communicate emotion, but also replace language. Since the 1990s, emoticons of this kind (colon, hyphen and bracket) have become integral to digital communications, and have inspired a variety of other emoticons, including the "winking" face ;-) using a semicolon, representations of the Face with Tears of Joy emoji, and the acronym LOL.
In 1996, The Smiley Company was established by Nicolas Loufrani and his father Franklin as a way of commercializing the smiley trademark. As part of this, The Smiley Dictionary website catalogued ASCII emoticons. Many others did similar work from 1995 onwards, including David Sanderson, who created the book Smileys in 1997. James Marshall also hosted an online collection of ASCII emoticons that he completed in 2008.
A researcher at Stanford University surveyed the emoticons used in four million Twitter messages and found that the smiling emoticon without a hyphen "nose" was much more common than the original version with the hyphen. Linguist Vyvyan Evans argues that this represents a shift in usage by younger users as a form of covert prestige: rejecting a standard usage in order to demonstrate in-group membership.

Graphical emoticons and other developments (1990s–present)
Loufrani began to use the basic text designs and turned them into graphical representations, now known as graphical emoticons. His designs were registered at the United States Copyright Office in 1997 and appeared online as GIF files in 1998. For ASCII emoticons that did not yet exist to convert into graphical form, Loufrani also backward-engineered new ASCII emoticons from the graphical versions he created. These were the first graphical representations of ASCII emoticons. He published his Smiley icons, as well as emoticons created by others, along with their ASCII versions, in an online Smiley Dictionary in 2001. This dictionary included 640 different smiley icons and was published as a book called Dico Smileys in 2002. In 2017, British magazine The Drum referred to Loufrani as the "godfather of the emoji" for his work in the field.
On September 23, 2021, it was announced that Scott Fahlman was holding an auction for the original emoticons he created in 1982. The auction was held in Dallas, United States, and sold the two designs as non-fungible tokens (NFTs). The online auction ended later that month, with the originals selling for US$237,500.
In some programming languages, certain operators are known informally by their emoticon-like appearance. These include the spaceship operator <=> (a comparison), the diamond operator <> (for type hinting) and the Elvis operator ?: (a shortened ternary operator).

Styles
Western
Usually, emoticons in Western style have the eyes on the left, followed by the nose and the mouth. An emoticon is commonly placed at the end of a sentence, replacing the full stop. The two-character version :), which omits the nose, is very popular. The most basic emoticons are relatively consistent in form, but some can be rotated (making them tiny ambigrams). There are also some variations of emoticons with new meanings obtained by changing a character to express another feeling: for example, :( equals sad and :(( equals very sad. Weeping can be written as :'(. A blush can be expressed as :">.
Others include the wink ;), the grin :D, :P for a tongue stuck out, and a smug face. These can be used to denote a flirting or joking tone, or may imply a second meaning in the sentence preceding them, e.g. a ;P after blowing a raspberry. An often used combination is also <3 for a heart and </3 for a broken heart. :O is also sometimes used to depict shock. :/ is used to depict melancholy, disappointment or disapproval. :| may be used to depict a neutral face.
A broad grin is sometimes shown with crinkled eyes to express further amusement; XD and the addition of further "D" letters can suggest laughter or extreme amusement, e.g. XDDDD. The "3" in X3 and :3 represents an animal's mouth. An equal sign is often used for the eyes in place of the colon, seen as =). It has become more acceptable to omit the hyphen, whether a colon or an equal sign is used for the eyes. One linguistic study has indicated that the use of a nose in an emoticon may be related to the user's age, with younger people less likely to use a nose.
Some variants are also more common in certain countries due to keyboard layouts. For example, the smiley =) may occur in Scandinavia. Diacritical marks are sometimes used. The letters Ö and Ü can be seen as emoticons, as the upright versions of :O (meaning that one is surprised) and :D (meaning that one is very happy), respectively. In countries where the Cyrillic alphabet is used, the right parenthesis ) is used as a smiley. Multiple parentheses )))) are used to express greater happiness, amusement or laughter. The colon is omitted due to being in a lesser-known position on the ЙЦУКЕН keyboard layout. The 'shrug' emoticon uses the glyph ツ from the Japanese katakana writing system.
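One way to make the eyes-nose-mouth anatomy described above concrete is a small regular expression. This is a rough sketch only: it catches a handful of common Western forms and will happily misfire on things like URLs.

```python
import re

WESTERN = re.compile(r"""
    [:;=X]        # eyes: colon, semicolon, equals sign, or X
    -?            # optional hyphen "nose" (younger users tend to drop it)
    [)(DPO|/]     # mouth: smile, frown, grin, tongue, shock, neutral, wry
    """, re.VERBOSE)

text = "Great talk :) though the demo crashed :-( oh well =D"
print(WESTERN.findall(text))    # [':)', ':-(', '=D']
```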
Kaomoji (Japan ASCII movement)
Kaomoji are often seen as the Japanese development of emoticons, separate from the Scott Fahlman movement that started in 1982. In 1986, a designer began to use brackets and other ASCII text characters to form faces. Over time, the two traditions became more differentiated from each other, although both use ASCII characters. More westernised kaomoji have dropped the brackets, such as owo, uwu and TwT, popularised in internet subcultures such as the anime and furry communities.

2channel
Users of the Japanese discussion board 2channel, in particular, have developed a variety of emoticons using characters from various scripts, such as Kannada, as in ಠ_ಠ (for a look of disapproval, disbelief or confusion). Similarly, the letter ರೃ was used in emoticons to represent a monocle and ಥ to represent a tearing eye. They were picked up by 4chan and spread to other Western sites soon after. Some have become characters in their own right, like Monā.

Korean
In South Korea, emoticons use Korean Hangul letters, and the Western style is rarely used. The structures of Korean and Japanese emoticons are somewhat similar, but they have some differences. Korean style contains Korean jamo (letters) instead of other characters. The consonant jamos ㅅ, ㅁ or ㅂ can be used as the mouth or nose component, and ㅇ, ㅎ or ㅍ for the eyes. Quotation marks " and apostrophes ' are also commonly used in combinations. Vowel jamos such as ㅜ and ㅠ can depict a crying face, serving the same function as T in the Western style. Sometimes ㅡ (not an em-dash "—", but a vowel jamo), a comma (,) or an underscore (_) is added, and the two character sets can be mixed together. Semicolons and carets are also commonly used in Korean emoticons; semicolons can indicate sweating.

Chinese ideographic
The character 囧 (U+56E7), which means "bright", may be combined with the posture emoticon Orz, as in 囧rz. The character existed in Oracle bone script but was rarely used until its adoption as an emoticon, documented as early as January 20, 2005. Other variants of 囧 include 崮 (king 囧), 莔 (queen 囧), 商 (囧 with a hat), 囧興 (turtle) and 卣 (Bomberman). The character 槑 (U+69D1), a variant of 梅 ("plum"), is used to represent a doubling of 呆 ("dull"), or a further magnitude of dullness. In Chinese, normally full characters (as opposed to the stylistic use of 槑) might be duplicated to express emphasis.

Posture emoticons
Orz
Orz (with variants in other letter cases and scripts) is an emoticon representing a kneeling or bowing person (the Japanese version of which is called dogeza), with the "o" being the head, the "r" being the arms and part of the body, and the "z" being part of the body and the legs. This stick figure can represent respect or kowtowing, but commonly appears along a range of responses, including "frustration, despair, sarcasm, or grudging respect".
It was first used in late 2002 at the forum on Techside, a Japanese personal website. At the Techside FAQ forum, a poster asked about a cable cover, typing a string of characters to show a cable and its cover. Others commented that the string looked like a kneeling person, and the symbol became popular. These comments were soon deleted as they were considered off-topic. By 2005, Orz had spawned a subculture: blogs have been devoted to the emoticon, and URL shortening services have been named after it. In Taiwan, Orz is associated with the concept of nice guys.

o7
o7, or O7, is an emoticon that depicts a person saluting, with the o being the head and the 7 being its arm.

Multimedia variations
A portmanteau of emotion and sound, an emotisound is a brief sound transmitted and played back during the viewing of a message, typically an IM message or email message. The sound is intended to communicate an emotional subtext. Some services, such as MuzIcons, combine emoticons and music players in an Adobe Flash-based widget.
In 2004, the Trillian chat application introduced a feature called "emotiblips", which allows Trillian users to stream files to their instant message recipients "as the voice and video equivalent of an emoticon". In 2007, MTV and Paramount Home Entertainment promoted the "emoticlip" as a form of viral marketing for the second season of the show The Hills. The emoticlips were twelve short snippets of dialogue from the show, uploaded to YouTube. The emoticlip concept is credited to the Bradley & Montgomery advertising firm, which wrote that it hoped the format would be widely adopted as "greeting cards that just happen to be selling something".

Intellectual property rights
In 2000, Despair, Inc. obtained a U.S. trademark registration for the "frowny" emoticon when used on "greeting cards, posters and art prints". In 2001, the company issued a satirical press release announcing that it would sue Internet users who typed the frowny; the company received protests when its mock release was posted on the technology news website Slashdot.
A number of patent applications have been filed on inventions that assist in communicating with emoticons. A few of these have been issued as US patents. US 6987991, for example, discloses a method developed in 2001 to send emoticons over a cell phone using a drop-down menu. The stated advantage was that it eases the entering of emoticons. The emoticon :-) was also filed in 2006 and registered in 2008 as a European Community Trademark (CTM).
In Finland, the Supreme Administrative Court ruled in 2012 that the emoticon cannot be trademarked, thus repealing a 2006 administrative decision that had trademarked several common emoticons, including :-), =) and :).
In 2005, a Russian court rejected a legal claim against Siemens by a man who claimed to hold a trademark on the ;-) emoticon. In 2008, Russian entrepreneur Oleg Teterin claimed to have been granted the trademark on the ;-) emoticon. A license would not "cost that much—tens of thousands of dollars" for companies, but would be free of charge for individuals.

Unicode
A different, but related, use of the term "emoticon" is found in the Unicode Standard, referring to a subset of emoji that display facial expressions. The standard explains this usage with reference to existing systems, which provided functionality for substituting certain textual emoticons with images or emoji of the expressions in question.
Some smiley faces have been present in Unicode since version 1.1, including a white frowning face, a white smiling face and a black smiling face ("black" refers to a glyph that is filled, "white" to a glyph that is unfilled). The Emoticons block was introduced in Unicode Standard version 6.0 (published in October 2010) and extended by 7.0. It fully covers the Unicode range from U+1F600 to U+1F64F. After that block had been filled, Unicode 8.0 (2015), 9.0 (2016) and 10.0 (2017) added additional emoticons in the range from U+1F910 to U+1F9FF. As of Unicode 10.0, the ranges U+1F90C–U+1F90F, U+1F93F, U+1F94D–U+1F94F, U+1F96C–U+1F97F, U+1F998–U+1F9CF (excluding U+1F9C0, which contains the 🧀 emoji) and U+1F9E7–U+1F9FF do not contain any emoticons. For historic and compatibility reasons, some other heads and figures, which mostly represent different aspects such as genders, activities, and professions rather than emotions, are also found in Miscellaneous Symbols and Pictographs (especially U+1F466–U+1F487) and Transport and Map Symbols. Body parts, mostly hands, are also encoded in the Dingbats and Miscellaneous Symbols blocks.

See also
- ASCII art
- Emotion Markup Language (EML)
- Emotions in virtual communication
- Henohenomoheji
- Hieroglyph
- iConji
- Internet slang
- Irony punctuation
- Kaoani
- List of emoticons
- Martian language
- Pixel art
- Smiley
- Tête à Toto
- Text
- Typographic alignment
- Typographic approximation

Explanatory notes

References

Further reading
- Bódi, Zoltán, and Veszelszki, Ágnes (2006). Emotikonok. Érzelemkifejezés az internetes kommunikációban (Emoticons: Expressing Emotions in Internet Communication). Budapest: Magyar Szemiotikai Társaság.
- Dresner, Eli, and Herring, Susan C. (2010). "Functions of the Non-verbal in CMC: Emoticons and Illocutionary Force" (preprint copy). Communication Theory 20: 249–268.
- Veszelszki, Ágnes (2012). "Connections of Image and Text in Digital and Handwritten Documents". In: Benedek, András, and Nyíri, Kristóf (eds.): The Iconic Turn in Education. Series Visual Learning, Vol. 2. Frankfurt am Main et al.: Peter Lang, pp. 97–110.
- Veszelszki, Ágnes (2015). "Emoticons vs. Reaction-Gifs: Non-Verbal Communication on the Internet from the Aspects of Visuality, Verbality and Time". In: Benedek, András, and Nyíri, Kristóf (eds.): Beyond Words: Pictures, Parables, Paradoxes (series Visual Learning, vol. 5). Frankfurt: Peter Lang, pp. 131–145.
- Wolf, Alecia (2000). "Emotional expression online: Gender differences in emoticon use". CyberPsychology & Behavior 3: 827–833.

External links

ASCII art
Computer-related introductions in 1982
Email
Internet forum terminology
Internet memes
Internet slang
Online chat
Pictograms
Emoticon
[ "Mathematics" ]
4,885
[ "Emoticons", "Symbols", "Pictograms" ]
9,742
https://en.wikipedia.org/wiki/Erd%C5%91s%20number
The Erdős number describes the "collaborative distance" between mathematician Paul Erdős and another person, as measured by authorship of mathematical papers. The same principle has been applied in other fields where a particular individual has collaborated with a large and broad number of peers.

Overview
Paul Erdős (1913–1996) was an influential Hungarian mathematician who in the latter part of his life spent a great deal of time writing papers with a large number of colleagues—over 500—working on solutions to outstanding mathematical problems. He published more papers during his lifetime (at least 1,525) than any other mathematician in history. (Leonhard Euler published more total pages of mathematics but fewer separate papers: about 800.) Erdős spent most of his career with no permanent home or job. He traveled with everything he owned in two suitcases, and would visit mathematicians he wanted to collaborate with, often unexpectedly, and expect to stay with them.
The idea of the Erdős number was originally created by the mathematician's friends as a tribute to his enormous output. Later it gained prominence as a tool to study how mathematicians cooperate to find answers to unsolved problems. Several projects are devoted to studying connectivity among researchers, using the Erdős number as a proxy. For example, Erdős collaboration graphs can tell us how authors cluster, how the number of co-authors per paper evolves over time, or how new theories propagate. Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers (i.e. high proximity). The median Erdős number of Fields Medalists is 3. Only 7,097 (about 5% of mathematicians with a collaboration path) have an Erdős number of 2 or lower. As time passes, the lowest Erdős number that can still be achieved will necessarily increase, as mathematicians with low Erdős numbers die and become unavailable for collaboration. Still, historical figures can have low Erdős numbers. For example, renowned Indian mathematician Srinivasa Ramanujan has an Erdős number of only 3 (through G. H. Hardy, Erdős number 2), even though Paul Erdős was only 7 years old when Ramanujan died.

Definition and application in mathematics
To be assigned an Erdős number, someone must be a coauthor of a research paper with another person who has a finite Erdős number. Paul Erdős himself is assigned an Erdős number of zero. An author's Erdős number is one greater than the lowest Erdős number of any of their collaborators; for example, an author who has coauthored a publication with Erdős would have an Erdős number of 1. The American Mathematical Society provides a free online tool to determine the collaboration distance between two mathematical authors listed in the Mathematical Reviews catalogue.
Erdős wrote around 1,500 mathematical articles in his lifetime, mostly co-written. He had 509 direct collaborators; these are the people with Erdős number 1. The people who have collaborated with them (but not with Erdős himself) have an Erdős number of 2 (12,600 people as of 7 August 2020), those who have collaborated with people who have an Erdős number of 2 (but not with Erdős or anyone with an Erdős number of 1) have an Erdős number of 3, and so forth. A person with no such coauthorship chain connecting to Erdős has an Erdős number of infinity (or an undefined one). Since the death of Paul Erdős, the lowest Erdős number that a new researcher can obtain is 2.
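The definition above is exactly a breadth-first search over the collaboration graph. A small sketch follows; the graph is toy data, with "X" standing in for an unspecified intermediate co-author.

```python
from collections import deque

def erdos_numbers(coauthors, root="Erdős"):
    """Each author's number is one more than the smallest number among
    their co-authors; unreachable authors get no entry (i.e. infinite)."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        author = queue.popleft()
        for peer in coauthors.get(author, ()):
            if peer not in dist:
                dist[peer] = dist[author] + 1
                queue.append(peer)
    return dist

# Toy collaboration graph: an edge means at least one joint paper.
graph = {
    "Erdős": {"X"},
    "X": {"Erdős", "Hardy"},
    "Hardy": {"X", "Ramanujan"},
    "Ramanujan": {"Hardy"},
    "Loner": set(),                       # no chain to Erdős
}
nums = erdos_numbers(graph)
print(nums["Ramanujan"])                  # 3, matching the example in the text
print(nums.get("Loner", "infinite"))      # infinite
```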
There is room for ambiguity over what constitutes a link between two authors. The American Mathematical Society collaboration distance calculator uses data from Mathematical Reviews, which includes most mathematics journals but covers other subjects only in a limited way, and which also includes some non-research publications. The Erdős Number Project web site applies a somewhat broader standard, counting research collaborations generally but excluding non-research publications such as elementary textbooks, joint editorships, obituaries, and the like. The "Erdős number of the second kind" restricts assignment of Erdős numbers to papers with only two collaborators.
The Erdős number was most likely first defined in print by Casper Goffman, an analyst whose own Erdős number is 2. Goffman published his observations about Erdős' prolific collaboration in a 1969 article entitled "And what is your Erdős number?" See also some comments in an obituary by Michael Golomb.
The median Erdős number among Fields medalists is as low as 3. Fields medalists with Erdős number 2 include Atle Selberg, Kunihiko Kodaira, Klaus Roth, Alan Baker, Enrico Bombieri, David Mumford, Charles Fefferman, William Thurston, Shing-Tung Yau, Jean Bourgain, Richard Borcherds, Manjul Bhargava, Jean-Pierre Serre and Terence Tao. There are no Fields medalists with Erdős number 1; however, Endre Szemerédi is an Abel Prize laureate with Erdős number 1.

Most frequent Erdős collaborators
While Erdős collaborated with hundreds of co-authors, a small number of individuals each co-authored dozens of papers with him.

Related fields
All Fields Medalists have a finite Erdős number, with values that range between 2 and 6 and a median of 3. In contrast, the median Erdős number across all mathematicians (with a finite Erdős number) is 5, with an extreme value of 13. Erdős number statistics have also been compiled for Nobel Prize laureates in Physics, Chemistry, Medicine, and Economics, recording for each prize how many laureates have a finite Erdős number and the minimum, maximum, average, and median Erdős numbers among them.

Physics
Among the Nobel Prize laureates in Physics, Albert Einstein and Sheldon Glashow have an Erdős number of 2. Nobel laureates with an Erdős number of 3 include Enrico Fermi, Otto Stern, Wolfgang Pauli, Max Born, Willis E. Lamb, Eugene Wigner, Richard P. Feynman, Hans A. Bethe, Murray Gell-Mann, Abdus Salam, Steven Weinberg, Norman F. Ramsey, Frank Wilczek, David Wineland, and Giorgio Parisi. Fields Medal-winning physicist Ed Witten has an Erdős number of 3.

Biology
Computational biologist Lior Pachter has an Erdős number of 2. Evolutionary biologist Richard Lenski has an Erdős number of 3, having co-authored a publication with Lior Pachter and with mathematician Bernd Sturmfels, each of whom has an Erdős number of 2.

Finance and economics
There are at least two winners of the Nobel Prize in Economics with an Erdős number of 2: Harry M. Markowitz (1990) and Leonid Kantorovich (1975). Other financial mathematicians with an Erdős number of 2 include David Donoho, Marc Yor, Henry McKean, Daniel Stroock, and Joseph Keller. Nobel Prize laureates in Economics with an Erdős number of 3 include Kenneth J. Arrow (1972), Milton Friedman (1976), Herbert A.
Simon (1978), Gerard Debreu (1983), John Forbes Nash, Jr. (1994), James Mirrlees (1996), Daniel McFadden (2000), Daniel Kahneman (2002), Robert J. Aumann (2005), Leonid Hurwicz (2007), Roger Myerson (2007), Alvin E. Roth (2012), Lloyd S. Shapley (2012), and Jean Tirole (2014). Some investment firms have been founded by mathematicians with low Erdős numbers, among them James B. Ax of Axcom Technologies and James H. Simons of Renaissance Technologies, both with an Erdős number of 3.

Philosophy
Since the more formal branches of philosophy share reasoning with the foundations of mathematics, these fields overlap considerably, and Erdős numbers are available for many philosophers. Philosophers John P. Burgess and Brian Skyrms have an Erdős number of 2. Jon Barwise and Joel David Hamkins, both with Erdős number 2, have also contributed extensively to philosophy, but are primarily described as mathematicians.

Law
Judge Richard Posner, having coauthored with Alvin E. Roth, has an Erdős number of at most 4. Roberto Mangabeira Unger, a politician, philosopher, and legal theorist who teaches at Harvard Law School, has an Erdős number of at most 4, having coauthored with Lee Smolin.

Politics
Angela Merkel, Chancellor of Germany from 2005 to 2021, has an Erdős number of at most 5.

Engineering
Some fields of engineering, in particular communication theory and cryptography, make direct use of the discrete mathematics championed by Erdős. It is therefore not surprising that practitioners in these fields have low Erdős numbers. For example, Robert McEliece, a professor of electrical engineering at Caltech, had an Erdős number of 1, having collaborated with Erdős himself. Cryptographers Ron Rivest, Adi Shamir, and Leonard Adleman, inventors of the RSA cryptosystem, all have Erdős number 2.

Linguistics
The Romanian mathematician and computational linguist Solomon Marcus had an Erdős number of 1 for a paper in Acta Mathematica Hungarica that he co-authored with Erdős in 1957.

Impact
Erdős numbers have been a part of the folklore of mathematicians throughout the world for many years. Among all working mathematicians at the turn of the millennium who have a finite Erdős number, the numbers range up to 15, the median is 5, and the mean is 4.65; almost everyone with a finite Erdős number has a number less than 8. Due to the very high frequency of interdisciplinary collaboration in science today, very large numbers of non-mathematicians in many other fields of science also have finite Erdős numbers. For example, political scientist Steven Brams has an Erdős number of 2. In biomedical research, it is common for statisticians to be among the authors of publications, and many statisticians can be linked to Erdős via John Tukey, who has an Erdős number of 2. Similarly, the prominent geneticist Eric Lander and the mathematician Daniel Kleitman have collaborated on papers, and since Kleitman has an Erdős number of 1, a large fraction of the genetics and genomics community can be linked via Lander and his numerous collaborators. Collaboration with Gustavus Simmons opened the door for Erdős numbers within the cryptographic research community, and many linguists have finite Erdős numbers, often through chains of collaboration with such notable scholars as Noam Chomsky (Erdős number 4), William Labov (3), Mark Liberman (3), Geoffrey Pullum (3), or Ivan Sag (4). There are also connections with arts fields.
According to Alex Lopez-Ortiz, all the Fields and Nevanlinna prize winners during the three cycles from 1986 to 1994 have Erdős numbers of at most 9. Earlier mathematicians published fewer papers than modern ones, and published jointly written papers more rarely. The earliest person known to have a finite Erdős number is either Antoine Lavoisier (born 1743, Erdős number 13), Richard Dedekind (born 1831, Erdős number 7), or Ferdinand Georg Frobenius (born 1849, Erdős number 3), depending on the standard of publication eligibility. Martin Tompa proposed a directed graph version of the Erdős number problem, orienting the edges of the collaboration graph from the alphabetically earlier author to the alphabetically later author and defining the monotone Erdős number of an author to be the length of a longest path from Erdős to the author in this directed graph. He found a path of this type of length 12. Also, Michael Barr has suggested "rational Erdős numbers", generalizing the idea that a person who has written p joint papers with Erdős should be assigned Erdős number 1/p: take the collaboration multigraph of the second kind (although he also has a way to deal with the case of the first kind)—with one edge between two mathematicians for each joint paper they have produced—and form an electrical network with a one-ohm resistor on each edge. The total resistance between two nodes then tells how "close" these two nodes are (a short computational sketch of this idea appears below). It has been argued that "for an individual researcher, a measure such as Erdős number captures the structural properties of [the] network whereas the h-index captures the citation impact of the publications," and that "One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking." In 2004, William Tozier, a mathematician with an Erdős number of 4, auctioned off a co-authorship on eBay, thereby offering the buyer an Erdős number of 5. The winning bid of $1031 was posted by a Spanish mathematician, who refused to pay and had only placed the bid to stop what he considered a mockery. Variations A number of variations on the concept have been proposed to apply to other fields, notably the Bacon number (as in the game Six Degrees of Kevin Bacon), connecting actors to the actor Kevin Bacon by a chain of joint appearances in films. It was created in 1994, 25 years after Goffman's article on the Erdős number. A small number of people are connected to both Erdős and Bacon and thus have an Erdős–Bacon number, which combines the two numbers by taking their sum. One example is the actress-mathematician Danica McKellar, best known for playing Winnie Cooper on the TV series The Wonder Years. Her Erdős number is 4, and her Bacon number is 2. Further extension is possible. For example, the "Erdős–Bacon–Sabbath number" is the sum of the Erdős–Bacon number and the collaborative distance to the band Black Sabbath in terms of singing in public. Physicist Stephen Hawking had an Erdős–Bacon–Sabbath number of 8, and actress Natalie Portman has one of 11 (her Erdős number is 5). In chess, the Morphy number describes a player's connection to Paul Morphy, widely considered the greatest chess player of his time and an unofficial World Chess Champion. In go, the Shusaku number describes a player's connection to Honinbo Shusaku, the strongest player of his time. In video games, the Ryu number describes a video game character's connection to the Street Fighter character Ryu.
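To make Barr's resistor-network proposal concrete, the effective resistance between two authors in the collaboration multigraph can be computed from the graph Laplacian via the standard identity R(a, b) = (e_a - e_b)^T L^+ (e_a - e_b), where L^+ is the Moore–Penrose pseudoinverse. This is a minimal sketch under that textbook formulation, using an invented two-author multigraph; writing p joint papers gives p parallel one-ohm resistors and hence resistance 1/p, matching the rational Erdős number:

```python
import numpy as np

def effective_resistance(nodes, edges, a, b):
    """Effective resistance between a and b when every edge
    (one per joint paper) is treated as a 1-ohm resistor."""
    index = {n: i for i, n in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))
    for u, v in edges:  # parallel edges accumulate, lowering resistance
        i, j = index[u], index[v]
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian
    e = np.zeros(len(nodes))
    e[index[a]], e[index[b]] = 1, -1
    return e @ Lp @ e

# Two joint papers with Erdős behave like two parallel resistors: 1/2 ohm.
print(effective_resistance(["Erdős", "X"],
                           [("Erdős", "X"), ("Erdős", "X")],
                           "Erdős", "X"))  # 0.5
```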
See also References External links Jerry Grossman, The Erdős Number Project. Contains statistics and a complete list of all mathematicians with an Erdős number less than or equal to 2. New Erdős Number Project website (migrated to a new site in 2021). "On a Portion of the Well-Known Collaboration Graph", Jerrold W. Grossman and Patrick D. F. Ion. "Some Analyses of Erdős Collaboration Graph", Vladimir Batagelj and Andrej Mrvar. American Mathematical Society, MR free tools: collaboration distance. A search engine for Erdős numbers and collaboration distance between other authors. Numberphile video. Ronald Graham on imaginary Erdős numbers. Number Social networks Mathematics literature Separation numbers Bibliometrics
Erdős number
[ "Mathematics", "Technology" ]
3,209
[ "Metrics", "Bibliometrics", "Quantity", "Mathematical objects", "Science and technology studies", "Separation numbers", "Numbers" ]
9,758
https://en.wikipedia.org/wiki/Era
An era is a span of time defined for the purposes of chronology or historiography, as in the regnal eras in the history of a given monarchy, a calendar era used for a given calendar, or the geological eras defined for the history of Earth. Comparable terms are epoch, age, period, saeculum, aeon (Greek aion) and the Sanskrit yuga. Etymology The word has been in use in English since 1615, and is derived from Late Latin aera "an era or epoch from which time is reckoned," probably identical to Latin æra "counters used for calculation," plural of æs "brass, money". The use of the Latin word in chronology seems to have begun in 5th-century Visigothic Spain, where it appears in the History of Isidore of Seville, and in later texts. The Spanish era is calculated from 38 BC, perhaps because of a tax (cf. indiction) levied in that year, or due to a miscalculation of the date of the Battle of Actium, which occurred in 31 BC. Like epoch, "era" in English originally meant "the starting point of an age"; the meaning "system of chronological notation" dates from c. 1646, and that of "historical period" from 1741. Use in chronology In chronology, an "era" is the highest level for the organization of the measurement of time. A "calendar era" indicates a span of many years which are numbered beginning at a specific reference date (epoch), which often marks the origin of a political state or cosmology, dynasty, ruler, the birth of a leader, or another significant historical or mythological event; it is generally named after its focus, as in "Victorian era". Geological era In large-scale natural science, there is a need for another time perspective, independent of human activity, and indeed spanning a far longer period (mainly prehistoric), where "geologic era" refers to well-defined time spans. The next-larger division of geologic time is the eon. The Phanerozoic Eon, for example, is subdivided into eras. There are currently three eras defined in the Phanerozoic: from youngest to oldest, they are the Cenozoic, the Mesozoic, and the Paleozoic. The older Proterozoic and Archean eons are also divided into eras. Cosmological era For periods in the history of the universe, the term "epoch" is typically preferred, but "era" is used, for example, of the "Stelliferous Era". Calendar eras Calendar eras count the years since a particular date (epoch), often one with religious significance. Anno mundi (year of the world) refers to a group of calendar eras based on a calculation of the age of the world, assuming it was created as described in the Book of Genesis. In Jewish religious contexts one of the versions is still used, and many Eastern Orthodox religious calendars used another version until 1728. Hebrew year 5772 AM began at sunset on 28 September 2011 and ended on 16 September 2012. In the Western church, Anno Domini (AD, also written CE), counting the years since the birth of Jesus according to traditional calculations, was always dominant. The Islamic calendar, which also has variants, counts years from the Hijra or emigration of the Islamic prophet Muhammad from Mecca to Medina, which occurred in 622 AD. The Islamic year is about eleven days shorter than 365 days; January 2012 fell in 1433 AH ("After Hijra"). From 1872 until the Second World War, the Japanese used the imperial year system (kōki), counting from the year when the legendary Emperor Jimmu founded Japan, which occurred in 660 BC.
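Because a calendar era of this sort simply counts years from a fixed epoch, converting between two such eras reduces, to a first approximation, to adding or subtracting a constant offset (ignoring differing new-year dates and calendar reforms). A rough sketch, using the kōki epoch of 660 BC described above and the Thai solar Buddhist Era offset implied by the example in the next paragraph; the names and functions are ours, for illustration only:

```python
# Illustrative offsets only; a real conversion must also handle
# differing new-year dates and calendar reforms.
ERA_OFFSETS = {
    "koki": 660,          # Japanese imperial year = AD year + 660
    "thai_buddhist": 543, # Thai solar BE = AD year + 543 (2000 AD = 2543 BE)
}

def ad_to_era(year_ad, era):
    """First-approximation conversion from an AD/CE year to an era count."""
    return year_ad + ERA_OFFSETS[era]

def era_to_ad(year_era, era):
    """Inverse conversion, under the same simplifying assumptions."""
    return year_era - ERA_OFFSETS[era]

print(ad_to_era(2000, "thai_buddhist"))  # 2543
print(ad_to_era(1940, "koki"))           # 2600
```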
Many Buddhist calendars count from the death of the Buddha, which according to the most commonly used calculations was in 545–543 BCE or 483 BCE. Dates are given as "BE" for "Buddhist Era"; 2000 AD was 2543 BE in the Thai solar calendar. Other calendar eras of the past counted from political events, such as the Seleucid era and the Ancient Roman ab urbe condita ("AUC"), counting from the foundation of the city. Regnal eras The word era also denotes the units used under a different, more arbitrary system, in which time is not represented as an endless continuum with a single reference year but each unit starts counting from one again, as if time were starting over. The use of regnal years is rather impractical and poses a challenge for historians whenever a single piece of the historical chronology is missing; it often reflects the preponderance in public life of an absolute ruler in many ancient cultures. Such traditions sometimes outlive the political power of the throne, and may even be based on mythological events or rulers who may not have existed (for example, Rome numbering from the rule of Romulus and Remus). In a manner of speaking, the use of the supposed date of the birth of Christ as a base year is itself a form of era. In East Asia, each emperor's reign may be subdivided into several reign periods, each being treated as a new era. The name of each was a motto or slogan chosen by the emperor. Different East Asian countries utilized slightly different systems, notably: Chinese eras Japanese era Korean eras Vietnamese eras A similar practice survived in the United Kingdom until quite recently, but only for formal official writings: in daily life the ordinary year A.D. has long been used, but Acts of Parliament were dated according to the regnal years of the current monarch, so that "61 & 62 Vict c. 37" refers to the Local Government (Ireland) Act 1898, passed in the session of Parliament in the 61st/62nd year of the reign of Queen Victoria. Historiography "Era" can be used to refer to well-defined periods in historiography, such as the Roman era, Elizabethan era, Victorian era, etc. Use of the term for more recent periods or topical history might include the Soviet era, and "musical eras" in the history of modern popular music, such as the "big band era", "disco era", etc. See also Periodization List of time periods List of archaeological periods References Chronology Units of time
Era
[ "Physics", "Mathematics" ]
1,295
[ "Chronology", "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
9,763
https://en.wikipedia.org/wiki/Exoplanet
An exoplanet or extrasolar planet is a planet outside the Solar System. The first possible evidence of an exoplanet was noted in 1917 but was not then recognized as such. The first confirmed detection of an exoplanet was in 1992 around a pulsar, and the first detection around a main-sequence star was in 1995. A different planet, first detected in 1988, was confirmed in 2003. In collaboration with ground-based and other space-based observatories, the James Webb Space Telescope (JWST) is expected to give more insight into exoplanet traits, such as their composition, environmental conditions, and potential for life. There are many methods of detecting exoplanets. Transit photometry and Doppler spectroscopy have found the most, but these methods suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone. In several cases, multiple planets have been observed around a star. About 1 in 5 Sun-like stars are estimated to have an "Earth-sized" planet in the habitable zone. Assuming there are 200 billion stars in the Milky Way, it can be hypothesized that there are 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if planets orbiting the numerous red dwarfs are included. The least massive exoplanet known is Draugr (also known as PSR B1257+12 A or PSR B1257+12 b), which is about twice the mass of the Moon. The most massive exoplanet listed on the NASA Exoplanet Archive is HR 2562 b, about 30 times the mass of Jupiter. However, according to some definitions of a planet (based on the nuclear fusion of deuterium), it is too massive to be a planet and might be a brown dwarf. Known orbital periods for exoplanets vary from less than an hour (for those closest to their star) to thousands of years. Some exoplanets are so far away from the star that it is difficult to tell whether they are gravitationally bound to it. Almost all planets detected so far are within the Milky Way. However, there is evidence that extragalactic planets, exoplanets located in other galaxies, may exist. The nearest exoplanets are located 4.2 light-years (1.3 parsecs) from Earth and orbit Proxima Centauri, the closest star to the Sun. The discovery of exoplanets has intensified interest in the search for extraterrestrial life. There is special interest in planets that orbit in a star's habitable zone (sometimes called the "goldilocks zone"), where it is possible for liquid water, a prerequisite for life as we know it, to exist on the surface. However, the study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. Rogue planets are those that do not orbit any star or other higher-mass host. Such objects are considered a separate category of planetary-mass object, especially if they are gas giants, which are often counted as sub-brown dwarfs. The rogue planets in the Milky Way possibly number in the billions or more. Definition IAU The official definition of the term planet used by the International Astronomical Union (IAU) only covers the Solar System and thus does not apply to exoplanets. The IAU Working Group on Extrasolar Planets issued a position statement containing a working definition of "planet" in 2001, which was modified in 2003 and set out the criteria by which an exoplanet was defined. This working definition was amended by the IAU's Commission F2: Exoplanets and the Solar System in August 2018.
The amendment established the official working definition of an exoplanet that is in use today. Alternatives The IAU's working definition is not always used. One alternate suggestion is that planets should be distinguished from brown dwarfs on the basis of their formation. It is widely thought that giant planets form through core accretion, which may sometimes produce planets with masses above the deuterium fusion threshold; massive planets of that sort may have already been observed. Brown dwarfs form like stars from the direct gravitational collapse of clouds of gas, and this formation mechanism also produces objects that are below the deuterium-fusion limit. Objects in this mass range that orbit their stars with wide separations of hundreds or thousands of astronomical units (AU) and have large star/object mass ratios likely formed as brown dwarfs; their atmospheres would likely have a composition more similar to their host star than accretion-formed planets, which would contain increased abundances of heavier elements. Most directly imaged planets as of April 2014 are massive and have wide orbits, so probably represent the low-mass end of brown dwarf formation. One study suggests that objects above a certain mass threshold formed through gravitational instability and should not be thought of as planets. Also, the 13-Jupiter-mass cutoff does not have a precise physical significance. Deuterium fusion can occur in some objects with a mass below that cutoff. The amount of deuterium fused depends to some extent on the composition of the object. As of 2011, the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around 13 MJup in the observed mass spectrum reinforces the choice to forget this mass limit". As of 2016, this limit was increased to 60 Jupiter masses based on a study of mass–density relationships. The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity." The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses. Another criterion for separating planets and brown dwarfs, rather than deuterium fusion, formation process or location, is whether the core pressure is dominated by Coulomb pressure or electron degeneracy pressure, with the dividing line at around 5 Jupiter masses. Nomenclature The convention for naming exoplanets is an extension of the system used for designating multiple-star systems as adopted by the International Astronomical Union (IAU). For exoplanets orbiting a single star, the IAU designation is formed by taking the designated or proper name of its parent star and adding a lower-case letter. Letters are given in order of each planet's discovery around the parent star, so that the first planet discovered in a system is designated "b" (the parent star is considered "a") and later planets are given subsequent letters. If several planets in the same system are discovered at the same time, the closest one to the star gets the next letter, followed by the other planets in order of orbital size. A provisional IAU-sanctioned standard exists to accommodate the designation of circumbinary planets. A limited number of exoplanets have IAU-sanctioned proper names. Other naming systems exist.
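The lettering convention above is mechanical enough to capture in a few lines. A toy sketch (the star name and planet records are invented; simultaneous discoveries are assumed to be pre-sorted by orbital size, as the convention requires):

```python
def iau_designations(star, planets_in_discovery_order):
    """Assign IAU-style lowercase letters, starting at 'b'.

    The parent star is implicitly 'a'; the first planet discovered
    gets 'b', the next 'c', and so on.
    """
    return {planet: f"{star} {chr(ord('b') + i)}"
            for i, planet in enumerate(planets_in_discovery_order)}

# Hypothetical system with three planets found in this order:
print(iau_designations("Examplestar", ["P1", "P2", "P3"]))
# {'P1': 'Examplestar b', 'P2': 'Examplestar c', 'P3': 'Examplestar d'}
```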
History of detection For centuries scientists, philosophers, and science fiction writers suspected that extrasolar planets existed, but there was no way of knowing whether they actually existed, how common they were, or how similar they might be to the planets of the Solar System. Various detection claims made in the nineteenth century were rejected by astronomers. The first evidence of a possible exoplanet, orbiting Van Maanen 2, was noted in 1917, but was not recognized as such. The astronomer Walter Sydney Adams, who later became director of the Mount Wilson Observatory, produced a spectrum of the star using Mount Wilson's 60-inch telescope. He interpreted the spectrum to be of an F-type main-sequence star, but it is now thought that such a spectrum could be caused by the residue of a nearby exoplanet that had been pulverized by the gravity of the star, the resulting dust then falling onto the star. The first suspected scientific detection of an exoplanet occurred in 1988. Shortly afterwards, the first confirmation of detection came in 1992, when Aleksander Wolszczan announced the discovery of several terrestrial-mass planets orbiting the pulsar PSR B1257+12. The first confirmation of an exoplanet orbiting a main-sequence star was made in 1995, when a giant planet was found in a four-day orbit around the nearby star 51 Pegasi. Some exoplanets have been imaged directly by telescopes, but the vast majority have been detected through indirect methods, such as the transit method and the radial-velocity method. In February 2018, researchers using the Chandra X-ray Observatory, combined with a planet detection technique called microlensing, found evidence of planets in a distant galaxy, stating, "Some of these exoplanets are as (relatively) small as the moon, while others are as massive as Jupiter. Unlike Earth, most of the exoplanets are not tightly bound to stars, so they're actually wandering through space or loosely orbiting between stars. We can estimate that the number of planets in this [faraway] galaxy is more than a trillion." Early speculations In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that fixed stars are similar to the Sun and are likewise accompanied by planets. In the eighteenth century, the same possibility was mentioned by Isaac Newton in the "General Scholium" that concludes his Principia. Making a comparison to the Sun's planets, he wrote "And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of One." In 1938, D. Belorizky demonstrated that it was realistic to search for exo-Jupiters by using transit photometry. In 1952, more than 40 years before the first hot Jupiter was discovered, Otto Struve wrote that there is no compelling reason that planets could not be much closer to their parent star than is the case in the Solar System, and proposed that Doppler spectroscopy and the transit method could detect super-Jupiters in short orbits. Discredited claims Claims of exoplanet detections have been made since the nineteenth century. Some of the earliest involve the binary star 70 Ophiuchi. In 1855, William Stephen Jacob at the East India Company's Madras Observatory reported that orbital anomalies made it "highly probable" that there was a "planetary body" in this system. In the 1890s, Thomas J. J.
See of the University of Chicago and the United States Naval Observatory stated that the orbital anomalies proved the existence of a dark body in the 70 Ophiuchi system with a 36-year period around one of the stars. However, Forest Ray Moulton published a paper proving that a three-body system with those orbital parameters would be highly unstable. During the 1950s and 1960s, Peter van de Kamp of Swarthmore College made another prominent series of detection claims, this time for planets orbiting Barnard's Star. Astronomers now generally regard all early reports of detection as erroneous. In 1991, Andrew Lyne, M. Bailes and S. L. Shemar claimed to have discovered a pulsar planet in orbit around PSR 1829-10, using pulsar timing variations. The claim briefly received intense attention, but Lyne and his team soon retracted it. Confirmed discoveries Thousands of confirmed exoplanets are listed in the NASA Exoplanet Archive, including a few that were confirmations of controversial claims from the late 1980s. The first published discovery to receive subsequent confirmation was made in 1988 by the Canadian astronomers Bruce Campbell, G. A. H. Walker, and Stephenson Yang of the University of Victoria and the University of British Columbia. Although they were cautious about claiming a planetary detection, their radial-velocity observations suggested that a planet orbits the star Gamma Cephei. Partly because the observations were at the very limits of instrumental capabilities at the time, astronomers remained skeptical for several years about this and other similar observations. It was thought some of the apparent planets might instead have been brown dwarfs, objects intermediate in mass between planets and stars. In 1990, additional observations were published that supported the existence of the planet orbiting Gamma Cephei, but subsequent work in 1992 again raised serious doubts. Finally, in 2003, improved techniques allowed the planet's existence to be confirmed. On 9 January 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. Follow-up observations solidified these results, and confirmation of a third planet in 1994 revived the topic in the popular press. These pulsar planets are thought to have formed from the unusual remnants of the supernova that produced the pulsar, in a second round of planet formation, or else to be the remaining rocky cores of gas giants that somehow survived the supernova and then decayed into their current orbits. Because pulsars are such extreme, violent environments, it was considered unlikely at the time that planets could form in orbit around them. In the early 1990s, a group of astronomers led by Donald Backer, who were studying what they thought was a binary pulsar (PSR B1620−26 b), determined that a third object was needed to explain the observed Doppler shifts. Within a few years, the gravitational effects of the planet on the orbit of the pulsar and white dwarf had been measured, giving an estimate of the mass of the third object that was too small for it to be a star. The conclusion that the third object was a planet was announced by Stephen Thorsett and his collaborators in 1993. On 6 October 1995, Michel Mayor and Didier Queloz of the University of Geneva announced the first definitive detection of an exoplanet orbiting a main-sequence star, the nearby G-type star 51 Pegasi.
This discovery, made at the Observatoire de Haute-Provence, ushered in the modern era of exoplanetary discovery, and was recognized by a share of the 2019 Nobel Prize in Physics. Technological advances, most notably in high-resolution spectroscopy, led to the rapid detection of many new exoplanets: astronomers could detect exoplanets indirectly by measuring their gravitational influence on the motion of their host stars. More extrasolar planets were later detected by observing the variation in a star's apparent luminosity as an orbiting planet transited in front of it. Initially, most of the known exoplanets were massive planets that orbited very close to their parent stars. Astronomers were surprised by these "hot Jupiters", because theories of planetary formation had indicated that giant planets should only form at large distances from stars. But eventually more planets of other sorts were found, and it is now clear that hot Jupiters make up only a minority of exoplanets. In 1999, Upsilon Andromedae became the first main-sequence star known to have multiple planets. Kepler-16 contains the first discovered planet that orbits a binary main-sequence star system. On 26 February 2014, NASA announced the discovery of 715 newly verified exoplanets around 305 stars by the Kepler Space Telescope. These exoplanets were checked using a statistical technique called "verification by multiplicity". Before these results, most confirmed planets were gas giants comparable in size to Jupiter or larger, because they were more easily detected, but the Kepler planets are mostly between the size of Neptune and the size of Earth. On 23 July 2015, NASA announced Kepler-452b, a near-Earth-size planet orbiting in the habitable zone of a G2-type star. On 6 September 2018, NASA discovered an exoplanet about 145 light-years away from Earth in the constellation Virgo. This exoplanet, Wolf 503b, is twice the size of Earth and was discovered orbiting a type of star known as an "orange dwarf". Wolf 503b completes one orbit in as few as six days because it is very close to its star. Wolf 503b is the only exoplanet of that size found near the so-called small-planet radius gap. The gap, sometimes called the Fulton gap, is the observation that it is unusual to find exoplanets with sizes between 1.5 and 2 times the radius of the Earth. In January 2020, scientists announced the discovery of TOI 700 d, the first Earth-sized planet in the habitable zone detected by TESS. Candidate discoveries As of January 2020, NASA's Kepler and TESS missions had identified 4374 planetary candidates yet to be confirmed, several of them nearly Earth-sized and located in the habitable zone, some around Sun-like stars. In September 2020, astronomers reported evidence, for the first time, of an extragalactic planet, M51-ULS-1b, detected by its eclipsing of a bright X-ray source (XRS) in the Whirlpool Galaxy (M51a). Also in September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet unbound by any star and free-floating in the Milky Way galaxy. Detection methods Direct imaging Planets are extremely faint compared to their parent stars. For example, a Sun-like star is about a billion times brighter than the reflected light from any exoplanet orbiting it. It is difficult to detect such a faint light source, and furthermore, the parent star causes a glare that tends to wash it out.
It is necessary to block the light from the parent star to reduce the glare while leaving the light from the planet detectable; doing so is a major technical challenge which requires extreme optothermal stability. All exoplanets that have been directly imaged are both large (more massive than Jupiter) and widely separated from their parent stars. Specially designed direct-imaging instruments such as the Gemini Planet Imager, VLT-SPHERE, and SCExAO are expected to image dozens of gas giants, but the vast majority of known extrasolar planets have only been detected through indirect methods. Indirect methods Transit method If a planet crosses (or transits) in front of its parent star's disk, then the observed brightness of the star drops by a small amount. The amount by which the star dims depends on its size and on the size of the planet, among other factors. Because the transit method requires that the planet's orbit intersect a line-of-sight between the host star and Earth, the probability that an exoplanet in a randomly oriented orbit will be observed to transit the star is somewhat small. The Kepler telescope used this method. Radial velocity or Doppler method As a planet orbits a star, the star also moves in its own small orbit around the system's center of mass. Variations in the star's radial velocity—that is, the speed with which it moves towards or away from Earth—can be detected from displacements in the star's spectral lines due to the Doppler effect. Radial-velocity variations as small as 1 m/s, or even somewhat less, can be observed. Transit timing variation (TTV) When multiple planets are present, each one slightly perturbs the others' orbits. Small variations in the times of transit for one planet can thus indicate the presence of another planet, which itself may or may not transit. For example, variations in the transits of the planet Kepler-19b suggest the existence of a second planet in the system, the non-transiting Kepler-19c. Transit duration variation (TDV) When a planet orbits multiple stars or if the planet has moons, its transit time can vary significantly per transit. Although no new planets or moons have been discovered with this method, it has been used to successfully confirm many transiting circumbinary planets. Gravitational microlensing Microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. Planets orbiting the lensing star can cause detectable anomalies in the magnification as it varies over time. Unlike most other methods, which have a detection bias towards planets with small (or, for resolved imaging, large) orbits, the microlensing method is most sensitive to detecting planets around 1–10 AU away from Sun-like stars. Astrometry Astrometry consists of precisely measuring a star's position in the sky and observing the changes in that position over time. The motion of a star due to the gravitational influence of a planet may be observable. Because the motion is so small, however, this method was not very productive until the 2020s. It has produced only a few confirmed discoveries, though it has been successfully used to investigate the properties of planets found in other ways. Pulsar timing A pulsar, a small, dense remnant of a star that has exploded as a supernova, emits radio waves regularly as it rotates. If planets orbit the pulsar, the motion of the pulsar around the system's center of mass alters the pulsar's distance to Earth over time.
As a result, the radio pulses from the pulsar arrive on Earth at a later or earlier time. This light travel delay, due to the pulsar being physically closer to or farther from Earth, is known as a Roemer time delay. The first confirmed discovery of an extrasolar planet was made using this method. But the method has not been very productive: as of 2011, only five planets had been detected in this way, around three different pulsars. Variable star timing (pulsation frequency) Like pulsars, some other types of stars exhibit periodic activity. Deviations from periodicity can sometimes be caused by a planet orbiting the star. As of 2013, a few planets have been discovered with this method. Reflection/emission modulations When a planet orbits very close to a star, it catches a considerable amount of starlight. As the planet orbits the star, the amount of light changes, because planets show phases from Earth's viewpoint and glow more from one side than the other due to temperature differences. Relativistic beaming The relativistic beaming method measures small variations in the observed flux from the star due to its motion: the star's apparent brightness changes as it moves towards or away from Earth while orbiting the system's center of mass. Ellipsoidal variations Massive planets close to their host stars can slightly deform the shape of the star. This causes the brightness of the star to deviate slightly depending on how it is rotated relative to Earth. Polarimetry With the polarimetry method, polarized light reflected off the planet is separated from unpolarized light emitted from the star. No new planets have been discovered with this method, although a few already-known planets have been detected with it. Circumstellar disks Disks of space dust surround many stars, thought to originate from collisions among asteroids and comets. The dust can be detected because it absorbs starlight and re-emits it as infrared radiation. Features on the disks may suggest the presence of planets, though this is not considered a definitive detection method. Formation and evolution Planets may form within a few to tens (or more) of millions of years of their star forming. The planets of the Solar System can only be observed in their current state, but observations of different planetary systems of varying ages allow us to observe planets at different stages of evolution. Available observations range from young proto-planetary disks where planets are still forming to planetary systems over 10 Gyr old. When planets form in a gaseous protoplanetary disk, they accrete hydrogen/helium envelopes. These envelopes cool and contract over time and, depending on the mass of the planet, some or all of the hydrogen/helium is eventually lost to space. This means that even terrestrial planets may start off with large radii if they form early enough. An example is Kepler-51b, which has only about twice the mass of Earth but is almost the size of Saturn, which is a hundred times the mass of Earth. Kepler-51b is quite young, at a few hundred million years old. Planet-hosting stars On average, there is at least one planet per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone. Most known exoplanets orbit stars roughly similar to the Sun, i.e. main-sequence stars of spectral categories F, G, or K. Lower-mass stars (red dwarfs, of spectral category M) are less likely to have planets massive enough to be detected by the radial-velocity method.
Despite this, several tens of planets around red dwarfs have been discovered by the Kepler space telescope, which uses the transit method to detect smaller planets. Using data from Kepler, a correlation has been found between the metallicity of a star and the probability that the star hosts a giant planet similar in size to Jupiter. Stars with higher metallicity are more likely to have planets, especially giant planets, than stars with lower metallicity. Some planets orbit one member of a binary star system, and several circumbinary planets have been discovered which orbit both members of a binary star. A few planets in triple star systems are known, as well as one in the quadruple system Kepler-64. Orbital and physical parameters General features Color and brightness In 2013, the color of an exoplanet was determined for the first time. The best-fit albedo measurements of HD 189733b suggest that it is deep dark blue. Later that same year, the colors of several other exoplanets were determined, including GJ 504 b, which visually has a magenta color, and Kappa Andromedae b, which if seen up close would appear reddish in color. Helium planets are expected to be white or grey in appearance. The apparent brightness (apparent magnitude) of a planet depends on how far away the observer is, how reflective the planet is (albedo), and how much light the planet receives from its star, which depends on how far the planet is from the star and how bright the star is. So, a planet with a low albedo that is close to its star can appear brighter than a planet with a high albedo that is far from the star. The darkest known planet in terms of geometric albedo is TrES-2b, a hot Jupiter that reflects less than 1% of the light from its star, making it less reflective than coal or black acrylic paint. Hot Jupiters are expected to be quite dark due to sodium and potassium in their atmospheres, but it is not known why TrES-2b is so dark—it could be due to an unknown chemical compound. For gas giants, geometric albedo generally decreases with increasing metallicity or atmospheric temperature unless there are clouds to modify this effect. Increased cloud-column depth increases the albedo at optical wavelengths, but decreases it at some infrared wavelengths. Optical albedo increases with age, because older planets have higher cloud-column depths. Optical albedo decreases with increasing mass, because higher-mass giant planets have higher surface gravities, which produces lower cloud-column depths. Also, elliptical orbits can cause major fluctuations in atmospheric composition, which can have a significant effect. There is more thermal emission than reflection at some near-infrared wavelengths for massive and/or young gas giants. So, although optical brightness is fully phase-dependent, this is not always the case in the near infrared. Temperatures of gas giants reduce over time and with distance from their stars. Lowering the temperature increases optical albedo even without clouds. At a sufficiently low temperature, water clouds form, which further increase optical albedo. At even lower temperatures, ammonia clouds form, resulting in the highest albedos at most optical and near-infrared wavelengths. Magnetic field In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. It is the first (indirect) detection of a magnetic field on an exoplanet. The magnetic field is estimated to be about one-tenth as strong as Jupiter's.
The magnetic fields of exoplanets are thought to be detectable by their auroral radio emissions with sensitive low-frequency radio telescopes such as LOFAR, although such emissions have yet to be found. The radio emissions could measure the rotation rate of the interior of an exoplanet, and may yield a more accurate way to measure exoplanet rotation than by examining the motion of clouds. However, the most sensitive radio search for auroral emissions thus far, covering nine exoplanets observed with Arecibo, did not result in any discoveries. Earth's magnetic field results from its flowing liquid metallic core, but on massive super-Earths with high pressure, different compounds may form which do not match those created under terrestrial conditions. Compounds may form with greater viscosities and high melting temperatures, which could prevent the interiors from separating into different layers and so result in undifferentiated coreless mantles. Forms of magnesium oxide such as MgSi3O12 could be a liquid metal at the pressures and temperatures found in super-Earths and could generate a magnetic field in the mantles of super-Earths. Hot Jupiters have been observed to have a larger radius than expected. This could be caused by the interaction between the stellar wind and the planet's magnetosphere creating an electric current through the planet that heats it up (Joule heating), causing it to expand. The more magnetically active a star is, the greater the stellar wind and the larger the electric current, leading to more heating and expansion of the planet. This theory matches the observation that stellar activity is correlated with inflated planetary radii. In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic hydrogen form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields. Although scientists previously announced that the magnetic fields of close-in exoplanets may cause increased stellar flares and starspots on their host stars, in 2019 this claim was demonstrated to be false in the HD 189733 system. The failure to detect "star-planet interactions" in the well-studied HD 189733 system calls other related claims of the effect into question. A later search for radio emissions from eight exoplanets that orbit within 0.1 astronomical units of their host stars, conducted with the Arecibo radio telescope, also failed to find signs of these magnetic star-planet interactions. In 2019, the strength of the surface magnetic fields of 4 hot Jupiters was estimated, ranging between 20 and 120 gauss, compared to Jupiter's surface magnetic field of 4.3 gauss. Plate tectonics In 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths, with one team saying that plate tectonics would be episodic or stagnant and the other team saying that plate tectonics is very likely on super-Earths even if the planet is dry. If super-Earths have more than 80 times as much water as Earth, then they become ocean planets with all land completely submerged. However, if there is less water than this limit, then the deep water cycle will move enough water between the oceans and mantle to allow continents to exist.
Volcanism Large surface temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions. Rings The star 1SWASP J140747.93-394542.6 was occulted by an object that is circled by a ring system much larger than Saturn's rings. However, the mass of the object is not known; it could be a brown dwarf or a low-mass star instead of a planet. The brightness of optical images of Fomalhaut b could be due to starlight reflecting off a circumplanetary ring system with a radius between 20 and 40 times that of Jupiter's radius, about the size of the orbits of the Galilean moons. The rings of the Solar System's gas giants are aligned with their planet's equator. However, for exoplanets that orbit close to their star, tidal forces from the star would lead to the outermost rings of a planet being aligned with the planet's orbital plane around the star. A planet's innermost rings would still be aligned with the planet's equator, so that if the planet has a tilted rotational axis, then the different alignments between the inner and outer rings would create a warped ring system. Moons In December 2013, a candidate exomoon of the rogue planet or red dwarf MOA-2011-BLG-262L was announced. On 3 October 2018, evidence suggesting a large exomoon orbiting Kepler-1625b was reported. Atmospheres Atmospheres have been detected around several exoplanets. The first to be observed was HD 209458 b in 2001. As of February 2014, more than fifty transiting and five directly imaged exoplanet atmospheres have been observed, resulting in detection of molecular spectral features, observation of day–night temperature gradients, and constraints on vertical atmospheric structure. Also, an atmosphere has been detected on the non-transiting hot Jupiter Tau Boötis b. In May 2017, glints of light from Earth, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere. The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets. Comet-like tails KIC 12557548 b is a small rocky planet, very close to its star, that is evaporating and leaving a trailing tail of cloud and dust like a comet. The dust could be ash erupting from volcanoes and escaping due to the small planet's low surface gravity, or it could be from metals that are vaporized by the high temperatures of being so close to the star, with the metal vapor then condensing into dust. In June 2015, scientists reported that the atmosphere of GJ 436 b was evaporating, resulting in a giant cloud around the planet and, due to radiation from the host star, a long trailing tail. Insolation pattern Tidally locked planets in a 1:1 spin-orbit resonance would have their star always shining directly overhead on one spot, which would be hot, while the opposite hemisphere would receive no light and be freezing cold. Such a planet could resemble an eyeball, with the hotspot being the pupil. Planets with an eccentric orbit could be locked in other resonances. 3:2 and 5:2 resonances would result in a double-eyeball pattern with hotspots in both eastern and western hemispheres. Planets with both an eccentric orbit and a tilted axis of rotation would have more complicated insolation patterns.
Surface Surface composition Surface features can be distinguished from atmospheric features by comparing emission and reflection spectroscopy with transmission spectroscopy. Mid-infrared spectroscopy of exoplanets may detect rocky surfaces, and near-infrared may identify magma oceans or high-temperature lavas, hydrated silicate surfaces, and water ice, giving an unambiguous method to distinguish between rocky and gaseous exoplanets. Surface temperature The temperature of an exoplanet can be estimated by measuring the intensity of the light it receives from its parent star. For example, the planet OGLE-2005-BLG-390Lb is estimated to have a surface temperature of roughly −220 °C (50 K). However, such estimates may be substantially in error because they depend on the planet's usually unknown albedo, and because factors such as the greenhouse effect may introduce unknown complications. A few planets have had their temperature measured by observing the variation in infrared radiation as the planet moves around in its orbit and is eclipsed by its parent star. For example, the planet HD 189733b has been estimated to have an average temperature of 1,205 K (932 °C) on its dayside and 973 K (700 °C) on its nightside. Habitability As more planets are discovered, the field of exoplanetology continues to grow into a deeper study of extrasolar worlds, and will ultimately tackle the prospect of life on planets beyond the Solar System. At cosmic distances, life can only be detected if it has developed on a planetary scale and strongly modified the planetary environment, in such a way that the modifications cannot be explained by classical physico-chemical (out-of-equilibrium) processes. For example, molecular oxygen (O2) in the atmosphere of Earth is a result of photosynthesis by living plants and many kinds of microorganisms, so it can be used as an indication of life on exoplanets, although small amounts of oxygen could also be produced by non-biological means. Furthermore, a potentially habitable planet must orbit a stable star at a distance within which planetary-mass objects with sufficient atmospheric pressure can support liquid water at their surfaces. Habitable zone The habitable zone around a star is the region where the temperature is just right to allow liquid water to exist on the surface of a planet; that is, not so close to the star that the water evaporates and not so far away that it freezes. The heat produced by stars varies depending on the size and age of the star, so the habitable zone lies at different distances for different stars. Also, the atmospheric conditions on the planet influence the planet's ability to retain heat, so that the location of the habitable zone is also specific to each type of planet: desert planets (also known as dry planets), with very little water, will have less water vapor in the atmosphere than Earth and so have a reduced greenhouse effect, meaning that a desert planet could maintain oases of water closer to its star than Earth is to the Sun. The lack of water also means there is less ice to reflect heat into space, so the outer edge of desert-planet habitable zones is further out. Rocky planets with a thick hydrogen atmosphere could maintain surface water much further out than the Earth–Sun distance. Planets with larger mass have wider habitable zones, because gravity reduces the water-cloud column depth, which reduces the greenhouse effect of water vapor, thus moving the inner edge of the habitable zone closer to the star.
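The light-intensity temperature estimate described above, and the habitable-zone reasoning around it, both rest on the standard equilibrium-temperature relation T_eq = T_star * sqrt(R_star / (2a)) * (1 - A)^(1/4), which balances absorbed starlight against blackbody re-emission. A minimal sketch under the usual idealizations (uniform heat redistribution and no greenhouse effect; as noted above, real atmospheres can change the answer substantially):

```python
def equilibrium_temperature(t_star_k, r_star_m, a_m, bond_albedo):
    """Equilibrium temperature (K) of a planet at orbital distance a_m,
    assuming full heat redistribution and no greenhouse warming."""
    return t_star_k * (r_star_m / (2.0 * a_m)) ** 0.5 * (1.0 - bond_albedo) ** 0.25

SUN_T = 5772.0   # effective temperature of the Sun, K
SUN_R = 6.957e8  # solar radius, m
AU = 1.496e11    # astronomical unit, m

# Earth-like case: albedo ~0.3 gives roughly 255 K, the familiar
# textbook value (the greenhouse effect raises the real mean to ~288 K).
print(round(equilibrium_temperature(SUN_T, SUN_R, 1.0 * AU, 0.3)))  # 255
```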
Planetary rotation rate is one of the major factors determining the circulation of the atmosphere and hence the pattern of clouds: slowly rotating planets create thick clouds that reflect more light and so can be habitable much closer to their star. Earth with its current atmosphere would be habitable in Venus's orbit if it had Venus's slow rotation. If Venus lost its water ocean due to a runaway greenhouse effect, it is likely to have had a higher rotation rate in the past. Alternatively, Venus may never have had an ocean, because water vapor was lost to space during its formation, in which case it could have had its slow rotation throughout its history. Tidally locked planets (a.k.a. "eyeball" planets) can be habitable closer to their star than previously thought due to the effect of clouds: at high stellar flux, strong convection produces thick water clouds near the substellar point that greatly increase the planetary albedo and reduce surface temperatures. Planets in the habitable zones of stars with low metallicity are more habitable for complex life on land than planets around high-metallicity stars, because the stellar spectrum of high-metallicity stars is less likely to cause the formation of ozone, thus enabling more ultraviolet rays to reach the planet's surface. Habitable zones have usually been defined in terms of surface temperature; however, over half of Earth's biomass is from subsurface microbes, and the temperature increases with depth, so the subsurface can be conducive to microbial life when the surface is frozen. If this is taken into account, the habitable zone extends much further from the star, and even rogue planets could have liquid water at sufficient depths underground. In an earlier era of the universe, the temperature of the cosmic microwave background would have allowed any rocky planets that existed to have liquid water on their surface regardless of their distance from a star. Jupiter-like planets might not be habitable, but they could have habitable moons. Ice ages and snowball states The outer edge of the habitable zone is where planets are completely frozen, but planets well inside the habitable zone can periodically become frozen. If orbital fluctuations or other causes produce cooling, then this creates more ice, but ice reflects sunlight causing even more cooling, creating a feedback loop until the planet is completely or nearly completely frozen. When the surface is frozen, this stops carbon dioxide weathering, resulting in a build-up of carbon dioxide in the atmosphere from volcanic emissions. This creates a greenhouse effect which thaws the planet again. Planets with a large axial tilt are less likely to enter snowball states and can retain liquid water further from their star. Large fluctuations of axial tilt can have even more of a warming effect than a fixed large tilt. Paradoxically, planets orbiting cooler stars, such as red dwarfs, are less likely to enter snowball states, because the infrared radiation emitted by cooler stars is mostly at wavelengths that are absorbed by ice, which heats it up. Tidal heating If a planet has an eccentric orbit, then tidal heating can provide another source of energy besides stellar radiation. This means that eccentric planets in the radiative habitable zone can be too hot for liquid water. Tides also circularize orbits over time, so there could be planets in the habitable zone with circular orbits that have no water because they used to have eccentric orbits.
Eccentric planets further out than the habitable zone would still have frozen surfaces, but the tidal heating could create a subsurface ocean similar to Europa's. In some planetary systems, such as in the Upsilon Andromedae system, the eccentricity of orbits is maintained or even periodically varied by perturbations from other planets in the system. Tidal heating can cause outgassing from the mantle, contributing to the formation and replenishment of an atmosphere. Potentially habitable planets A review in 2015 identified the exoplanets Kepler-62f, Kepler-186f and Kepler-442b as the best candidates for being potentially habitable. These are 1,200, 490, and 1,120 light-years away, respectively. Of these, Kepler-186f is similar in size to Earth, with a radius 1.2 times Earth's, and it is located towards the outer edge of the habitable zone around its red dwarf star. Among the nearest terrestrial exoplanet candidates, Proxima Centauri b is about 4.2 light-years away, and its equilibrium temperature has also been estimated. Earth-size planets In November 2013, it was estimated that 22±8% of Sun-like stars in the Milky Way galaxy may have an Earth-sized planet in the habitable zone. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earths, rising to 40 billion if red dwarfs are included. Kepler-186f, a 1.2-Earth-radius planet in the habitable zone of a red dwarf, was reported in April 2014. Proxima Centauri b, a planet in the habitable zone of Proxima Centauri, the nearest known star to the Solar System, has an estimated minimum mass of 1.27 times the mass of the Earth. In February 2013, researchers speculated that up to 6% of small red dwarfs may have Earth-size planets. This suggests that the closest one to the Solar System could be 13 light-years away. The estimated distance increases to 21 light-years when a 95% confidence interval is used. In March 2013, a revised estimate gave an occurrence rate of 50% for Earth-size planets in the habitable zone of red dwarfs. At 1.63 times Earth's radius, Kepler-452b is the first discovered near-Earth-size planet in the "habitable zone" around a G2-type Sun-like star (July 2015). Planetary system Exoplanets are often members of planetary systems of multiple planets around a star. The planets interact with each other gravitationally and sometimes form resonant systems where the orbital periods of the planets are in integer ratios. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance. Some hot Jupiters orbit their stars in the opposite direction to their stars' rotation. One proposed explanation is that hot Jupiters tend to form in dense clusters, where perturbations are more common and gravitational capture of planets by neighboring stars is possible. Search projects ANDES – The ArmazoNes High Dispersion Echelle Spectrograph, a planet finding and planet characterisation spectrograph, is expected to be fitted onto ESO's ELT 39.3m telescope. ANDES was formerly known as HIRES, which itself was created after a merger of the consortia behind the earlier CODEX (optical high-resolution) and SIMPLE (near-infrared high-resolution) spectrograph concepts. CoRoT – Space telescope that found the first transiting rocky planet. ESPRESSO – A rocky-planet-finding, and stable spectroscopic observing, spectrograph mounted on ESO's 4 × 8.2 m VLT telescope, sited on the levelled summit of Cerro Paranal in the Atacama Desert of northern Chile.
HARPS – High-precision echelle planet-finding spectrograph installed on the ESO's 3.6m telescope at La Silla Observatory in Chile. Kepler – Mission to look for large numbers of exoplanets using the transit method. TESS – Searches for new exoplanets; its field of view rotates so that by the end of its two-year mission it will have observed stars from all over the sky. It is expected to find at least 3,000 new exoplanets. See also Detecting Earth from distant star-based systems Extrasolar planets in fiction Habitable zone for complex life Lists of exoplanets Planetary capture Notes References Further reading External links The Extrasolar Planets Encyclopaedia (Paris Observatory) NASA Exoplanet Archive Exoplanetology Search for extraterrestrial intelligence Types of planet Concepts in astronomy Articles containing video clips
Exoplanet
[ "Physics", "Astronomy" ]
9,661
[ "Concepts in astronomy" ]
9,765
https://en.wikipedia.org/wiki/Equuleus
Equuleus is a faint constellation located just north of the celestial equator. Its name is Latin for "little horse", a foal. It was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. It is the second smallest of the modern constellations (after Crux), spanning only 72 square degrees. It is also very faint, having no stars brighter than the fourth magnitude. Notable features Stars The brightest star in Equuleus is α Equulei, traditionally called Kitalpha, a yellow star of magnitude 3.9, 186 light-years from Earth. Its traditional name means "the section of the horse". There are few variable stars in Equuleus. Only around 25 are known, most of which are faint. γ Equulei is an α2 CVn variable star, ranging between magnitudes 4.58 and 4.77 over a period of around 12½ minutes. It is a white star 115 light-years from Earth, and has an optical companion of magnitude 6.1, 6 Equulei. The pair is divisible in binoculars. 6 Equulei is itself an astrometric binary system, with an apparent magnitude of 6.07. R Equulei is a Mira variable that ranges between magnitudes 8.0 and 15.7 over nearly 261 days. It has a spectral type of M3e-M4e and an average B-V colour index of +1.41. Equuleus contains some double stars of interest. γ Equulei consists of a primary star with a magnitude around 4.7 (slightly variable) and a secondary star of magnitude 11.6, separated by 2 arcseconds. ε Equulei is a triple star also designated 1 Equulei. The system, 197 light-years away, has a primary of magnitude 5.4 that is itself a binary star; its components are of magnitude 6.0 and 6.3 and have a period of 101 years. The secondary is of magnitude 7.4 and is visible in small telescopes. The components of the primary are drawing closer together and, beginning in 2015, are no longer divisible in amateur telescopes. δ Equulei is a binary star with an orbital period of 5.7 years, which at one time was the shortest known orbital period for an optical binary. The two components of the system are never more than 0.35 arcseconds apart. Deep-sky objects Due to its small size and its distance from the plane of the Milky Way, Equuleus is rather devoid of deep-sky objects. Some very faint galaxies in the NGC catalog, between magnitudes 13 and 15, include NGC 7015, NGC 7040, and NGC 7046. NGC 7045 is a triple star that was mistaken for a nebula by its discoverer, John Herschel. Other faint galaxies in the IC Catalog include IC 1360, IC 1361, IC 1364, IC 1367, IC 1375, and IC 5083. IC 1365 is a group of galaxies. The magnitudes of these objects vary from 14.5 to 15.5, making them hard to see in even the largest amateur telescopes. Mythology In Greek mythology, one myth associates Equuleus with the foal Celeris (meaning "swiftness" or "speed"), who was the offspring or brother of the winged horse Pegasus. Celeris was given to Castor by Mercury. Other myths say that Equuleus is the horse struck from Poseidon's trident during the contest between him and Athena, when they were deciding which of them would be the superior. Because this section of stars rises before Pegasus, it is often called Equus Primus, or the First Horse. Equuleus is also linked to the story of Philyra and Saturn. Created by Hipparchus and included by Ptolemy, it abuts Pegasus; unlike the larger horse, it is depicted as a horse's head alone. Equivalents In Chinese astronomy, the stars that correspond to Equuleus are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
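The magnitude figures quoted throughout are on a logarithmic scale in which a difference of 5 magnitudes corresponds to a factor of 100 in brightness, so a magnitude difference converts to a flux ratio as 10^(0.4 * delta_m). A small sketch using γ Equulei's quoted range; the function name is ours, for illustration only:

```python
def flux_ratio(m_faint, m_bright):
    """Brightness ratio implied by a magnitude difference:
    ratio = 10 ** (0.4 * (m_faint - m_bright))."""
    return 10 ** (0.4 * (m_faint - m_bright))

# gamma Equulei varies between magnitudes 4.58 and 4.77 (see above):
print(flux_ratio(4.77, 4.58))  # ~1.19, about a 19% brightness swing
```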
See also
Equuleus (Chinese astronomy)
References
Burnham, Robert (1978). Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol. 2. Dover Publications.
Hoffleit+ (1991). V/50 The Bright Star Catalogue, 5th revised ed. Yale University Observatory; Strasbourg astronomical Data Center.
Ian Ridpath & Wil Tirion (2007). Stars and Planets Guide. Collins, London; Princeton University Press, Princeton.
External links
The Deep Photographic Guide to the Constellations: Equuleus
The clickable Equuleus
Star Tales – Equuleus
Warburg Institute Iconographic Database (medieval and early modern images of Equuleus)
Constellations Northern constellations Constellations listed by Ptolemy
Equuleus
[ "Astronomy" ]
1,025
[ "Constellations listed by Ptolemy", "Constellations", "Northern constellations", "Sky regions", "Equuleus" ]
9,770
https://en.wikipedia.org/wiki/Eclipse
An eclipse is an astronomical event which occurs when an astronomical object or spacecraft is temporarily obscured, by passing into the shadow of another body or by having another body pass between it and the viewer. This alignment of three celestial objects is known as a syzygy. An eclipse is the result of either an occultation (completely hidden) or a transit (partially hidden). A "deep eclipse" (or "deep occultation") is when a small astronomical object is behind a bigger one.
The term eclipse is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the Earth's shadow. However, it can also refer to such events beyond the Earth–Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its host planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of the orbit of its constituent stars intersects the observer's position.
For the special cases of solar and lunar eclipses, these only happen during an "eclipse season", the two times of each year when the plane of the Earth's orbit around the Sun crosses with the plane of the Moon's orbit around the Earth and the line defined by the intersecting planes points near the Sun. The type of solar eclipse that happens during each season (whether total, annular, hybrid, or partial) depends on the apparent sizes of the Sun and Moon. If the orbit of the Earth around the Sun and the Moon's orbit around the Earth were both in the same plane with each other, then eclipses would happen every month. There would be a lunar eclipse at every full moon, and a solar eclipse at every new moon. It is because of the non-planar differences that eclipses are not a common event. If both orbits were perfectly circular, then each eclipse would be the same type every month. Lunar eclipses can be viewed from the entire nightside half of the Earth. But solar eclipses, particularly total eclipses occurring at any one particular point on the Earth's surface, are very rare events that can be many decades apart.
Etymology
The term is derived from the ancient Greek noun ἔκλειψις (ékleipsis), which means 'the abandonment', 'the downfall', or 'the darkening of a heavenly body', which is derived from the verb ἐκλείπω (ekleípō), which means 'to abandon', 'to darken', or 'to cease to exist', a combination of the prefix ἐκ- (ek-), from the preposition ἐκ (ek), 'out', and of the verb λείπω (leípō), 'to be absent'.
Umbra, penumbra and antumbra
For any two objects in space, a line can be extended from the first through the second. The latter object will block some amount of light being emitted by the former, creating a region of shadow around the axis of the line. Typically these objects are moving with respect to each other and their surroundings, so the resulting shadow will sweep through a region of space, only passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse.
Typically the cross-section of the objects involved in an astronomical eclipse is roughly disk-shaped. The region of an object's shadow during an eclipse is divided into three parts:
The umbra (Latin for 'shadow'), within which the object completely covers the light source. For the Sun, this light source is the photosphere.
The antumbra (from Latin ante, 'before, in front of', plus umbra), extending beyond the tip of the umbra, within which the object is completely in front of the light source but too small to completely cover it.
The penumbra (from the Latin paene, 'almost, nearly', plus umbra), within which the object is only partially in front of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable, because the antumbra of the Sun–Earth system lies far beyond the Moon. Analogously, Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun and thus cannot produce an annular eclipse. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra.
The first contact occurs when the eclipsing object's disc first starts to impinge on the light source; second contact is when the disc moves completely within the light source; third contact when it starts to move out of the light; and fourth or last contact when it finally leaves the light source's disc entirely.
For spherical bodies, when the occulting object is smaller than the star, the length (L) of the umbra's cone-shaped shadow is given by:
L = (r · Ro) / (Rs − Ro)
where Rs is the radius of the star, Ro is the occulting object's radius, and r is the distance from the star to the occulting object. For Earth, on average L is equal to 1.384×10⁶ km, which is much larger than the Moon's semimajor axis of 3.844×10⁵ km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth—producing a faint, ruddy illumination of the Moon even at totality.
On Earth, the shadow cast during an eclipse moves at very approximately 1 km per second. This depends on the location of the shadow on the Earth and the angle at which it is moving.
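The umbra-length formula above can be checked numerically. A minimal sketch in Python, assuming approximate round values for the solar radius, Earth's radius, and the mean Sun–Earth distance (none of which are specified more precisely in the text):

# Umbral cone length L = r * Ro / (Rs - Ro), all values in km.
Rs = 696_340.0       # radius of the star (here: the Sun), assumed value
Ro = 6_371.0         # radius of the occulting object (here: the Earth)
r = 149_600_000.0    # mean Sun-Earth distance

L = r * Ro / (Rs - Ro)
print(f"L = {L:.3e} km")  # ~1.38e6 km, well beyond the Moon at ~3.84e5 km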
Eclipse cycles
An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world. In one saros period there are 239.0 anomalistic periods, 241.0 sidereal periods, 242.0 nodical periods, and 223.0 synodic periods. Although the orbit of the Moon does not give exact integers, the numbers of orbit cycles are close enough to integers to give strong similarity for eclipses spaced at 18.03-year intervals.
Earth–Moon system
An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times.
There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros. Between 1901 and 2100, the maximum of seven eclipses in one year occurs as:
four (penumbral) lunar and three solar eclipses: 1908, 2038;
four solar and three lunar eclipses: 1918, 1973, 2094;
five solar and two lunar eclipses: 1934.
Excluding penumbral lunar eclipses, there are a maximum of seven eclipses in: 1591, 1656, 1787, 1805, 1918, 1935, 1982, and 2094.
Solar eclipse
As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface.
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation, while an annular solar eclipse is a transit. When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include when the crew of Apollo 12 observed the Earth to eclipse the Sun in 1969 and when the Cassini probe observed Saturn to eclipse the Sun in 2006.
Lunar eclipse
Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse also lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue; this is why the phrase "Blood Moon" is found in descriptions of such lunar events as far back as eclipses have been recorded.
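The eclipse magnitude of an annular or total solar eclipse, described in the Solar eclipse section above as the ratio of the Moon's angular size to the Sun's, can be estimated from round mean figures. A minimal sketch; the radii and distances are approximate assumed values, not data from the text:

import math

# Ratio of apparent (angular) diameters, Moon vs. Sun, at mean distances (km).
def angular_diameter(radius_km, distance_km):
    return 2 * math.atan(radius_km / distance_km)

moon = angular_diameter(1_737.4, 384_400.0)        # Moon at mean distance
sun = angular_diameter(696_340.0, 149_600_000.0)   # Sun at mean distance
print(moon / sun)  # ~0.97: at mean distances the Moon is slightly smaller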
Historical record
Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223 BC, while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 BC. Positing classical-era astronomers' use of Babylonian eclipse records mostly from the 13th century BC provides a feasible and mathematically consistent explanation for the Greeks' finding of all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 3,000 years and have been used to measure changes in the Earth's rate of spin.
The first person to give a scientific explanation of eclipses was Anaxagoras (c. 500–428 BC), who stated that the Moon shines by reflected light from the Sun. In the 5th century AD, solar and lunar eclipses were scientifically explained by Aryabhata in his treatise Aryabhatiya. Aryabhata states that the Moon and planets shine by reflected sunlight and explains eclipses in terms of shadows cast by and falling on Earth. Aryabhata also provides the computation and the size of the eclipsed part during an eclipse. Indian computations were so accurate that the 18th-century French scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found the Indian computations of the duration of the lunar eclipse of 30 August 1765 to be short by only 41 seconds, whereas Le Gentil's charts were long by 68 seconds.
By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology.
Eclipses in mythology and religion
The American author Gene Weingarten described the tension between belief and eclipses thus: "I am a devout atheist but can't explain why the moon is exactly the right size, and gets positioned so precisely between the Earth and the sun, that total solar eclipses are perfect. It bothers me."
The Graeco-Roman historian Cassius Dio, writing between AD 211 and 229, relates the anecdote that Emperor Claudius considered it necessary to prevent disturbance among the Roman population by publishing a prediction for a solar eclipse which would fall on his birthday anniversary (1 August in the year AD 45). In this context, Cassius Dio provides a detailed explanation of solar and lunar eclipses.
Typically in mythology, eclipses were understood to be one variation or another of a spiritual battle between the sun and evil forces or spirits of darkness. More specifically, in Norse mythology, it is believed that there is a wolf by the name of Fenrir that is in constant pursuit of the Sun, and eclipses are thought to occur when the wolf successfully devours the divine Sun. Other Norse tribes believed that there are two wolves by the names of Sköll and Hati that are in pursuit of the Sun and the Moon, known by the names of Sol and Mani, and these tribes believed that an eclipse occurs when one of the wolves successfully eats either the Sun or the Moon.
In most types of mythologies and certain religions, eclipses were seen as a sign that the gods were angry and that danger was soon to come, so people often altered their actions in an effort to dissuade the gods from unleashing their wrath. In the Hindu religion, for example, people often sing religious hymns for protection from the evil spirits of the eclipse, and many people of the Hindu religion refuse to eat during an eclipse to avoid the effects of the evil spirits. Hindu people living in India will also wash off in the Ganges River, which is believed to be spiritually cleansing, directly following an eclipse to clean themselves of the evil spirits. In early Judaism and Christianity, eclipses were viewed as signs from God, and some eclipses were seen as a display of God's greatness or even signs of cycles of life and death. However, more ominous eclipses such as a blood moon were believed to be a divine sign that God would soon destroy their enemies.
Other planets and dwarf planets
Gas giants
The gas giant planets have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer, because every hour of difference corresponds to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
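Both uses of the Jovian eclipse timings described above reduce to simple arithmetic. A minimal sketch with modern approximate values; the 17-minute delay is the figure quoted above, while the distances and the clock offset are assumed or hypothetical numbers for illustration, not Rømer's or Cassini's own:

AU_KM = 1.496e8    # mean Earth-Sun distance in km (assumed modern value)

# Romer's inference: the ~17-minute delay is the time for light to cross
# the diameter of Earth's orbit, i.e. 2 AU.
delay_s = 17 * 60
print(2 * AU_KM / delay_s)   # ~2.9e5 km/s, close to the speed of light

# Longitude from eclipse timing: one hour of difference = 15 degrees.
offset_hours = -2.5          # hypothetical local-minus-Greenwich difference
print(offset_hours * 15.0)   # -37.5 degrees, i.e. 37.5 degrees west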
On the other three gas giants (Saturn, Uranus and Neptune), eclipses only occur at certain periods during the planet's orbit, owing to the higher inclination between the orbits of their moons and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane, but Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years.
Mars
On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
Pluto
Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Mercury and Venus
Eclipses are impossible on Mercury and Venus, which have no moons. However, as seen from the Earth, both have been observed to transit across the face of the Sun. Transits of Venus occur in pairs separated by an interval of eight years, but each pair of events happens less than once a century. According to NASA, the next pair of Venus transits will occur on December 10, 2117, and December 8, 2125. Transits of Mercury are much more common, occurring 13 times each century, on average.
Eclipsing binaries
A binary star system consists of two stars that orbit around their common centre of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the magnitude decreases to 3.4 for more than nine hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
Types
Sun – Moon – Earth: Solar eclipse | annular eclipse | hybrid eclipse | partial eclipse
Sun – Earth – Moon: Lunar eclipse | penumbral eclipse | partial lunar eclipse | central lunar eclipse
Sun – Phobos – Mars: Transit of Phobos from Mars | Solar eclipses on Mars
Sun – Deimos – Mars: Transit of Deimos from Mars | Solar eclipses on Mars
Other types: Solar eclipses on Jupiter | Solar eclipses on Saturn | Solar eclipses on Uranus | Solar eclipses on Neptune | Solar eclipses on Pluto
See also
List of solar eclipses in the 21st century
Mursili's eclipse
Transit of Venus
References
External links
A Catalogue of Eclipse Cycles
Search 5,000 years of eclipses
NASA eclipse home page
International Astronomical Union's Working Group on Solar Eclipses
Interactive eclipse maps site
Classroom demonstration of how an eclipse occurs
Image galleries
The World at Night Eclipse Gallery
Solar and Lunar Eclipse Image Gallery
Williams College eclipse collection of images
Astrological aspects Astronomical events Earth phenomena Concepts in astronomy
Eclipse
[ "Physics", "Astronomy" ]
4,353
[ "Physical phenomena", "Earth phenomena", "Concepts in astronomy", "Astronomical events", "Eclipses" ]
9,771
https://en.wikipedia.org/wiki/Ed%20%28software%29
ed (pronounced as distinct letters, /ˌiːˈdiː/) is a line editor for Unix and Unix-like operating systems. It was one of the first parts of the Unix operating system that was developed, in August 1969. It remains part of the POSIX and Open Group standards for Unix-based operating systems, alongside the more sophisticated full-screen editor vi.
History and influence
The ed text editor was one of the first three key elements of the Unix operating system—assembler, editor, and shell—developed by Ken Thompson in August 1969 on a PDP-7 at AT&T Bell Labs. Many features of ed came from the qed text editor developed at Thompson's alma mater, the University of California, Berkeley. Thompson was very familiar with qed, and had reimplemented it on the CTSS and Multics systems. Thompson's versions of qed were notable as the first to implement regular expressions. Regular expressions are also implemented in ed, though their implementation is considerably less general than that in qed.
Dennis M. Ritchie produced what Doug McIlroy later described as the "definitive" ed, and aspects of ed went on to influence ex, which in turn spawned vi. The non-interactive Unix command grep was inspired by a common special use of qed and later ed, where the command g/re/p performs a global regular expression search and prints the lines containing matches. The Unix stream editor, sed, implemented many of the scripting features of qed that were not supported by ed on Unix.
Features
Features of ed include:
available on essentially all Unix systems (and mandatory on systems conforming to the Single Unix Specification)
support for regular expressions
powerful automation, achieved by feeding commands from standard input (a short sketch appears at the end of this section)
(In)famous for its terseness, ed, compatible with teletype terminals like the Teletype Model 33, gives almost no visual feedback, and has been called (by Peter H. Salus) "the most user-hostile editor ever created", even when compared to the contemporary (and notoriously complex) TECO. For example, the message that ed will produce in case of error, and when it wants to make sure the user wishes to quit without saving, is "?". It does not report the current filename or line number, or even display the results of a change to the text, unless requested. Older versions (c. 1981) did not even ask for confirmation when a quit command was issued without the user saving changes. This terseness was appropriate in the early versions of Unix, when consoles were teletypes, modems were slow, and memory was precious. As computer technology improved and these constraints were loosened, editors with more visual feedback became the norm.
In current practice, ed is rarely used interactively, but does find use in some shell scripts. For interactive use, ed was subsumed by the sam, vi and Emacs editors in the 1980s. ed can be found on virtually every version of Unix and Linux available, and as such is useful for people who have to work with multiple versions of Unix. On Unix-based operating systems, some utilities like SQL*Plus run ed as the editor if the EDITOR and VISUAL environment variables are not defined. If something goes wrong, ed is sometimes the only editor available. This is often the only time when it is used interactively.
The version of ed provided by GNU has a few switches to enhance the feedback. Using the -v and -p options provides a simple prompt and enables more useful feedback messages; the -p switch is defined in POSIX since XPG2 (1987).
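As noted under Features, ed can be driven entirely from standard input. A minimal sketch of that style of automation, wrapped in Python for illustration; the file name demo.txt is hypothetical, and -s is the POSIX option that suppresses the byte counts ed would otherwise print:

import subprocess

# Feed an ed script on standard input: append one line, write, quit.
script = "a\nhello, world\n.\nw demo.txt\nq\n"
subprocess.run(["ed", "-s"], input=script, text=True, check=True)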
The ed commands are often imitated in other line-based editors. For example, EDLIN in early MS-DOS versions and 32-bit versions of Windows NT has a somewhat similar syntax, and text editors in many MUDs (LPMud and descendants, for example) use ed-like syntax. These editors, however, are typically more limited in function.
Example
Here is an example transcript of an ed session. For clarity, commands and text typed by the user are in normal face, and output from ed is emphasized.

a
ed is the standard Unix text editor.
This is line number two.
.
2i

.
,l
ed is the standard Unix text editor.$
$
This is line number two.$
w text.txt
63
3s/two/three/
,l
ed is the standard Unix text editor.$
$
This is line number three.$
w text.txt
65
q

The end result is a simple text file text.txt containing the following text:

ed is the standard Unix text editor.

This is line number three.

Started with an empty file, the a command appends text (all ed commands are single letters). The command puts ed in insert mode, inserting the characters that follow, and is terminated by a single dot on a line. The two lines that are entered before the dot end up in the file buffer. The 2i command also goes into insert mode, and will insert the entered text (a single empty line in our case) before line two. All commands may be prefixed by a line number to operate on that line.
In the line ,l, the lowercase L stands for the list command. The command is prefixed by a range, in this case , (a lone comma), which is a shortcut for 1,$. A range is two line numbers separated by a comma ($ means the last line). In return, ed lists all lines, from first to last. These lines are ended with dollar signs, so that white space at the end of lines is clearly visible.
Once the empty line is inserted in line 2, the line which reads "This is line number two." is now actually the third line. This error is corrected with 3s/two/three/, a substitution command. The 3 will apply it to the correct line; following the command is the text to be replaced, and then the replacement. Listing all lines with ,l, the line is now shown to be correct.
w text.txt writes the buffer to the file text.txt, making ed respond with 65, the number of characters written to the file. (The earlier w, issued before the substitution, reported 63 characters, since "two" is two characters shorter than "three".) q will end an ed session.
Cultural references
The GNU Project has numerous jokes around ed hosted on its website. In addition, the glibc documentation notes an error code called ED, with its description (errorstr) being merely a single question mark, noting "the experienced user will know what is wrong."
See also
Edlin, the standard MS-DOS line editor which was inspired by ed
Sam (text editor)
Editor war
List of Unix commands
References
External links
Manual page from Unix First Edition describing ed.
, a direct descendant of the original ed.
GNU ed homepage
A History of UNIX before Berkeley, section 3.1, describes the history of ed.
Unix text editors MacOS text editors Standard Unix programs Unix SUS2008 utilities Plan 9 commands Line editor 1971 software Console applications
Ed (software)
[ "Technology" ]
1,391
[ "Computing commands", "Plan 9 commands", "Standard Unix programs" ]
9,772
https://en.wikipedia.org/wiki/Edlin
Edlin is a line editor, and the only text editor provided with early versions of IBM PC DOS, MS-DOS and OS/2. Although superseded in MS-DOS 5.0 and later by the full-screen MS-DOS Editor, and by Notepad in Microsoft Windows, it continues to be included in the 32-bit versions of current Microsoft operating systems.
History
Edlin was created by Tim Paterson in two weeks in 1980, for Seattle Computer Products's 86-DOS (QDOS), based on the CP/M context editor ED, itself distantly inspired by the Unix ed line editor. Microsoft acquired 86-DOS and, after some further development, sold it as MS-DOS, so Edlin was included in v1.0–v5.0 of MS-DOS. From MS-DOS 6 onwards, the only editor included was the new full-screen MS-DOS Editor. Windows 95, 98 and ME ran on top of an embedded version of DOS, which reports itself as MS-DOS 7. As a successor to MS-DOS 6, this did not include Edlin. However, Edlin is included in the 32-bit versions of Windows NT and its derivatives—up to and including Windows 10—because the NTVDM's DOS support in those operating systems is based on MS-DOS version 5.0. However, unlike most other external DOS commands, it has not been transformed into a native Win32 program. It also does not support long filenames, which were not added to MS-DOS and Windows until long after Edlin was written. The FreeDOS version was developed by Gregory Pietsch.
Usage
There are only a few commands. The short list can be found by entering a ? at the edlin prompt.
When a file is open, typing L lists the contents (e.g., 1,6L lists lines 1 through 6). Each line is displayed with a line number in front of it.

*1,6L
1: Edlin: The only text editor in early versions of DOS.
2:
3: Back in the day, I remember seeing web pages
4: branded with a logo at the bottom:
5: "This page created in edlin."
6: The things that some people put themselves through. ;-)
*

The currently selected line has a *. To replace the contents of any line, the line number is entered and any text entered replaces the original. While editing a line, pressing Ctrl-C cancels any changes. The * marker remains on that line.
Entering I (optionally preceded with a line number) inserts one or more lines before the * line or the line given. When finished entering lines, Ctrl-C returns to the edlin command prompt.

*6I
6:*(...or similar)
7:*^C

*7D
*L
1: Edlin: The only text editor in early versions of DOS.
2:
3: Back in the day, I remember seeing web pages
4: branded with a logo at the bottom:
5: "This page created in edlin."
6: (...or similar)
*

i - Inserts lines of text.
D - deletes the specified line, again optionally starting with the number of a line, or a range of lines. E.g.: 2,4d deletes lines 2 through 4. In the above example, line 7 was deleted.
R - is used to replace all occurrences of a piece of text in a given range of lines, for example, to replace a spelling error. Including the ? prompts for each change. E.g.: to replace 'prit' with 'print' and to prompt for each change: ?rprit^Zprint (the ^Z represents pressing CTRL-Z). It is case-sensitive.
S - searches for given text. It is used in the same way as replace, but without the replacement text. A search for 'apple' in the first 20 lines of a file is typed 1,20?sapple (no space, unless that is part of the search) followed by a press of enter. For each match, it asks if it is the correct one, and accepts n or y (or Enter).
P - displays a listing of a range of lines. If no range is specified, P displays the complete file from the * to the end.
This is different from L in that P changes the current line to be the last line in the range.
T - transfers another file into the one being edited, with this syntax: [line to insert at]t[full path to file].
W - (write) saves the file.
E - saves the file and quits edlin.
Q - quits edlin without saving.
Scripts
Edlin may be used as a non-interactive file editor in scripts by redirecting a series of edlin commands:

edlin < script

FreeDOS Edlin
A GPL-licensed clone of Edlin that includes long filename support is available for download as part of the FreeDOS project. This runs on operating systems such as Linux or Unix as well as MS-DOS.
See also
List of DOS commands
ed and ex, similar Unix line editors.
86-DOS
References
Further reading
External links
Edlin | Microsoft Docs
MS-DOS edlin command help
Open source EDLIN implementation that comes with MS-DOS v2.0
1980 software Console applications DOS text editors Line editor OS/2 commands Microsoft free software Windows components Windows text editors
Edlin
[ "Technology" ]
1,141
[ "OS/2 commands", "Computing commands", "Windows commands" ]
9,775
https://en.wikipedia.org/wiki/Endoplasmic%20reticulum
The endoplasmic reticulum (ER) is a part of the transportation system of the eukaryotic cell, and has many other important functions such as protein folding. It is a type of organelle made up of two subunits – rough endoplasmic reticulum (RER) and smooth endoplasmic reticulum (SER). The endoplasmic reticulum is found in most eukaryotic cells and forms an interconnected network of flattened, membrane-enclosed sacs known as cisternae (in the RER), and tubular structures in the SER. The membranes of the ER are continuous with the outer nuclear membrane. The endoplasmic reticulum is not found in red blood cells or spermatozoa.
The two types of ER share many of the same proteins and engage in certain common activities such as the synthesis of certain lipids and cholesterol. Different types of cells contain different ratios of the two types of ER depending on the activities of the cell. RER is found mainly toward the nucleus of the cell, and SER toward the cell membrane (plasma membrane) of the cell.
The outer (cytosolic) face of the RER is studded with ribosomes that are the sites of protein synthesis. The RER is especially prominent in cells such as hepatocytes. The SER lacks ribosomes and functions in lipid synthesis but not metabolism, the production of steroid hormones, and detoxification. The SER is especially abundant in mammalian liver and gonad cells.
The ER was observed by light microscopy by Garnier in 1897, who coined the term ergastoplasm. The lacy membranes of the endoplasmic reticulum were first seen by electron microscopy in 1945 by Keith R. Porter, Albert Claude, and Ernest F. Fullam. Later, the word reticulum, which means "network", was applied by Porter in 1953 to describe this fabric of membranes.
Structure
The general structure of the endoplasmic reticulum is a network of membranes called cisternae. These sac-like structures are held together by the cytoskeleton. The phospholipid membrane encloses the cisternal space (or lumen), which is continuous with the perinuclear space but separate from the cytosol. The functions of the endoplasmic reticulum can be summarized as the synthesis and export of proteins and membrane lipids, but this varies between ER type, cell type, and cell function. The quantity of both rough and smooth endoplasmic reticulum in a cell can slowly interchange from one type to the other, depending on the changing metabolic activities of the cell. Transformation can include embedding of new proteins in membrane as well as structural changes. Changes in protein content may occur without noticeable structural changes.
Rough endoplasmic reticulum
The surface of the rough endoplasmic reticulum (often abbreviated RER or rough ER; also called granular endoplasmic reticulum) is studded with protein-manufacturing ribosomes, giving it a "rough" appearance (hence its name). The binding site of the ribosome on the rough endoplasmic reticulum is the translocon. However, the ribosomes are not a stable part of this organelle's structure as they are constantly being bound and released from the membrane. A ribosome only binds to the RER once a specific protein-nucleic acid complex forms in the cytosol. This special complex forms when a free ribosome begins translating the mRNA of a protein destined for the secretory pathway. The first 5–30 amino acids polymerized encode a signal peptide, a molecular message that is recognized and bound by a signal recognition particle (SRP).
Translation pauses and the ribosome complex binds to the RER translocon, where translation continues with the nascent (new) protein forming into the RER lumen and/or membrane. The protein is processed in the ER lumen by an enzyme (a signal peptidase), which removes the signal peptide. Ribosomes at this point may be released back into the cytosol; however, non-translating ribosomes are also known to stay associated with translocons.
The membrane of the rough endoplasmic reticulum is in the form of large double-membrane sheets that are located near, and continuous with, the outer layer of the nuclear envelope. The double membrane sheets are stacked and connected through several right- or left-handed helical ramps, the "Terasaki ramps", giving rise to a structure resembling a parking garage. Although there is no continuous membrane between the endoplasmic reticulum and the Golgi apparatus, membrane-bound transport vesicles shuttle proteins between these two compartments. Vesicles are surrounded by coating proteins called COPI and COPII. COPII targets vesicles to the Golgi apparatus and COPI marks them to be brought back to the rough endoplasmic reticulum. The rough endoplasmic reticulum works in concert with the Golgi complex to target new proteins to their proper destinations. The second method of transport out of the endoplasmic reticulum involves areas called membrane contact sites, where the membranes of the endoplasmic reticulum and other organelles are held closely together, allowing the transfer of lipids and other small molecules.
The rough endoplasmic reticulum is key in multiple functions:
Manufacture of lysosomal enzymes with a mannose-6-phosphate marker added in the cis-Golgi network.
Manufacture of secreted proteins, either secreted constitutively with no tag or secreted in a regulatory manner involving clathrin and paired basic amino acids in the signal peptide.
Integral membrane proteins that stay embedded in the membrane as vesicles exit and bind to new membranes. Rab proteins are key in targeting the membrane; SNAP and SNARE proteins are key in the fusion event.
Initial glycosylation as assembly continues. This is N-linked (O-linking occurs in the Golgi). N-linked glycosylation: If the protein is properly folded, oligosaccharyltransferase recognizes the AA sequence NXS or NXT (with the S/T residue phosphorylated) and adds a 14-sugar backbone (2-N-acetylglucosamine, 9-branching mannose, and 3-glucose at the end) to the side-chain nitrogen of Asn.
Smooth endoplasmic reticulum
In most cells the smooth endoplasmic reticulum (abbreviated SER) is scarce. Instead there are areas where the ER is partly smooth and partly rough; this area is called the transitional ER. The transitional ER gets its name because it contains ER exit sites. These are areas where the transport vesicles which contain lipids and proteins made in the ER detach from the ER and start moving to the Golgi apparatus. Specialized cells can have a lot of smooth endoplasmic reticulum, and in these cells the smooth ER has many functions. It synthesizes lipids, phospholipids, and steroids. Cells which secrete these products, such as those in the testes, ovaries, and sebaceous glands, have an abundance of smooth endoplasmic reticulum. It also carries out the metabolism of carbohydrates, detoxification of natural metabolism products and of alcohol and drugs, attachment of receptors on cell membrane proteins, and steroid metabolism. In muscle cells, it regulates calcium ion concentration.
Smooth endoplasmic reticulum is found in a variety of cell types (both animal and plant), and it serves different functions in each. The smooth endoplasmic reticulum also contains the enzyme glucose-6-phosphatase, which converts glucose-6-phosphate to glucose, a step in gluconeogenesis. It is connected to the nuclear envelope and consists of tubules that are located near the cell periphery. These tubes sometimes branch, forming a network that is reticular in appearance. In some cells, there are dilated areas like the sacs of rough endoplasmic reticulum. The network of smooth endoplasmic reticulum allows for an increased surface area to be devoted to the action or storage of key enzymes and the products of these enzymes.
Sarcoplasmic reticulum
The sarcoplasmic reticulum (SR), from the Greek σάρξ sarx ("flesh"), is smooth ER found in muscle cells. The only structural difference between this organelle and the smooth endoplasmic reticulum is the composition of proteins they have, both bound to their membranes and drifting within the confines of their lumens. This fundamental difference is indicative of their functions: the endoplasmic reticulum synthesizes molecules, while the sarcoplasmic reticulum stores calcium ions and pumps them out into the sarcoplasm when the muscle fiber is stimulated. After their release from the sarcoplasmic reticulum, calcium ions interact with contractile proteins that utilize ATP to shorten the muscle fiber. The sarcoplasmic reticulum plays a major role in excitation-contraction coupling.
Functions
The endoplasmic reticulum serves many general functions, including the folding of protein molecules in sacs called cisternae and the transport of synthesized proteins in vesicles to the Golgi apparatus. Rough endoplasmic reticulum is also involved in protein synthesis. Correct folding of newly made proteins is made possible by several endoplasmic reticulum chaperone proteins, including protein disulfide isomerase (PDI), ERp29, the Hsp70 family member BiP/Grp78, calnexin, calreticulin, and the peptidylprolyl isomerase family. Only properly folded proteins are transported from the rough ER to the Golgi apparatus – unfolded proteins cause an unfolded protein response as a stress response in the ER. Disturbances in redox regulation, calcium regulation, glucose deprivation, and viral infection or the over-expression of proteins can lead to an endoplasmic reticulum stress response (ER stress), a state in which the folding of proteins slows, leading to an increase in unfolded proteins. This stress is emerging as a potential cause of damage in hypoxia/ischemia, insulin resistance, and other disorders.
Protein transport
Secretory proteins, mostly glycoproteins, are moved across the endoplasmic reticulum membrane. Proteins that are transported by the endoplasmic reticulum throughout the cell are marked with an address tag called a signal sequence. The N-terminus (one end) of a polypeptide chain (i.e., a protein) contains a few amino acids that work as an address tag, which are removed when the polypeptide reaches its destination. Nascent peptides reach the ER via the translocon, a membrane-embedded multiprotein complex. Proteins that are destined for places outside the endoplasmic reticulum are packed into transport vesicles and moved along the cytoskeleton toward their destination. In human fibroblasts, the ER is always co-distributed with microtubules, and the depolymerisation of the latter causes its co-aggregation with mitochondria, which are also associated with the ER.
The endoplasmic reticulum is also part of a protein sorting pathway. It is, in essence, the transportation system of the eukaryotic cell. The majority of its resident proteins are retained within it through a retention motif. This motif is composed of four amino acids at the end of the protein sequence. The most common retention sequences are KDEL for lumen-located proteins and KKXX for transmembrane proteins. However, variations of KDEL and KKXX do occur, and other sequences can also give rise to endoplasmic reticulum retention. It is not known whether such variation can lead to sub-ER localizations. There are three KDEL receptors (1, 2 and 3) in mammalian cells, and they have a very high degree of sequence identity. The functional differences between these receptors remain to be established.
Bioenergetics regulation of ER ATP supply by a CaATiER mechanism
The endoplasmic reticulum does not harbor an ATP-regeneration machinery, and therefore requires ATP import from mitochondria. The imported ATP is vital for the ER to carry out its housekeeping cellular functions, such as protein folding and trafficking. The ER ATP transporter, SLC35B1/AXER, was recently cloned and characterized, and the mitochondria supply ATP to the ER through a Ca2+-antagonized transport into the ER (CaATiER) mechanism. The CaATiER mechanism shows sensitivity to cytosolic Ca2+ ranging from the high nM to the low μM range, with the Ca2+-sensing element yet to be identified and validated.
Clinical significance
Increased and supraphysiological ER stress in pancreatic β cells disrupts normal insulin secretion, leading to hyperinsulinemia and consequently peripheral insulin resistance associated with obesity in humans. Human clinical trials also suggested a causal link between obesity-induced increase in insulin secretion and peripheral insulin resistance. Abnormalities in XBP1 lead to a heightened endoplasmic reticulum stress response and subsequently cause a higher susceptibility for inflammatory processes that may even contribute to Alzheimer's disease. In the colon, XBP1 anomalies have been linked to the inflammatory bowel diseases, including Crohn's disease.
The unfolded protein response (UPR) is a cellular stress response related to the endoplasmic reticulum. The UPR is activated in response to an accumulation of unfolded or misfolded proteins in the lumen of the endoplasmic reticulum. The UPR functions to restore normal function of the cell by halting protein translation, degrading misfolded proteins, and activating the signaling pathways that lead to increasing the production of molecular chaperones involved in protein folding. Sustained overactivation of the UPR has been implicated in prion diseases as well as several other neurodegenerative diseases, and the inhibition of the UPR could become a treatment for those diseases.
See also
Ribosome-associated vesicle
References
External links
Endoplasmic Reticulum – Structure, Types & Functions
Endoplasmic Reticulum
Lipid and protein composition of Endoplasmic reticulum in OPM database
Animations of the various cell functions referenced here
Endoplasmic reticulum
[ "Biology" ]
3,087
[ "Cell biology" ]
9,804
https://en.wikipedia.org/wiki/Electric%20charge
Electric charge (symbol q, sometimes Q) is a physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative. Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric charge is a conserved property: the net charge of an isolated system, the quantity of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge, if there are fewer it will have a positive charge, and if there are equal numbers it will be neutral. Charge is quantized: it comes in integer multiples of individual small units called the elementary charge, e, about 1.602×10⁻¹⁹ C, which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of 1/3 e, but they are found only combined in particles that have a charge that is an integer multiple of e. In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +e, and the electron has a charge of −e. Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth.
Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics.
The SI derived unit of electric charge is the coulomb (C), named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges.
Overview
Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, quarks, have fractional charges of either −1/3 e or +2/3 e, but it is believed they always occur in combinations with integral total charge; free-standing quarks have never been observed. By convention, the charge of an electron is negative, −e, while that of a proton is positive, +e.
Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign.
The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge.
During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral. Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa.
Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current.
Unit
The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol q is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer.
The elementary charge is defined as a fundamental constant in the SI.
The value for the elementary charge, when expressed in SI units, is exactly 1.602176634×10⁻¹⁹ C. After discovering the quantized character of charge, in 1891, George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as the elementary charge, or the fundamental unit of charge, or is simply denoted e, with the charge of an electron being −e. The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect.
The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. about 96,485 C.
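The quantities above fit together with simple arithmetic. A minimal sketch, using the exact SI value of e quoted above; the Avogadro and Coulomb constants and the Bohr-radius separation are standard assumed values added for illustration:

# How the charge units relate numerically (2019 SI exact values).
e = 1.602176634e-19     # elementary charge, coulombs
N_A = 6.02214076e23     # Avogadro constant, 1/mol

print(1 / e)      # ~6.24e18 elementary charges make up one coulomb
print(N_A * e)    # ~96485 C: one faraday, the charge of a mole of charges

# Coulomb's law (see the Overview): force between a proton and an electron
# separated by roughly one Bohr radius.
k = 8.9875517923e9      # Coulomb constant, N*m^2/C^2
r = 5.29e-11            # approximate Bohr radius, m
print(k * e * e / r**2) # ~8.2e-8 N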
History
From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account about amber is known from an account written in the early 200s. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, but there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect. In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s, Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon.
In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word electrica (from ἤλεκτρον (ēlektron), the Greek word for amber). The Latin word was translated into English as electrics. Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge".
Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Another European pioneer was Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies.
In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia.
Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids, and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745).
Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium. Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass.
Franklin imagined electricity as being a type of invisible fluid present in all matter and coined several of the associated terms (battery among them); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claimed that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted. After Franklin's work, effluvia-based explanations were rarely put forward. It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge. Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path. In 1833, Michael Faraday sought to remove any doubt that electricity is identical regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity). In 1838, Faraday raised the question of whether electricity was one fluid, two fluids, or a property of matter like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body. In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state.
In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stopped considering electric charge as a special substance that accumulates in objects, and began to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered the magnitude of electric charge to be a continuous quantity, even at the microscopic level. Role of charge in static electricity Static electricity refers to the electric charge of an object and the related electrostatic discharge that occurs when two objects not at equilibrium are brought together. An electrostatic discharge creates a change in the charge of each of the two objects. Electrification by sliding When a piece of glass and a piece of resin, neither of which exhibits any electrical properties, are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other. A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin, causes these phenomena: The two pieces of glass repel each other. Each piece of glass attracts each piece of resin. The two pieces of resin repel each other. These attractions and repulsions are electrical phenomena, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts. If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified. An established convention in the scientific community defines vitreous electrification as positive and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention, just as it is a matter of convention in a mathematical diagram to reckon positive distances towards the right hand. Role of charge in electric current Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations. At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma.
Note that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers, i.e., the electrons. This is a source of confusion for beginners. Conservation of electric charge The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of decrease of the total charge within a volume of integration V is equal to the area integral of the current density J over the closed surface S = ∂V, which is in turn equal to the net outward current I: −(d/dt) ∫_V ρ dV = ∮_{S=∂V} J · dS = I, where ρ is the charge density. Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result: I = −dq/dt. The charge transferred between times t_i and t_f is obtained by integrating both sides: Q = ∫_{t_i}^{t_f} I dt, where I is the net outward current through a closed surface and q is the electric charge contained within the volume defined by the surface. Relativistic invariance Aside from the properties described in articles about electromagnetism, electric charge is a relativistic invariant. This means that any particle that has electric charge q has the same electric charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the electric charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as that of two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus). See also SI electromagnetism units Color charge Partial charge Positron or antielectron is an antiparticle or antimatter counterpart of the electron References External links How fast does a charge decay? Chemical properties Conservation laws Electricity Flavour (particle physics) Spintronics Electromagnetic quantities
Electric charge
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
4,614
[ "Electromagnetic quantities", "Physical quantities", "Electric charge", "Equations of physics", "Conservation laws", "Spintronics", "Quantity", "Condensed matter physics", "nan", "Wikipedia categories named after physical quantities", "Symmetry", "Physics theorems" ]
9,813
https://en.wikipedia.org/wiki/Extinction%20event
An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp fall in the diversity and abundance of multicellular organisms. It occurs when the rate of extinction increases with respect to the background extinction rate and the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from disagreement as to what constitutes a "major" extinction event, and the data chosen to measure past diversity. The "Big Five" mass extinctions In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five particular geological intervals with excessive diversity loss. They were originally identified as outliers on a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, it has been established that in the current, Phanerozoic Eon, multicellular animal life has experienced at least five major and many minor mass extinctions. The "Big Five" cannot be so clearly defined, but rather appear to represent the largest (or some of the largest) of a relatively smooth continuum of extinction events. All five of the Phanerozoic events were preceded, far earlier, by the presumed much more extensive mass extinction of microbial life during the Great Oxidation Event (a.k.a. Oxygen Catastrophe) early in the Proterozoic Eon. At the end of the Ediacaran and just before the Cambrian explosion, yet another Proterozoic extinction event (of unknown magnitude) is speculated to have ushered in the Phanerozoic. Despite the common presentation focusing only on these five events, no measure of extinction shows any definite line separating them from the many other Phanerozoic extinction events that appear to have been only slightly lesser catastrophes; further, using different methods of calculating an extinction's impact can lead to other events featuring in the top five. Fossil records of older events are more difficult to interpret. This is because: Older fossils are more difficult to find, as they are usually buried at a considerable depth. Dating of older fossils is more difficult. Productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched. Prehistoric environmental events can disturb the deposition process. Marine fossils tend to be better preserved than their more sought-after land-based counterparts, as the deposition and preservation of fossils on land is more erratic. It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to the quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence such as fungal spikes (geologically rapid increases in fungal abundance) provides reassurance that most widely accepted extinction events are real. A quantification of the rock exposure of Western Europe indicates that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias.
Sixth mass extinction Research completed after the seminal 1982 paper (Sepkoski and Raup) has concluded that a sixth mass extinction event, due to human activities, is currently under way. Extinctions by severity Extinction events can be tracked by several methods, including geological change, ecological impact, extinction vs. origination (speciation) rates, and most commonly diversity loss among taxonomic units. Most early papers used families as the unit of taxonomy, based on compendiums of marine animal families by Sepkoski (1982, 1992). Later papers by Sepkoski and other authors switched to genera, which are more precise than families and less prone to taxonomic bias or incomplete sampling relative to species. Several major papers have estimated diversity loss or ecological impact for fifteen commonly discussed extinction events, including the "Big Five"; the different methods used by these papers are described in the following section. The study of major extinction events Breakthrough studies in the 1980s–1990s For much of the 20th century, the study of mass extinctions was hampered by insufficient data. Mass extinctions, though acknowledged, were considered mysterious exceptions to the prevailing gradualistic view of prehistory, in which slow evolutionary trends defined faunal changes. The first breakthrough was published in 1980 by a team led by Luis Alvarez, who discovered trace metal evidence for an asteroid impact at the end of the Cretaceous period. The Alvarez hypothesis for the end-Cretaceous extinction gave mass extinctions, and catastrophic explanations, newfound popular and scientific attention. Another landmark study came in 1982, when a paper written by David M. Raup and Jack Sepkoski was published in the journal Science. This paper, originating from a compendium of extinct marine animal families developed by Sepkoski, identified five peaks of marine family extinctions which stand out against a backdrop of decreasing extinction rates through time. Four of these peaks were statistically significant: the Ashgillian (end-Ordovician), Late Permian, Norian (end-Triassic), and Maastrichtian (end-Cretaceous). The remaining peak was a broad interval of high extinction smeared over the latter half of the Devonian, with its apex in the Frasnian stage. Through the 1980s, Raup and Sepkoski continued to elaborate and build upon their extinction and origination data, defining a high-resolution biodiversity curve (the "Sepkoski curve") and successive evolutionary faunas with their own patterns of diversification and extinction. Though these interpretations formed a strong basis for subsequent studies of mass extinctions, Raup and Sepkoski also proposed a more controversial idea in 1984: a 26-million-year periodic pattern to mass extinctions. Two teams of astronomers linked this to a hypothetical brown dwarf in the distant reaches of the solar system, inventing the "Nemesis hypothesis", which has been strongly disputed by other astronomers.
Around the same time, Sepkoski began to devise a compendium of marine animal genera, which would allow researchers to explore extinction at a finer taxonomic resolution. He began to publish preliminary results of this in-progress study as early as 1986, in a paper which identified 29 extinction intervals of note. By 1992, he also updated his 1982 family compendium, finding minimal changes to the diversity curve despite a decade of new data. In 1996, Sepkoski published another paper which tracked marine genera extinction (in terms of net diversity loss) by stage, similar to his previous work on family extinctions. The paper filtered its sample in three ways: all genera (the entire unfiltered sample size), multiple-interval genera (only those found in more than one stage), and "well-preserved" genera (excluding those from groups with poor or understudied fossil records). Diversity trends in marine animal families were also revised based on his 1992 update. Revived interest in mass extinctions led many other authors to re-evaluate geological events in the context of their effects on life. A 1995 paper by Michael Benton tracked extinction and origination rates among both marine and continental (freshwater & terrestrial) families, identifying 22 extinction intervals and no periodic pattern. Overview books by O.H. Walliser (1996) and A. Hallam and P.B. Wignall (1997) summarized the new extinction research of the previous two decades. One chapter in the former source lists over 60 geological events which could conceivably be considered global extinctions of varying sizes. These texts, and other widely circulated publications in the 1990s, helped to establish the popular image of mass extinctions as a "big five" alongside many smaller extinctions through prehistory. New data on genera: Sepkoski's compendium Though Sepkoski died in 1999, his marine genera compendium was formally published in 2002. This prompted a new wave of studies into the dynamics of mass extinctions. These papers utilized the compendium to track origination rates (the rate at which new species appear or speciate) parallel to extinction rates in the context of geological stages or substages. A review and re-analysis of Sepkoski's data by Bambach (2006) identified 18 distinct mass extinction intervals, including 4 large extinctions in the Cambrian. These fit Sepkoski's definition of extinction, as short substages with large diversity loss and overall high extinction rates relative to their surroundings. Bambach et al. (2004) considered each of the "Big Five" extinction intervals to have a different pattern in the relationship between origination and extinction trends. Moreover, background extinction rates were broadly variable and could be separated into more severe and less severe time intervals. Background extinctions were least severe relative to the origination rate in the middle Ordovician-early Silurian, late Carboniferous-Permian, and Jurassic-recent. This suggests that the Late Ordovician, end-Permian, and end-Cretaceous extinctions were statistically significant outliers in biodiversity trends, while the Late Devonian and end-Triassic extinctions occurred in time periods which were already stressed by relatively high extinction and low origination. Computer models run by Foote (2005) determined that abrupt pulses of extinction fit the pattern of prehistoric biodiversity much better than a gradual and continuous background extinction rate with smooth peaks and troughs.
This strongly supports the utility of rapid, frequent mass extinctions as a major driver of diversity changes. Pulsed origination events are also supported, though to a lesser degree that is largely dependent on pulsed extinctions. Similarly, Stanley (2007) used extinction and origination data to investigate turnover rates and extinction responses among different evolutionary faunas and taxonomic groups. In contrast to previous authors, his diversity simulations show support for an overall exponential rate of biodiversity growth through the entire Phanerozoic. Tackling biases in the fossil record As data continued to accumulate, some authors began to re-evaluate Sepkoski's sample using methods meant to account for sampling biases. As early as 1982, a paper by Phillip W. Signor and Jere H. Lipps noted that the true sharpness of extinctions was diluted by the incompleteness of the fossil record. This observation, later called the Signor-Lipps effect, reflects the fact that a species' true extinction must occur after its last fossil, and that its origination must occur before its first fossil. Thus, species which appear to die out just prior to an abrupt extinction event may instead be victims of the event, despite the apparently gradual decline seen in the fossil record alone. A model by Foote (2007) found that many geological stages had artificially inflated extinction rates due to Signor-Lipps "backsmearing" from later stages with extinction events. Other biases include the difficulty in assessing taxa with high turnover rates or restricted occurrences, which cannot be directly assessed due to a lack of fine-scale temporal resolution. Many paleontologists opt to assess diversity trends by randomized sampling and rarefaction of fossil abundances rather than raw temporal range data, in order to account for all of these biases. But that solution is influenced by biases related to sample size. One major bias in particular is the "Pull of the recent", the fact that the fossil record (and thus known diversity) generally improves closer to the modern day. This means that biodiversity and abundance for older geological periods may be underestimated from raw data alone. Alroy (2010) attempted to circumvent sample size-related biases in diversity estimates using a method he called "shareholder quorum subsampling" (SQS). In this method, fossils are sampled from a "collection" (such as a time interval) to assess the relative diversity of that collection. Every time a new species (or other taxon) enters the sample, it brings over all other fossils belonging to that species in the collection (its "share" of the collection). For example, a skewed collection with half its fossils from one species will immediately reach a sample share of 50% if that species is the first to be sampled. This continues, adding up the sample shares until a "coverage" or "quorum" is reached, referring to a pre-set desired sum of share percentages. At that point, the number of species in the sample is counted. A collection with more species is expected to contain more sampled species by the time the quorum is reached, which allows the relative diversity of two collections to be compared without relying on the biases inherent in sample size; a minimal sketch of this procedure is given below. Alroy also elaborated on three-timer algorithms, which are meant to counteract biases in estimates of extinction and origination rates. A given taxon is a "three-timer" if it can be found before, after, and within a given time interval, and a "two-timer" if it overlaps with a time interval on one side.
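A minimal Python sketch of the SQS procedure described above follows; the function name, the representation of a collection as a list of per-occurrence taxon names, and the single uniform random draw are illustrative assumptions, and Alroy's published method includes refinements (such as corrections for taxa found only once) that are omitted here.

```python
import random
from collections import Counter

def sqs_diversity(occurrences, quorum=0.5, seed=None):
    """Minimal shareholder quorum subsampling (SQS) sketch.

    occurrences: list of taxon names, one entry per fossil occurrence
                 in the collection (e.g., a time interval).
    quorum: pre-set desired sum of frequency shares (0 < quorum < 1).
    Returns the number of distinct taxa sampled when the quorum is met.
    """
    rng = random.Random(seed)
    counts = Counter(occurrences)
    total = sum(counts.values())
    # Each taxon's "share" is its frequency within the collection.
    share = {taxon: n / total for taxon, n in counts.items()}

    pool = list(occurrences)
    rng.shuffle(pool)  # draw fossils in random order

    coverage = 0.0
    sampled = set()
    for fossil in pool:
        if fossil not in sampled:
            # A newly sampled taxon brings over its whole share at once.
            sampled.add(fossil)
            coverage += share[fossil]
            if coverage >= quorum:
                break
    return len(sampled)

# A skewed collection: half of all fossils belong to one taxon, so that
# taxon alone can satisfy a 50% quorum if it happens to be drawn first.
collection = ["A"] * 50 + ["B"] * 20 + ["C"] * 20 + ["D"] * 10
print(sqs_diversity(collection, quorum=0.5, seed=1))
```

In practice the subsample would be drawn many times and the resulting richness averaged, so that the estimate reflects the collection rather than a single random ordering.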
Counting "three-timers" and "two-timers" on either end of a time interval, and sampling time intervals in sequence, can together be combined into equations to predict extinction and origination with less bias. In subsequent papers, Alroy continued to refine his equations to improve lingering issues with precision and unusual samples. McGhee et al. (2013), a paper which primarily focused on the ecological effects of mass extinctions, also published new estimates of extinction severity based on Alroy's methods. Many extinctions were significantly more impactful under these new estimates, though some were less prominent. Stanley (2016) was another paper which attempted to remove two common errors in previous estimates of extinction severity. The first error was the unjustified removal of "singletons", genera unique to only a single time slice. Their removal would mask the influence of groups with high turnover rates or lineages cut short early in their diversification. The second error was the difficulty in distinguishing background extinctions from brief mass extinction events within the same short time interval. To circumvent this issue, background rates of diversity change (extinction/origination) were estimated for stages or substages without mass extinctions, and then assumed to apply to subsequent stages with mass extinctions. For example, the Santonian and Campanian stages were each used to estimate diversity changes in the Maastrichtian prior to the K-Pg mass extinction. Subtracting background extinctions from extinction tallies had the effect of reducing the estimated severity of the six sampled mass extinction events. This effect was stronger for mass extinctions which occurred in periods with high rates of background extinction, like the Devonian. Uncertainty in the Proterozoic and earlier eons Because most diversity and biomass on Earth is microbial, and thus difficult to measure via fossils, extinction events that can be placed on record are those that affect the easily observed, biologically complex component of the biosphere rather than the total diversity and abundance of life. For this reason, well-documented extinction events are confined to the Phanerozoic eon – with the sole exception of the Oxygen Catastrophe in the Proterozoic – since before the Phanerozoic, all living organisms were either microbial or, if multicellular, soft-bodied. Perhaps due to the absence of a robust microbial fossil record, mass extinctions may only appear to be a mainly Phanerozoic phenomenon, with observable extinction rates merely appearing low before large, complex organisms with hard body parts arose. Extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine animals every million years. The Oxygen Catastrophe, which occurred around 2.45 billion years ago in the Paleoproterozoic, is plausibly the first-ever major extinction event. It was perhaps also the worst ever in some sense, but the Earth's ecology just before that time is so poorly understood, and the concept of prokaryote genera so different from that of genera of complex life, that it would be difficult to meaningfully compare it to any of the "Big Five" even if Paleoproterozoic life were better known. Since the Cambrian explosion, five further major mass extinctions have significantly exceeded the background extinction rate.
The most recent and best-known, the Cretaceous–Paleogene extinction event, which occurred approximately 66 Ma (million years ago), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major Phanerozoic mass extinctions, there are numerous lesser ones, and the ongoing mass extinction caused by human activity is sometimes called the sixth mass extinction. Evolutionary importance Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the newly dominant group is "superior" to the old but usually because an extinction event eliminates the old, dominant group and makes way for the new one, a process known as adaptive radiation. For example, mammaliaformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete in the large terrestrial vertebrate niches that dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. The dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans. Similarly, within Synapsida, the replacement of taxa that originated in the earliest (Pennsylvanian and Cisuralian) evolutionary radiation, often still called "pelycosaurs" though this is a paraphyletic group, by therapsids occurred around the Kungurian/Roadian transition, which is often called Olson's extinction (which may have been a slow decline over 20 Ma rather than a dramatic, brief event). Another point of view, put forward in the escalation hypothesis, predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event. Furthermore, many groups that survive mass extinctions do not recover in numbers or diversity, and many of these go into long-term decline; these are often referred to as "Dead Clades Walking". However, clades that survive for a considerable period of time after a mass extinction, and which were reduced to only a few species, are likely to have experienced a rebound effect called the "push of the past". Darwin was firmly of the opinion that biotic interactions, such as competition for food and space – the 'struggle for existence' – were of considerably greater importance in promoting evolution and extinction than changes in the physical environment. He expressed this in The Origin of Species: "Species are produced and exterminated by slowly acting causes ... and the most important of all causes of organic change is one which is almost independent of altered ... physical conditions, namely the mutual relation of organism to organism – the improvement of one organism entailing the improvement or extermination of others". Patterns in frequency Various authors have suggested that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically about every 62 million years.
Various ideas, mostly regarding astronomical influences, attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the Sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded that the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious or lacking statistical significance. Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables such as strontium isotopes, flood basalts, anoxic events, orogenies, and evaporite deposition. One explanation for this proposed cycle is carbon storage and release by oceanic crust, which exchanges carbon between the atmosphere and mantle. Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to have become less likely to suffer extinction, which may reflect more robust food webs, as well as fewer extinction-prone species, and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artifact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time. It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions, but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable. Causes There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed. Identifying causes of specific mass extinctions A good theory for a particular mass extinction should: explain all of the losses, not just focus on a few groups (such as dinosaurs); explain why particular groups of organisms died out and why others survived; provide mechanisms that are strong enough to cause a mass extinction but not a total extinction; be based on events or processes that can be shown to have happened, not just inferred from the extinction. It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes that partially overlapped in time and may have had different levels of significance in different parts of the world.
Arens and West (2006) proposed a "press/pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the ecosystem ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure. Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate. Most widely supported explanations MacLeod (2001) summarized the relationship between mass extinctions and events that are most often cited as causes of mass extinctions, using data from Courtillot, Jaeger & Yang et al. (1996), Hallam (1992) and Grieve & Pesonen (1992): Flood basalt events (giant volcanic eruptions): 11 occurrences, all associated with significant extinctions. But Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions. Sea-level falls: 12, of which seven were associated with significant extinctions. Asteroid impacts: one large impact is associated with a mass extinction, that is, the Cretaceous–Paleogene extinction event; there have been many smaller impacts but they are not associated with significant extinctions, or cannot be dated precisely enough. The impact that created the Siljan Ring either occurred just before the Late Devonian extinction or coincided with it. The most commonly suggested causes of mass extinctions are listed below. Flood basalt events The formation of large igneous provinces by flood basalt events could have: produced dust and particulate aerosols, which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea; emitted sulfur oxides that were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains; and emitted carbon dioxide, thus possibly causing sustained global warming once the dust and particulate aerosols dissipated. Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result, they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming as the carbon dioxide they emit can stay in the atmosphere for hundreds of years. Flood basalt events have been implicated as the cause of many major extinction events. It is speculated that massive volcanism caused or contributed to the Kellwasser Event, the End-Guadalupian Extinction Event, the End-Permian Extinction Event, the Smithian-Spathian Extinction, the Triassic-Jurassic Extinction Event, the Toarcian Oceanic Anoxic Event, the Cenomanian-Turonian Oceanic Anoxic Event, the Cretaceous-Palaeogene Extinction Event, and the Palaeocene-Eocene Thermal Maximum. A correlation between gigantic volcanic events (expressed in the large igneous provinces) and mass extinctions has been shown for the last 260 million years. Recently, such a possible correlation was extended across the whole Phanerozoic Eon. Sea-level fall These are often clearly marked by worldwide sequences of contemporaneous sediments that show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny.
Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges. Sea-level falls are associated with most of the mass extinctions, including all of the "Big Five": End-Ordovician, Late Devonian, End-Permian, End-Triassic, and End-Cretaceous, along with the more recently recognised Capitanian mass extinction of comparable severity to the Big Five. A 2008 study, published in the journal Nature, established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests that changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans. Extraterrestrial threats Impact events The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires. Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is lingering dispute over whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. Nonetheless, in October 2019, researchers reported that the Cretaceous Chicxulub asteroid impact, which resulted in the extinction of non-avian dinosaurs 66 Ma, also rapidly acidified the oceans, producing ecological collapse and long-lasting effects on the climate, and was a key reason for the end-Cretaceous mass extinction. The Permian-Triassic extinction event has also been hypothesised to have been caused by an asteroid impact that formed the Araguainha crater, as the estimated date of the crater's formation overlaps with the end-Permian extinction event. However, this hypothesis has been widely challenged and is rejected by most researchers. According to the Shiva hypothesis, the Earth is subject to increased asteroid impacts about once every 27 million years because of the Sun's passage through the plane of the Milky Way galaxy, thus causing extinction events at 27-million-year intervals. Some evidence for this hypothesis has emerged in both marine and non-marine contexts. Alternatively, the Sun's passage through the higher density spiral arms of the galaxy could coincide with mass extinction on Earth, perhaps due to increased impact events. However, a reanalysis based on maps of the Milky Way's spiral structure in CO molecular line emission has failed to find a correlation between the Sun's transit through the spiral structure and mass extinction. A nearby nova, supernova or gamma ray burst A nearby gamma-ray burst (less than 6000 light-years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the Sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years. It has been suggested that a gamma ray burst caused the End-Ordovician extinction, while a supernova has been proposed as the cause of the Hangenberg event.
A supernova within 25 light-years would strip Earth of its atmosphere; today, however, there is no star in the Solar System's neighbourhood capable of producing a supernova dangerous to life on Earth. Global cooling Sustained and significant global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; and often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction. It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian–Triassic, and Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts. Global warming This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; and often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below). Global warming as a cause of mass extinction is supported by several recent studies. The most dramatic example of sustained warming is the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic–Jurassic extinction event, during which 20% of all marine families became extinct. Furthermore, the Permian–Triassic extinction event has been suggested to have been caused by warming. Clathrate gun hypothesis Clathrates are composites in which a lattice of one substance forms a cage around another. Methane clathrates (in which water molecules are the cage) form on continental shelves. These clathrates are likely to break up rapidly and release the methane if the temperature rises quickly or the pressure on them drops quickly – for example in response to sudden global warming, a sudden drop in sea level, or even earthquakes. Methane is a much more powerful greenhouse gas than carbon dioxide, so a methane eruption ("clathrate gun") could cause rapid global warming or make it much more severe if the eruption was itself caused by global warming. The most likely signature of such a methane eruption would be a sudden decrease in the ratio of carbon-13 to carbon-12 in sediments, since methane clathrates are low in carbon-13; but the change would have to be very large, as other events can also reduce the percentage of carbon-13. It has been suggested that "clathrate gun" methane eruptions were involved in the end-Permian extinction ("the Great Dying") and in the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. Anoxic events Anoxic events are situations in which the middle and even the upper layers of the ocean become deficient or totally lacking in oxygen. Their causes are complex and controversial, but all known instances are associated with severe and sustained global warming, mostly caused by sustained massive volcanism.
It has been suggested that anoxic events caused or contributed to the Ordovician–Silurian, late Devonian, Capitanian, Permian–Triassic, and Triassic–Jurassic extinctions, as well as a number of lesser extinctions (such as the Ireviken, Lundgreni, Mulde, Lau, Smithian-Spathian, Toarcian, and Cenomanian–Turonian events). On the other hand, there are widespread black shale beds from the mid-Cretaceous that indicate anoxic events but are not associated with mass extinctions. A decline in the bio-availability of essential trace elements (in particular selenium) to potentially lethal lows has been shown to coincide with, and likely have contributed to, at least three mass extinction events in the oceans, that is, at the end of the Ordovician, during the Middle and Late Devonian, and at the end of the Triassic. During periods of low oxygen concentrations, very soluble selenate (Se6+) is converted into much less soluble selenide (Se2-), elemental Se, and organo-selenium complexes. Bio-availability of selenium during these extinction events dropped to about 1% of the current oceanic concentration, a level that has been proven lethal to many extant organisms. The British oceanographer and atmospheric scientist Andrew Watson explained that, while the Holocene epoch exhibits many processes reminiscent of those that have contributed to past anoxic events, full-scale ocean anoxia would take "thousands of years to develop". Hydrogen sulfide emissions from the seas Kump, Pavlov and Arthur (2005) have proposed that during the Permian–Triassic extinction event the warming also upset the oceanic balance between photosynthesising plankton and deep-water sulfate-reducing bacteria, causing massive emissions of hydrogen sulfide, which poisoned life on both land and sea and severely weakened the ozone layer, exposing much of the life that still remained to fatal levels of UV radiation. Oceanic overturn Oceanic overturn is a disruption of thermo-haline circulation that lets surface water (which is more saline than deep water because of evaporation) sink straight down, bringing anoxic deep water to the surface and therefore killing most of the oxygen-breathing organisms that inhabit the surface and middle depths. It may occur either at the beginning or at the end of a glaciation, although an overturn at the start of a glaciation is more dangerous because the preceding warm period will have created a larger volume of anoxic water. Unlike other oceanic catastrophes such as regressions (sea-level falls) and anoxic events, overturns do not leave easily identified "signatures" in rocks and are theoretical consequences of researchers' conclusions about other climatic and marine events. It has been suggested that oceanic overturn caused or contributed to the late Devonian and Permian–Triassic extinctions. Geomagnetic reversal One theory is that periods of increased geomagnetic reversals will weaken Earth's magnetic field long enough to expose the atmosphere to the solar wind, causing oxygen ions to escape the atmosphere at a rate increased by 3–4 orders of magnitude, resulting in a disastrous decrease in oxygen.
Plate tectonics Movement of the continents into some configurations can cause or contribute to extinctions in several ways: by initiating or ending ice ages; by changing ocean and wind currents and thus altering climate; by opening seaways or land bridges that expose previously isolated species to competition for which they are poorly adapted (for example, the extinction of most of South America's native ungulates and all of its large metatherians after the creation of a land bridge between North and South America). Occasionally continental drift creates a super-continent that includes the vast majority of Earth's land area, which in addition to the effects listed above is likely to reduce the total area of continental shelf (the most species-rich part of the ocean) and produce a vast, arid continental interior that may have extreme seasonal variations. One theory holds that the creation of the super-continent Pangaea contributed to the End-Permian mass extinction. Pangaea was almost fully formed at the transition from mid-Permian to late-Permian, and diagrams of marine genus diversity show a level of extinction starting at that time which might have qualified for inclusion in the "Big Five" if it were not overshadowed by the "Great Dying" at the end of the Permian. Other hypotheses Many other hypotheses have been proposed, such as the spread of a new disease, or simple out-competition following an especially successful biological innovation. But all have been rejected, usually for one of the following reasons: they require events or processes for which there is no evidence; they assume mechanisms that are contrary to the available evidence; they are based on other theories that have been rejected or superseded. Scientists have been concerned that human activities could cause more plants and animals to become extinct than at any point in the past. Along with human-made changes in climate (see above), some of these extinctions could be caused by overhunting, overfishing, invasive species, or habitat loss. A study published in May 2017 in Proceedings of the National Academy of Sciences argued that a "biological annihilation" akin to a sixth mass extinction event is underway as a result of anthropogenic causes, such as over-population and over-consumption. The study suggested that as much as 50% of the animal individuals that once lived on Earth were already extinct, threatening the basis for human existence too. Future biosphere extinction/sterilization The eventual warming and expanding of the Sun, combined with the eventual decline of atmospheric carbon dioxide, could actually cause an even greater mass extinction, having the potential to wipe out even microbes (in other words, the Earth would be completely sterilized): rising global temperatures caused by the expanding Sun would gradually increase the rate of weathering, which would in turn remove more and more CO2 from the atmosphere. When CO2 levels get too low (perhaps at 50 ppm), most plant life will die out, although simpler plants like grasses and mosses can survive much longer, until levels drop to 10 ppm. With all photosynthetic organisms gone, atmospheric oxygen can no longer be replenished, and it is eventually removed by chemical reactions in the atmosphere, perhaps from volcanic eruptions. Eventually the loss of oxygen will cause all remaining aerobic life to die out via asphyxiation, leaving behind only simple anaerobic prokaryotes.
When the Sun becomes 10% brighter in about a billion years, Earth will suffer a moist greenhouse effect resulting in its oceans boiling away, while the Earth's liquid outer core cools as the inner core expands, causing the Earth's magnetic field to shut down. In the absence of a magnetic field, charged particles from the Sun will deplete the atmosphere and further increase the Earth's temperature to an average of around 420 K (147 °C, 296 °F) in 2.8 billion years, causing the last remaining life on Earth to die out. This is the most extreme instance of a climate-caused extinction event. Since this will only happen late in the Sun's life, it would represent the final mass extinction in Earth's history (albeit a very long extinction event). Effects and recovery The effects of mass extinction events varied widely. After a major extinction event, usually only weedy species survive, due to their ability to live in diverse habitats. Later, species diversify and occupy empty niches. Generally, it takes millions of years for biodiversity to recover after extinction events. In the most severe mass extinctions it may take 15 to 30 million years. The worst Phanerozoic event, the Permian–Triassic extinction, devastated life on Earth, killing over 90% of species. Life seemed to recover quickly after the P-T extinction, but this was mostly in the form of disaster taxa, such as the hardy Lystrosaurus. The most recent research indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to successive waves of extinction that inhibited recovery, as well as prolonged environmental stress that continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, four to six million years after the extinction; and some writers estimate that the recovery was not complete until 30 million years after the P-T extinction, that is, in the late Triassic. Subsequent to the P-T extinction, there was an increase in provincialization, with species occupying smaller ranges – perhaps removing incumbents from niches and setting the stage for an eventual rediversification. The effects of mass extinctions on plants are somewhat harder to quantify, given the biases inherent in the plant fossil record. Some mass extinctions (such as the end-Permian) were equally catastrophic for plants, whereas others, such as the end-Devonian, did not affect the flora. In media The term extinction level event (ELE) has been used in the media. The 1998 film Deep Impact describes a potential comet strike on Earth as an E.L.E. See also Bioevent Elvis taxon Endangered species Geologic time scale Global catastrophic risk Holocene extinction Human extinction Kačák Event Lazarus taxon List of impact craters on Earth List of largest volcanic eruptions List of possible impact structures on Earth Medea hypothesis Rare species Signor–Lipps effect Snowball Earth Speculative evolution The Sixth Extinction: An Unnatural History (nonfiction book) Timeline of extinctions in the Holocene Quaternary extinction event Footnotes References Further reading Edmeades B (2021). Megafauna: First Victims of the Human-Caused Extinction. Houndstooth Press. ISBN 978-1-5445-2651-5. External links Call of Life: Facing the Mass Extinction – a documentary about mass extinction produced by a nonprofit organization. Calculate extinction rates for yourself!
History of climate variability and change Evolutionary biology Meteorological hypotheses Natural disasters
Extinction event
[ "Physics", "Astronomy", "Biology" ]
8,873
[ "Evolutionary biology", "Physical phenomena", "Astronomical hypotheses", "Evolution of the biosphere", "Weather", "Extinction events", "Hypothetical impact events", "Natural disasters", "Biological hypotheses" ]
9,828
https://en.wikipedia.org/wiki/Execution%20unit
In computer engineering, an execution unit (E-unit or EU) is a part of a processing unit that performs the operations and calculations forwarded from the instruction unit. It may have its own internal control sequence unit (not to be confused with a CPU's main control unit), some registers, and other internal units such as an arithmetic logic unit, address generation unit, floating-point unit, load–store unit, branch execution unit, or other smaller and more specific components, and it can be tailored to support a certain datatype, such as integers or floating-point numbers. It is common for modern processing units to have multiple parallel functional units within their execution units, a design referred to as superscalar. The simplest arrangement is to use a single bus manager unit to manage the memory interface and the others to perform calculations. Additionally, modern execution units are usually pipelined. References Central processing unit Computer arithmetic
Execution unit
[ "Mathematics", "Technology", "Engineering" ]
185
[ "Computer engineering", "Computer engineering stubs", "Computer arithmetic", "Arithmetic", "Computing stubs" ]
9,837
https://en.wikipedia.org/wiki/Ethylene
Ethylene (IUPAC name: ethene) is a hydrocarbon which has the formula C2H4 or H2C=CH2. It is a colourless, flammable gas with a faint "sweet and musky" odour when pure. It is the simplest alkene (a hydrocarbon with carbon–carbon double bonds). Ethylene is widely used in the chemical industry, and its worldwide production (over 150 million tonnes in 2016) exceeds that of any other organic compound. Much of this production goes toward creating polythene, which is a widely used plastic containing polymer chains of ethylene units in various chain lengths. Production emits greenhouse gases, including methane from feedstock production and carbon dioxide from any non-sustainable energy used. Ethylene is also an important natural plant hormone and is used in agriculture to induce ripening of fruits. The hydrate of ethylene is ethanol. Structure and properties This hydrocarbon has four hydrogen atoms bound to a pair of carbon atoms that are connected by a double bond. All six atoms that comprise ethylene are coplanar. The H-C-H angle is 117.4°, close to the 120° expected for ideal sp²-hybridized carbon. The molecule is also relatively rigid: rotation about the C-C bond is a high-energy process, because it requires breaking the π-bond. The π-bond in the ethylene molecule is responsible for its useful reactivity. The double bond is a region of high electron density, thus it is susceptible to attack by electrophiles. Many reactions of ethylene are catalyzed by transition metals, which bind transiently to the ethylene using both the π and π* orbitals. Being a simple molecule, ethylene is spectroscopically simple. Its UV-vis spectrum is still used as a test of theoretical methods. Uses Major industrial reactions of ethylene include, in order of scale: 1) polymerization, 2) oxidation, 3) halogenation and hydrohalogenation, 4) alkylation, 5) hydration, 6) oligomerization, and 7) hydroformylation. In the United States and Europe, approximately 90% of ethylene is used to produce ethylene oxide, ethylene dichloride, ethylbenzene and polyethylene. Most of the reactions with ethylene are electrophilic addition. Polymerization Polyethylene production uses more than half of the world's ethylene supply. Polyethylene, also called polyethene and polythene, is the world's most widely used plastic. It is primarily used to make films for packaging, carrier bags and trash liners. Linear alpha-olefins, produced by oligomerization (formation of short-chain molecules), are used as precursors, detergents, plasticisers, synthetic lubricants, additives, and also as co-monomers in the production of polyethylenes. Oxidation Ethylene is oxidized to produce ethylene oxide, a key raw material in the production of surfactants and detergents by ethoxylation. Ethylene oxide is also hydrolyzed to produce ethylene glycol, widely used as an automotive antifreeze, as well as higher molecular weight glycols, glycol ethers, and polyethylene terephthalate. Ethylene oxidation in the presence of a palladium catalyst can form acetaldehyde. This conversion remains a major industrial process (10M kg/y). The process proceeds via the initial complexation of ethylene to a Pd(II) center. Halogenation and hydrohalogenation Major intermediates from the halogenation and hydrohalogenation of ethylene include ethylene dichloride, ethyl chloride, and ethylene dibromide. The addition of chlorine entails "oxychlorination", i.e. chlorine itself is not used.
Some products derived from this group are polyvinyl chloride, trichloroethylene, perchloroethylene, methyl chloroform, polyvinylidene chloride and copolymers, and ethyl bromide. Alkylation The major chemical intermediate from the alkylation of ethylene is ethylbenzene, a precursor to styrene. Styrene is used principally in polystyrene for packaging and insulation, as well as in styrene-butadiene rubber for tires and footwear. On a smaller scale, alkylation yields ethyltoluene, ethylanilines, 1,4-hexadiene, and aluminium alkyls. Products of these intermediates include polystyrene, unsaturated polyesters and ethylene-propylene terpolymers. Oxo reaction The hydroformylation (oxo reaction) of ethylene results in propionaldehyde, a precursor to propionic acid and n-propyl alcohol. Hydration Ethylene has long represented the major nonfermentative precursor to ethanol. The original method entailed its conversion to diethyl sulfate, followed by hydrolysis. The main method practiced since the mid-1990s is the direct hydration of ethylene catalyzed by solid acid catalysts: C2H4 + H2O → CH3CH2OH Dimerization to butenes Ethylene is dimerized by hydrovinylation to give n-butenes using processes licensed by Lummus or IFP. The Lummus process produces mixed n-butenes (primarily 2-butenes) while the IFP process produces 1-butene. 1-Butene is used as a comonomer in the production of certain kinds of polyethylene. Fruit and flowering Ethylene is a hormone that affects the ripening and flowering of many plants. It is widely used to control freshness in horticulture and fruits. The scrubbing of naturally occurring ethylene delays ripening. Adsorption of ethylene by nets coated in titanium dioxide gel has also been shown to be effective. Niche uses An example of a niche use is as an anesthetic agent (in an 85% ethylene/15% oxygen ratio). Another use is as a welding gas. It is also used as a refrigerant gas for low-temperature applications under the name R-1150. Production Global ethylene production was 107 million tonnes in 2005, 109 million tonnes in 2006, 138 million tonnes in 2010, and 141 million tonnes in 2011. By 2013, ethylene was produced by at least 117 companies in 32 countries. To meet the ever-increasing demand for ethylene, production capacity is being added sharply worldwide, particularly in the Middle East and in China. Production emits greenhouse gases, notably significant amounts of carbon dioxide. Industrial process Ethylene is produced by several methods in the petrochemical industry. A primary method is steam cracking (SC), in which hydrocarbons and steam are heated to 750–950 °C. This process converts large hydrocarbons into smaller ones and introduces unsaturation. When ethane is the feedstock, ethylene is the product. Ethylene is separated from the resulting mixture by repeated compression and distillation. In Europe and Asia, ethylene is obtained mainly from cracking naphtha, gasoil and condensates with the coproduction of propylene, C4 olefins and aromatics (pyrolysis gasoline). Other technologies employed for the production of ethylene include Fischer-Tropsch synthesis and methanol-to-olefins (MTO). Laboratory synthesis Although of great value industrially, ethylene is rarely synthesized in the laboratory and is ordinarily purchased. It can be produced via dehydration of ethanol with sulfuric acid or, in the gas phase, with aluminium oxide or activated alumina. Biosynthesis Ethylene is produced from methionine in nature. The immediate precursor is 1-aminocyclopropane-1-carboxylic acid. 
Ligand Ethylene is a fundamental ligand in transition metal alkene complexes. One of the first organometallic compounds, Zeise's salt is a complex of ethylene. Useful reagents containing ethylene include Pt(PPh3)2(C2H4) and Rh2Cl2(C2H4)4. The Rh-catalysed hydroformylation of ethylene is conducted on an industrial scale to provide propionaldehyde. History Some geologists and scholars believe that the famous Greek Oracle at Delphi (the Pythia) went into her trance-like state as an effect of ethylene rising from ground faults. Ethylene appears to have been discovered by Johann Joachim Becher, who obtained it by heating ethanol with sulfuric acid; he mentioned the gas in his Physica Subterranea (1669). Joseph Priestley also mentions the gas in his Experiments and observations relating to the various branches of natural philosophy: with a continuation of the observations on air (1779), where he reports that Jan Ingenhousz saw ethylene synthesized in the same way by a Mr. Enée in Amsterdam in 1777 and that Ingenhousz subsequently produced the gas himself. The properties of ethylene were studied in 1795 by four Dutch chemists, Johann Rudolph Deimann, Adrien Paets van Troostwyck, Anthoni Lauwerenburgh and Nicolas Bondt, who found that it differed from hydrogen gas and that it contained both carbon and hydrogen. This group also discovered that ethylene could be combined with chlorine to produce the Dutch oil, 1,2-dichloroethane; this discovery gave ethylene the name used for it at that time, olefiant gas (oil-making gas). The term olefiant gas is in turn the etymological origin of the modern word "olefin", the class of hydrocarbons in which ethylene is the first member. In the mid-19th century, the suffix -ene (an Ancient Greek root added to the end of female names meaning "daughter of") was widely used to refer to a molecule or part thereof that contained one fewer hydrogen atom than the molecule being modified. Thus, ethylene (C2H4) was the "daughter of ethyl" (C2H5). The name ethylene was used in this sense as early as 1852. In 1866, the German chemist August Wilhelm von Hofmann proposed a system of hydrocarbon nomenclature in which the suffixes -ane, -ene, -ine, -one, and -une were used to denote the hydrocarbons with 0, 2, 4, 6, and 8 fewer hydrogens than their parent alkane. In this system, ethylene became ethene. Hofmann's system eventually became the basis for the Geneva nomenclature approved by the International Congress of Chemists in 1892, which remains at the core of the IUPAC nomenclature. However, by that time, the name ethylene was deeply entrenched, and it remains in wide use today, especially in the chemical industry. Following experimentation by Luckhardt, Crocker, and Carter at the University of Chicago, ethylene was used as an anesthetic. It remained in use through the 1940s, even while chloroform was being phased out. Its pungent odor and its explosive nature limit its use today. Nomenclature The 1979 IUPAC nomenclature rules made an exception for retaining the non-systematic name ethylene; however, this decision was reversed in the 1993 rules, and it remains unchanged in the newest 2013 recommendations, so the IUPAC name is now ethene. In the IUPAC system, the name ethylene is reserved for the divalent group -CH2CH2-. Hence, names like ethylene oxide and ethylene dibromide are permitted, but the use of the name ethylene for the two-carbon alkene is not. 
Nevertheless, use of the name ethylene for H2C=CH2 (and propylene for H2C=CHCH3) is still prevalent among chemists in North America. Greenhouse gas emissions "A key factor affecting petrochemicals life-cycle emissions is the methane intensity of feedstocks, especially in the production segment." Emissions from cracking of naphtha and of natural gas (the latter common in the US, where gas is cheap) depend greatly on the source of energy used (for example, gas burnt to provide high temperatures), but emissions from naphtha are certainly higher per kg of feedstock. Both steam cracking and production from natural gas via ethane are estimated to emit 1.8 to 2 kg of CO2 per kg of ethylene produced, totalling over 260 million tonnes a year. This is more than for any other manufactured chemical except cement and ammonia. According to a 2022 report, using renewable or nuclear energy could cut emissions by almost half. Safety Like all hydrocarbons, ethylene is a combustible asphyxiant. It is listed as an IARC group 3 agent, since there is no current evidence that it causes cancer in humans. See also RediRipe, an ethylene detector for fruits. References External links International Chemical Safety Card 0475 MSDS Alkenes General anesthetics Monomers Commodity chemicals Petrochemicals Industrial gases Greenhouse gases Organic compounds with 2 carbon atoms
Ethylene
[ "Chemistry", "Materials_science", "Environmental_science" ]
2,831
[ "Products of chemical industry", "Petrochemicals", "Environmental chemistry", "Organic compounds with 2 carbon atoms", "Organic compounds", "Alkenes", "Industrial gases", "Polymer chemistry", "Chemical process engineering", "Greenhouse gases", "Monomers", "Commodity chemicals" ]
9,875
https://en.wikipedia.org/wiki/Exploit%20%28computer%20security%29
An exploit is a method or piece of code that takes advantage of vulnerabilities in software, applications, networks, operating systems, or hardware, typically for malicious purposes. The term "exploit" derives from the English verb "to exploit," meaning "to use something to one's own advantage." Exploits are designed to identify flaws, bypass security measures, gain unauthorized access to systems, take control of systems, install malware, or steal sensitive data. While an exploit by itself may not be malware, it serves as a vehicle for delivering malicious software by breaching security controls. Exploits target vulnerabilities, which are essentially flaws or weaknesses in a system's defenses. Common targets for exploits include operating systems, web browsers, and various applications, where hidden vulnerabilities can compromise the integrity and security of computer systems. Exploits can cause unintended or unanticipated behavior in systems, potentially leading to severe security breaches. Many exploits are designed to provide superuser-level access to a computer system. Attackers may use multiple exploits in succession to first gain low-level access and then escalate privileges repeatedly until they reach the highest administrative level, often referred to as "root." This technique of chaining several exploits together to perform a single attack is known as an exploit chain. Exploits that remain unknown to everyone except the individuals who discovered and developed them are referred to as zero-day or "0day" exploits. After an exploit is disclosed to the authors of the affected software, the associated vulnerability is often fixed through a patch, rendering the exploit unusable. This is why some black hat hackers, as well as military or intelligence agency hackers, do not publish their exploits but keep them private. One scheme that offers zero-day exploits is known as exploit as a service. Researchers estimate that malicious exploits cost the global economy over US$450 billion annually. In response to this threat, organizations are increasingly utilizing cyber threat intelligence to identify vulnerabilities and prevent hacks before they occur. Classification There are several methods of classifying exploits. The most common is by how the exploit communicates to the vulnerable software. A remote exploit works over a network and exploits the security vulnerability without any prior access to the vulnerable system. A local exploit requires prior access or physical access to the vulnerable system, and usually increases the privileges of the person running the exploit past those granted by the system administrator. Exploits against client applications also exist, usually consisting of modified servers that send an exploit if accessed with a client application. A common form of exploits against client applications are browser exploits. Exploits against client applications may also require some interaction with the user and thus may be used in combination with social engineering methods. Another classification is by the action against the vulnerable system; unauthorized data access, arbitrary code execution, and denial of service are examples. Exploits are commonly categorized and named by the type of vulnerability they exploit, whether they are local/remote, and the result of running the exploit (e.g. EoP, DoS, spoofing). Zero-click A zero-click attack is an exploit that requires no user interaction to operate – that is to say, no key-presses or mouse clicks. 
FORCEDENTRY, discovered in 2021, is an example of a zero-click attack. These exploits are commonly the most sought-after exploits (specifically on the underground exploit market) because the target typically has no way of knowing they have been compromised at the time of exploitation. In 2022, NSO Group was reportedly selling zero-click exploits to governments for breaking into individuals' phones. Pivoting Pivoting is a technique employed by both hackers and penetration testers to expand their access within a target network. By compromising a system, attackers can leverage it as a platform to target other systems that are typically shielded from direct external access by firewalls. Internal networks often contain a broader range of accessible machines compared to those exposed to the internet. For example, an attacker might compromise a web server on a corporate network and then utilize it to target other systems within the same network. This approach is often referred to as a multi-layered attack. Pivoting is also known as island hopping. Pivoting can further be distinguished into proxy pivoting and VPN pivoting: Proxy pivoting is the practice of channeling traffic through a compromised target using a proxy payload on the machine and launching attacks from that computer. This type of pivoting is restricted to certain TCP and UDP ports that are supported by the proxy. VPN pivoting enables the attacker to create an encrypted layer to tunnel into the compromised machine to route any network traffic through that target machine, for example, to run a vulnerability scan on the internal network through the compromised machine, effectively giving the attacker full network access as if they were behind the firewall. Typically, the proxy or VPN applications enabling pivoting are executed on the target computer as the payload of an exploit. Pivoting is usually done by infiltrating a part of a network infrastructure (for example, a vulnerable printer or thermostat) and using a scanner to find other connected devices to attack. By attacking a vulnerable piece of networking equipment, an attacker could infect most or all of a network and gain complete control. See also Computer security Computer virus Crimeware Exploit kit Hacking: The Art of Exploitation (second edition) IT risk Metasploit Shellcode w3af Notes External links
Exploit (computer security)
[ "Technology" ]
1,123
[ "Computer security exploits" ]
9,877
https://en.wikipedia.org/wiki/Erg
The erg is a unit of energy equal to 10−7 joules (100 nJ). It is not an SI unit, instead originating from the centimetre–gram–second system of units (CGS). Its name is derived from ergon (ἔργον), a Greek word meaning 'work' or 'task'. An erg is the amount of work done by a force of one dyne exerted for a distance of one centimetre. In the CGS base units, it is equal to one gram centimetre-squared per second-squared (g⋅cm2/s2). It is thus equal to 10−7 joules or 100 nanojoules (nJ) in SI units. 1 erg = 10−7 J = 100 nJ. 1 erg = 1 dyn⋅cm = 1 g⋅cm2/s2. 1 erg = 6.2415×1011 eV. 1 erg = 2.3901×10−8 cal. 1 erg = 2.7778×10−14 kW⋅h. History In 1864, Rudolf Clausius proposed the Greek word ergon (ἔργον) for the unit of energy, work and heat. In 1873, a committee of the British Association for the Advancement of Science, including British physicists James Clerk Maxwell and William Thomson, recommended the general adoption of the centimetre, the gramme, and the second as fundamental units (C.G.S. System of Units). To distinguish derived units, they recommended using the prefix "C.G.S. unit of ..." and requested that the word erg or ergon be strictly limited to refer to the C.G.S. unit of energy. In 1922, chemist William Draper Harkins proposed the name micri-erg as a convenient unit to measure the surface energy of molecules in surface chemistry. It would equate to 10−14 erg, the equivalent of 10−21 joule. The erg is not a part of the International System of Units (SI), which has been recommended since 1 January 1978, when the European Economic Community ratified a directive of 1971 that implemented SI as agreed by the General Conference of Weights and Measures. It is the unit of energy in Gaussian units, which are widely used in astrophysics, in applications involving microscopic problems and relativistic electrodynamics, and sometimes in mechanics. See also Foe (unit), relative measure for energy released by a supernova Lumen second, for the lumerg and lumberg units Metre–tonne–second system of units References Units of energy Centimetre–gram–second system of units
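The fixed ratios above are simple multiplicative conversions, so they translate directly into code. The following Python sketch is purely illustrative; the function names and the choice of target units are assumptions of this example, not anything defined by the unit itself:

```python
# Illustrative erg conversions. The constants are exact or standard values:
# 1 erg = 1e-7 J by definition; 1 eV = 1.602176634e-19 J exactly in the SI.
ERG_TO_JOULE = 1e-7
JOULE_TO_EV = 1 / 1.602176634e-19

def erg_to_joule(ergs: float) -> float:
    """Convert an energy in ergs to joules."""
    return ergs * ERG_TO_JOULE

def erg_to_ev(ergs: float) -> float:
    """Convert an energy in ergs to electronvolts."""
    return erg_to_joule(ergs) * JOULE_TO_EV

# A supernova releases on the order of 1e51 erg (one "foe"; see the See also list).
print(erg_to_joule(1e51))  # -> 1e+44 J
print(erg_to_ev(1.0))      # -> ~6.2415e+11 eV
```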
Erg
[ "Mathematics" ]
498
[ "Quantity", "Units of energy", "Units of measurement" ]
9,890
https://en.wikipedia.org/wiki/Electron%20counting
In chemistry, electron counting is a formalism for assigning a number of valence electrons to individual atoms in a molecule. It is used for classifying compounds and for explaining or predicting their electronic structure and bonding. Many rules in chemistry rely on electron counting: Octet rule is used with Lewis structures for main group elements, especially the lighter ones such as carbon, nitrogen, and oxygen, 18-electron rule in inorganic chemistry and organometallic chemistry of transition metals, Hückel's rule for the π-electrons of aromatic compounds, Polyhedral skeletal electron pair theory for polyhedral cluster compounds, including transition metals and main group elements and mixtures thereof, such as boranes. Atoms are called "electron-deficient" when they have too few electrons as compared to their respective rules, or "hypervalent" when they have too many electrons. Since these compounds tend to be more reactive than compounds that obey their rule, electron counting is an important tool for identifying the reactivity of molecules. While the counting formalism considers each atom separately, these individual atoms (with their hypothetical assigned charge) do not generally exist as free species. Counting rules Two methods of electron counting are "neutral counting" and "ionic counting". Both approaches give the same result (and can therefore be used to verify one's calculation). The neutral counting approach assumes the molecule or fragment being studied consists of purely covalent bonds. It was popularized by Malcolm Green along with the L and X ligand notation. It is usually considered easier, especially for low-valent transition metals. The "ionic counting" approach assumes purely ionic bonds between atoms. It is important, though, to be aware that most chemical species exist between the purely covalent and ionic extremes. Neutral counting Neutral counting assumes each bond is equally split between two atoms. This method begins with locating the central atom on the periodic table and determining the number of its valence electrons. One counts valence electrons for main group elements differently from transition metals, which use the d electron count. E.g. in period 2: B, C, N, O, and F have 3, 4, 5, 6, and 7 valence electrons, respectively. E.g. in period 4: K, Ca, Sc, Ti, V, Cr, Fe, Ni have 1, 2, 3, 4, 5, 6, 8, 10 valence electrons respectively. One is added for every halide or other anionic ligand which binds to the central atom through a sigma bond. Two is added for every lone pair bonding to the metal (e.g. each Lewis base binds with a lone pair). Unsaturated hydrocarbons such as alkenes and alkynes are considered Lewis bases. Lewis and Bronsted acids (protons), by contrast, contribute nothing. One is added for each homoelement bond. One is added for each negative charge, and one is subtracted for each positive charge. Ionic counting Ionic counting assumes unequal sharing of electrons in the bond. The more electronegative atom in the bond gains the electrons lost by the less electronegative atom. This method begins by calculating the number of electrons of the element, assuming an oxidation state. E.g. Fe2+ has 6 electrons; S2− has 8 electrons. Two is added for every halide or other anionic ligand which binds to the metal through a sigma bond. Two is added for every lone pair bonding to the metal (e.g. each phosphine ligand can bind with a lone pair). Similarly, Lewis and Bronsted acids (protons) contribute nothing. 
For unsaturated ligands such as alkenes, one electron is added for each carbon atom binding to the metal. Electrons donated by common fragments "Special cases" The numbers of electrons "donated" by some ligands depend on the geometry of the metal-ligand ensemble. An example of this complication is the M–NO entity. When this grouping is linear, the NO ligand is considered to be a three-electron ligand. When the M–NO subunit is strongly bent at N, the NO is treated as a pseudohalide and is thus a one-electron donor (in the neutral counting approach). The situation is not very different from the η3 versus the η1 allyl. Another unusual ligand from the electron counting perspective is sulfur dioxide. Examples H2O For a water molecule (H2O), using both neutral counting and ionic counting results in a total of 8 electrons. The neutral counting method assumes each OH bond is split equally (each atom gets one electron from the bond). Thus both hydrogen atoms have an electron count of one. The oxygen atom has 6 valence electrons. The total electron count is 8, which agrees with the octet rule. With the ionic counting method, the more electronegative oxygen gains the electrons donated by the two hydrogen atoms in the two OH bonds to become O2−. It now has 8 total valence electrons, which obeys the octet rule. CH4, for the central C neutral counting: C contributes 4 electrons, each H radical contributes one each: 4 + 4 × 1 = 8 valence electrons ionic counting: C4− contributes 8 electrons, each proton contributes 0 each: 8 + 4 × 0 = 8 electrons. Similar for H: neutral counting: H contributes 1 electron, the C contributes 1 electron (the other 3 electrons of C are for the other 3 hydrogens in the molecule): 1 + 1 × 1 = 2 valence electrons. ionic counting: H contributes 0 electrons (H+), C4− contributes 2 electrons (per H), 0 + 1 × 2 = 2 valence electrons conclusion: Methane follows the octet rule for carbon, and the duet rule for hydrogen, and hence is expected to be a stable molecule (as we see from daily life) H2S, for the central S neutral counting: S contributes 6 electrons, each hydrogen radical contributes one each: 6 + 2 × 1 = 8 valence electrons ionic counting: S2− contributes 8 electrons, each proton contributes 0: 8 + 2 × 0 = 8 valence electrons conclusion: with an octet electron count (on sulfur), we can anticipate that H2S would be pseudo-tetrahedral if one considers the two lone pairs. SCl2, for the central S neutral counting: S contributes 6 electrons, each chlorine radical contributes one each: 6 + 2 × 1 = 8 valence electrons ionic counting: S2+ contributes 4 electrons, each chloride anion contributes 2: 4 + 2 × 2 = 8 valence electrons conclusion: see discussion for H2S above. Both SCl2 and H2S follow the octet rule; the behavior of these molecules is, however, quite different. SF6, for the central S neutral counting: S contributes 6 electrons, each fluorine radical contributes one each: 6 + 6 × 1 = 12 valence electrons ionic counting: S6+ contributes 0 electrons, each fluoride anion contributes 2: 0 + 6 × 2 = 12 valence electrons conclusion: ionic counting indicates a molecule lacking lone pairs of electrons, therefore its structure will be octahedral, as predicted by VSEPR. One might conclude that this molecule would be highly reactive, but the opposite is true: SF6 is inert, and it is widely used in industry because of this property. RuCl2(bpy)2 RuCl2(bpy)2 is an octahedral metal complex with two bidentate 2,2′-bipyridine (bpy) ligands and two chloride ligands. 
In the neutral counting method, the ruthenium of the complex is treated as Ru(0). It has 8 d electrons to contribute to the electron count. The two bpy ligands are neutral L2-type ligands, each contributing four electrons (two per coordinating nitrogen). The two chloride ligands are halides and thus one-electron donors, donating one electron each to the electron count. The total electron count of RuCl2(bpy)2 is 18. In the ionic counting method, the ruthenium of the complex is treated as Ru(II). It has 6 d electrons to contribute to the electron count. The two bpy ligands are neutral L2-type ligands, each contributing four electrons. The two chloride ligands are anionic ligands, thus donating 2 electrons each to the electron count. The total electron count of RuCl2(bpy)2 is 18, agreeing with the result of neutral counting. TiCl4, for the central Ti neutral counting: Ti contributes 4 electrons, each chlorine radical contributes one each: 4 + 4 × 1 = 8 valence electrons ionic counting: Ti4+ contributes 0 electrons, each chloride anion contributes two each: 0 + 4 × 2 = 8 valence electrons conclusion: Having only 8e (vs. 18 possible), we can anticipate that TiCl4 will be a good Lewis acid. Indeed, it reacts (in some cases violently) with water, alcohols, ethers, amines. Fe(CO)5 neutral counting: Fe contributes 8 electrons, each CO contributes 2 each: 8 + 2 × 5 = 18 valence electrons ionic counting: Fe(0) contributes 8 electrons, each CO contributes 2 each: 8 + 2 × 5 = 18 valence electrons conclusions: this is a special case, where ionic counting is the same as neutral counting, all fragments being neutral. Since this is an 18-electron complex, it is expected to be an isolable compound. Ferrocene, (C5H5)2Fe, for the central Fe: neutral counting: Fe contributes 8 electrons, the 2 cyclopentadienyl rings contribute 5 each: 8 + 2 × 5 = 18 electrons ionic counting: Fe2+ contributes 6 electrons, the two aromatic cyclopentadienyl rings contribute 6 each: 6 + 2 × 6 = 18 valence electrons on iron. conclusion: Ferrocene is expected to be an isolable compound. See also d electron count Tolman's rule References Inorganic chemistry Chemical bonding
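Because both counting schemes are purely mechanical, the worked examples above can be reproduced by a short script. The Python sketch below implements only the neutral-counting rules; the central-atom electron counts and per-ligand donations are standard textbook values, but the lookup tables, function name, and the handful of ligands included are illustrative choices of this example rather than anything fixed by the formalism:

```python
# Minimal neutral-counting sketch (covalent method). Central-atom values are
# valence/d-electron counts; ligand values are electrons donated per ligand.
CENTRAL_ELECTRONS = {"C": 4, "S": 6, "Ti": 4, "Fe": 8, "Ru": 8}

LIGAND_DONATION = {
    "H": 1,    # hydrogen radical: 1 electron
    "Cl": 1,   # halide bound through a sigma bond: 1 electron
    "CO": 2,   # L-type lone-pair donor: 2 electrons
    "bpy": 4,  # bidentate L2 donor: 2 electrons per nitrogen
    "Cp": 5,   # cyclopentadienyl radical: 5 electrons
}

def neutral_count(center: str, ligands: dict) -> int:
    """Sum the central atom's electrons and all ligand donations."""
    return CENTRAL_ELECTRONS[center] + sum(
        n * LIGAND_DONATION[lig] for lig, n in ligands.items()
    )

print(neutral_count("Fe", {"CO": 5}))            # Fe(CO)5     -> 18
print(neutral_count("Fe", {"Cp": 2}))            # ferrocene   -> 18
print(neutral_count("Ru", {"bpy": 2, "Cl": 2}))  # RuCl2(bpy)2 -> 18
print(neutral_count("Ti", {"Cl": 4}))            # TiCl4       -> 8
```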
Electron counting
[ "Physics", "Chemistry", "Materials_science" ]
2,090
[ "Chemical bonding", "Condensed matter physics", "nan" ]
9,891
https://en.wikipedia.org/wiki/Entropy
Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change and information systems, including the transmission of information in telecommunication. Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible. The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation. Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, and the macroscopically observable behaviour, in the form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, which has become one of the defining universal constants for the modern International System of Units (SI). History In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to a cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. 
Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body". The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation. In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German) of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change, and which he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. Etymology In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy (Entropie), after the Greek word for 'transformation'. He gave "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation'). In more detail, Clausius explained his choice of "entropy" as a name as follows: I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful. 
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing". Definitions and descriptions The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system, modelled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes. State variables and functions of state Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium, which essentially are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in the sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, the temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero. Reversible process The entropy change dS of a system excluding its surroundings can be well-defined as a small portion of heat δQrev transferred to the system during a reversible process divided by the temperature T of the system during this heat transfer: dS = δQrev/T. The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible. In contrast, an irreversible process increases the total entropy of the system and surroundings. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible, the total entropy increases, and the potential for maximum work to be done during the process is lost. Carnot cycle The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, which is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. 
In a Carnot cycle, heat QH is transferred from a hot reservoir to a working gas at the constant temperature TH during the isothermal expansion stage, and heat QC is transferred from the working gas to a cold reservoir at the constant temperature TC during the isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce work if and only if there is a temperature difference between the reservoirs. Originally, Carnot did not distinguish between the heats QH and QC, as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of heat QH is greater than the magnitude of heat QC. Through the efforts of Clausius and Kelvin, the work W done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat absorbed by a working body of the engine during isothermal expansion: W = (1 − TC/TH)·QH. To derive the Carnot efficiency Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale. It is known that the work W produced by an engine over a cycle equals the net heat absorbed over a cycle. Thus, with the sign convention for heat Q transferred in a thermodynamic process (Q > 0 for absorption and Q < 0 for dissipation) we get: W = QH − QC. Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between the work and the net heat would be conserved, rather than the net heat itself. This means there exists a state function U with a change of dU = δQ − δW. It is called the internal energy and forms a central concept for the first law of thermodynamics. Finally, comparison of both representations of the work output in a Carnot cycle gives us: QH/TH − QC/TC = 0. Similarly to the derivation of internal energy, this equality implies the existence of a state function S with a change of dS = δQ/T which is conserved over an entire cycle. Clausius called this state function entropy. In addition, the total change of entropy in both thermal reservoirs over the Carnot cycle is zero too, since the inversion of the heat transfer direction means a sign inversion for the heat transferred during the isothermal stages: ΔSr,H + ΔSr,C = 0. Here we denote the entropy change of a thermal reservoir by ΔSr,i = −Qi/Ti, where Qi is the heat transferred to the working body at the reservoir with temperature Ti, and i is either H for the hot reservoir or C for the cold one. If we consider a heat engine which is less effective than the Carnot cycle (i.e., the work produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by the Carnot efficiency as: W < (1 − TC/TH)·QH. Substitution of the work as the net heat QH − QC into this inequality gives us: QH/TH − QC/TC < 0, or in terms of the entropy change: ΔSr,H + ΔSr,C > 0. A Carnot cycle and an entropy as shown above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as an Otto, Diesel or Brayton cycle, could be analysed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., a heat engine) that is claimed to produce an efficiency greater than that of Carnot is not viable, due to violation of the second law of thermodynamics. 
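The entropy bookkeeping in the Carnot analysis above is easy to verify numerically. The Python sketch below uses arbitrary example values for the reservoir temperatures and the absorbed heat (they are assumptions of this illustration, not data from the text) and checks that the reservoir entropy changes cancel for a reversible engine and sum to a positive value for a less efficient one:

```python
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Maximum efficiency of a heat engine between two reservoirs (in kelvin)."""
    return 1.0 - t_cold / t_hot

T_H, T_C = 500.0, 300.0  # reservoir temperatures in kelvin (example values)
Q_H = 1000.0             # heat absorbed from the hot reservoir, in joules

# Reversible engine: the work output saturates the Carnot bound.
W_rev = carnot_efficiency(T_H, T_C) * Q_H
Q_C_rev = Q_H - W_rev                  # heat rejected to the cold reservoir
dS_total = -Q_H / T_H + Q_C_rev / T_C  # sum of reservoir entropy changes
print(W_rev, dS_total)                 # 400.0 J, 0.0 J/K

# Irreversible engine: less work out, more heat rejected, net entropy rises.
W_irr = 0.5 * W_rev
Q_C_irr = Q_H - W_irr
print(-Q_H / T_H + Q_C_irr / T_C)      # ~0.667 J/K > 0
```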
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, descriptions of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics. Classical thermodynamics The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system. While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, the entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and a closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed also in open systems, irreversible thermodynamic processes may occur. According to the Clausius equality, for a reversible cyclic thermodynamic process: ∮ δQrev/T = 0, which means the line integral ∫ δQrev/T is path-independent. Thus we can define a state function S, called entropy: dS = δQrev/T. Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI). To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings is different, as is its entropy change. We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at absolute zero have an entropy S = 0. From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up an energy ΔE and its entropy falls by ΔS, a quantity at least TR·ΔS of that energy must be given up to the system's surroundings as heat, where TR is the temperature of the system's external surroundings; otherwise the process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen in standard conditions is well-defined). 
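Since an entropy difference is obtained by integrating δQrev/T along any reversible path, a concrete evaluation is straightforward. As a sketch, assuming a sample of liquid water with an approximately constant specific heat (the mass, heat capacity, and temperatures below are illustrative values, not taken from the text), the closed-form result m·c·ln(T2/T1) for heating at constant pressure can be checked against a direct numerical integration:

```python
import math

m = 1.0       # mass of water in kg (example value)
c_p = 4186.0  # specific heat of liquid water, J/(kg*K), taken as constant
T1, T2 = 293.15, 353.15  # heat reversibly from 20 C to 80 C

# Closed form: dS = m*c_p*dT/T integrates to m*c_p*ln(T2/T1).
dS_exact = m * c_p * math.log(T2 / T1)

# Numerical check: accumulate delta_q / T over many small reversible steps.
steps = 100_000
dT = (T2 - T1) / steps
dS_numeric = sum(m * c_p * dT / (T1 + (i + 0.5) * dT) for i in range(steps))

print(dS_exact, dS_numeric)  # both ~779.5 J/K
```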
Statistical mechanics The statistical definition was developed by Ludwig Boltzmann in the 1870s by analysing the statistical behaviour of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor, known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature. The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system, including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant. The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property, either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1). Specifically, entropy is a logarithmic measure for a system with a number of states, each with a probability pi of being occupied (usually given by the Boltzmann distribution): S = −kB Σi pi ln pi, where kB is the Boltzmann constant and the summation is performed over all possible microstates of the system. In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied: S = −kB ⟨ln p⟩. This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In the general case the expression is: S = −kB Tr(ρ̂ ln ρ̂), where ρ̂ is a density matrix, Tr is the trace operator and ln is a matrix logarithm. Density matrix formalism is not required if the system is in thermal equilibrium, so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes it can be taken as the fundamental definition of entropy since all other formulae for S can be derived from it, but not vice versa. 
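The statistical formula translates directly into code. The following minimal Python sketch (the example distributions are arbitrary) evaluates S = −kB Σ pi ln pi and confirms that a uniform distribution over Ω microstates reproduces the Boltzmann form kB ln Ω:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def gibbs_entropy(probabilities: list) -> float:
    """S = -k_B * sum(p * ln p), skipping zero-probability states."""
    return -K_B * sum(p * math.log(p) for p in probabilities if p > 0.0)

# Uniform distribution over 4 microstates: S equals k_B * ln(4).
print(gibbs_entropy([0.25] * 4), K_B * math.log(4))

# A peaked distribution has lower entropy than the uniform one.
print(gibbs_entropy([0.7, 0.1, 0.1, 0.1]))
```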
In what has been called the fundamental postulate in statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability pi = 1/Ω, where Ω is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in thermodynamic equilibrium. Then, in the case of an isolated system, the previous formula reduces to: S = kB ln Ω. In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble. The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model. The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using variables U, V, W and observer B using variables U, V, W, X. If observer B changes variable X, then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable X and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during an experiment. Entropy can also be defined for any Markov process with reversible dynamics and the detailed balance property. In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics. Entropy of a system Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state. In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. 
The entropy of the thermodynamic system is a measure of how far the equalisation has progressed. Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do. Unlike many other functions of state, entropy cannot be directly observed but must be calculated. The absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy, and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds. One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine. A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results from the change in available volume per particle with mixing. Equivalence of definitions Proofs of equivalence between the entropy in statistical mechanics – the Gibbs entropy formula S = −kB Σi pi ln pi – and the entropy in classical thermodynamics, dS = δQrev/T, together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalised Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average U = ⟨E⟩. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution. 
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under a certain set of postulates. Second law of thermodynamics The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient. It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of the entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics. In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T absorbing an infinitesimal amount of heat δq in a reversible way is given by δq/T. More explicitly, an energy TR·S is not available to do useful work, where TR is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy. Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely. The applicability of the second law of thermodynamics is limited to systems in or sufficiently near an equilibrium state, so that they have a defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that entropy density is locally defined as an intensive quantity. For such systems, there may apply a principle of maximum time rate of entropy production. It states that such a system may evolve to a steady state that maximises its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state. Applications The fundamental thermodynamic relation The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy to changes in the entropy and the external parameters. 
This relation is known as the fundamental thermodynamic relation. If external pressure $P$ bears on the volume $V$ as the only external parameter, this relation is: $dU = T\,dS - P\,dV$. Since both internal energy and entropy are monotonic functions of temperature $T$, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist). The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities. Entropy in chemical thermodynamics Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system — the combination of a subsystem under study and its surroundings — increases during all spontaneous chemical and physical processes. The Clausius equation of $\delta q_\mathrm{rev}/T = \Delta S$ introduces the measurement of entropy change, $\Delta S$, which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems — always from the hotter body to the cooler one spontaneously. Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol−1⋅K−1. Thus, when one mole of substance at about $0\ \mathrm{K}$ is warmed by its surroundings to $298\ \mathrm{K}$, the sum of the incremental values of $q_\mathrm{rev}/T$ constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at $298\ \mathrm{K}$. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture. Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, $\Delta S$ must be incorporated in an expression that includes both the system and its surroundings: $\Delta S_\mathrm{universe} = \Delta S_\mathrm{surroundings} + \Delta S_\mathrm{system}$. Via additional steps this expression becomes the equation of Gibbs free energy change for reactants and products in the system at constant pressure $P$ and temperature $T$: $\Delta G = \Delta H - T\,\Delta S$, where $\Delta H$ is the enthalpy change and $\Delta S$ is the entropy change. World's technological capacity to store and communicate entropic information A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information, normalised on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimate that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007.
The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007. Entropy balance equation for open systems In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, flow of heat $\dot{Q}$, flow of shaft work $\dot{W}_\mathrm{S}$ and pressure-volume work $P\,dV/dt$ across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer $\dot{Q}/T$, where $T$ is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system. To derive a generalised entropy balance equation, we start with the general balance equation for the change in any extensive quantity $\Theta$ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that $d\Theta/dt$, i.e. the rate of change of $\Theta$ in the system, equals the rate at which $\Theta$ enters the system at the boundaries, minus the rate at which $\Theta$ leaves the system across the system boundaries, plus the rate at which $\Theta$ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time $t$ of the extensive quantity entropy $S$, the entropy balance equation is: $\frac{dS}{dt} = \sum_{k=1}^{K} \dot{M}_k \hat{S}_k + \frac{\dot{Q}}{T} + \dot{S}_\mathrm{gen}$, where $\sum_{k=1}^{K} \dot{M}_k \hat{S}_k$ is the net rate of entropy flow due to the flows of mass into and out of the system with entropy per unit mass $\hat{S}_k$, $\dot{Q}/T$ is the rate of entropy flow due to the flow of heat across the system boundary and $\dot{S}_\mathrm{gen}$ is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity. In case of multiple heat flows the term $\dot{Q}/T$ is replaced by $\sum_j \dot{Q}_j/T_j$, where $\dot{Q}_j$ is the heat flow through the $j$-th port into the system and $T_j$ is the temperature at the $j$-th port. The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term $\dot{S}_\mathrm{gen}$ is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation" since it specifies that $\dot{S}_\mathrm{gen} \ge 0$, with zero for a reversible process and positive values for an irreversible one. Entropy change formulas for simple processes For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas. Isothermal expansion or compression of an ideal gas For the expansion (or compression) of an ideal gas from an initial volume $V_0$ and pressure $P_0$ to a final volume $V$ and pressure $P$ at any constant temperature, the change in entropy is given by: $\Delta S = nR\ln\frac{V}{V_0} = -nR\ln\frac{P}{P_0}$. Here $n$ is the amount of gas (in moles) and $R$ is the ideal gas constant.
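A short numerical check of the isothermal formula above; this is a hedged sketch, with the helper name and values chosen for illustration only:

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol K)

def isothermal_entropy_change(n, V0, V):
    """Delta S = n R ln(V / V0) for an ideal gas at constant temperature."""
    return n * R * math.log(V / V0)

# Doubling the volume of one mole of ideal gas:
print(isothermal_entropy_change(1.0, 1.0, 2.0))   # ~ +5.76 J/K
# The pressure form gives the same number, since P/P0 = V0/V at constant T:
print(-1.0 * R * math.log(0.5))                   # -nR ln(P/P0) with P = P0/2
```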
These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant. Cooling and heating For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature $T_0$ to a final temperature $T$, the entropy change is: $\Delta S = nC_P\ln\frac{T}{T_0}$, provided that the constant-pressure molar heat capacity (or specific heat) $C_P$ is constant and that no phase transition occurs in this temperature interval. Similarly at constant volume, the entropy change is: $\Delta S = nC_V\ln\frac{T}{T_0}$, where the constant-volume molar heat capacity $C_V$ is constant and there is no phase change. At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply. Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is: $\Delta S = nC_V\ln\frac{T}{T_0} + nR\ln\frac{V}{V_0}$. Similarly if the temperature and pressure of an ideal gas both vary: $\Delta S = nC_P\ln\frac{T}{T_0} - nR\ln\frac{P}{P_0}$. Phase transitions Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point $T_\mathrm{m}$, the entropy of fusion is: $\Delta S_\mathrm{fus} = \frac{\Delta H_\mathrm{fus}}{T_\mathrm{m}}$. Similarly, for vaporisation of a liquid to a gas at the boiling point $T_\mathrm{b}$, the entropy of vaporisation is: $\Delta S_\mathrm{vap} = \frac{\Delta H_\mathrm{vap}}{T_\mathrm{b}}$. Approaches to understanding entropy As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid. Standard textbook definitions The following is a list of additional definitions of entropy from a collection of textbooks: a measure of energy dispersal at a specific temperature; a measure of disorder in the universe or of the availability of the energy in a system to do work; a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work. In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Order and disorder Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments.
He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measures of the total amount of "disorder" and "order" in the system are given by $\text{Disorder} = C_D/C_I$ and $\text{Order} = 1 - C_O/C_I$. Here, $C_D$ is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, $C_I$ is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and $C_O$ is the "order" capacity of the system. Energy dispersal The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantised energy levels. Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both". Relating entropy to energy usefulness It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced. As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorised to lead to the heat death of the universe. Entropy and adiabatic accessibility A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states $X_0$ and $X_1$ such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state $X$ is defined as the largest number $\lambda$ such that $X$ is adiabatically accessible from a composite state consisting of an amount $\lambda$ in the state $X_1$ and a complementary amount, $(1-\lambda)$, in the state $X_0$.
A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling. Entropy in quantum mechanics In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":where is the density matrix, is the trace operator and is the Boltzmann constant. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities :i.e. in such a basis the density matrix is diagonal. Von Neumann established a rigorous mathematical framework for quantum mechanics with his work . He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain. Information theory When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities so that:where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits). In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message. Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is . The Shannon entropy (in nats) is:and if entropy is measured in units of per nat, then the entropy is given by:which is the Boltzmann entropy formula, where is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the function of information theory and using Shannon's other term, "uncertainty", instead. Measurement The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with constant number of particles and constant volume , and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat :The resulting relation describes how entropy changes when a small amount of energy is introduced into the system at a certain temperature . 
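This relation can be made concrete numerically: integrating tabulated heat-capacity data as $\Delta S = \int (C/T)\,dT$ yields the entropy change. The sketch below uses hypothetical data and the trapezoidal rule; the function name and the stand-in heat-capacity curve are illustrative, not measurements of any specific substance:

```python
import numpy as np

def calorimetric_entropy(T, C):
    """Integrate dS = C(T)/T dT over tabulated heat-capacity data (trapezoidal rule)."""
    T = np.asarray(T, dtype=float)
    C = np.asarray(C, dtype=float)
    return np.trapz(C / T, T)

# Hypothetical heat-capacity table from near absolute zero up to 298 K:
T = np.linspace(5.0, 298.0, 200)   # K
C = 0.1 * T                        # stand-in linear C(T) in J/(mol K); real data are measured
print(calorimetric_entropy(T, C))  # ~29.3 J/(mol K) for this made-up C(T)
```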
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allow the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy. Interdisciplinary applications Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution. Philosophy and theoretical physics Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions. Biology Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimisation. Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species. Cosmology Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source. If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation). The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium.
Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult. Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe. Economics Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics. In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position. See also Boltzmann entropy Brownian ratchet Configuration entropy Conformational entropy Entropic explosion Entropic force Entropic value at risk Entropy and life Entropy unit Free entropy Harmonic entropy Info-metrics Negentropy (negative entropy) Phase space Principle of maximum entropy Residual entropy Thermodynamic potential Notes References Further reading Lambert, Frank L.; Sharp, Kim (2019). Entropy and the Tao of Counting: A Brief Introduction to Statistical Mechanics and the Second Law of Thermodynamics (SpringerBriefs in Physics). Springer Nature. . Spirax-Sarco Limited, Entropy – A Basic Understanding A primer on entropy tables for steam engineering External links "Entropy" at Scholarpedia Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008 Entropy and the Second Law of Thermodynamics – an A-level physics lecture with 'derivation' of entropy based on Carnot cycle Khan Academy: entropy lectures, part of Chemistry playlist Entropy Intuition More on Entropy Proof: S (or Entropy) is a valid state variable Reconciling Thermodynamic and State Definitions of Entropy Thermodynamic Entropy Definition Clarification The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013. The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200) Physical quantities Philosophy of thermal and statistical physics State functions Asymmetry Extensive quantities
Entropy
[ "Physics", "Chemistry", "Mathematics" ]
10,892
[ "State functions", "Physical phenomena", "Thermodynamic properties", "Philosophy of thermal and statistical physics", "Physical quantities", "Quantity", "Chemical quantities", "Extensive quantities", "Entropy", "Thermodynamics", "Asymmetry", "Statistical mechanics", "Wikipedia categories nam...
9,908
https://en.wikipedia.org/wiki/Equation%20of%20state
In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars. Overview At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. An example of an equation of state that correlates densities of gases and liquids to temperatures and pressures is the ideal gas law, which is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid. The general form of an equation of state may be written as $f(p, V, T) = 0$, where $p$ is the pressure, $V$ the volume, and $T$ the temperature of the system. Other variables may also be used in that form. It is directly related to the Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system. An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology. Equations of state are applied in many fields such as process engineering and the petroleum industry as well as the pharmaceutical industry. Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero. $n$, number of moles of a substance; $V_m = V/n$, molar volume, the volume of 1 mole of gas or liquid; $R$, ideal gas constant ≈ 8.3144621 J/(mol·K); $p_c$, pressure at the critical point; $V_c$, molar volume at the critical point; $T_c$, absolute temperature at the critical point. Historical background Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as $pV = \mathrm{constant}$. The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676. In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law.
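The inverse pressure–volume relationship Boyle observed can be checked with a few lines of Python; the numbers here are illustrative only:

```python
# Boyle's law: at fixed temperature, p * V is constant for a trapped quantity of gas.
p1, V1 = 101_325.0, 0.010   # Pa, m^3 (illustrative starting state)
V2 = 0.004                  # compress the trapped gas to a smaller volume
p2 = p1 * V1 / V2           # from p1 * V1 = p2 * V2
print(p2)                   # ~253 kPa: smaller volume, proportionally higher pressure
```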
Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature: $\frac{V_1}{T_1} = \frac{V_2}{T_2}$. Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for $n$ species as $p_\mathrm{total} = p_1 + p_2 + \cdots + p_n = \sum_{i=1}^{n} p_i$. In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as $pV_m = R(T_C + 267)$ (with temperature expressed in degrees Celsius), where $R$ is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with $0\,^{\circ}\mathrm{C} = 273.15\ \mathrm{K}$, giving $pV_m = R(T_C + 273.15)$. In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong. The van der Waals equation of state can be written as $\left(p + \frac{a}{V_m^2}\right)\left(V_m - b\right) = RT$, where $a$ is a parameter describing the attractive energy between particles and $b$ is a parameter describing the volume of the particles. Ideal gas law Classical ideal gas law The classical ideal gas law may be written $pV = nRT$. In the form shown above, the equation of state is thus $f(p, V, T) = pV - nRT = 0$. If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows: $p = \rho(\gamma - 1)e$, where $\rho$ is the number density of the gas (number of atoms/molecules per unit volume), $\gamma$ is the (constant) adiabatic index (ratio of specific heats), $e$ is the internal energy per unit mass (the "specific internal energy"), $c_v$ is the specific heat capacity at constant volume, and $c_p$ is the specific heat capacity at constant pressure. Quantum ideal gas law Since for atomic and molecular gases, the classical ideal gas law is well suited in most cases, let us describe the equation of state for elementary particles with mass $m$ and spin $s$ that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with $N$ particles occupying a volume $V$ with temperature $T$ and pressure $p$ is given by where $k_\mathrm{B}$ is the Boltzmann constant and the chemical potential $\mu$ is given by the following implicit function In the limiting case where , this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit reduces to With a fixed number density , decreasing the temperature causes, in a Fermi gas, an increase in the value of pressure from its classical value, implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects, not because of actual interactions between particles, since in an ideal gas interactional forces are neglected), and, in a Bose gas, a decrease in pressure from its classical value, implying an effective attraction. The quantum nature of this equation is in its dependence on $s$ and $\hbar$. Cubic equations of state Cubic equations of state are called such because they can be rewritten as a cubic function of $V_m$. Cubic equations of state originated from the van der Waals equation of state. Hence, all cubic equations of state can be considered 'modified van der Waals equations of state'. There is a very large number of such cubic equations of state.
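As a sketch of how the van der Waals equation corrects the ideal gas law, the snippet below evaluates both for carbon dioxide; the $a$ and $b$ constants are approximate literature-style values quoted here for illustration only:

```python
R = 8.314462618  # J/(mol K)

def vdw_pressure(Vm, T, a, b):
    """van der Waals equation solved for pressure: p = RT/(Vm - b) - a/Vm**2."""
    return R * T / (Vm - b) - a / Vm**2

# Approximate van der Waals constants for CO2 (illustrative):
a, b = 0.364, 4.27e-5             # Pa m^6/mol^2 and m^3/mol
Vm, T = 1.0e-3, 300.0             # molar volume in m^3/mol, temperature in K
print(vdw_pressure(Vm, T, a, b))  # ~2.24 MPa
print(R * T / Vm)                 # ideal-gas prediction, ~2.49 MPa
```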
For process engineering, cubic equations of state are today still highly relevant, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state. Virial equations of state Virial equation of state Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics: $\frac{pV_m}{RT} = A + \frac{B}{V_m} + \frac{C}{V_m^2} + \cdots$. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. $A$ is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient $B$ corresponds to interactions between pairs of molecules, $C$ to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients $B$, $C$, $D$, etc. are functions of temperature only. The BWR equation of state The BWR equation of state is $p = \rho RT + \left(B_0 RT - A_0 - \frac{C_0}{T^2}\right)\rho^2 + (bRT - a)\rho^3 + a\alpha\rho^6 + \frac{c\rho^3}{T^2}\left(1 + \gamma\rho^2\right)\exp(-\gamma\rho^2)$, where $p$ is pressure and $\rho$ is molar density. Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available. The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written in virial form. Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered. The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state. Physically based equations of state There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature and density (and for mixtures additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid. Perturbation theory-based models Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker–Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory. Statistical associating fluid theory (SAFT) An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength).
The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al. Multiparameter equations of state Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can usually be applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density: $a(T, \rho) = a^{\mathrm{o}}(T, \rho) + a^{\mathrm{r}}(T, \rho)$. The reduced density and reduced temperature are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist, as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times. One example of such an equation of state is the form proposed by Span and Wagner. This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms. List of further equations of state Stiffened equation of state When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used: $p = \rho(\gamma - 1)e - \gamma p^0$, where $e$ is the internal energy per unit mass, $\gamma$ is an empirically determined constant typically taken to be about 6.1, and $p^0$ is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres). The equation is stated in this form because the speed of sound in water is given by $c^2 = \gamma\left(p + p^0\right)/\rho$. Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa). This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
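A minimal sketch of the stiffened equation of state as reconstructed above; the function names are ad hoc and the constants are the order-of-magnitude values quoted in this section (fitted parameter sets in the literature differ):

```python
def stiffened_pressure(rho, e, gamma=6.1, p0=2.0e9):
    """Stiffened EOS: p = rho * (gamma - 1) * e - gamma * p0."""
    return rho * (gamma - 1.0) * e - gamma * p0

def stiffened_sound_speed(p, rho, gamma=6.1, p0=2.0e9):
    """Speed of sound from the same model: c = sqrt(gamma * (p + p0) / rho)."""
    return (gamma * (p + p0) / rho) ** 0.5

# With these constants the p0 term dominates at atmospheric pressure, which is
# the sense in which water acts like a gas already under roughly 2 GPa.
print(stiffened_sound_speed(101_325.0, 1000.0))  # a few km/s with these illustrative constants
```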
Morse oscillator equation of state An equation of state for the Morse oscillator has been derived, and it has the following form: where the first-order virial parameter depends on the temperature, the second-order virial parameter of the Morse oscillator depends on the parameters of the Morse oscillator in addition to the absolute temperature, and the fractional volume of the system also enters. Ultrarelativistic equation of state An ultrarelativistic fluid has equation of state $p = \rho_m c_s^2$, where $p$ is the pressure, $\rho_m$ is the mass density, and $c_s$ is the speed of sound. Ideal Bose equation of state The equation of state for an ideal Bose gas is where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form. Jones–Wilkins–Lee equation of state for explosives (JWL equation) The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives. The ratio $V = \rho_e/\rho$ is defined by using $\rho_e$, which is the density of the explosive (solid part), and $\rho$, which is the density of the detonation products. The parameters $A$, $B$, $R_1$, $R_2$, and $\omega$ are given by several references. In addition, the initial density (solid part) $\rho_0$, speed of detonation $V_D$, Chapman–Jouguet pressure $P_{CJ}$ and the chemical energy per unit volume of the explosive $E_0$ are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below. Others Tait equation for water and other liquids. Several equations are referred to as the Tait equation. Murnaghan equation of state Birch–Murnaghan equation of state Stacey–Brennan–Irvine equation of state Modified Rydberg equation of state Adapted polynomial equation of state Johnson–Holmquist equation of state Mie–Grüneisen equation of state Anton-Schmidt equation of state State-transition equation See also Gas laws Departure function Table of thermodynamic equations Real gas Cluster expansion Polytrope References External links Equations of physics Engineering thermodynamics Mechanical engineering Fluid mechanics Thermodynamic models
Equation of state
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,273
[ "Applied and interdisciplinary physics", "Equations of physics", "Thermodynamic models", "Engineering thermodynamics", "Statistical mechanics", "Mathematical objects", "Equations", "Civil engineering", "Thermodynamics", "Mechanical engineering", "Equations of state", "Fluid mechanics" ]
9,924
https://en.wikipedia.org/wiki/Electronic%20mixer
An electronic mixer is a device that combines two or more electrical or electronic signals into one or two composite output signals. There are two basic circuits that both use the term mixer, but they are very different types of circuits: additive mixers and multiplicative mixers. Additive mixers are also known as analog adders to distinguish them from the related digital adder circuits. Simple additive mixers use Kirchhoff's circuit laws to add the currents of two or more signals together, and this terminology ("mixer") is only used in the realm of audio electronics, where audio mixers are used to add together audio signals such as voice signals, music signals, and sound effects. Multiplicative mixers multiply together two time-varying input signals instantaneously (instant-by-instant). If the two input signals are both sinusoids of specified frequencies f1 and f2, then the output of the mixer will contain two new sinusoids that have the sum frequency f1 + f2 and the difference frequency |f1 - f2|. Any nonlinear electronic block driven by two signals with frequencies f1 and f2 would generate intermodulation (mixing) products. A multiplier (which is a nonlinear device) will ideally generate only the sum and difference frequencies, whereas an arbitrary nonlinear block will also generate signals at 2·f1-3·f2, etc. Therefore, normal nonlinear amplifiers or just single diodes have been used as mixers, instead of a more complex multiplier. A multiplier usually has the advantage of rejecting – at least partly – undesired higher-order intermodulations, and of larger conversion gain. Additive mixers Additive mixers add two or more signals, giving out a composite signal that contains the frequency components of each of the source signals. The simplest additive mixers are resistor networks, and thus purely passive, while more complex matrix mixers employ active components such as buffer amplifiers for impedance matching and better isolation. Multiplicative mixers An ideal multiplicative mixer produces an output signal equal to the product of the two input signals. In communications, a multiplicative mixer is often used together with an oscillator to modulate signal frequencies. A multiplicative mixer can be coupled with a filter to either up-convert or down-convert an input signal frequency, but they are more commonly used to down-convert to a lower frequency to allow for simpler filter designs, as done in superheterodyne receivers. In many typical circuits, the single output signal actually contains multiple waveforms, namely those at the sum and difference of the two input frequencies and harmonic waveforms. The output signal may be obtained by removing the other signal components with a filter. Mathematical treatment The received signal can be represented as $E_\mathrm{sig}\cos(\omega_\mathrm{sig}t)$ and that of the local oscillator can be represented as $E_\mathrm{LO}\cos(\omega_\mathrm{LO}t)$. For simplicity, assume that the output $I$ of the detector is proportional to the square of the amplitude: $I \propto \left(E_\mathrm{sig}\cos(\omega_\mathrm{sig}t) + E_\mathrm{LO}\cos(\omega_\mathrm{LO}t)\right)^2$. The output has high frequency ($2\omega_\mathrm{sig}$, $2\omega_\mathrm{LO}$ and $\omega_\mathrm{sig}+\omega_\mathrm{LO}$) and constant components. In heterodyne detection, the high frequency components and usually the constant components are filtered out, leaving the intermediate (beat) frequency at $\omega_\mathrm{sig}-\omega_\mathrm{LO}$. The amplitude of this last component is proportional to the amplitude of the signal radiation. With appropriate signal analysis the phase of the signal can be recovered as well.
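The sum and difference products of an ideal multiplier can be demonstrated numerically; the sketch below (frequencies and sample rate are arbitrary choices) multiplies two sinusoids and inspects the spectrum:

```python
import numpy as np

fs = 10_000                        # sample rate, Hz (arbitrary)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
f1, f2 = 1000.0, 1300.0            # input frequencies, Hz

mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)  # ideal multiplier

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)
print(freqs[spectrum > 0.25 * spectrum.max()])
# -> [ 300. 2300.]: only |f1 - f2| and f1 + f2 appear; f1 and f2 themselves are absent
```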
If $\omega_\mathrm{LO}$ is equal to $\omega_\mathrm{sig}$, then the beat component is a recovered version of the original signal, with the amplitude equal to the product of $E_\mathrm{sig}$ and $E_\mathrm{LO}$; that is, the received signal is amplified by mixing with the local oscillator. This is the basis for a direct conversion receiver. Implementations Multiplicative mixers have been implemented in many ways. The most popular are Gilbert cell mixers, diode mixers, diode ring mixers (ring modulation) and switching mixers. Diode mixers take advantage of the non-linearity of diode devices to produce the desired multiplication in the squared term. They are very inefficient, as most of the power output is in other unwanted terms which need filtering out. Inexpensive AM radios still use diode mixers. Electronic mixers are usually made with transistors and/or diodes arranged in a balanced circuit or even a double-balanced circuit. They are readily manufactured as monolithic integrated circuits or hybrid integrated circuits. They are designed for a wide variety of frequency ranges, and they are mass-produced to tight tolerances by the hundreds of thousands, making them relatively cheap. Double-balanced mixers are very widely used in microwave communications, satellite communications, ultrahigh frequency (UHF) communications transmitters, radio receivers, and radar systems. Gilbert cell mixers are an arrangement of transistors that multiplies the two signals. Switching mixers use arrays of field-effect transistors or vacuum tubes. These are used as electronic switches, to alternate the signal direction. They are controlled by the signal being mixed. They are especially popular with digitally controlled radios. Switching mixers pass more power and usually insert less distortion than Gilbert cell mixers. Analog circuits Audio mixing
Electronic mixer
[ "Engineering" ]
1,044
[ "Analog circuits", "Electronic engineering" ]
9,927
https://en.wikipedia.org/wiki/Endomembrane%20system
The endomembrane system is composed of the different membranes (endomembranes) that are suspended in the cytoplasm within a eukaryotic cell. These membranes divide the cell into functional and structural compartments, or organelles. In eukaryotes the organelles of the endomembrane system include: the nuclear membrane, the endoplasmic reticulum, the Golgi apparatus, lysosomes, vesicles, endosomes, and the plasma (cell) membrane, among others. The system is defined more accurately as the set of membranes that forms a single functional and developmental unit, either being connected directly, or exchanging material through vesicle transport. Importantly, the endomembrane system does not include the membranes of plastids or mitochondria, but might have evolved partially from the actions of the latter (see below). The nuclear membrane contains a lipid bilayer that encompasses the contents of the nucleus. The endoplasmic reticulum (ER) is a synthesis and transport organelle that branches into the cytoplasm in plant and animal cells. The Golgi apparatus is a series of multiple compartments where molecules are packaged for delivery to other cell components or for secretion from the cell. Vacuoles, which are found in both plant and animal cells (though much bigger in plant cells), are responsible for maintaining the shape and structure of the cell as well as storing waste products. A vesicle is a relatively small, membrane-enclosed sac that stores or transports substances. The cell membrane is a protective barrier that regulates what enters and leaves the cell. There is also an organelle known as the Spitzenkörper that is only found in fungi, and is connected with hyphal tip growth. In prokaryotes endomembranes are rare, although in many photosynthetic bacteria the plasma membrane is highly folded and most of the cell cytoplasm is filled with layers of light-gathering membrane. These light-gathering membranes may even form enclosed structures called chlorosomes in green sulfur bacteria. Another example is the complex "pepin" system of Thiomargarita species, especially T. magnifica. The organelles of the endomembrane system are related through direct contact or by the transfer of membrane segments as vesicles. Despite these relationships, the various membranes are not identical in structure and function. The thickness, molecular composition, and metabolic behavior of a membrane are not fixed; they may be modified several times during the membrane's life. One unifying characteristic the membranes share is a lipid bilayer, with proteins attached to either side or traversing them. History of the concept Most lipids are synthesized in yeast either in the endoplasmic reticulum, lipid particles, or the mitochondrion, with little or no lipid synthesis occurring in the plasma membrane or nuclear membrane. Sphingolipid biosynthesis begins in the endoplasmic reticulum, but is completed in the Golgi apparatus. The situation is similar in mammals, with the exception of the first few steps in ether lipid biosynthesis, which occur in peroxisomes. The various membranes that enclose the other subcellular organelles must therefore be constructed by transfer of lipids from these sites of synthesis. However, although it is clear that lipid transport is a central process in organelle biogenesis, the mechanisms by which lipids are transported through cells remain poorly understood.
The first proposal that the membranes within cells form a single system that exchanges material between its components was by Morré and Mollenhauer in 1974. This proposal was made as a way of explaining how the various lipid membranes are assembled in the cell, with these membranes being assembled through lipid flow from the sites of lipid synthesis. The idea of lipid flow through a continuous system of membranes and vesicles was an alternative to the various membranes being independent entities that are formed from transport of free lipid components, such as fatty acids and sterols, through the cytosol. Importantly, the transport of lipids through the cytosol and lipid flow through a continuous endomembrane system are not mutually exclusive processes and both may occur in cells. Components of the system Nuclear envelope The nuclear envelope surrounds the nucleus, separating its contents from the cytoplasm. It has two membranes, each a lipid bilayer with associated proteins. The outer nuclear membrane is continuous with the rough endoplasmic reticulum membrane, and like that structure, features ribosomes attached to the surface. The outer membrane is also continuous with the inner nuclear membrane since the two layers are fused together at numerous tiny holes called nuclear pores that perforate the nuclear envelope. These pores are about 120 nm in diameter and regulate the passage of molecules between the nucleus and cytoplasm, permitting some to pass through the membrane, but not others. Since the nuclear pores are located in an area of high traffic, they play an important role in cell physiology. The space between the outer and inner membranes is called the perinuclear space and is joined with the lumen of the rough ER. The nuclear envelope's structure is determined by a network of intermediate filaments (protein filaments). This network is organized into a mesh-like lining called the nuclear lamina, which binds to chromatin, integral membrane proteins, and other nuclear components along the inner surface of the nucleus. The nuclear lamina is thought to help materials inside the nucleus reach the nuclear pores and in the disintegration of the nuclear envelope during mitosis and its reassembly at the end of the process. The nuclear pores are highly efficient at selectively allowing the passage of materials to and from the nucleus, because the nuclear envelope has a considerable amount of traffic. RNA and ribosomal subunits must be continually transferred from the nucleus to the cytoplasm. Histones, gene regulatory proteins, DNA and RNA polymerases, and other substances essential for nuclear activities must be imported from the cytoplasm. The nuclear envelope of a typical mammalian cell contains 3000–4000 pore complexes. If the cell is synthesizing DNA each pore complex needs to transport about 100 histone molecules per minute. If the cell is growing rapidly, each complex also needs to transport about 6 newly assembled large and small ribosomal subunits per minute from the nucleus to the cytosol, where they are used to synthesize proteins. Endoplasmic reticulum The endoplasmic reticulum (ER) is a membranous synthesis and transport organelle that is an extension of the nuclear envelope. More than half the total membrane in eukaryotic cells is accounted for by the ER. The ER is made up of flattened sacs and branching tubules that are thought to interconnect, so that the ER membrane forms a continuous sheet enclosing a single internal space. 
This highly convoluted space is called the ER lumen and is also referred to as the ER cisternal space. The lumen takes up about ten percent of the entire cell volume. The endoplasmic reticulum membrane allows molecules to be selectively transferred between the lumen and the cytoplasm, and since it is connected to the nuclear envelope, it provides a channel between the nucleus and the cytoplasm. The ER has a central role in producing, processing, and transporting biochemical compounds for use inside and outside of the cell. Its membrane is the site of production of all the transmembrane proteins and lipids for many of the cell's organelles, including the ER itself, the Golgi apparatus, lysosomes, endosomes, secretory vesicles, and the plasma membrane. Furthermore, almost all of the proteins that will exit the cell, plus those destined for the lumen of the ER, Golgi apparatus, or lysosomes, are originally delivered to the ER lumen. Consequently, many of the proteins found in the cisternal space of the endoplasmic reticulum lumen are there only temporarily as they pass on their way to other locations. Other proteins, however, constantly remain in the lumen and are known as endoplasmic reticulum resident proteins. These special proteins contain a specialized retention signal made up of a specific sequence of amino acids that enables them to be retained by the organelle. An example of an important endoplasmic reticulum resident protein is the chaperone protein known as BiP which identifies other proteins that have been improperly built or processed and keeps them from being sent to their final destinations. The ER is involved in cotranslational sorting of proteins. A polypeptide which contains an ER signal sequence is recognised by the signal recognition particle which halts the production of the protein. The SRP transports the nascent protein to the ER membrane where it is released through a membrane channel and translation resumes. There are two distinct, though connected, regions of ER that differ in structure and function: smooth ER and rough ER. The rough endoplasmic reticulum is so named because the cytoplasmic surface is covered with ribosomes, giving it a bumpy appearance when viewed through an electron microscope. The smooth ER appears smooth since its cytoplasmic surface lacks ribosomes. Functions of the smooth ER In the great majority of cells, smooth ER regions are scarce and are often partly smooth and partly rough. They are sometimes called transitional ER because they contain ER exit sites from which transport vesicles carrying newly synthesized proteins and lipids bud off for transport to the Golgi apparatus. In certain specialized cells, however, the smooth ER is abundant and has additional functions. The smooth ER of these specialized cells functions in diverse metabolic processes, including synthesis of lipids, metabolism of carbohydrates, and detoxification of drugs and poisons. Enzymes of the smooth ER are vital to the synthesis of lipids, including oils, phospholipids, and steroids. Sex hormones of vertebrates and the steroid hormones secreted by the adrenal glands are among the steroids produced by the smooth ER in animal cells. The cells that synthesize these hormones are rich in smooth ER. Liver cells are another example of specialized cells that contain an abundance of smooth ER. These cells provide an example of the role of smooth ER in carbohydrate metabolism. Liver cells store carbohydrates in the form of glycogen. 
The breakdown of glycogen eventually leads to the release of glucose from the liver cells, which is important in the regulation of sugar concentration in the blood. However, the primary product of glycogen breakdown is glucose-1-phosphate. This is converted to glucose-6-phosphate and then an enzyme of the liver cell's smooth ER removes the phosphate from the glucose, so that it can then leave the cell. Enzymes of the smooth ER can also help detoxify drugs and poisons. Detoxification usually involves the addition of a hydroxyl group to a drug, making the drug more soluble and thus easier to purge from the body. One extensively studied detoxification reaction is carried out by the cytochrome P450 family of enzymes, which catalyze oxidation reactions on water-insoluble drugs or metabolites that would otherwise accumulate to toxic levels in cell membranes. In muscle cells, a specialized smooth ER (sarcoplasmic reticulum) forms a membranous compartment (cisternal space) into which calcium ions are pumped. When a muscle cell becomes stimulated by a nerve impulse, calcium goes back across this membrane into the cytosol and generates the contraction of the muscle cell. Functions of the rough ER Many types of cells export proteins produced by ribosomes attached to the rough ER. The ribosomes assemble amino acids into protein units, which are carried into the rough ER for further adjustments. These proteins may be either transmembrane proteins, which become embedded in the membrane of the endoplasmic reticulum, or water-soluble proteins, which are able to pass through the membrane into the lumen. Those that reach the inside of the endoplasmic reticulum are folded into the correct three-dimensional conformation. Chemicals, such as carbohydrates or sugars, are added, then the endoplasmic reticulum either transports the completed proteins, called secretory proteins, to areas of the cell where they are needed, or they are sent to the Golgi apparatus for further processing and modification. Once secretory proteins are formed, the ER membrane separates them from the proteins that will remain in the cytosol. Secretory proteins depart from the ER enfolded in the membranes of vesicles that bud like bubbles from the transitional ER. These vesicles in transit to another part of the cell are called transport vesicles. An alternative mechanism for transport of lipids and proteins out of the ER is through lipid transfer proteins at regions called membrane contact sites where the ER becomes closely and stably associated with the membranes of other organelles, such as the plasma membrane, Golgi or lysosomes. In addition to making secretory proteins, the rough ER makes membranes that grow in place from the addition of proteins and phospholipids. As polypeptides intended to be membrane proteins grow from the ribosomes, they are inserted into the ER membrane itself and are kept there by their hydrophobic portions. The rough ER also produces its own membrane phospholipids; enzymes built into the ER membrane assemble phospholipids. The ER membrane expands and can be transferred by transport vesicles to other components of the endomembrane system.
In muscle cells, a specialized smooth ER, the sarcoplasmic reticulum, forms a membranous compartment (cisternal space) into which calcium ions are pumped. When a muscle cell is stimulated by a nerve impulse, calcium ions rush back across this membrane into the cytosol and trigger the contraction of the muscle cell.

Functions of the rough ER

Many types of cells export proteins produced by ribosomes attached to the rough ER. The ribosomes assemble amino acids into protein units, which are carried into the rough ER for further adjustments. These proteins may be either transmembrane proteins, which become embedded in the membrane of the endoplasmic reticulum, or water-soluble proteins, which are able to pass through the membrane into the lumen. Those that reach the inside of the endoplasmic reticulum are folded into the correct three-dimensional conformation. Chemical groups, such as carbohydrates, are added, and the endoplasmic reticulum then either transports the completed proteins, called secretory proteins, to areas of the cell where they are needed or sends them to the Golgi apparatus for further processing and modification.

Once secretory proteins are formed, the ER membrane separates them from the proteins that will remain in the cytosol. Secretory proteins depart from the ER enfolded in the membranes of vesicles that bud like bubbles from the transitional ER. These vesicles in transit to another part of the cell are called transport vesicles. An alternative mechanism for transport of lipids and proteins out of the ER is through lipid transfer proteins at regions called membrane contact sites, where the ER becomes closely and stably associated with the membranes of other organelles, such as the plasma membrane, Golgi apparatus, or lysosomes.

In addition to making secretory proteins, the rough ER makes membranes that grow in place through the addition of proteins and phospholipids. As polypeptides intended to be membrane proteins grow from the ribosomes, they are inserted into the ER membrane itself and are kept there by their hydrophobic portions. The rough ER also produces its own membrane phospholipids; enzymes built into the ER membrane assemble them. The ER membrane expands and can be transferred by transport vesicles to other components of the endomembrane system.

Golgi apparatus

The Golgi apparatus (also known as the Golgi body and the Golgi complex) is composed of separate sacs called cisternae; its shape resembles a stack of pancakes. The number of these stacks varies with the specific function of the cell. The Golgi apparatus is used by the cell for further protein modification. The section of the Golgi apparatus that receives vesicles from the ER is known as the cis face and is usually near the ER. The opposite end of the Golgi apparatus is called the trans face; this is where modified compounds leave. The trans face usually faces the plasma membrane, to which most of the substances the Golgi apparatus modifies are sent.

Vesicles sent off by the ER containing proteins are further altered at the Golgi apparatus and then prepared for secretion from the cell or transport to other parts of the cell. Various things can happen to the proteins on their journey through the enzyme-covered space of the Golgi apparatus. The modification and synthesis of the carbohydrate portions of glycoproteins is common in protein processing. The Golgi apparatus removes and substitutes sugar monomers, producing a large variety of oligosaccharides. In addition to modifying proteins, the Golgi apparatus also manufactures macromolecules itself. In plant cells, it produces pectins and other polysaccharides needed for the plant's structure.

Once the modification process is completed, the Golgi apparatus sorts the products of its processing and sends them to various parts of the cell. Molecular identification labels or tags are added by the Golgi enzymes to help with this. Once sorted, the products are sent off by budding vesicles from the trans face.

Vacuoles

Vacuoles, like vesicles, are membrane-bound sacs within the cell. They are larger than vesicles, and their specific functions vary; the operations of vacuoles also differ between plant and animal cells.

In plant cells, vacuoles cover anywhere from 30% to 90% of the total cell volume. Most mature plant cells contain one large central vacuole enclosed by a membrane called the tonoplast. Vacuoles of plant cells act as storage compartments for the nutrients and waste of a cell. The solution in which these molecules are stored is called the cell sap, and the pigments that color the cell are sometimes located in the cell sap. Vacuoles can also increase the size of the cell, which elongates as water is added, and they control the turgor pressure (the osmotic pressure that keeps the cell wall from caving in).
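The order of magnitude of turgor pressure can be estimated with the van 't Hoff relation for the osmotic pressure of a dilute solution, \(\Pi = cRT\). The 0.3 M solute concentration below is an assumed, illustrative value for cell sap, not a figure from this article:

\[
\Pi = cRT \approx (300\ \mathrm{mol\,m^{-3}})(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K}) \approx 7.4 \times 10^{5}\ \mathrm{Pa} \approx 0.74\ \mathrm{MPa}
\]

That is roughly seven atmospheres, which is why a rigid cell wall is needed to contain the pressure.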
Like the lysosomes of animal cells, vacuoles have an acidic pH and contain hydrolytic enzymes. The pH of vacuoles enables them to perform homeostatic functions in the cell. For example, when the pH in the cell's environment drops, the H+ ions surging into the cytosol can be transferred to a vacuole in order to keep the cytosol's pH constant.

In animals, vacuoles serve in the processes of exocytosis and endocytosis. Endocytosis refers to substances being taken into the cell, whereas in exocytosis substances are moved from the cell into the extracellular space. Material to be taken in is surrounded by the plasma membrane and then transferred to a vacuole. There are two types of endocytosis: phagocytosis (cell eating) and pinocytosis (cell drinking). In phagocytosis, cells engulf large particles such as bacteria. Pinocytosis is the same process, except that the substances being ingested are in fluid form.

Vesicles

Vesicles are small membrane-enclosed transport units that can transfer molecules between different compartments. Most vesicles transfer the membranes assembled in the endoplasmic reticulum to the Golgi apparatus, and then from the Golgi apparatus to various locations. There are various types of vesicles, each with a different protein configuration, and most are formed from specific regions of membranes. When a vesicle buds off from a membrane, it carries specific proteins on its cytosolic surface. Each membrane a vesicle travels to carries a marker on its cytosolic surface that corresponds to the proteins on the vesicle traveling to it. Once the vesicle finds the membrane, the two fuse.

There are three well-known types of vesicles: clathrin-coated, COPI-coated, and COPII-coated vesicles. Each performs a different function in the cell. For example, clathrin-coated vesicles transport substances between the Golgi apparatus and the plasma membrane, while COPI- and COPII-coated vesicles are frequently used for transport between the ER and the Golgi apparatus.

Lysosomes

Lysosomes are organelles that contain hydrolytic enzymes used for intracellular digestion. The main functions of a lysosome are to process molecules taken in by the cell and to recycle worn-out cell parts. The enzymes inside lysosomes are acid hydrolases, which require an acidic environment for optimal performance. Lysosomes provide such an environment by maintaining a pH of 5.0 inside the organelle. If a lysosome were to rupture, the enzymes released would not be very active because of the cytosol's neutral pH. However, if numerous lysosomes leaked, the cell could be destroyed by autodigestion.
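The protective effect of this pH difference can be made concrete with a short calculation, taking a typical cytosolic pH of about 7.2 (a standard textbook value, not a figure stated in this article):

\[
\frac{[\mathrm{H^+}]_{\text{lysosome}}}{[\mathrm{H^+}]_{\text{cytosol}}} = \frac{10^{-5.0}}{10^{-7.2}} = 10^{2.2} \approx 160
\]

The lysosomal interior is thus roughly 160 times more acidic than the cytosol, so acid hydrolases that escape into the cytosol find themselves far from their optimal working conditions.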
Lysosomes carry out intracellular digestion, in a process called phagocytosis (from the Greek phagein, to eat, and kytos, vessel, referring here to the cell), by fusing with a vacuole and releasing their enzymes into it. Through this process, sugars, amino acids, and other monomers pass into the cytosol and become nutrients for the cell. Lysosomes also use their hydrolytic enzymes to recycle the cell's obsolete organelles in a process called autophagy: the lysosome engulfs another organelle and uses its enzymes to take apart the ingested material, and the resulting organic monomers are returned to the cytosol for reuse. The last function of a lysosome is to digest the cell itself through autolysis.

Spitzenkörper

The spitzenkörper is a component of the endomembrane system found only in fungi and is associated with hyphal tip growth. It is a phase-dark body composed of an aggregation of membrane-bound vesicles containing cell wall components; it serves as a point of assemblage and release of such components, intermediate between the Golgi apparatus and the cell membrane. The spitzenkörper is motile and generates new hyphal tip growth as it moves forward.

Plasma membrane

The plasma membrane is a phospholipid bilayer that separates the cell from its environment and regulates the transport of molecules and signals into and out of the cell. Embedded in the membrane are proteins that perform its functions. The plasma membrane is not a fixed or rigid structure; the molecules that compose it are capable of lateral movement. This movement and the multiple components of the membrane are why it is referred to as a fluid mosaic. Smaller molecules such as carbon dioxide, water, and oxygen can pass through the plasma membrane freely by diffusion or osmosis. Larger molecules needed by the cell are assisted across by proteins through active transport.

The plasma membrane of a cell has multiple functions. These include transporting nutrients into the cell, allowing waste to leave, preventing materials from entering the cell, keeping needed materials from leaving the cell, maintaining the pH of the cytosol, and preserving the osmotic pressure of the cytosol. Transport proteins that allow some materials to pass through but not others are used for these functions. Some of these proteins use ATP hydrolysis to pump materials against their concentration gradients; a rough estimate of the energy this requires is given at the end of this section.

In addition to these universal functions, the plasma membrane has a more specific role in multicellular organisms. Glycoproteins on the membrane assist the cell in recognizing other cells, in order to exchange metabolites and form tissues. Other proteins on the plasma membrane allow attachment to the cytoskeleton and the extracellular matrix, a function that maintains cell shape and fixes the location of membrane proteins. Enzymes that catalyze reactions are also found on the plasma membrane. Receptor proteins on the membrane have shapes that match specific chemical messengers, and their binding results in various cellular responses.
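As a rough, illustrative estimate (the 100-fold gradient and the body temperature of 310 K are assumed values, not figures from this article), the free energy needed to move one mole of an uncharged solute against a 100-fold concentration gradient is:

\[
\Delta G = RT \ln\frac{c_{\text{in}}}{c_{\text{out}}} = (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(310\ \mathrm{K})\,\ln 100 \approx 11.9\ \mathrm{kJ\,mol^{-1}}
\]

This is well within the roughly 50 kJ/mol commonly cited for ATP hydrolysis under cellular conditions, which is why a single ATP per transport cycle can sustain even steep gradients.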
Evolution

The origin of the endomembrane system is linked to the origin of eukaryotes themselves, and the origin of eukaryotes to the endosymbiotic origin of mitochondria. Many models have been put forward to explain the origin of the endomembrane system. The most recent concept suggests that the endomembrane system evolved from outer membrane vesicles (OMVs) that the endosymbiotic mitochondrion secreted and that became enclosed within infoldings of the host prokaryote (themselves a result of the ingestion of the endosymbiont). This OMV-based model for the origin of the endomembrane system is currently the one that requires the fewest novel inventions at eukaryote origin, and it explains the many connections of mitochondria with other compartments of the cell. Currently, this "inside-out" hypothesis (which states that the alphaproteobacteria, the ancestral mitochondria, were engulfed by blebs of an asgardarchaeon, and that the blebs later fused, leaving infoldings that would eventually become the endomembrane system) is favored over the "outside-in" one (which suggested that the endomembrane system arose from infoldings of the archaeal membrane).
[ "Chemistry" ]
4,953
[ "Membrane biology", "Molecular biology" ]