Dataset columns (type, min–max):
id             int64    39 – 79M
url            string   length 32 – 168
text           string   length 7 – 145k
source         string   length 2 – 105
categories     list     length 1 – 6
token_count    int64    3 – 32.2k
subcategories  list     length 0 – 27
17,140
https://en.wikipedia.org/wiki/Katal
The katal (symbol: kat) is that catalytic activity that will raise the rate of conversion by one mole per second in a specified assay system. It is a unit of the International System of Units (SI) used for quantifying the catalytic activity of enzymes (that is, measuring the enzymatic activity level in enzyme catalysis) and other catalysts. The unit "katal" is not attached to a specified measurement procedure or assay condition, but any given catalytic activity is: the value measured depends on experimental conditions that must be specified. Therefore, to define the quantity of a catalyst in katals, the catalysed rate of conversion (the rate of conversion in presence of the catalyst minus the rate of spontaneous conversion) of a defined chemical reaction is measured in moles per second. One katal of trypsin, for example, is that amount of trypsin which breaks one mole of peptide bonds in one second under the associated specified conditions. Definition One katal refers to an amount of enzyme that gives a catalysed rate of conversion of one mole per second. Because this is such a large unit for most enzymatic reactions, the nanokatal (nkat) is used in practice. The katal is not used to express the rate of a reaction; that is expressed in units of concentration per second, as moles per liter per second. Rather, the katal is used to express catalytic activity, which is a property of the catalyst. SI multiples History The General Conference on Weights and Measures and other international organizations recommend use of the katal. It replaces the non-SI enzyme unit of catalytic activity. The enzyme unit is still more commonly used than the katal, especially in biochemistry. The adoption of the katal has been slow. Origin The name "katal" has been used for decades. The first proposal to make it an SI unit came in 1978, and it became an official SI unit in 1999. The name comes from the Ancient Greek κατάλυσις (katalysis), meaning "dissolution"; the word "catalysis" itself is a Latinized form of the Greek word. References External links Unit "katal" for catalytic activity (IUPAC Technical Report) Pure Appl. Chem. Vol. 73, No. 6, pp. 927–931 (2001) SI derived units Units of catalytic activity Units of chemical measurement
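Because one katal is a very large unit for typical enzymatic reactions, practical values are usually reported in submultiples such as the microkatal (μkat) or nanokatal (nkat), and results quoted in the older enzyme unit (1 U = 1 μmol/min) are often converted to katals. A minimal Python sketch of these conversions; the function names are illustrative, not from any standard library:

```python
def activity_in_katal(moles_converted: float, seconds: float) -> float:
    """Catalytic activity in katal: moles of substrate converted per second."""
    return moles_converted / seconds

def enzyme_units_to_nanokatal(units: float) -> float:
    """Convert enzyme units (1 U = 1 umol/min) to nanokatal (1 nkat = 1 nmol/s)."""
    return units * 1_000.0 / 60.0   # 1 U = 1000 nmol / 60 s ≈ 16.67 nkat

# Example: an assay in which 3.0 mmol of substrate is converted in 60 s
activity = activity_in_katal(3.0e-3, 60.0)                  # 5.0e-05 kat
print(f"{activity * 1e6:.1f} ukat")                          # 50.0 ukat
print(f"1 U = {enzyme_units_to_nanokatal(1.0):.2f} nkat")    # 16.67 nkat
```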
Katal
[ "Chemistry", "Mathematics" ]
499
[ "Catalysis", "Units of catalytic activity", "Quantity", "Chemical quantities", "Units of chemical measurement", "Units of measurement" ]
17,304
https://en.wikipedia.org/wiki/Karl%20Ernst%20von%20Baer
Karl Ernst Ritter von Baer Edler von Huthorn (; – ) was a Baltic German scientist and explorer. Baer was a naturalist, biologist, geologist, meteorologist, geographer, and is considered a, or the, founding father of embryology. He was a member of the Russian Academy of Sciences, a co-founder of the Russian Geographical Society, and the first president of the Russian Entomological Society, making him one of the most distinguished Baltic German scientists. Life Karl Ernst von Baer was born into the Baltic German noble Baer family (et) in the Piep Manor (et), Jerwen County, Governorate of Estonia (in present-day Lääne-Viru County, Estonia), as a knight by birthright. His patrilineal ancestors were of Westphalian origin and originated in Osnabrück. He spent his early childhood at Lasila manor, Estonia. He was educated at the Knight and Cathedral School in Reval (Tallinn) and the Imperial University of Dorpat (Tartu). In 1812, during his tenure at the university, he was sent to Riga to aid the city after Napoleon's armies had laid siege to it. As he attempted to help the sick and wounded, he realized that his education at Dorpat had been inadequate, and upon his graduation, he notified his father that he would need to go abroad to "finish" his education. In his autobiography, his discontent with his education at Dorpat inspired him to write a lengthy appraisal of education in general, a summary that dominated the content of the book. After leaving Tartu, he continued his education in Berlin, Vienna, and Würzburg, where Ignaz Döllinger introduced him to the new field of embryology. In 1817, he became a professor at Königsberg University and full professor of zoology in 1821, and of anatomy in 1826. In 1829, he taught briefly in St Petersburg, but returned to Königsberg (Kaliningrad). In 1834, Baer moved back to St Petersburg and joined the St Petersburg Academy of Sciences, first in zoology (1834–46) and then in comparative anatomy and physiology (1846–62). His interests while there were anatomy, ichthyology, ethnography, anthropology, and geography. While embryology had kept his attention in Königsberg, then in Russia von Baer engaged in a great deal of field research, including the exploration of the island Novaya Zemlya. The last years of his life (1867–76) were spent in Dorpat, where he became a major critic of Charles Darwin. Contributions Embryology Von Baer studied the embryonic development of animals, discovering the blastula stage of development and the notochord. Together with Heinz Christian Pander and based on the work by Caspar Friedrich Wolff, he described the germ layer theory of development (ectoderm, mesoderm, and endoderm) as a principle in a variety of species, laying the foundation for comparative embryology in the book Über Entwickelungsgeschichte der Thiere (1828). In 1826, Baer discovered the mammalian ovum. The human ovum was first described by Edgar Allen in 1928. In 1827, he completed research Ovi Mammalium et Hominis genesi for St Petersburg's Academy of Science (published at Leipzig). In 1827 von Baer became the first person to observe human ova. Only in 1876 did Oscar Hertwig prove that fertilization is due to fusion of an egg and sperm cell. von Baer formulated what became known as Baer's laws of embryology: General characteristics of the group to which an embryo belongs develop before special characteristics. General structural relations are likewise formed before the most specific appear. 
The form of any given embryo does not converge upon other definite forms, but separates itself from them. The embryo of a higher animal form never resembles the adult of another animal form, such as one less evolved, but only its embryo. Permafrost research Baer was a genius scientist covering not only the topics of embryology and ethnology, he also was especially interested in the geography of the northern parts of Russia, and explored Novaya Zemlya in 1837. In these arctic environments, he was studying periglacial features, permafrost occurrences, and collecting biological specimens. Other travels led him to subarctic regions of the North Cape and Lapland, but also to the Caspian Sea. He was one of the founders of the Russian Geographical Society. Thanks to Baer's research expeditions, the scientific investigation of permafrost began in Russia. Baer recorded the importance of permafrost research even before 1837 when observing in detail the geothermal gradient from a 116.7 m deep shaft in Yakutsk. At the end of the 1830s, he recommended sending expeditions to explore permafrost in Siberia and suggested Alexander von Middendorff as leader. Baer's expedition instructions written for Middendorff comprised over 200 pages. Baer summarized his knowledge in 1842/43 in a print-ready typescript. The German title is „Materialien zur Kenntniss des unvergänglichen Boden-Eises in Sibirien“ (=materials for the knowledge of the perennial ground ice in Siberia). This world's first permafrost textbook was conceived as a complete work for printing. But it remained lost for more than 150 years. However, from 1838 onwards, Baer published a larger number of small publications on permafrost. Numerous of Baer's papers on permafrost were already published as early as 1837 and 1838. Well known was his paper "On the Ground Ice or Frozen Soil of Siberia", published in the Journal of the Royal Geographical Society of London (1838, pp. 210–213) and reprinted 1839 in the American Journal of Sciences and Arts by S. Silliman. There are many other publications and small notes on permafrost by Baer, as shown in the Karl Ernst von Baer museum in Tartu (Estonia), now part of the Estonian University of Life Sciences. There are quite a number of studies in Russian about the origin of permafrost research. Russian authors usually relate with it the name Alexander von Middendorff (1815–1894), as he did much scientific work during the years 1842–1845 concerning permafrost on Taimyr Peninsula and in East-Siberia. However, Russian scientists during the 1940s also realized, that it was K. E. Baer who initiated this expedition and that the origin of scientific permafrost research must be fixed with Baer's thorough earlier scientific work. They even believed, that the scepticism about the permafrost findings and publications of Middendorff would not have risen, if Baer's original "materials for the study of the perennial ground-ice" would have been published in 1842 as intended. This was realized also by the Russian Academy of Sciences that honoured Baer with the publication of a tentative Russian translation done already in 1842 by Sumgin. These facts were completely forgotten until after the Second World War. In North America, permafrost research started after the Second World War with the creation of the Cold Regions Research and Engineering Laboratory (CRREL), a division of the US army. It was realized that the understanding of frozen ground and permafrost are essential factors in strategic northern areas during the Cold War. 
In the Soviet Union, the Melnikov Permafrost Institute in Yakutsk had similar aims. The first major post-World War II contact between groups of senior Russian and American frozen ground researchers took place in November 1963 in Yakutsk. Baer's permafrost textbook, however, remained undiscovered. The discovery and annotated publication in 2001 of the 1843 typescript in the library archives of the University of Giessen was therefore a scientific sensation. The full text of Baer's work is available online (234 pages). The editor Lorenz King added to the facsimile reprint a preface in English, two colour permafrost maps of Eurasia and some figures of permafrost features. Baer's text is introduced with detailed comments and references on an additional 66 pages written by the Estonian historian Erki Tammiksaar. The work is fascinating to read because both Baer's observations on permafrost distribution and his periglacial morphological descriptions are largely still correct today. He distinguished between "continental" and "insular" permafrost, recognized that permafrost can be temporary, and attributed its formation and further development to the complex interplay of physio-geographical, geological and floristic site conditions. With his permafrost classification Baer laid the foundation for the modern permafrost terminology of the International Permafrost Association. With his compilation and analysis of all available data on ground ice and permafrost, Karl Ernst von Baer can justly be described as the founder of scientific permafrost research. Evolution From his studies of comparative embryology, Baer believed in the transmutation of species, but later in his career he rejected the theory of natural selection proposed by Charles Darwin. He produced an early tree-like branching diagram illustrating the sequential origins of derived character states in vertebrate embryos during ontogeny, a diagram that implies a pattern of phylogenetic relationship. In the fifth edition of On the Origin of Species published in 1869, Charles Darwin added a Historical Sketch giving due credit to naturalists who had preceded him in publishing the opinion that species undergo modification, and that the existing forms of life have descended by true generation from pre-existing forms. According to Darwin: "Von Baer, towards whom all zoologists feel so profound a respect, expressed about the year 1859... his conviction, chiefly grounded on the laws of geographical distribution, that forms now perfectly distinct have descended from a single parent-form." He was a pioneer in studying biological time – the perception of time in different organisms. Baer believed in a teleological force in nature which directed evolution (orthogenesis). Other topics The term Baer's law is also applied to the unconfirmed proposition that in the Northern Hemisphere, erosion occurs mostly on the right banks of rivers, and in the Southern Hemisphere on the left banks. In its more thorough formulation, which Baer himself never stated, the erosion also depends on the direction of flow. For example, in the Northern Hemisphere, a section of river flowing in a north–south direction, according to the theory, erodes on its right bank due to the Coriolis effect, while in an east–west section there is no preference. However, this was repudiated by Albert Einstein's tea leaf paradox. Awards and distinctions In 1849, he was elected a foreign honorary member of the American Academy of Arts and Sciences. 
He was elected a foreign member of the Royal Swedish Academy of Sciences in 1850. He was the president of the Estonian Naturalists' Society in 1869–1876, and was a co-founder and first president of the Russian Entomological Society. In 1875, he became a foreign member of the Royal Netherlands Academy of Arts and Sciences. Legacy A statue honouring him can be found on Toome Hill in Tartu, as well as at Lasila manor, Estonia, and at the Zoological Museum in St Petersburg, Russia. In Tartu, there is also located Baer House which also functions as Baer Museum. Before the Estonian conversion to the euro, the 2-kroon bank note bore his portrait. Baer Island in the Kara Sea was named after Karl Ernst von Baer for his important contributions to the research of arctic meteorology between 1830 and 1840. A duck, Baer's pochard, was also named after him. Works Karl Ernst von Baer, Gregor von Helmersen. Beiträge zur Kenntniss des Russischen Reiches und der angränzenden Länder Asiens, 2 vols. Kaiserlichen Akademie der Wissenschaften, 1839. Google Books. Karl Ernst von Baer, Welche Auffassung der lebenden Natur ist die richtige? Berlin, 1862 References Further reading Wood C, Trounson A. Clinical in Vitro Fertilization. Springer-Verlag, Berlin 1984, Page 6. Baer, K E v. "Über ein allgemeines Gesetz in der Gestaltung der Flußbetten", Kaspische Studien, 1860, VIII, S. 1–6. External links Bibliography and links at Max Planck Institute Overview of Piibe (Piep) manor (with photo of memorial stone) Short biography Glossary of Permafrost and Related Ground-Ice Terms (IPA) International Conferences on Permafrost Estonian banknotes Ajaloolane: Karl Ernst von Baer oli Venemaa luure asutaja (in Estonian) 1792 births 1876 deaths People from Väike-Maarja Parish People from Kreis Jerwen Baltic-German people from the Russian Empire 19th-century explorers from the Russian Empire Biologists from the Russian Empire Orthogenesis Proto-evolutionary biologists Recipients of the Copley Medal Novaya Zemlya Explorers of the Arctic University of Tartu alumni Academic staff of the University of Königsberg Members of the Royal Netherlands Academy of Arts and Sciences Members of the Royal Swedish Academy of Sciences Fellows of the American Academy of Arts and Sciences Founding members of the Russian Geographical Society Full members of the Saint Petersburg Academy of Sciences Honorary members of the Saint Petersburg Academy of Sciences Foreign members of the Royal Society Foreign associates of the National Academy of Sciences German military personnel of the Napoleonic Wars Recipients of the Pour le Mérite (civil class) Burials at Raadi cemetery Members of the Göttingen Academy of Sciences and Humanities 19th-century scientists from the Russian Empire
Karl Ernst von Baer
[ "Biology" ]
2,878
[ "Orthogenesis", "Obsolete biology theories", "Non-Darwinian evolution", "Biology theories", "Proto-evolutionary biologists" ]
17,327
https://en.wikipedia.org/wiki/Kinetic%20energy
In physics, the kinetic energy of an object is the form of energy that it possesses due to its motion. In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is ½mv². The kinetic energy of an object is equal to the work, force (F) times displacement (s), needed to achieve its stated velocity. Having gained this energy during its acceleration, the mass maintains this kinetic energy unless its speed changes. The same amount of work is done by the object when decelerating from its current speed to a state of rest. The SI unit of kinetic energy is the joule, while the English unit of kinetic energy is the foot-pound. In relativistic mechanics, ½mv² is a good approximation of kinetic energy only when v is much less than the speed of light. History and etymology The adjective kinetic has its roots in the Greek word κίνησις kinesis, meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality. The principle of classical mechanics that E ∝ mv² is conserved was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force or vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship in 1722. By dropping weights from different heights into a block of clay, Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation. The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Thomas Young, who in his 1802 lecture to the Royal Society was the first to use the term energy to refer to kinetic energy in its modern sense, instead of vis viva. Gaspard-Gustave Coriolis published in 1829 the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is credited with coining the term "kinetic energy" c. 1849–1851. William Rankine, who had introduced the term "potential energy" in 1853, and the phrase "actual energy" to complement it, later cited William Thomson and Peter Tait as substituting the word "kinetic" for "actual". Overview Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object. Kinetic energy can be transferred between objects and transformed into other kinds of energy. Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces heat within the cyclist. The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. 
The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat. Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant. Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Disregarding loss or gain however, the sum of the kinetic and potential energy remains constant. Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy (breaking bound structures). Flywheels have been developed as a method of energy storage. This illustrates that kinetic energy is also stored in rotational motion. Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula mv2 given by classical mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula is used. If the object is on the atomic or sub-atomic scale, quantum mechanical effects are significant, and a quantum mechanical model must be employed. Kinetic energy for non-relativistic velocity Treatments of kinetic energy depend upon the relative velocity of objects compared to the fixed speed of light. Speeds experienced directly by humans are non-relativisitic; higher speeds require the theory of relativity. Kinetic energy of rigid bodies In classical mechanics, the kinetic energy of a point object (an object so small that its mass can be assumed to exist at one point), or a non-rotating rigid body depends on the mass of the body as well as its speed. 
The kinetic energy is equal to 1/2 the product of the mass and the square of the speed. In formula form: Ek = ½mv², where m is the mass and v is the speed (magnitude of the velocity) of the body. In SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules. For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as Ek = ½ × 80 kg × (18 m/s)² = 12,960 J ≈ 13 kJ. When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e., Fs = ½mv². Since the kinetic energy increases with the square of the speed, an object doubling its speed has four times as much kinetic energy. For example, a car traveling twice as fast as another requires four times as much distance to stop, assuming a constant braking force. As a consequence of this quadrupling, it takes four times the work to double the speed. The kinetic energy of an object is related to its momentum by the equation Ek = p²/(2m), where p is the momentum and m is the mass of the body. The translational kinetic energy, that is, the kinetic energy associated with rectilinear motion, of a rigid body with constant mass m whose center of mass is moving in a straight line with speed v is, as seen above, equal to Ek = ½mv², where m is the mass of the body and v is the speed of its center of mass. The kinetic energy of any entity depends on the reference frame in which it is measured. However, the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy. The kinetic energy of such systems depends on the choice of reference frame: the reference frame that gives the minimum value of that energy is the center of momentum frame, i.e. the reference frame in which the total momentum of the system is zero. This minimum kinetic energy contributes to the invariant mass of the system as a whole. Derivation Without vector calculus The work W done by a force F on an object over a distance s parallel to F equals W = Fs. Using Newton's Second Law F = ma, with m the mass and a the acceleration of the object, and s = ½at² the distance traveled by the uniformly accelerated object in time t, we find, with v = at for the velocity v of the object, W = Fs = ma · ½at² = ½m(at)² = ½mv². With vector calculus The work done in accelerating a particle with mass m during the infinitesimal time interval dt is given by the dot product of force F and the infinitesimal displacement dx: dEk = F · dx = F · v dt = (dp/dt) · v dt = v · dp, where we have assumed the relationship p = m v and the validity of Newton's Second Law. (However, also see the special relativistic derivation below.) 
Applying the product rule we see that: Therefore, (assuming constant mass so that dm = 0), we have, Since this is a total differential (that is, it only depends on the final state, not how the particle got there), we can integrate it and call the result kinetic energy: This equation states that the kinetic energy (Ek) is equal to the integral of the dot product of the momentum (p) of a body and the infinitesimal change of the velocity (v) of the body. It is assumed that the body starts with no kinetic energy when it is at rest (motionless). Rotating bodies If a rigid body Q is rotating about any line through the center of mass then it has rotational kinetic energy () which is simply the sum of the kinetic energies of its moving parts, and is thus given by: where: ω is the body's angular velocity r is the distance of any mass dm from that line is the body's moment of inertia, equal to . (In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape). Kinetic energy of systems A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. For example, in the Solar System the planets and planetoids are orbiting the Sun. In a tank of gas, the molecules are moving in all directions. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains. A macroscopic body that is stationary (i.e. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. These all contribute to the body's mass, as provided by the special theory of relativity. When discussing movements of a macroscopic body, the kinetic energy referred to is usually that of the macroscopic movement only. However, all internal energies of all types contribute to a body's mass, inertia, and total energy. Fluid dynamics In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field is called the dynamic pressure at that point. Dividing by V, the unit of volume: where is the dynamic pressure, and ρ is the density of the incompressible fluid. Frame of reference The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame. 
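The derivation just sketched, together with the rotational and fluid-dynamic expressions mentioned above, can be written compactly in standard notation:

```latex
% Work–energy theorem for a particle of constant mass m
dE_k = \mathbf{F}\cdot d\mathbf{x} = \mathbf{v}\cdot d\mathbf{p} = m\,\mathbf{v}\cdot d\mathbf{v}
\quad\Longrightarrow\quad
E_k = \int_0^{v} m\,v'\,dv' = \tfrac{1}{2} m v^2

% Rotational kinetic energy about an axis through the center of mass
E_r = \int \tfrac{1}{2}\,(r\omega)^2\, dm
    = \tfrac{1}{2}\,\omega^2 \int r^2\, dm
    = \tfrac{1}{2} I \omega^2 ,
\qquad I = \int r^2\, dm

% Dynamic pressure: kinetic energy per unit volume of an incompressible fluid
q = \frac{E_k}{V} = \tfrac{1}{2}\,\rho v^2
```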
The total kinetic energy of a system depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center of momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass. This may be simply shown: let be the relative velocity of the center of mass frame i in the frame k. Since Then, However, let the kinetic energy in the center of mass frame, would be simply the total momentum that is by definition zero in the center of mass frame, and let the total mass: . Substituting, we get: Thus the kinetic energy of a system is lowest to center of momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). In any different frame of reference, there is additional kinetic energy corresponding to the total mass moving at the speed of the center of mass. The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same). Rotation in systems It sometimes is convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (rotational energy): where: Ek is the total kinetic energy Et is the translational kinetic energy Er is the rotational energy or angular kinetic energy in the rest frame Thus the kinetic energy of a tennis ball in flight is the kinetic energy due to its rotation, plus the kinetic energy due to its translation. Relativistic kinetic energy If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy. In relativity, the total energy is given by the energy-momentum relation: Here we use the relativistic expression for linear momentum: , where . with being an object's (rest) mass, speed, and c the speed of light in vacuum. Then kinetic energy is the total relativistic energy minus the rest energy: At low speeds, the square root can be expanded and the rest energy drops out, giving the Newtonian kinetic energy. Derivation Start with the expression for linear momentum , where . Integrating by parts yields Since , is a constant of integration for the indefinite integral. Simplifying the expression we obtain is found by observing that when and , giving resulting in the formula This formula shows that the work expended accelerating an object from rest approaches infinity as the velocity approaches the speed of light. Thus it is impossible to accelerate an object across this boundary. Low speed limit The mathematical by-product of this calculation is the mass–energy equivalence formula, that mass and energy are essentially the same thing: At a low speed (v ≪ c), the relativistic kinetic energy is approximated well by the classical kinetic energy. To see this, apply the binomial approximation or take the first two terms of the Taylor expansion in powers of for the reciprocal square root: So, the total energy can be partitioned into the rest mass energy plus the non-relativistic kinetic energy at low speeds. When objects move at a speed much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the Taylor series approximation is small for low speeds. 
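How small that next term is can be checked numerically. The sketch below (Python, per unit mass, so the results are in J/kg) compares the classical expression with the exact relativistic kinetic energy (γ − 1)c² per kilogram and with the leading Taylor correction 3v⁴/(8c²); the speeds chosen are only illustrative:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def ke_exact_per_kg(v: float) -> float:
    """Exact relativistic kinetic energy per unit mass, (gamma - 1) * c**2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * C ** 2

def ke_classical_per_kg(v: float) -> float:
    """Newtonian kinetic energy per unit mass, v**2 / 2."""
    return 0.5 * v ** 2

def leading_correction_per_kg(v: float) -> float:
    """Next term of the Taylor expansion, 3*v**4 / (8*c**2).
    Evaluated directly from the series term: subtracting two nearly equal
    floating-point energies would lose the answer to round-off error."""
    return 3.0 * v ** 4 / (8.0 * C ** 2)

for v in (10e3, 100e3):  # 10 km/s and 100 km/s
    print(f"v = {v/1e3:5.0f} km/s: classical = {ke_classical_per_kg(v):.3g} J/kg, "
          f"relativistic correction ≈ {leading_correction_per_kg(v):.3g} J/kg")

v = 0.5 * C  # at genuinely relativistic speed the exact formula is required
print(f"v = 0.5c: exact = {ke_exact_per_kg(v):.3e} J/kg, "
      f"classical = {ke_classical_per_kg(v):.3e} J/kg")
```

At 10 km/s and 100 km/s this reproduces the corrections quoted in the passage that follows, while at half the speed of light the classical value already falls short of the exact result by roughly 20%.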
For example, for a speed of the correction to the non-relativistic kinetic energy is 0.0417 J/kg (on a non-relativistic kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is 417 J/kg (on a non-relativistic kinetic energy of 5 GJ/kg). The relativistic relation between kinetic energy and momentum is given by This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics: This suggests that the formulae for energy and momentum are not special and axiomatic, but concepts emerging from the equivalence of mass and energy and the principles of relativity. General relativity Using the convention that where the four-velocity of a particle is and is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity. If the particle has momentum as it passes by an observer with four-velocity uobs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is and the kinetic energy can be expressed as the total energy minus the rest energy: Consider the case of a metric that is diagonal and spatially isotropic (gtt, gss, gss, gss). Since where vα is the ordinary velocity measured w.r.t. the coordinate system, we get Solving for ut gives Thus for a stationary observer (v = 0) and thus the kinetic energy takes the form Factoring out the rest energy gives: This expression reduces to the special relativistic case for the flat-space metric where In the Newtonian approximation to general relativity where Φ is the Newtonian gravitational potential. This means clocks run slower and measuring rods are shorter near massive bodies. Kinetic energy in quantum mechanics In quantum mechanics, observables like kinetic energy are represented as operators. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator . The kinetic energy operator in the non-relativistic case can be written as Notice that this can be obtained by replacing by in the classical expression for kinetic energy in terms of momentum, In the Schrödinger picture, takes the form where the derivative is taken with respect to position coordinates and hence The expectation value of the electron kinetic energy, , for a system of N electrons described by the wavefunction is a sum of 1-electron operator expectation values: where is the mass of the electron and is the Laplacian operator acting upon the coordinates of the ith electron and the summation runs over all electrons. The density functional formalism of quantum mechanics requires knowledge of the electron density only, i.e., it formally does not require knowledge of the wavefunction. Given an electron density , the exact N-electron kinetic energy functional is unknown; however, for the specific case of a 1-electron system, the kinetic energy can be written as where is known as the von Weizsäcker kinetic energy functional. See also Escape velocity Foot-pound Joule Kinetic energy penetrator Kinetic energy per unit mass of projectiles Kinetic projectile Parallel axis theorem Potential energy Recoil Notes References External links Dynamics (mechanics) Forms of energy
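In standard notation, the quantum-mechanical expressions referenced above (the kinetic-energy operator, its expectation value for an N-electron wavefunction, and the von Weizsäcker functional for a one-electron density) read:

```latex
% Kinetic-energy operator for a single particle of mass m (Schroedinger picture)
\hat{T} = \frac{\hat{p}^{2}}{2m} = -\frac{\hbar^{2}}{2m}\,\nabla^{2}

% Expectation value of the electron kinetic energy for an N-electron wavefunction \Psi
\bigl\langle \hat{T} \bigr\rangle
  = -\frac{\hbar^{2}}{2m_{e}} \sum_{i=1}^{N}
    \bigl\langle \Psi \bigr|\, \nabla_{i}^{2} \,\bigl|\Psi \bigr\rangle

% von Weizsaecker kinetic-energy functional (exact for a one-electron density \rho)
T_{\mathrm{W}}[\rho]
  = \frac{\hbar^{2}}{8m} \int \frac{\lvert \nabla\rho(\mathbf{r}) \rvert^{2}}{\rho(\mathbf{r})}\, d\mathbf{r}
```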
Kinetic energy
[ "Physics" ]
4,179
[ "Physical phenomena", "Mechanical quantities", "Physical quantities", "Kinetic energy", "Classical mechanics", "Forms of energy", "Energy (physics)", "Motion (physics)", "Dynamics (mechanics)" ]
17,537
https://en.wikipedia.org/wiki/LSD
Lysergic acid diethylamide, commonly known as LSD (from German ), is a potent psychedelic drug that intensifies thoughts, emotions, and sensory perception. Often referred to as acid or lucy, LSD can cause mystical, spiritual, or religious experiences. At higher doses, it primarily induces visual and auditory hallucinations. LSD is not considered addictive, because it does not produce compulsive drug-seeking behavior. Using LSD can lead to adverse psychological reactions, such as anxiety, paranoia, and delusions. Additionally, it may trigger "flashbacks," also known as hallucinogen persisting perception disorder (HPPD), where individuals experience persistent visual distortions after use. The effects of LSD begin within 30 minutes of ingestion and can last up to 20 hours, with most trips averaging 8–12 hours. It is synthesized from lysergic acid and commonly administered via tabs of blotter paper. LSD is mainly used recreationally or for spiritual purposes. As a serotonin receptor agonist, LSD's precise effects are not fully understood, but it is known to alter the brain’s default mode network, leading to its powerful psychedelic effects. The drug was first synthesized by Swiss chemist Albert Hofmann in 1938 and became widely studied in the 1950s and 1960s. It was used experimentally in psychiatry for treating alcoholism and schizophrenia. However, its association with the counterculture movement of the 1960s led to its classification as a Schedule I drug in the U.S. in 1968. It was also listed as a Schedule I controlled substance by the United Nations in 1971 and remains without approved medical uses. Despite its legal restrictions, LSD remains influential in scientific and cultural contexts. Its therapeutic potential has been explored, particularly in treating mental health disorders. As of 2017, about 10% of people in the U.S. had used LSD at some point, with 0.7% having used it in the past year. Usage rates have risen, with a 56.4% increase in adult use in the U.S. from 2015 to 2018. Uses Recreational LSD is commonly used as a recreational drug. Spiritual LSD can catalyze intense spiritual experiences and is thus considered an entheogen. Some users have reported out of body experiences. In 1966, Timothy Leary established the League for Spiritual Discovery with LSD as its sacrament. Stanislav Grof has written that religious and mystical experiences observed during LSD sessions appear to be phenomenologically indistinguishable from similar descriptions in the sacred scriptures of the great religions of the world and the texts of ancient civilizations. Medical LSD currently has no approved uses in medicine. A meta analysis concluded that a single dose was shown to be effective at reducing alcohol consumption in people suffering from alcoholism. LSD has also been studied in depression, anxiety, and drug dependence, with positive preliminary results. Effects LSD is exceptionally potent, with as little as 20 μg capable of producing a noticeable effect. Physical LSD can induce physical effects such as pupil dilation, decreased appetite, increased sweating, and wakefulness. The physical reactions to LSD vary greatly and some may be a result of its psychological effects. Commonly observed symptoms include increased body temperature, blood sugar, and heart rate, as well as goose bumps, jaw clenching, dry mouth, and hyperreflexia. In cases of adverse reactions, users may experience numbness, weakness, nausea, and tremors. 
Psychological The primary immediate psychological effects of LSD are visual pseudo-hallucinations and altered thought, often referred to as "trips". These sensory alterations are considered pseudohallucinations because the subject does not perceive the patterns seen as being located in three-dimensional space outside the body. LSD is not considered addictive. These effects typically begin within 20–30 minutes of oral ingestion, peak three to four hours after ingestion, and can last up to 20 hours, particularly with higher doses. An "afterglow" effect, characterized by an improved mood or perceived mental state, may persist for days or weeks following ingestion. Positive experiences, or "good trips", are described as intensely pleasurable and can include feelings of joy, euphoria, an increased appreciation for life, decreased anxiety, a sense of spiritual enlightenment, and a feeling of interconnectedness with the universe. Negative experiences, commonly known as "bad trips", can induce feelings of fear, agitation, anxiety, panic, and paranoia. While the occurrence of a bad trip is unpredictable, factors such as mood, surroundings, sleep, hydration, and social setting, collectively referred to as "set and setting", can influence the risk and are considered important in minimizing the likelihood of a negative experience. Sensory LSD induces an animated sensory experience affecting senses, emotions, memories, time, and awareness, lasting from 6 to 20 hours, with the duration dependent on dosage and individual tolerance. Effects typically commence within 30 to 90 minutes post-ingestion, ranging from subtle perceptual changes to profound cognitive shifts. Alterations in auditory and visual perception are common. Users may experience enhanced visual phenomena, such as vibrant colors, objects appearing to morph, ripple or move, and geometric patterns on various surfaces. Changes in the perception of food's texture and taste are also noted, sometimes leading to aversion towards certain foods. There are reports of inanimate objects appearing animated, with static objects seeming to move in additional spatial dimensions. The auditory effects of LSD may include echo-like distortions of sounds, and an intensified experience of music. Basic visual effects often resemble phosphenes and can be influenced by concentration, thoughts, emotions, or music. Higher doses can lead to more intense sensory perception alterations, including synesthesia, perception of additional dimensions, and temporary dissociation. Adverse effects LSD, a classical psychedelic, is deemed physiologically safe at standard dosages (50–200 μg) and its primary risks lie in psychological effects rather than physiological harm. A 2010 study by David Nutt ranked LSD as significantly less harmful than alcohol, placing it near the bottom of a list assessing the harm of 20 drugs. Psychological effects Mental disorders LSD can induce panic attacks or extreme anxiety, colloquially termed a "bad trip". Despite lower rates of depression and substance abuse found in psychedelic drug users compared to controls, LSD presents heightened risks for individuals with severe mental illnesses like schizophrenia. These hallucinogens can catalyze psychiatric disorders in predisposed individuals, although they do not tend to induce illness in emotionally healthy people. 
Suggestibility While research from the 1960s indicated increased suggestibility under the influence of LSD among both mentally ill and healthy individuals, recent documents suggest that the CIA and Department of Defense have discontinued research into LSD as a means of mind control. Flashbacks Flashbacks are psychological episodes where individuals re-experience some of LSD's subjective effects after the drug has worn off, persisting for days or months post-hallucinogen use. These experiences are associated with hallucinogen persisting perception disorder (HPPD), where flashbacks occur intermittently or chronically, causing distress or functional impairment. The etiology of flashbacks is varied. Some cases are attributed to somatic symptom disorder, where individuals fixate on normal somatic experiences previously unnoticed prior to drug consumption. Other instances are linked to associative reactions to contextual cues, similar to responses observed in individuals with past trauma or emotional experiences. The risk factors for flashbacks remain unclear, but pre-existing psychopathologies may be significant contributors. Estimating the prevalence of HPPD is challenging. It is considered rare, with occurrences ranging from 1 in 20 users experiencing the transient and less severe type 1 HPPD, to 1 in 50,000 for the more concerning type 2 HPPD. Contrary to internet rumors, LSD is not stored long-term in the spinal cord or other body parts. Pharmacological evidence indicates LSD has a half-life of 175 minutes and is metabolized into water-soluble compounds like 2-oxo-3-hydroxy-LSD, eliminated through urine without evidence of long-term storage. Clinical evidence also suggests that chronic use of SSRIs can potentiate LSD-induced flashbacks, even months after stopping LSD use. Drug interactions Several psychedelics, including LSD, are metabolized by CYP2D6. Concurrent use of SSRIs, potent inhibitors of CYP2D6, with LSD may heighten the risk of serotonin syndrome. Chronic usage of SSRIs, TCAs, and MAOIs is believed to diminish the subjective effects of psychedelics, likely due to SSRI-induced 5-HT2A receptor downregulation and MAOI-induced 5-HT2A receptor desensitization. Interactions between psychedelics and antipsychotics or anticonvulsants are not well-documented; however, co-use with mood stabilizers like lithium may induce seizures and dissociative effects, particularly in individuals with bipolar disorder. Lithium notably intensifies LSD reactions, potentially leading to acute comatose states when combined. Lethal dose The lethal oral dose of LSD in humans is estimated at 100 mg, based on LD50 and lethal blood concentrations observed in rodent studies. Tolerance LSD shows significant tachyphylaxis, with tolerance developing 24 hours after administration. The progression of tolerance at intervals shorter than 24 hours remains largely unknown. Tolerance typically resets to baseline after 3–4 days of abstinence. Significant cross-tolerance occurs between LSD, mescaline and psilocybin. A slight cross-tolerance to DMT is observed in humans highly tolerant to LSD. Tolerance to LSD also builds up with consistent use, and is believed to result from serotonin 5-HT2A receptor downregulation. Researchers believe that tolerance returns to baseline after two weeks of not using psychedelics. Addiction and dependence liability LSD is widely considered to be non-addictive, despite its potential for abuse. Attempts to train laboratory animals to self-administer LSD have been largely unsuccessful. 
Although tolerance to LSD builds up rapidly, a withdrawal syndrome does not appear, suggesting that a potential syndrome does not necessarily relate to the possibility of acquiring rapid tolerance to a substance. A report examining substance use disorder for DSM-IV noted that almost no hallucinogens produced dependence, unlike psychoactive drugs of other classes such as stimulants and depressants. Cancer and pregnancy The mutagenic potential of LSD is unclear. Overall, the evidence seems to point to limited or no effect at commonly used doses. Studies showed no evidence of teratogenic or mutagenic effects. Overdose There have been no documented fatal human overdoses from LSD, although there has been no "comprehensive review since the 1950s" and "almost no legal clinical research since the 1970s". Eight individuals who had accidentally consumed an exceedingly high amount of LSD, mistaking it for cocaine, and had gastric levels of 1000–7000 μg LSD tartrate per 100 mL and blood plasma levels up to 26 μg/ml, had suffered from comatose states, vomiting, respiratory problems, hyperthermia, and light gastrointestinal bleeding; however, all of them survived without residual effects upon hospital intervention. Individuals experiencing a bad trip after LSD intoxication may present with severe anxiety and tachycardia, often accompanied by phases of psychotic agitation and varying degrees of delusions. Cases of death on a bad trip have been reported due to prone maximal restraint (commonly known as a hogtie) and positional asphyxia when the individuals were restrained by law enforcement personnel. Massive doses are largely managed by symptomatic treatments, and agitation can be addressed with benzodiazepines. Reassurance in a calm, safe environment is beneficial. Antipsychotics such as haloperidol are not recommended as they may have adverse psychotomimetic effects. Gastrointestinal decontamination with activated charcoal is of little use due to the rapid absorption of LSD, unless done within 30–60 minutes of ingesting exceedingly huge amounts. Administration of anticoagulants, vasodilators, and sympatholytics may be useful for treating ergotism. Designer drug overdose Many novel psychoactive substances of 25-NB (NBOMe) series, such as 25I-NBOMe and 25B-NBOMe, are regularly sold as LSD in blotter papers. NBOMe compounds are often associated with life-threatening toxicity and death. Fatalities involved in NBOMe intoxication suggest that a significant number of individuals ingested the substance which they believed was LSD, and researchers report that "users familiar with LSD may have a false sense of security when ingesting NBOMe inadvertently". Researchers state that the alleged physiological toxicity of LSD is likely due to psychoactive substances other than LSD. NBOMe compounds are reported to have a bitter taste, are not active orally, and are usually taken sublingually. When NBOMes are administered sublingually, numbness of the tongue and mouth followed by a metallic chemical taste was observed, and researchers describe this physical side effect as one of the main discriminants between NBOMe compounds and LSD. Despite its high potency, recreational doses of LSD have only produced low incidents of acute toxicity, but NBOMe compounds have extremely different safety profiles. Testing with Ehrlich's reagent gives a positive result for LSD and a negative result for NBOMe compounds. 
Pharmacology Pharmacodynamics Most serotonergic psychedelics are not significantly dopaminergic, and LSD is therefore atypical in this regard. The agonism of the D2 receptor by LSD may contribute to its psychoactive effects in humans. LSD binds to most serotonin receptor subtypes except for the 5-HT3 and 5-HT4 receptors. However, most of these receptors are affected at too low affinity to be sufficiently activated by the brain concentration of approximately 10–20 nM. In humans, recreational doses of LSD can affect 5-HT1A (Ki = 1.1 nM), 5-HT2A (Ki = 2.9 nM), 5-HT2B (Ki = 4.9 nM), 5-HT2C (Ki = 23 nM), 5-HT5A (Ki = 9 nM [in cloned rat tissues]), and 5-HT6 receptors (Ki = 2.3 nM). Although not present in humans, 5-HT5B receptors found in rodents also have a high affinity for LSD. The psychedelic effects of LSD are attributed to cross-activation of 5-HT2A receptor heteromers. Many but not all 5-HT2A agonists are psychedelics and 5-HT2A antagonists block the psychedelic activity of LSD. LSD exhibits functional selectivity at the 5-HT2A and 5-HT2C receptors in that it activates the signal transduction enzyme phospholipase A2 instead of activating the enzyme phospholipase C as the endogenous ligand serotonin does. Exactly how LSD produces its effects is unknown, but it is thought that it works by increasing glutamate release in the cerebral cortex and therefore excitation in this area, specifically in layer V. LSD, like many other drugs of recreational use, has been shown to activate DARPP-32-related pathways. The drug enhances dopamine D2 receptor protomer recognition and signaling of D2–5-HT2A receptor complexes, which may contribute to its psychotropic effects. LSD has been shown to have low affinity for H1 receptors, displaying antihistamine effects. LSD is a biased agonist that induces a conformation in serotonin receptors that preferentially recruits β-arrestin over activating G proteins. LSD also has an exceptionally long residence time when bound to serotonin receptors lasting hours, consistent with the long-lasting effects of LSD despite its relatively rapid clearance. A crystal structure of 5-HT2B bound to LSD reveals an extracellular loop that forms a lid over the diethylamide end of the binding cavity which explains the slow rate of LSD unbinding from serotonin receptors. The related lysergamide lysergic acid amide (LSA) that lacks the diethylamide moiety is far less hallucinogenic in comparison. LSD, like other psychedelics, has been found to increase the expression of genes related to synaptic plasticity. This is in part due to binding to brain-derived neurotrophic factor (BDNF) receptor TrkB. Mechanisms of action Neuroimaging studies using resting state fMRI recently suggested that LSD changes the cortical functional architecture. These modifications spatially overlap with the distribution of serotoninergic receptors. In particular, increased connectivity and activity were observed in regions with high expression of 5-HT2A receptor, while a decrease in activity and connectivity was observed in cortical areas that are dense with 5-HT1A receptor. Experimental data suggest that subcortical structures, particularly the thalamus, play a synergistic role with the cerebral cortex in mediating the psychedelic experience. LSD, through its binding to cortical 5-HT2A receptor, may enhance excitatory neurotransmission along frontostriatal projections and, consequently, reduce thalamic filtering of sensory stimuli towards the cortex. 
This phenomenon appears to selectively involve ventral, intralaminar, and pulvinar nuclei. Pharmacokinetics The acute effects of LSD normally last between 6 and 10 hours depending on dosage, tolerance, and age. Aghajanian and Bing (1964) found LSD had an elimination half-life of only 175 minutes (about 3 hours). However, using more accurate techniques, Papac and Foltz (1990) reported that 1 μg/kg oral LSD given to a single male volunteer had an apparent plasma half-life of 5.1 hours, with a peak plasma concentration of 5 ng/mL at 3 hours post-dose. The pharmacokinetics of LSD were not properly determined until 2015, which is not surprising for a drug with the kind of low-μg potency that LSD possesses. In a sample of 16 healthy subjects, a single mid-range 200 μg oral dose of LSD was found to produce mean maximal concentrations of 4.5 ng/mL at a median of 1.5 hours (range 0.5–4 hours) post-administration. Concentrations of LSD decreased following first-order kinetics with a half-life of 3.6±0.9 hours and a terminal half-life of 8.9±5.9 hours. The effects of the dose of LSD given lasted for up to 12 hours and were closely correlated with the concentrations of LSD present in circulation over time, with no acute tolerance observed. Only 1% of the drug was eliminated in urine unchanged, whereas 13% was eliminated as the major metabolite 2-oxo-3-hydroxy-LSD (O-H-LSD) within 24 hours. O-H-LSD is formed by cytochrome P450 enzymes, although the specific enzymes involved are unknown, and it does not appear to be known whether O-H-LSD is pharmacologically active or not. The oral bioavailability of LSD was crudely estimated as approximately 71% using previous data on intravenous administration of LSD. The sample was equally divided between male and female subjects and there were no significant sex differences observed in the pharmacokinetics of LSD. Chemistry LSD is a chiral compound with two stereocenters at the carbon atoms C-5 and C-8, so that theoretically four different optical isomers of LSD could exist. LSD, also called (+)-d-LSD, has the absolute configuration (5R,8R). 5S stereoisomers of lysergamides do not exist in nature and are not formed during the synthesis from d-lysergic acid. Retrosynthetically, the C-5 stereocenter could be analysed as having the same configuration of the alpha carbon of the naturally occurring amino acid L-tryptophan, the precursor to all biosynthetic ergoline compounds. However, LSD and iso-LSD, the two C-8 isomers, rapidly interconvert in the presence of bases, as the alpha proton is acidic and can be deprotonated and reprotonated. Non-psychoactive iso-LSD which has formed during the synthesis can be separated by chromatography and can be isomerized to LSD. Pure salts of LSD are triboluminescent, emitting small flashes of white light when shaken in the dark. LSD is strongly fluorescent and will glow bluish-white under UV light. Synthesis LSD is an ergoline derivative. It is commonly synthesized by reacting diethylamine with an activated form of lysergic acid. Activating reagents include phosphoryl chloride and peptide coupling reagents. Lysergic acid is made by alkaline hydrolysis of lysergamides like ergotamine, a substance usually derived from the ergot fungus on agar plate. Lysergic acid can also be produced synthetically, although these processes are not used in clandestine manufacture due to their low yields and high complexity. 
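The first-order elimination described above can be illustrated with a short sketch. This is a deliberately simplified one-compartment picture built from the figures quoted above for a 200 μg oral dose (peak of roughly 4.5 ng/mL at about 1.5 h, elimination half-life of roughly 3.6 h); it ignores the absorption phase and the longer terminal half-life, so it is an approximation rather than a validated pharmacokinetic model:

```python
import math

# Figures reported above for a 200 ug oral dose (illustrative values only)
C_MAX = 4.5        # peak plasma concentration, ng/mL
T_MAX = 1.5        # median time of the peak, hours
HALF_LIFE = 3.6    # plasma elimination half-life, hours

def plasma_concentration(t_hours: float) -> float:
    """Approximate plasma LSD concentration (ng/mL), assuming simple
    first-order decay after the peak and ignoring absorption before it."""
    if t_hours <= T_MAX:
        return C_MAX                      # crude placeholder for the rising phase
    k = math.log(2.0) / HALF_LIFE         # elimination rate constant, 1/h
    return C_MAX * math.exp(-k * (t_hours - T_MAX))

for t in (1.5, 5.0, 12.0, 24.0):
    print(f"t = {t:4.1f} h  ->  ~{plasma_concentration(t):.2f} ng/mL")
```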
Albert Hofmann synthesized LSD in the following manner: (1) hydrazinolysis of ergotamine into D- and L-isolysergic acid hydrazide, (2) separation of the enantiomers with di-(p-toluyl)-D-tartaric acid to get D-isolysergic acid hydrazide, (3) enantiomerization into D-lysergic acid hydrazide, (4) substitution with HNO2 to D-lysergic acid azide and (5) finally substitution with diethylamine to form D-lysergic acid diethylamide. Research The precursor for LSD, lysergic acid, has been produced by GMO baker's yeast. Dosage A single dose of LSD is typically between 40 and 500 micrograms—an amount roughly equal to one-tenth the mass of a grain of sand. Threshold effects can be felt with as little as 25 micrograms of LSD. The practice of using sub-threshold doses is called microdosing. Dosages of LSD are measured in micrograms (μg), or millionths of a gram. In the mid-1960s, the most important black market LSD manufacturer (Owsley Stanley) distributed LSD at a standard concentration of 270 μg, while street samples of the 1970s contained 30 to 300 μg. By the 1980s, the amount had reduced to between 100 and 125 μg, dropping more in the 1990s to the 20–80 μg range, and even more in the 2000s (decade). Reactivity and degradation "LSD," writes the chemist Alexander Shulgin, "is an unusually fragile molecule ... As a salt, in water, cold, and free from air and light exposure, it is stable indefinitely." LSD has two labile protons at the tertiary stereogenic C5 and C8 positions, rendering these centers prone to epimerisation. The C8 proton is more labile due to the electron-withdrawing carboxamide attachment, but the removal of the chiral proton at the C5 position (which was once also an alpha proton of the parent molecule tryptophan) is assisted by the inductively withdrawing nitrogen and pi electron delocalisation with the indole ring. LSD also has enamine-type reactivity because of the electron-donating effects of the indole ring. Because of this, chlorine destroys LSD molecules on contact; even though chlorinated tap water contains only a slight amount of chlorine, the small quantity of compound typical to an LSD solution will likely be eliminated when dissolved in tap water. The double bond between the 8-position and the aromatic ring, being conjugated with the indole ring, is susceptible to nucleophilic attacks by water or alcohol, especially in the presence of UV or other kinds of light. LSD often converts to "lumi-LSD," which is inactive in human beings. A controlled study was undertaken to determine the stability of LSD in pooled urine samples. The concentrations of LSD in urine samples were followed over time at various temperatures, in different types of storage containers, at various exposures to different wavelengths of light, and at varying pH values. These studies demonstrated no significant loss in LSD concentration at 25 °C for up to four weeks. After four weeks of incubation, a 30% loss in LSD concentration at 37 °C and up to a 40% at 45 °C were observed. Urine fortified with LSD and stored in amber glass or nontransparent polyethylene containers showed no change in concentration under any light conditions. The stability of LSD in transparent containers under light was dependent on the distance between the light source and the samples, the wavelength of light, exposure time, and the intensity of light. After prolonged exposure to heat in alkaline pH conditions, 10 to 15% of the parent LSD epimerized to iso-LSD. Under acidic conditions, less than 5% of the LSD was converted to iso-LSD. 
It was also demonstrated that trace amounts of metal ions in the buffer or urine could catalyze the decomposition of LSD and that this process can be avoided by the addition of EDTA. Detection LSD can be detected in concentrations larger than approximately 10% in a sample using Ehrlich's reagent and Hofmann's reagent. However, detecting LSD in human tissues is more challenging due to its active dosage being significantly lower (in micrograms) compared to most other drugs (in milligrams). LSD may be quantified in urine for drug testing programs, in plasma or serum to confirm poisoning in hospitalized victims, or in whole blood for forensic investigations. The parent drug and its major metabolite are unstable in biofluids when exposed to light, heat, or alkaline conditions, necessitating protection from light, low-temperature storage, and quick analysis to minimize losses. Maximum plasma concentrations are typically observed 1.4 to 1.5 hours after oral administration of 100 μg and 200 μg, respectively, with a plasma half-life of approximately 2.6 hours (ranging from 2.2 to 3.4 hours among test subjects). Due to its potency in microgram quantities, LSD is often not included in standard pre-employment urine or hair analyses. However, advanced liquid chromatography–mass spectrometry methods can detect LSD in biological samples even after a single use. History Swiss chemist Albert Hofmann first synthesized LSD in 1938 from lysergic acid, a chemical derived from the hydrolysis of ergotamine, an alkaloid found in ergot, a fungus that infects grain. LSD was the 25th of various lysergamides Hofmann synthesized from lysergic acid while trying to develop a new analeptic, hence the alternate name LSD-25. Hofmann discovered its effects in humans in 1943, after unintentionally ingesting an unknown amount, possibly absorbing it through his skin. LSD was subject to exceptional interest within the field of psychiatry in the 1950s and early 1960s, with Sandoz distributing LSD to researchers under the trademark name Delysid in an attempt to find a marketable use for it. During this period, LSD was controversially administered to hospitalised schizophrenic autistic children, with varying degrees of therapeutic success. LSD-assisted psychotherapy was used in the 1950s and early 1960s by psychiatrists such as Humphry Osmond, who pioneered the application of LSD to the treatment of alcoholism, with promising results. Osmond coined the term "psychedelic" (lit. mind manifesting) as a term for LSD and related hallucinogens, superseding the previously held "psychotomimetic" model in which LSD was believed to mimic schizophrenia. In contrast to schizophrenia, LSD can induce transcendent experiences, or mental states that transcend the experience of everyday consciousness, with lasting psychological benefit. During this time, the Central Intelligence Agency (CIA) began using LSD in the research project Project MKUltra, which used psychoactive substances to aid interrogation. The CIA administered LSD to unwitting test subjects to observe how they would react, the most well-known example of this being Operation Midnight Climax. LSD was one of several psychoactive substances evaluated by the U.S. Army Chemical Corps as possible non-lethal incapacitants in the Edgewood Arsenal human experiments. In the 1960s, LSD and other psychedelics were adopted by and became synonymous with, the counterculture movement due to their perceived ability to expand consciousness. 
This resulted in LSD being viewed as a cultural threat to American values and the Vietnam War effort, and it was designated as a Schedule I (illegal for medical as well as recreational use) substance in 1968. It was listed as a Schedule I controlled substance by the United Nations in 1971 and currently has no approved medical uses. , about 10% of people in the United States have used LSD at some point in their lives, while 0.7% have used it in the last year. It was most popular in the 1960s to 1980s. The use of LSD among US adults increased by 56.4% from 2015 to 2018. LSD was first synthesized on November 16, 1938 by Swiss chemist Albert Hofmann at the Sandoz Laboratories in Basel, Switzerland as part of a large research program searching for medically useful ergot alkaloid derivatives. The abbreviation "LSD" is from the German "Lysergsäurediethylamid". LSD's psychedelic properties were discovered 5 years later when Hofmann himself accidentally ingested an unknown quantity of the chemical. The first intentional ingestion of LSD occurred on April 19, 1943, when Hofmann ingested 250 μg of LSD. He said this would be a threshold dose based on the dosages of other ergot alkaloids. Hofmann found the effects to be much stronger than he anticipated. Sandoz Laboratories introduced LSD as a psychiatric drug in 1947 and marketed LSD as a psychiatric panacea, hailing it "as a cure for everything from schizophrenia to criminal behavior, 'sexual perversions', and alcoholism." Sandoz would send the drug for free to researchers investigating its effects. Beginning in the 1950s, the US Central Intelligence Agency (CIA) began a research program code-named Project MKUltra. The CIA introduced LSD to the United States, purchasing the entire world's supply for $240,000 and propagating the LSD through CIA front organizations to American hospitals, clinics, prisons, and research centers. Experiments included administering LSD to CIA employees, military personnel, doctors, other government agents, prostitutes, mentally ill patients, and members of the general public to study their reactions, usually without the subjects' knowledge. The project was revealed in the US congressional Rockefeller Commission report in 1975. In 1963, the Sandoz patents on LSD expired and the Czech company Spofa began to produce the substance. Sandoz stopped the production and distribution in 1965. Several figures, including Aldous Huxley, Timothy Leary, and Al Hubbard, had begun to advocate the consumption of LSD. LSD became central to the counterculture of the 1960s. In the early 1960s the use of LSD and other hallucinogens was advocated by new proponents of consciousness expansion such as Leary, Huxley, Alan Watts and Arthur Koestler, and according to L. R. Veysey they profoundly influenced the thinking of the new generation of youth. On October 24, 1968, possession of LSD was made illegal in the United States. The last FDA approved study of LSD in patients ended in 1980, while a study in healthy volunteers was made in the late 1980s. Legally approved and regulated psychiatric use of LSD continued in Switzerland until 1993. In November 2020, Oregon became the first US state to decriminalize possession of small amounts of LSD after voters approved Ballot Measure 110. Society and culture Counterculture By the mid-1960s, the youth countercultures in California, particularly in San Francisco, had widely adopted the use of hallucinogenic drugs, including LSD. The first major underground LSD factory was established by Owsley Stanley. 
Around this time, the Merry Pranksters, associated with novelist Ken Kesey, organized the Acid Tests, events in San Francisco involving LSD consumption, accompanied by light shows and improvised music. Their activities, including cross-country trips in a psychedelically decorated bus and interactions with major figures of the beat movement, were later documented in Tom Wolfe's The Electric Kool-Aid Acid Test (1968). In San Francisco's Haight-Ashbury neighborhood, the Psychedelic Shop was opened in January 1966 by brothers Ron and Jay Thelin to promote the safe use of LSD. This shop played a significant role in popularizing LSD in the area and establishing Haight-Ashbury as the epicenter of the hippie counterculture. The Thelins also organized the Love Pageant Rally in Golden Gate Park in October 1966, protesting against California's ban on LSD. A similar movement developed in London, led by British academic Michael Hollingshead, who first tried LSD in America in 1961. After experiencing LSD and interacting with notable figures such as Aldous Huxley, Timothy Leary, and Richard Alpert, Hollingshead played a key role in the famous LSD research at Millbrook before moving to New York City for his experiments. In 1965, he returned to the UK and founded the World Psychedelic Center in Chelsea, London. Music and Art The influence of LSD in the realms of music and art became pronounced in the 1960s, especially through the Acid Tests and related events involving bands like the Grateful Dead, Jefferson Airplane, and Big Brother and the Holding Company. San Francisco-based artists such as Rick Griffin, Victor Moscoso, and Wes Wilson contributed to this movement through their psychedelic poster and album art. The Grateful Dead, in particular, became central to the culture of "Deadheads," with their music heavily influenced by LSD. In the United Kingdom, Michael Hollingshead, reputed for introducing LSD to various artists and musicians like Storm Thorgerson, Donovan, Keith Richards, and members of the Beatles, played a significant role in the drug's proliferation in the British art and music scene. Despite LSD's illegal status from 1966, it was widely used by groups including the Beatles, the Rolling Stones, and the Moody Blues. Their experiences influenced works such as the Beatles' Sgt. Pepper's Lonely Hearts Club Band and Cream's Disraeli Gears, featuring psychedelic-themed music and artwork. Psychedelic music of the 1960s often sought to replicate the LSD experience, incorporating exotic instrumentation, electric guitars with effects pedals, and elaborate studio techniques. Artists and bands utilized instruments like sitars and tablas, and employed studio effects such as backward tapes, panning, and phasing. Songs such as John Prine's "Illegal Smile" and the Beatles' "Lucy in the Sky with Diamonds" have been associated with LSD, although the latter's authors denied such claims. Contemporary artists influenced by LSD include Keith Haring in the visual arts, various electronic dance music creators, and the jam band Phish. The 2018 Leo Butler play All You Need is LSD is inspired by the author's interest in the history of LSD. Legal status The United Nations Convention on Psychotropic Substances of 1971 mandates that signing parties, including the United States, Australia, New Zealand, and most of Europe, prohibit LSD. Enforcement of these laws varies by country. The convention allows medical and scientific research with LSD. 
Australia In Australia, LSD is classified as a Schedule 9 prohibited substance under the Poisons Standard (February 2017), indicating it may be abused or misused and its manufacture, possession, sale, or use should be prohibited except for approved research purposes. In Western Australia, the Misuse of Drugs Act 1981 provides guidelines for possession and trafficking of substances like LSD. Canada In Canada, LSD is listed under Schedule III of the Controlled Drugs and Substances Act. Unauthorized possession and trafficking of the substance can lead to significant legal penalties. United Kingdom In the United Kingdom, LSD is a Class A drug under the Misuse of Drugs Act 1971, making unauthorized possession and trafficking punishable by severe penalties. The Runciman Report and Transform Drug Policy Foundation have made recommendations and proposals regarding the legal regulation of LSD and other psychedelics. United States In the United States, LSD is classified as a Schedule I controlled substance under the Controlled Substances Act of 1970, making its manufacture, possession, and distribution illegal without a DEA license. The law considers LSD to have a high potential for abuse, no legitimate medical use, and to be unsafe even under medical supervision. The US Supreme Court case Neal v. United States (1995) clarified the sentencing guidelines related to LSD possession. Oregon decriminalized personal possession of small amounts of drugs, including LSD, in February 2021, and California has seen legislative efforts to decriminalize psychedelics. Mexico Mexico decriminalized the possession of small amounts of drugs, including LSD, for personal use in 2009. The law specifies possession limits and establishes that possession is not a crime within designated quantities. Czech Republic In the Czech Republic, possession of "amount larger than small" of LSD is criminalized, while possession of smaller amounts is a misdemeanor. The definition of "amount larger than small" is determined by judicial practice and specific regulations. Economics Production An active dose of LSD is very minute, allowing a large number of doses to be synthesized from a comparatively small amount of raw material. Twenty-five kilograms of precursor ergotamine tartrate can produce 5–6 kg of pure crystalline LSD; this corresponds to around 50–60 million doses at 100 μg. Because the masses involved are so small, concealing and transporting illicit LSD is much easier than smuggling cocaine, cannabis, or other illegal drugs. Manufacturing LSD requires laboratory equipment and experience in the field of organic chemistry. It takes two to three days to produce 30 to 100 grams of pure compound. It is believed that LSD is not usually produced in large quantities, but rather in a series of small batches. This technique minimizes the loss of precursor chemicals in case a step does not work as expected. Forms LSD is produced in crystalline form and is then mixed with excipients or redissolved for production in ingestible forms. Liquid solution is either distributed in small vials or, more commonly, sprayed onto or soaked into a distribution medium. Historically, LSD solutions were first sold on sugar cubes, but practical considerations forced a change to tablet form. Appearing in 1968 as an orange tablet measuring about 6 mm across, "Orange Sunshine" acid was the first largely available form of LSD after its possession was made illegal. 
Tim Scully, a prominent chemist, made some of these tablets, but said that most "Sunshine" in the USA came by way of Ronald Stark, who imported approximately thirty-five million doses from Europe. Over some time, tablet dimensions, weight, shape and concentration of LSD evolved from large (4.5–8.1 mm diameter), heavyweight (≥150 mg), round, high concentration (90–350 μg/tab) dosage units to small (2.0–3.5 mm diameter) lightweight (as low as 4.7 mg/tab), variously shaped, lower concentration (12–85 μg/tab, average range 30–40 μg/tab) dosage units. LSD tablet shapes have included cylinders, cones, stars, spacecraft, and heart shapes. The smallest tablets became known as "Microdots." After tablets came "computer acid" or "blotter paper LSD," typically made by dipping a preprinted sheet of blotting paper into an LSD/water/alcohol solution. More than 200 types of LSD tablets have been encountered since 1969 and more than 350 blotter paper designs have been observed since 1975. About the same time as blotter paper LSD came "Windowpane" (AKA "Clearlight"), which contained LSD inside a thin gelatin square a quarter of an inch (6 mm) across. LSD has been sold under a wide variety of often short-lived and regionally restricted street names including Acid, Trips, Uncle Sid, Blotter, Lucy, Alice and doses, as well as names that reflect the designs on the sheets of blotter paper. Authorities have encountered the drug in other forms—including powder or crystal, and capsule. Modern distribution LSD manufacturers and traffickers in the United States can be categorized into two groups: A few large-scale producers, and an equally limited number of small, clandestine chemists, consisting of independent producers who, operating on a comparatively limited scale, can be found throughout the country. As a group, independent producers are of less concern to the Drug Enforcement Administration than the large-scale groups because their product reaches only local markets. Many LSD dealers and chemists describe a religious or humanitarian purpose that motivates their illicit activity. Nicholas Schou's book Orange Sunshine: The Brotherhood of Eternal Love and Its Quest to Spread Peace, Love, and Acid to the World describes one such group, the Brotherhood of Eternal Love. The group was a major American LSD trafficking group in the late 1960s and early 1970s. In the second half of the 20th century, dealers and chemists loosely associated with the Grateful Dead like Owsley Stanley, Nicholas Sand, Karen Horning, Sarah Maltzer, "Dealer McDope," and Leonard Pickard played an essential role in distributing LSD. Mimics Since 2005, law enforcement in the United States and elsewhere has seized several chemicals and combinations of chemicals in blotter paper which were sold as LSD mimics, including DOB, a mixture of DOC and DOI, 25I-NBOMe, and a mixture of DOC and DOB. Many mimics are toxic in comparatively small doses, or have extremely different safety profiles. Many street users of LSD are often under the impression that blotter paper which is actively hallucinogenic can only be LSD because that is the only chemical with low enough doses to fit on a small square of blotter paper. While it is true that LSD requires lower doses than most other hallucinogens, blotter paper is capable of absorbing a much larger amount of material. 
The DEA performed a chromatographic analysis of blotter paper containing 2C-C which showed that the paper contained a much greater concentration of the active chemical than typical LSD doses, although the exact quantity was not determined. Blotter LSD mimics can have relatively small dose squares; a sample of blotter paper containing DOC seized by Concord, California police had dose markings approximately 6 mm apart. Several deaths have been attributed to 25I-NBOMe. Research In the United States, the earliest research began in the 1950s. Albert Kurland and his colleagues published research on LSD's therapeutic potential to treat schizophrenia. In Canada, Humphry Osmond and Abram Hoffer completed LSD studies as early as 1952. By the 1960s, controversies surrounding "hippie" counterculture began to deplete institutional support for continued studies. Currently, several organizations—including the Beckley Foundation, MAPS, Heffter Research Institute and the Albert Hofmann Foundation—exist to fund, encourage and coordinate research into the medicinal and spiritual uses of LSD and related psychedelics. New clinical LSD experiments in humans started in 2009 for the first time in 35 years. As it is illegal in many areas of the world, potential medical uses are difficult to study. In 2001 the United States Drug Enforcement Administration stated that LSD "produces no aphrodisiac effects, does not increase creativity, has no lasting positive effect in treating alcoholics or criminals, does not produce a "model psychosis", and does not generate immediate personality change." More recently, experimental uses of LSD have included the treatment of alcoholism, pain and cluster headache relief, and prospective studies on depression. A 2020 meta-review indicated possible positive effects of LSD in reducing psychiatric symptoms, mainly in cases of alcoholism. There is evidence that psychedelics induce molecular and cellular adaptations related to neuroplasticity and that these could potentially underlie therapeutic benefits. Psychedelic therapy In the 1950s and 1960s, LSD was used in psychiatry to enhance psychotherapy, known as psychedelic therapy. Some psychiatrists, such as Ronald A. Sandison, who pioneered its use at Powick Hospital in England, believed LSD was especially useful at helping patients to "unblock" repressed subconscious material through other psychotherapeutic methods, and also for treating alcoholism. One study concluded, "The root of the therapeutic value of the LSD experience is its potential for producing self-acceptance and self-surrender," presumably by forcing the user to face issues and problems in that individual's psyche. Two recent reviews concluded that conclusions drawn from most of these early trials are unreliable due to serious methodological flaws. These include the absence of adequate control groups, lack of follow-up, and vague criteria for therapeutic outcome. In many cases, studies failed to convincingly demonstrate whether the drug or the therapeutic interaction was responsible for any beneficial effects. In recent years, organizations like the Multidisciplinary Association for Psychedelic Studies (MAPS) have renewed clinical research of LSD. It has been proposed that LSD be studied for use in the therapeutic setting, particularly in anxiety. In 2024, the FDA designated a form of LSD as a breakthrough therapy to treat generalized anxiety disorder which is being developed by MindMed. 
Other uses In the 1950s and 1960s, some psychiatrists (e.g., Oscar Janiger) explored the potential effect of LSD on creativity. Experimental studies attempted to measure the effect of LSD on creative activity and aesthetic appreciation. In 1966 Dr. James Fadiman conducted a study with the central question "How can psychedelics be used to facilitate problem solving?" This study attempted to solve 44 different problems and had 40 satisfactory solutions when the FDA banned all research into psychedelics. LSD was a key component of this study. Since 2008 there has been ongoing research into using LSD to alleviate anxiety for terminally ill cancer patients coping with their impending deaths. A 2012 meta-analysis found evidence that a single dose of LSD in conjunction with various alcoholism treatment programs was associated with a decrease in alcohol abuse, lasting for several months, but no effect was seen at one year. Adverse events included seizure, moderate confusion and agitation, nausea, vomiting, and acting in a bizarre fashion. LSD has been used as a treatment for cluster headaches with positive results in some small studies. LSD is a potent psychoplastogen, a compound capable of promoting rapid and sustained neural plasticity that may have wide-ranging therapeutic benefit. LSD has been shown to increase markers of neuroplasticity in human brain organoids and improve memory performance in human subjects. LSD may have analgesic properties related to pain in terminally ill patients and phantom pain and may be useful for treating inflammatory diseases including rheumatoid arthritis. Notable individuals Some notable individuals have commented publicly on their experiences with LSD. Some of these comments date from the era when it was legally available in the US and Europe for non-medical uses, and others pertain to psychiatric treatment in the 1950s and 1960s. Still others describe experiences with illegal LSD, obtained for philosophic, artistic, therapeutic, spiritual, or recreational purposes. W. H. Auden, the poet, said, "I myself have taken mescaline once and L.S.D. once. Aside from a slight schizophrenic dissociation of the I from the Not-I, including my body, nothing happened at all." He also said, "LSD was a complete frost. … What it does seem to destroy is the power of communication. I have listened to tapes done by highly articulate people under LSD, for example, and they talk absolute drivel. They may have seen something interesting, but they certainly lose either the power or the wish to communicate." He also said, "Nothing much happened but I did get the distinct impression that some birds were trying to communicate with me." Daniel Ellsberg, an American peace activist, says he has had several hundred experiences with psychedelics. Richard Feynman, a notable physicist at California Institute of Technology, tried LSD during his professorship at Caltech. Feynman largely sidestepped the issue when dictating his anecdotes; he mentions it in passing in the "O Americano, Outra Vez" section. Jerry Garcia stated in a July 3, 1989 interview for Relix Magazine, in response to the question "Have your feelings about LSD changed over the years?," "They haven't changed much. My feelings about LSD are mixed. It's something that I both fear and that I love at the same time. I never take any psychedelic, have a psychedelic experience, without having that feeling of, "I don't know what's going to happen." In that sense, it's still fundamentally an enigma and a mystery." 
Bill Gates implied in an interview with Playboy that he tried LSD during his youth. Aldous Huxley, author of Brave New World, became a user of psychedelics after moving to Hollywood. He was at the forefront of the counterculture's use of psychedelic drugs, which led to his 1954 work The Doors of Perception. Dying from cancer, he asked his wife on 22 November 1963 to inject him with 100 μg of LSD. He died later that day. Steve Jobs, co-founder and former CEO of Apple Inc., said, "Taking LSD was a profound experience, one of the most important things in my life." Ernst Jünger, German writer and philosopher, throughout his life had experimented with drugs such as ether, cocaine, and hashish; and later in life he used mescaline and LSD. These experiments were recorded comprehensively in Annäherungen (1970, Approaches). The novel Besuch auf Godenholm (1952, Visit to Godenholm) is clearly influenced by his early experiments with mescaline and LSD. He met with LSD inventor Albert Hofmann and they took LSD together several times. Hofmann's memoir LSD, My Problem Child describes some of these meetings. In a 2004 interview, Paul McCartney said that The Beatles' songs "Day Tripper" and "Lucy in the Sky with Diamonds" were inspired by LSD trips. Nonetheless, John Lennon consistently stated over the course of many years that the fact that the initials of "Lucy in the Sky with Diamonds" spelled out L-S-D was a coincidence (he stated that the title came from a picture drawn by his son Julian) and that the band members did not notice until after the song had been released, and Paul McCartney corroborated that story. John Lennon, George Harrison, and Ringo Starr also used the drug, although McCartney cautioned that "it's easy to overestimate the influence of drugs on the Beatles' music." Michel Foucault had an LSD experience with Simeon Wade in Death Valley and later wrote "it was the greatest experience of his life, and that it profoundly changed his life and his work." According to Wade, as soon as he came back to Paris, Foucault scrapped the second History of Sexuality's manuscript, and totally rethought the whole project. Kary Mullis is reported to credit LSD with helping him develop DNA amplification technology, for which he received the Nobel Prize in Chemistry in 1993. Carlo Rovelli, an Italian theoretical physicist and writer, has credited his use of LSD with sparking his interest in theoretical physics. Oliver Sacks, a neurologist famous for writing best-selling case histories about his patients' disorders and unusual experiences, talks about his own experiences with LSD and other perception altering chemicals, in his book, Hallucinations. Matt Stone and Trey Parker, creators of the TV series South Park, claimed to have shown up at the 72nd Academy Awards, at which they were nominated for Best Original Song, under the influence of LSD. 
See also 1P-LSD 1cP-LSD Claviceps purpurea (ergot) LSD art LSZ Psychoplastogen Notes References Further reading External links LSD-25 at Erowid LSD at TiHKAL by Alexander Shulgin LSD at PsychonautWiki Documentaries Hofmann's Potion a documentary on the origins of LSD, 2002 Inside LSD National Geographic Channel, 2009 How to Change Your Mind Netflix docuseries, 2022 1938 introductions 1938 in science 1938 in Switzerland Counterculture of the 1960s Dopamine agonists Drugs developed by Novartis Entheogens Experimental hallucinogens Incapacitating agents Light-sensitive chemicals Mind control Serotonin receptor agonists Swiss inventions Withdrawn drugs
LSD
[ "Chemistry" ]
11,255
[ "Incapacitating agents", "Light-sensitive chemicals", "Chemical weapons", "Stereochemistry", "Enantiopure drugs", "Light reactions" ]
17,553
https://en.wikipedia.org/wiki/Kepler%27s%20laws%20of%20planetary%20motion
In astronomy, Kepler's laws of planetary motion, published by Johannes Kepler in 1609 (except the third law, which was published in 1619), describe the orbits of planets around the Sun. These laws replaced circular orbits and epicycles in the heliocentric theory of Nicolaus Copernicus with elliptical orbits and explained how planetary velocities vary. The three laws state that: The orbit of a planet is an ellipse with the Sun at one of the two foci. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time. The square of a planet's orbital period is proportional to the cube of the length of the semi-major axis of its orbit. The elliptical orbits of planets were indicated by calculations of the orbit of Mars. From this, Kepler inferred that other bodies in the Solar System, including those farther away from the Sun, also have elliptical orbits. The second law establishes that when a planet is closer to the Sun, it travels faster. The third law expresses that the farther a planet is from the Sun, the longer its orbital period. Isaac Newton showed in 1687 that relationships like Kepler's would apply in the Solar System as a consequence of his own laws of motion and law of universal gravitation. A more precise historical approach is found in Astronomia nova and Epitome Astronomiae Copernicanae. Comparison to Copernicus Johannes Kepler's laws improved the model of Copernicus. According to Copernicus: The planetary orbit is a circle with epicycles. The Sun is approximately at the center of the orbit. The speed of the planet in the main orbit is constant. Despite being correct in saying that the planets revolved around the Sun, Copernicus was incorrect in defining their orbits. Introducing physical explanations for movement in space beyond just geometry, Kepler correctly defined the orbit of planets as follows: The planetary orbit is not a circle with epicycles, but an ellipse. The Sun is not at the center but at a focal point of the elliptical orbit. Neither the linear speed nor the angular speed of the planet in the orbit is constant, but the area speed (closely linked historically with the concept of angular momentum) is constant. The eccentricity of the orbit of the Earth makes the time from the March equinox to the September equinox, around 186 days, unequal to the time from the September equinox to the March equinox, around 179 days. A diameter would cut the orbit into equal parts, but the plane through the Sun parallel to the equator of the Earth cuts the orbit into two parts with areas in a 186 to 179 ratio, so the eccentricity of the orbit of the Earth is approximately (π/4) × (186 − 179)/(186 + 179) ≈ 0.015, which is close to the correct value (0.016710218). The accuracy of this calculation requires that the two dates chosen be along the elliptical orbit's minor axis and that the midpoints of each half be along the major axis. As the two dates chosen here are equinoxes, this will be correct when perihelion, the date the Earth is closest to the Sun, falls on a solstice. The current perihelion, near January 4, is fairly close to the solstice of December 21 or 22. Nomenclature It took nearly two centuries for the current formulation of Kepler's work to take on its settled form. Voltaire's Eléments de la philosophie de Newton (Elements of Newton's Philosophy) of 1738 was the first publication to use the terminology of "laws". The Biographical Encyclopedia of Astronomers in its article on Kepler (p.
620) states that the terminology of scientific laws for these discoveries was current at least from the time of Joseph de Lalande. It was the exposition of Robert Small, in An account of the astronomical discoveries of Kepler (1814), that made up the set of three laws, by adding in the third. Small also claimed, against the history, that these were empirical laws, based on inductive reasoning. Further, the current usage of "Kepler's Second Law" is something of a misnomer. Kepler had two versions, related in a qualitative sense: the "distance law" and the "area law". The "area law" is what became the Second Law in the set of three; but Kepler himself did not privilege it in that way. History Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe. Kepler's third law was published in 1619. Kepler had believed in the Copernican model of the Solar System, which called for circular orbits, but he could not reconcile Brahe's highly precise observations with a circular fit to Mars' orbit – Mars coincidentally having the highest eccentricity of all planets except Mercury. His first law reflected this discovery. In 1621, Kepler noted that his third law applies to the four brightest moons of Jupiter. Godefroy Wendelin also made this observation in 1643. The second law, in the "area law" form, was contested by Nicolaus Mercator in a book from 1664, but by 1670 his Philosophical Transactions were in its favour. As the century proceeded it became more widely accepted. The reception in Germany changed noticeably between 1688, the year in which Newton's Principia was published and was taken to be basically Copernican, and 1690, by which time work of Gottfried Leibniz on Kepler had been published. Newton was credited with understanding that the second law is not special to the inverse square law of gravitation, being a consequence just of the radial nature of that law, whereas the other laws do depend on the inverse square form of the attraction. Carl Runge and Wilhelm Lenz much later identified a symmetry principle in the phase space of planetary motion (the orthogonal group O(4) acting) which accounts for the first and third laws in the case of Newtonian gravitation, as conservation of angular momentum does via rotational symmetry for the second law. Formulary The mathematical model of the kinematics of a planet subject to the laws allows a large range of further calculations. First law Kepler's first law states that: The orbit of every planet is an ellipse with the sun at one of the two foci. Mathematically, an ellipse can be represented by the formula: r = p / (1 + ε cos θ), where p is the semi-latus rectum, ε is the eccentricity of the ellipse, r is the distance from the Sun to the planet, and θ is the angle to the planet's current position from its closest approach, as seen from the Sun. So (r, θ) are polar coordinates. For an ellipse 0 < ε < 1 ; in the limiting case ε = 0, the orbit is a circle with the Sun at the centre (i.e. where there is zero eccentricity). At θ = 0°, perihelion, the distance is minimum At θ = 90° and at θ = 270° the distance is equal to the semi-latus rectum p.
At θ = 180°, aphelion, the distance is maximum (by definition, aphelion is – invariably – perihelion plus 180°) The semi-major axis a is the arithmetic mean between rmin and rmax: The semi-minor axis b is the geometric mean between rmin and rmax: The semi-latus rectum p is the harmonic mean between rmin and rmax: The eccentricity ε is the coefficient of variation between rmin and rmax: The area of the ellipse is The special case of a circle is ε = 0, resulting in r = p = rmin = rmax = a = b and A = πr2. Second law Kepler's second law states that: A line joining a planet and the Sun sweeps out equal areas during equal intervals of time. The orbital radius and angular velocity of the planet in the elliptical orbit will vary. This is shown in the animation: the planet travels faster when closer to the Sun, then slower when farther from the Sun. Kepler's second law states that the blue sector has constant area. History and proofs Kepler notably arrived at this law through assumptions that were either only approximately true or outright false and can be outlined as follows: Planets are pushed around the Sun by a force from the Sun. This false assumption relies on incorrect Aristotelian physics that an object needs to be pushed to maintain motion. The propelling force from the Sun is inversely proportional to the distance from the Sun. Kepler reasoned this, believing that gravity spreading in three dimensions would be a waste, since the planets inhabited a plane. Thus, an inverse instead of the [correct] inverse square law. Because Kepler believed that force would be proportional to velocity, it followed from statements #1 and #2 that velocity would be inverse to the distance from the sun. This is also an incorrect tenet of Aristotelian physics. Since velocity is inverse to time, the distance from the sun would be proportional to the time to cover a small piece of the orbit. This is approximately true for elliptical orbits. The area swept out is proportional to the overall time. This is also approximately true. The orbits of a planet are circular (Kepler discovered his Second Law before his First Law, which contradicts this). Nevertheless, the result of the Second Law is exactly true, as it is logically equivalent to the conservation of angular momentum, which is true for any body experiencing a radially symmetric force. A correct proof can be shown through this. Since the cross product of two vectors gives the area of a parallelogram possessing sides of those vectors, the triangular area dA swept out in a short period of time is given by half the cross product of the r and dx vectors, for some short piece of the orbit, dx. for a small piece of the orbit dx and time to cover it dt. Thus Since the final expression is proportional to the total angular momentum , Kepler's equal area law will hold for any system that conserves angular momentum. Since any radial force will produce no torque on the planet's motion, angular momentum will be conserved. In terms of elliptical parameters In a small time the planet sweeps out a small triangle having base line and height and area , so the constant areal velocity is The area enclosed by the elliptical orbit is . So the period satisfies and the mean motion of the planet around the Sun satisfies And so, Third law Kepler's third law states that: The ratio of the square of an object's orbital period with the cube of the semi-major axis of its orbit is the same for all objects orbiting the same primary. 
This captures the relationship between the distance of planets from the Sun, and their orbital periods. Kepler enunciated in 1619 this third law in a laborious attempt to determine what he viewed as the "music of the spheres" according to precise laws, and express it in terms of musical notation. It was therefore known as the harmonic law. The original form of this law (referring to not the semi-major axis, but rather a "mean distance") holds true only for planets with small eccentricities near zero. Using Newton's law of gravitation (published 1687), this relation can be found in the case of a circular orbit by setting the centripetal force equal to the gravitational force: Then, expressing the angular velocity ω in terms of the orbital period and then rearranging, results in Kepler's Third Law: A more detailed derivation can be done with general elliptical orbits, instead of circles, as well as orbiting the center of mass, instead of just the large mass. This results in replacing a circular radius, , with the semi-major axis, , of the elliptical relative motion of one mass relative to the other, as well as replacing the large mass with . However, with planet masses being so much smaller than the Sun, this correction is often ignored. The full corresponding formula is: where is the mass of the Sun, is the mass of the planet, is the gravitational constant, is the orbital period and is the elliptical semi-major axis, and is the astronomical unit, the average distance from earth to the sun. Table The following table shows the data used by Kepler to empirically derive his law: Kepler became aware of John Napier's recent invention of logarithms and log-log graphs before he discovered the pattern. Upon finding this pattern Kepler wrote: For comparison, here are modern estimates: Planetary acceleration Isaac Newton computed in his Philosophiæ Naturalis Principia Mathematica the acceleration of a planet moving according to Kepler's first and second laws. The direction of the acceleration is towards the Sun. The magnitude of the acceleration is inversely proportional to the square of the planet's distance from the Sun (the inverse square law). This implies that the Sun may be the physical cause of the acceleration of planets. However, Newton states in his Principia that he considers forces from a mathematical point of view, not a physical, thereby taking an instrumentalist view. Moreover, he does not assign a cause to gravity. Newton defined the force acting on a planet to be the product of its mass and the acceleration (see Newton's laws of motion). So: Every planet is attracted towards the Sun. The force acting on a planet is directly proportional to the mass of the planet and is inversely proportional to the square of its distance from the Sun. The Sun plays an unsymmetrical part, which is unjustified. So he assumed, in Newton's law of universal gravitation: All bodies in the Solar System attract one another. The force between two bodies is in direct proportion to the product of their masses and in inverse proportion to the square of the distance between them. As the planets have small masses compared to that of the Sun, the orbits conform approximately to Kepler's laws. Newton's model improves upon Kepler's model, and fits actual observations more accurately. (See two-body problem.) Below comes the detailed calculation of the acceleration of a planet moving according to Kepler's first and second laws. 
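Before that detailed calculation, the third law itself is easy to check numerically. The following sketch is illustrative only: it uses approximate modern values for a few planets and the Sun's standard gravitational parameter (rather than Kepler's own table, which is not reproduced here), and it ignores the planet's mass in the full formula, as the text above notes is commonly done. It verifies that a³/T² is essentially the same constant, GM/4π², for every planet.

```python
import math

# Standard gravitational parameter of the Sun, G*M, in m^3/s^2
# (approximate modern value, used here only for illustration).
GM_SUN = 1.32712e20

AU = 1.495979e11        # metres per astronomical unit
YEAR = 365.25 * 86400   # seconds per Julian year

# Approximate semi-major axes (AU) and orbital periods (years).
planets = {
    "Mercury": (0.387, 0.2408),
    "Venus":   (0.723, 0.6152),
    "Earth":   (1.000, 1.0000),
    "Mars":    (1.524, 1.8808),
    "Jupiter": (5.204, 11.862),
}

# Kepler's third law in Newtonian form: T^2 = 4*pi^2*a^3 / (G*M),
# so a^3/T^2 should equal G*M/(4*pi^2) for every planet.
expected = GM_SUN / (4 * math.pi ** 2)

for name, (a_au, T_yr) in planets.items():
    a = a_au * AU
    T = T_yr * YEAR
    ratio = a ** 3 / T ** 2
    print(f"{name:8s} a^3/T^2 = {ratio:.4e} m^3/s^2 (expected {expected:.4e})")
```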
Acceleration vector From the heliocentric point of view consider the vector to the planet where is the distance to the planet and is a unit vector pointing towards the planet. where is the unit vector whose direction is 90 degrees counterclockwise of , and is the polar angle, and where a dot on top of the variable signifies differentiation with respect to time. Differentiate the position vector twice to obtain the velocity vector and the acceleration vector: So where the radial acceleration is and the transversal acceleration is Inverse square law Kepler's second law says that is constant. The transversal acceleration is zero: So the acceleration of a planet obeying Kepler's second law is directed towards the Sun. The radial acceleration is Kepler's first law states that the orbit is described by the equation: Differentiating with respect to time or Differentiating once more The radial acceleration satisfies Substituting the equation of the ellipse gives The relation gives the simple final result This means that the acceleration vector of any planet obeying Kepler's first and second law satisfies the inverse square law where is a constant, and is the unit vector pointing from the Sun towards the planet, and is the distance between the planet and the Sun. Since mean motion where is the period, according to Kepler's third law, has the same value for all the planets. So the inverse square law for planetary accelerations applies throughout the entire Solar System. The inverse square law is a differential equation. The solutions to this differential equation include the Keplerian motions, as shown, but they also include motions where the orbit is a hyperbola or parabola or a straight line. (See Kepler orbit.) Newton's law of gravitation By Newton's second law, the gravitational force that acts on the planet is: where is the mass of the planet and has the same value for all planets in the Solar System. According to Newton's third law, the Sun is attracted to the planet by a force of the same magnitude. Since the force is proportional to the mass of the planet, under the symmetric consideration, it should also be proportional to the mass of the Sun, . So where is the gravitational constant. The acceleration of Solar System body number i is, according to Newton's laws: where is the mass of body j, is the distance between body i and body j, is the unit vector from body i towards body j, and the vector summation is over all bodies in the Solar System, besides i itself. In the special case where there are only two bodies in the Solar System, Earth and Sun, the acceleration becomes which is the acceleration of the Kepler motion. So this Earth moves around the Sun according to Kepler's laws. If the two bodies in the Solar System are Moon and Earth the acceleration of the Moon becomes So in this approximation, the Moon moves around the Earth according to Kepler's laws. In the three-body case the accelerations are These accelerations are not those of Kepler orbits, and the three-body problem is complicated. But Keplerian approximation is the basis for perturbation calculations. (See Lunar theory.) Position as a function of time Kepler used his two first laws to compute the position of a planet as a function of time. His method involves the solution of a transcendental equation called Kepler's equation. 
The procedure for calculating the heliocentric polar coordinates (r,θ) of a planet as a function of the time t since perihelion, is the following five steps: Compute the mean motion , where P is the period. Compute the mean anomaly , where t is the time since perihelion. Compute the eccentric anomaly E by solving Kepler's equation: where is the eccentricity. Compute the true anomaly θ by solving the equation: Compute the heliocentric distance r: where is the semimajor axis. The position polar coordinates (r,θ) can now be written as a Cartesian vector and the Cartesian velocity vector can then be calculated as , where is the standard gravitational parameter. The important special case of circular orbit, ε = 0, gives . Because the uniform circular motion was considered to be normal, a deviation from this motion was considered an anomaly. The proof of this procedure is shown below. Mean anomaly, M The Keplerian problem assumes an elliptical orbit and the four points: s the Sun (at one focus of ellipse); z the perihelion c the center of the ellipse p the planet and distance between center and perihelion, the semimajor axis, the eccentricity, the semiminor axis, the distance between Sun and planet. the direction to the planet as seen from the Sun, the true anomaly. The problem is to compute the polar coordinates (r,θ) of the planet from the time since perihelion, t. It is solved in steps. Kepler considered the circle with the major axis as a diameter, and the projection of the planet to the auxiliary circle the point on the circle such that the sector areas |zcy| and |zsx| are equal, the mean anomaly. The sector areas are related by The circular sector area The area swept since perihelion, is by Kepler's second law proportional to time since perihelion. So the mean anomaly, M, is proportional to time since perihelion, t. where n is the mean motion. Eccentric anomaly, E When the mean anomaly M is computed, the goal is to compute the true anomaly θ. The function θ = f(M) is, however, not elementary. Kepler's solution is to use x as seen from the centre, the eccentric anomaly as an intermediate variable, and first compute E as a function of M by solving Kepler's equation below, and then compute the true anomaly θ from the eccentric anomaly E. Here are the details. Division by a2/2 gives Kepler's equation This equation gives M as a function of E. Determining E for a given M is the inverse problem. Iterative numerical algorithms are commonly used. Having computed the eccentric anomaly E, the next step is to calculate the true anomaly θ. But note: Cartesian position coordinates with reference to the center of ellipse are (a cos E, b sin E) With reference to the Sun (with coordinates (c,0) = (ae,0) ), r = (a cos E – ae, b sin E) True anomaly would be arctan(ry/rx), magnitude of r would be . True anomaly, θ Note from the figure that so that Dividing by and inserting from Kepler's first law to get The result is a usable relationship between the eccentric anomaly E and the true anomaly θ. A computationally more convenient form follows by substituting into the trigonometric identity: Get Multiplying by 1 + ε gives the result This is the third step in the connection between time and position in the orbit. 
Distance, r The fourth step is to compute the heliocentric distance r from the true anomaly θ by Kepler's first law: Using the relation above between θ and E the final equation for the distance r is: See also Circular motion Free-fall time Gravity Kepler orbit Kepler problem Kepler's equation Laplace–Runge–Lenz vector Specific relative angular momentum, relatively easy derivation of Kepler's laws starting with conservation of angular momentum Explanatory notes References General bibliography Kepler's life is summarized on pp. 523–627 and Book Five of his magnum opus, Harmonice Mundi (harmonies of the world), is reprinted on: A derivation of Kepler's third law of planetary motion is a standard topic in engineering mechanics classes. See, for example: . External links B.Surendranath Reddy; animation of Kepler's laws: applet Crowell, Benjamin, Light and Matter, an online book that gives a proof of the first law without the use of calculus (see section 15.7) David McNamara and Gianfranco Vidali, "Kepler's Second Law – Java Interactive Tutorial", an interactive Java applet that aids in the understanding of Kepler's Second Law. Cain, Gay (May 10, 2010), Astronomy Cast, "Ep. 189: Johannes Kepler and His Laws of Planetary Motion" University of Tennessee's Dept. Physics & Astronomy: Astronomy 161, "Johannes Kepler: The Laws of Planetary Motion" Solar System Simulator (Interactive Applet) "Kepler and His Laws" in From Stargazers to Starships by David P. Stern (10 October 2016) by Jens Puhle (Dec 27, 2023) – a video explaining and visualizing Kepler's three laws of planetary motion 1609 in science 1619 in science Copernican Revolution Eponymous laws of physics Equations of astronomy Johannes Kepler Orbits
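The five-step procedure given above for computing the heliocentric coordinates (r, θ) from the time since perihelion can be written as a short numerical routine. The sketch below is illustrative: the function name and the use of Newton–Raphson iteration are choices made here (the text only says that iterative numerical algorithms are commonly used to invert Kepler's equation), and the Earth-like numbers in the example call are approximate.

```python
import math

def kepler_position(t, P, a, eps, tol=1e-12):
    """Heliocentric polar coordinates (r, theta) of a planet at time t
    since perihelion, following the five-step procedure described above.

    t   : time since perihelion (same units as P)
    P   : orbital period
    a   : semi-major axis
    eps : orbital eccentricity (0 <= eps < 1)
    """
    # 1. Mean motion.
    n = 2 * math.pi / P
    # 2. Mean anomaly.
    M = n * t
    # 3. Eccentric anomaly: solve Kepler's equation M = E - eps*sin(E)
    #    by Newton-Raphson iteration.
    E = M
    while True:
        dE = (E - eps * math.sin(E) - M) / (1 - eps * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    # 4. True anomaly from tan(theta/2) = sqrt((1+eps)/(1-eps)) * tan(E/2),
    #    written with atan2 for numerical robustness.
    theta = 2 * math.atan2(math.sqrt(1 + eps) * math.sin(E / 2),
                           math.sqrt(1 - eps) * math.cos(E / 2))
    # 5. Heliocentric distance.
    r = a * (1 - eps * math.cos(E))
    return r, theta

# Example: an Earth-like orbit (a = 1 AU, eps ~ 0.0167, P = 365.25 days)
# a quarter period after perihelion.
print(kepler_position(t=365.25 / 4, P=365.25, a=1.0, eps=0.0167))
```

For ε = 0 the iteration converges immediately and the routine reduces to uniform circular motion, matching the circular special case noted above.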
Kepler's laws of planetary motion
[ "Physics", "Astronomy" ]
4,729
[ "Concepts in astronomy", "History of astronomy", "Copernican Revolution", "Equations of astronomy" ]
17,556
https://en.wikipedia.org/wiki/Laser
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The word laser originated as an acronym for light amplification by stimulated emission of radiation. The first laser was built in 1960 by Theodore Maiman at Hughes Research Laboratories, based on theoretical work by Charles H. Townes and Arthur Leonard Schawlow and the optical amplifier patented by Gordon Gould. A laser differs from other sources of light in that it emits light that is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as optical communication, laser cutting, and lithography. It also allows a laser beam to stay narrow over great distances (collimation), a feature used in applications such as laser pointers, lidar, and free-space optical communication. Lasers can also have high temporal coherence, which permits them to emit light with a very narrow frequency spectrum. Temporal coherence can also be used to produce ultrashort pulses of light with a broad spectrum but durations as short as an attosecond. Lasers are used in fiber-optic and free-space optical communications, optical disc drives, laser printers, barcode scanners, semiconductor chip manufacturing (photolithography, etching), laser surgery and skin treatments, cutting and welding materials, military and law enforcement devices for marking targets and measuring range and speed, and in laser lighting displays for entertainment. Lasers transport the majority of Internet traffic. The laser is regarded as one of the greatest inventions of the 20th century. Terminology The first device using amplification by stimulated emission operated at microwave frequencies, and was called a maser, for "microwave amplification by stimulated emission of radiation". When similar optical devices were developed they were first called optical masers, until "microwave" was replaced by "light" in the acronym, to become laser. Today, all such devices operating at frequencies higher than microwaves (approximately above 300 GHz) are called lasers (e.g. infrared lasers, ultraviolet lasers, X-ray lasers, gamma-ray lasers), whereas devices operating at microwave or lower radio frequencies are called masers. The back-formed verb "to lase" is frequently used in the field, meaning "to give off coherent light," especially about the gain medium of a laser; when a laser is operating, it is said to be "lasing". The terms laser and maser are also used for naturally occurring coherent emissions, as in astrophysical maser and atom laser. A laser that produces light by itself is technically an optical oscillator rather than an optical amplifier as suggested by the acronym. It has been humorously noted that the acronym LOSER, for "light oscillation by stimulated emission of radiation", would have been more correct. Some sources refer to the word laser as an anacronym, meaning an acronym so widely used as a noun that it is no longer considered an abbreviation. Fundamentals Photons, the quanta of electromagnetic radiation are released and absorbed from energy levels in atoms and molecules. In a lightbulb or a star, the energy is emitted from many different levels giving photons with a broad range of energies. This process is called thermal radiation. The underlying physical process creating photons in a laser is the same as in thermal radiation, but the actual emission is not the result of random thermal processes. 
Instead, the release of a photon is triggered by the nearby passage of another photon. This is called stimulated emission. For this process to work, the passing photon must be similar in energy, and thus wavelength, to the one that could be released by the atom or molecule, and the atom or molecule must be in the suitable excited state. The photon that is emitted by stimulated emission is identical to the photon that triggered its emission, and both photons can go on to trigger stimulated emission in other atoms, creating the possibility of a chain reaction. For this to happen, many of the atoms or molecules must be in the proper excited state so that the photons can trigger them. In most materials, atoms or molecules drop out of excited states fairly rapidly, making it difficult or impossible to produce a chain reaction. The materials chosen for lasers are the ones that have metastable states, which stay excited for a relatively long time. In laser physics, such a material is called an active laser medium. Combined with an energy source that continues to "pump" energy into the material, it is possible to have enough atoms or molecules in an excited state for a chain reaction to develop. Lasers are distinguished from other light sources by their coherence. Spatial (or transverse) coherence is typically expressed through the output being a narrow beam, which is diffraction-limited. Laser beams can be focused to very tiny spots, achieving a very high irradiance, or they can have a very low divergence to concentrate their power at a great distance. Temporal (or longitudinal) coherence implies a polarized wave at a single frequency, whose phase is correlated over a relatively great distance (the coherence length) along the beam. A beam produced by a thermal or other incoherent light source has an instantaneous amplitude and phase that vary randomly with respect to time and position, thus having a short coherence length. Lasers are characterized according to their wavelength in a vacuum. Most "single wavelength" lasers produce radiation in several modes with slightly different wavelengths. Although temporal coherence implies some degree of monochromaticity, some lasers emit a broad spectrum of light or emit different wavelengths of light simultaneously. Certain lasers are not single spatial mode and have light beams that diverge more than is required by the diffraction limit. All such devices are classified as "lasers" based on the method of producing light by stimulated emission. Lasers are employed where light of the required spatial or temporal coherence can not be produced using simpler technologies. Design A laser consists of a gain medium, a mechanism to energize it, and something to provide optical feedback. The gain medium is a material with properties that allow it to amplify light by way of stimulated emission. Light of a specific wavelength that passes through the gain medium is amplified (power increases). Feedback enables stimulated emission to amplify predominantly the optical frequency at the peak of the gain-frequency curve. As stimulated emission grows, eventually one frequency dominates over all others, meaning that a coherent beam has been formed. The process of stimulated emission is analogous to that of an audio oscillator with positive feedback which can occur, for example, when the speaker in a public-address system is placed in proximity to the microphone. The screech one hears is audio oscillation at the peak of the gain-frequency curve for the amplifier. 
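The energy–wavelength matching of stimulated emission described above can be made concrete with a rough, purely illustrative calculation; the transition energy used below is an assumed value, chosen only because it lands near the familiar red helium–neon line.

```python
# Illustrative sketch only: relate an assumed energy gap between two levels to
# the wavelength of the photon that can drive (or be produced by) the transition,
# using E = h*c / wavelength.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

delta_E = 1.96 * eV            # assumed transition energy (J)
wavelength = h * c / delta_E   # wavelength of the matching photon (m)
print(f"{wavelength * 1e9:.0f} nm")  # roughly 633 nm, in the red part of the spectrum
```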
For the gain medium to amplify light, it needs to be supplied with energy in a process called pumping. The energy is typically supplied as an electric current or as light at a different wavelength. Pump light may be provided by a flash lamp or by another laser. The most common type of laser uses feedback from an optical cavity: a pair of mirrors on either end of the gain medium. Light bounces back and forth between the mirrors, passing through the gain medium and being amplified each time. Typically one of the two mirrors, the output coupler, is partially transparent. Some of the light escapes through this mirror. Depending on the design of the cavity (whether the mirrors are flat or curved), the light coming out of the laser may spread out or form a narrow beam. In analogy to electronic oscillators, this device is sometimes called a laser oscillator. Most practical lasers contain additional elements that affect the properties of the emitted light, such as the polarization, wavelength, and shape of the beam. Laser physics Electrons and how they interact with electromagnetic fields are important in our understanding of chemistry and physics. Stimulated emission In the classical view, the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus of an atom. However, quantum mechanical effects force electrons to take on discrete positions in orbitals. Thus, electrons are found in specific energy levels of an atom. An electron in an atom can absorb energy from light (photons) or heat (phonons) only if there is a transition between energy levels that match the energy carried by the photon or phonon. For light, this means that any given transition will only absorb one particular wavelength of light. Photons with the correct wavelength can cause an electron to jump from the lower to the higher energy level. The photon is consumed in this process. When an electron is excited from one state to that at a higher energy level with energy difference ΔE, it will not stay that way forever. Eventually, a photon will be spontaneously created from the vacuum having energy ΔE. Conserving energy, the electron transitions to a lower energy level that is not occupied, with transitions to different levels having different time constants. This process is called spontaneous emission. Spontaneous emission is a quantum-mechanical effect and a direct physical manifestation of the Heisenberg uncertainty principle. The emitted photon has a random direction, but its wavelength matches the absorption wavelength of the transition. This is the mechanism of fluorescence and thermal emission. A photon with the correct wavelength to be absorbed by a transition can also cause an electron to drop from the higher to the lower level, emitting a new photon. The emitted photon exactly matches the original photon in wavelength, phase, and direction. This process is called stimulated emission. Gain medium and cavity The gain medium is put into an excited state by an external source of energy. In most lasers, this medium consists of a population of atoms that have been excited into such a state using an outside light source, or an electrical field that supplies energy for atoms to absorb and be transformed into their excited states. The gain medium of a laser is normally a material of controlled purity, size, concentration, and shape, which amplifies the beam by the process of stimulated emission described above.
This material can be of any state: gas, liquid, solid, or plasma. The gain medium absorbs pump energy, which raises some electrons into higher energy ("excited") quantum states. Particles can interact with light by either absorbing or emitting photons. Emission can be spontaneous or stimulated. In the latter case, the photon is emitted in the same direction as the light that is passing by. When the number of particles in one excited state exceeds the number of particles in some lower-energy state, population inversion is achieved. In this state, the rate of stimulated emission is larger than the rate of absorption of light in the medium, and therefore the light is amplified. A system with this property is called an optical amplifier. When an optical amplifier is placed inside a resonant optical cavity, one obtains a laser. For lasing media with extremely high gain, so-called superluminescence, light can be sufficiently amplified in a single pass through the gain medium without requiring a resonator. Although often referred to as a laser (see, for example, nitrogen laser), the light output from such a device lacks the spatial and temporal coherence achievable with lasers. Such a device cannot be described as an oscillator but rather as a high-gain optical amplifier that amplifies its spontaneous emission. The same mechanism describes so-called astrophysical masers/lasers. The optical resonator is sometimes referred to as an "optical cavity", but this is a misnomer: lasers use open resonators as opposed to the literal cavity that would be employed at microwave frequencies in a maser. The resonator typically consists of two mirrors between which a coherent beam of light travels in both directions, reflecting on itself so that an average photon will pass through the gain medium repeatedly before it is emitted from the output aperture or lost to diffraction or absorption. If the gain (amplification) in the medium is larger than the resonator losses, then the power of the recirculating light can rise exponentially. But each stimulated emission event returns an atom from its excited state to the ground state, reducing the gain of the medium. With increasing beam power, the net gain (gain minus loss) reduces to unity and the gain medium is said to be saturated. In a continuous wave (CW) laser, the balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the applied pump power is too small, the gain will never be sufficient to overcome the cavity losses, and laser light will not be produced. The minimum pump power needed to begin laser action is called the lasing threshold. The gain medium will amplify any photons passing through it, regardless of direction; but only the photons in a spatial mode supported by the resonator will pass more than once through the medium and receive substantial amplification. The light emitted In most lasers, lasing begins with spontaneous emission into the lasing mode. This initial light is then amplified by stimulated emission in the gain medium. Stimulated emission produces light that matches the input signal in direction, wavelength, and polarization, whereas the phase of the emitted light is 90 degrees in lead of the stimulating light. 
This, combined with the filtering effect of the optical resonator gives laser light its characteristic coherence, and may give it uniform polarization and monochromaticity, depending on the resonator's design. The fundamental laser linewidth of light emitted from the lasing resonator can be orders of magnitude narrower than the linewidth of light emitted from the passive resonator. Some lasers use a separate injection seeder to start the process off with a beam that is already highly coherent. This can produce beams with a narrower spectrum than would otherwise be possible. In 1963, Roy J. Glauber showed that coherent states are formed from combinations of photon number states, for which he was awarded the Nobel Prize in Physics. A coherent beam of light is formed by single-frequency quantum photon states distributed according to a Poisson distribution. As a result, the arrival rate of photons in a laser beam is described by Poisson statistics. Many lasers produce a beam that can be approximated as a Gaussian beam; such beams have the minimum divergence possible for a given beam diameter. Some lasers, particularly high-power ones, produce multimode beams, with the transverse modes often approximated using Hermite–Gaussian or Laguerre-Gaussian functions. Some high-power lasers use a flat-topped profile known as a "tophat beam". Unstable laser resonators (not used in most lasers) produce fractal-shaped beams. Specialized optical systems can produce more complex beam geometries, such as Bessel beams and optical vortexes. Near the "waist" (or focal region) of a laser beam, it is highly collimated: the wavefronts are planar, normal to the direction of propagation, with no beam divergence at that point. However, due to diffraction, that can only remain true well within the Rayleigh range. The beam of a single transverse mode (gaussian beam) laser eventually diverges at an angle that varies inversely with the beam diameter, as required by diffraction theory. Thus, the "pencil beam" directly generated by a common helium–neon laser would spread out to a size of perhaps 500 kilometers when shone on the Moon (from the distance of the Earth). On the other hand, the light from a semiconductor laser typically exits the tiny crystal with a large divergence: up to 50°. However even such a divergent beam can be transformed into a similarly collimated beam employing a lens system, as is always included, for instance, in a laser pointer whose light originates from a laser diode. That is possible due to the light being of a single spatial mode. This unique property of laser light, spatial coherence, cannot be replicated using standard light sources (except by discarding most of the light) as can be appreciated by comparing the beam from a flashlight (torch) or spotlight to that of almost any laser. A laser beam profiler is used to measure the intensity profile, width, and divergence of laser beams. Diffuse reflection of a laser beam from a matte surface produces a speckle pattern with interesting properties. Quantum vs. classical emission processes The mechanism of producing radiation in a laser relies on stimulated emission, where energy is extracted from a transition in an atom or molecule. This is a quantum phenomenon that was predicted by Albert Einstein, who derived the relationship between the A coefficient, describing spontaneous emission, and the B coefficient which applies to absorption and stimulated emission. 
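The relation Einstein derived can be written compactly; in one common convention, where the B coefficients are defined with respect to the spectral energy density per unit frequency, it reads

$$A_{21} = \frac{8\pi h \nu^{3}}{c^{3}}\,B_{21}, \qquad g_{1}B_{12} = g_{2}B_{21},$$

where ν is the transition frequency and g1 and g2 are the degeneracies of the lower and upper levels. The strong ν³ growth of spontaneous relative to stimulated emission is one reason often cited for why lasing becomes harder to achieve at shorter wavelengths.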
In the case of the free-electron laser, atomic energy levels are not involved; it appears that the operation of this rather exotic device can be explained without reference to quantum mechanics. Modes of operation A laser can be classified as operating in either continuous or pulsed mode, depending on whether the power output is essentially continuous over time or whether its output takes the form of pulses of light on one or another time scale. Of course, even a laser whose output is normally continuous can be intentionally turned on and off at some rate to create pulses of light. When the modulation rate is on time scales much slower than the cavity lifetime and the period over which energy can be stored in the lasing medium or pumping mechanism, then it is still classified as a "modulated" or "pulsed" continuous wave laser. Most laser diodes used in communication systems fall into that category. Continuous-wave operation Some applications of lasers depend on a beam whose output power is constant over time. Such a laser is known as a continuous-wave (CW) laser. Many types of lasers can be made to operate in continuous-wave mode to satisfy such an application. Many of these lasers lase in several longitudinal modes at the same time, and beats between the slightly different optical frequencies of those oscillations will produce amplitude variations on time scales shorter than the round-trip time (the reciprocal of the frequency spacing between modes), typically a few nanoseconds or less. In most cases, these lasers are still termed "continuous-wave" as their output power is steady when averaged over longer periods, with the very high-frequency power variations having little or no impact on the intended application. (However, the term is not applied to mode-locked lasers, where the intention is to create very short pulses at the rate of the round-trip time.) For continuous-wave operation, the population inversion of the gain medium needs to be continually replenished by a steady pump source. In some lasing media, this is impossible. In some other lasers, it would require pumping the laser at a very high continuous power level, which would be impractical, or destroying the laser by producing excessive heat. Such lasers cannot be run in CW mode. Pulsed operation The pulsed operation of lasers refers to any laser not classified as a continuous wave so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing many different motivations. Some lasers are pulsed simply because they cannot be run in continuous mode. In other cases, the application requires the production of pulses having as large an energy as possible. Since the pulse energy is equal to the average power divided by the repetition rate, this goal can sometimes be satisfied by lowering the rate of pulses so that more energy can be built up between pulses. In laser ablation, for example, a small volume of material at the surface of a workpiece can be evaporated if it is heated in a very short time, while supplying the energy gradually would allow for the heat to be absorbed into the bulk of the piece, never attaining a sufficiently high temperature at a particular point. Other applications rely on the peak pulse power (rather than the energy in the pulse), especially to obtain nonlinear optical effects. For a given pulse energy, this requires creating pulses of the shortest possible duration utilizing techniques such as Q-switching. 
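As a back-of-the-envelope sketch of the average-power, pulse-energy, and peak-power relationships just described (all numbers below are assumed, not taken from any real system):

```python
# Assumed example values, for illustration only.
avg_power = 1.0          # W, average output power
rep_rate = 10.0          # Hz, pulse repetition rate
pulse_duration = 10e-9   # s, assumed pulse width (10 ns)

pulse_energy = avg_power / rep_rate          # energy per pulse, J
peak_power = pulse_energy / pulse_duration   # approximate peak power (assumes a roughly flat pulse)
print(pulse_energy, peak_power)              # 0.1 J per pulse, ~1e7 W peak
```

Lowering the repetition rate at fixed average power raises the energy per pulse, and shortening the pulse raises the peak power, which is the trade-off the preceding paragraph describes.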
The optical bandwidth of a pulse cannot be narrower than the reciprocal of the pulse width. In the case of extremely short pulses, that implies lasing over a considerable bandwidth, quite contrary to the very narrow bandwidths typical of CW lasers. The lasing medium in some dye lasers and vibronic solid-state lasers produces optical gain over a wide bandwidth, making a laser possible that can thus generate pulses of light as short as a few femtoseconds (10−15 s). Q-switching In a Q-switched laser, the population inversion is allowed to build up by introducing loss inside the resonator which exceeds the gain of the medium; this can also be described as a reduction of the quality factor or 'Q' of the cavity. Then, after the pump energy stored in the laser medium has approached the maximum possible level, the introduced loss mechanism (often an electro- or acousto-optical element) is rapidly removed (or that occurs by itself in a passive device), allowing lasing to begin which rapidly obtains the stored energy in the gain medium. This results in a short pulse incorporating that energy, and thus a high peak power. Mode locking A mode-locked laser is capable of emitting extremely short pulses on the order of tens of picoseconds down to less than 10 femtoseconds. These pulses repeat at the round-trip time, that is, the time that it takes light to complete one round trip between the mirrors comprising the resonator. Due to the Fourier limit (also known as energy–time uncertainty), a pulse of such short temporal length has a spectrum spread over a considerable bandwidth. Thus such a gain medium must have a gain bandwidth sufficiently broad to amplify those frequencies. An example of a suitable material is titanium-doped, artificially grown sapphire (Ti:sapphire), which has a very wide gain bandwidth and can thus produce pulses of only a few femtoseconds duration. Such mode-locked lasers are a most versatile tool for researching processes occurring on extremely short time scales (known as femtosecond physics, femtosecond chemistry and ultrafast science), for maximizing the effect of nonlinearity in optical materials (e.g. in second-harmonic generation, parametric down-conversion, optical parametric oscillators and the like). Unlike the giant pulse of a Q-switched laser, consecutive pulses from a mode-locked laser are phase-coherent; that is, the pulses (and not just their envelopes) are identical and perfectly periodic. For this reason, and the extremely large peak powers attained by such short pulses, such lasers are invaluable in certain areas of research. Pulsed pumping Another method of achieving pulsed laser operation is to pump the laser material with a source that is itself pulsed, either through electronic charging in the case of flash lamps, or another laser that is already pulsed. Pulsed pumping was historically used with dye lasers where the inverted population lifetime of a dye molecule was so short that a high-energy, fast pump was needed. The way to overcome this problem was to charge up large capacitors which are then switched to discharge through flashlamps, producing an intense flash. Pulsed pumping is also required for three-level lasers in which the lower energy level rapidly becomes highly populated, preventing further lasing until those atoms relax to the ground state. These lasers, such as the excimer laser and the copper vapor laser, can never be operated in CW mode. 
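The Fourier limit mentioned above can be illustrated with the commonly quoted time–bandwidth product of roughly 0.44 for a transform-limited Gaussian pulse (other pulse shapes have slightly different constants):

```python
# Rough illustration of the Fourier (time-bandwidth) limit for an assumed pulse.
time_bandwidth_product = 0.44   # approximate value for a transform-limited Gaussian pulse
pulse_duration = 10e-15         # s, an assumed 10-femtosecond pulse

min_bandwidth = time_bandwidth_product / pulse_duration   # Hz
print(f"{min_bandwidth / 1e12:.0f} THz")                   # ~44 THz of optical bandwidth
```

For light centred near 800 nm (a carrier frequency of about 375 THz), 44 THz is more than a tenth of the carrier frequency, which is why such short pulses require gain media with very broad bandwidth, such as Ti:sapphire.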
History Foundations In 1917, Albert Einstein established the theoretical foundations for the laser and the maser in the paper "Zur Quantentheorie der Strahlung" ("On the Quantum Theory of Radiation") via a re-derivation of Max Planck's law of radiation, conceptually based upon probability coefficients (Einstein coefficients) for the absorption, spontaneous emission, and stimulated emission of electromagnetic radiation. In 1928, Rudolf W. Ladenburg confirmed the existence of the phenomena of stimulated emission and negative absorption. In 1939, Valentin A. Fabrikant predicted using stimulated emission to amplify "short" waves. In 1947, Willis E. Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission. In 1950, Alfred Kastler (Nobel Prize for Physics 1966) proposed the method of optical pumping, which was experimentally demonstrated two years later by Brossel, Kastler, and Winter. Maser In 1951, Joseph Weber submitted a paper on using stimulated emissions to make a microwave amplifier to the June 1952 Institute of Radio Engineers Vacuum Tube Research Conference in Ottawa, Ontario, Canada. After this presentation, RCA asked Weber to give a seminar on this idea, and Charles H. Townes asked him for a copy of the paper. In 1953, Charles H. Townes and graduate students James P. Gordon and Herbert J. Zeiger produced the first microwave amplifier, a device operating on similar principles to the laser, but amplifying microwave radiation rather than infrared or visible radiation. Townes's maser was incapable of continuous output. Meanwhile, in the Soviet Union, Nikolay Basov and Aleksandr Prokhorov were independently working on the quantum oscillator and solved the problem of continuous-output systems by using more than two energy levels. These gain media could release stimulated emissions between an excited state and a lower excited state, not the ground state, facilitating the maintenance of a population inversion. In 1955, Prokhorov and Basov suggested optical pumping of a multi-level system as a method for obtaining the population inversion, later a main method of laser pumping. Townes reports that several eminent physicists, among them Niels Bohr, John von Neumann, and Llewellyn Thomas, argued the maser violated Heisenberg's uncertainty principle and hence could not work. Others such as Isidor Rabi and Polykarp Kusch expected that it would be impractical and not worth the effort. In 1964, Charles H. Townes, Nikolay Basov, and Aleksandr Prokhorov shared the Nobel Prize in Physics, "for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser–laser principle". Laser In April 1957, Japanese engineer Jun-ichi Nishizawa proposed the concept of a "semiconductor optical maser" in a patent application. That same year, Charles H. Townes and Arthur Leonard Schawlow, then at Bell Labs, began a serious study of infrared "optical masers". As ideas developed, they abandoned infrared radiation to instead concentrate on visible light. Simultaneously, Columbia University graduate student Gordon Gould was working on a doctoral thesis about the energy levels of excited thallium. Gould and Townes met and talked about radiation emission as a general subject, but not the specific work they were pursuing. Later, in November 1957, Gould noted his ideas for how a "laser" could be made, including using an open resonator (an essential laser-device component).
His notebook included a diagram of an optically pumped laser. It also contained the first recorded use of the term "laser," an acronym for "light amplification by stimulated emission of radiation," along with suggestions for potential applications of the coherent light beams described. In 1958, Bell Labs filed a patent application for Schawlow and Townes's proposed optical maser; and Schawlow and Townes published a paper with their theoretical calculations in the Physical Review. That same year, Prokhorov independently proposed using an open resonator, the first published appearance of this idea. At a conference in 1959, Gordon Gould first published the acronym "LASER" in the paper The LASER, Light Amplification by Stimulated Emission of Radiation. Gould's intention was that different "-ASER" acronyms should be used for different parts of the spectrum: "XASER" for x-rays, "UVASER" for ultraviolet, "RASER" for radio-wave, etc. Instead, the term "LASER" ended up being used for all devices operating at wavelengths shorter than microwaves. Gould's notes included possible applications for a laser, such as optical telecommunications, spectrometry, interferometry, radar, and nuclear fusion. He continued developing the idea and filed a patent application in April 1959. The United States Patent and Trademark Office (USPTO) denied his application, and awarded a patent to Bell Labs, in 1960. That provoked a twenty-eight-year lawsuit, featuring scientific prestige and money as the stakes. Gould won his first minor patent in 1977, yet it was not until 1987 that he won the first significant patent lawsuit victory when a Federal judge ordered the USPTO to issue patents to Gould for the optically pumped and the gas discharge laser devices. The question of just how to assign credit for inventing the laser remains unresolved by historians. On May 16, 1960, Theodore H. Maiman operated the first functioning laser at Hughes Research Laboratories, Malibu, California, ahead of several research teams, including those of Townes, at Columbia University, Arthur L. Schawlow, at Bell Labs, and Gould, at the TRG (Technical Research Group) company. Maiman's functional laser used a flashlamp-pumped synthetic ruby crystal to produce red laser light at 694 nanometers wavelength. The device was only capable of pulsed operation, due to its three-level pumping design scheme. Later that year, the Iranian physicist Ali Javan, and William R. Bennett Jr., and Donald R. Herriott, constructed the first gas laser, using helium and neon that was capable of continuous operation in the infrared (U.S. Patent 3,149,290); later, Javan received the Albert Einstein World Award of Science in 1993. In 1962, Robert N. Hall demonstrated the first semiconductor laser, which was made of gallium arsenide and emitted in the near-infrared band of the spectrum at 850 nm. Later that year, Nick Holonyak Jr. demonstrated the first semiconductor laser with a visible emission. This first semiconductor laser could only be used in pulsed-beam operation, and when cooled to liquid nitrogen temperatures (77 K). In 1970, Zhores Alferov, in the USSR, and Izuo Hayashi and Morton Panish of Bell Labs also independently developed room-temperature, continual-operation diode lasers, using the heterojunction structure. 
Recent innovations Since the early period of laser history, laser research has produced a variety of improved and specialized laser types, optimized for different performance goals, including: new wavelength bands, maximum average output power, maximum peak pulse energy, maximum peak pulse power, minimum output pulse duration, minimum linewidth, maximum power efficiency, and minimum cost. Research on improving these aspects of lasers continues to this day. In 2015, researchers made a white laser, whose light is modulated by a synthetic nanosheet made out of zinc, cadmium, sulfur, and selenium that can emit red, green, and blue light in varying proportions, with each wavelength spanning 191 nm. In 2017, researchers at the Delft University of Technology demonstrated an AC Josephson junction microwave laser. Since the laser operates in the superconducting regime, it is more stable than other semiconductor-based lasers. The device has the potential for applications in quantum computing. In 2017, researchers at the Technical University of Munich demonstrated the smallest mode-locking laser capable of emitting pairs of phase-locked picosecond laser pulses with a repetition frequency up to 200 GHz. In 2017, researchers from the Physikalisch-Technische Bundesanstalt (PTB), together with US researchers from JILA, a joint institute of the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder, established a new world record by developing an erbium-doped fiber laser with a linewidth of only 10 millihertz. Types and operating principles Gas lasers Following the invention of the HeNe gas laser, many other gas discharges have been found to amplify light coherently. Gas lasers using many different gases have been built and used for many purposes. The helium–neon laser (HeNe) can operate at many different wavelengths, however, the vast majority are engineered to lase at 633 nm; these relatively low-cost but highly coherent lasers are extremely common in optical research and educational laboratories. Commercial carbon dioxide (CO2) lasers can emit many hundreds of watts in a single spatial mode which can be concentrated into a tiny spot. This emission is in the thermal infrared at 10.6 μm; such lasers are regularly used in industry for cutting and welding. The efficiency of a CO2 laser is unusually high: over 30%. Argon-ion lasers can operate at several lasing transitions between 351 and 528.7 nm. Depending on the optical design one or more of these transitions can be lasing simultaneously; the most commonly used lines are 458 nm, 488 nm and 514.5 nm. A nitrogen transverse electrical discharge in gas at atmospheric pressure (TEA) laser is an inexpensive gas laser, often home-built by hobbyists, which produces rather incoherent UV light at 337.1 nm. Metal ion lasers are gas lasers that generate deep ultraviolet wavelengths. Helium-silver (HeAg) 224 nm and neon-copper (NeCu) 248 nm are two examples. Like all low-pressure gas lasers, the gain media of these lasers have quite narrow oscillation linewidths, less than 3 GHz (0.5 picometers), making them candidates for use in fluorescence suppressed Raman spectroscopy. Lasing without maintaining the medium excited into a population inversion was demonstrated in 1992 in sodium gas and again in 1995 in rubidium gas by various international teams.
This was accomplished by using an external maser to induce "optical transparency" in the medium by introducing and destructively interfering the ground electron transitions between two paths so that the likelihood for the ground electrons to absorb any energy has been canceled. Chemical lasers Chemical lasers are powered by a chemical reaction permitting a large amount of energy to be released quickly. Such very high-power lasers are especially of interest to the military; however continuous wave chemical lasers at very high power levels, fed by streams of gasses, have been developed and have some industrial applications. As examples, in the hydrogen fluoride laser (2700–2900 nm) and the deuterium fluoride laser (3800 nm) the reaction is the combination of hydrogen or deuterium gas with combustion products of ethylene in nitrogen trifluoride. Excimer lasers Excimer lasers are a special sort of gas laser powered by an electric discharge in which the lasing medium is an excimer, or more precisely an exciplex in existing designs. These are molecules that can only exist with one atom in an excited electronic state. Once the molecule transfers its excitation energy to a photon, its atoms are no longer bound to each other, and the molecule disintegrates. This drastically reduces the population of the lower energy state thus greatly facilitating a population inversion. Excimers currently used are all noble gas compounds; noble gasses are chemically inert and can only form compounds while in an excited state. Excimer lasers typically operate at ultraviolet wavelengths, with major applications including semiconductor photolithography and LASIK eye surgery. Commonly used excimer molecules include ArF (emission at 193 nm), KrCl (222 nm), KrF (248 nm), XeCl (308 nm), and XeF (351 nm). The molecular fluorine laser, emitting at 157 nm in the vacuum ultraviolet, is sometimes referred to as an excimer laser; however, this appears to be a misnomer since F2 is a stable compound. Solid-state lasers Solid-state lasers use a crystalline or glass rod that is "doped" with ions that provide the required energy states. For example, the first working laser was a ruby laser, made from ruby (chromium-doped corundum). The population inversion is maintained in the dopant. These materials are pumped optically using a shorter wavelength than the lasing wavelength, often from a flash tube or another laser. The usage of the term "solid-state" in laser physics is narrower than in typical use. Semiconductor lasers (laser diodes) are typically not referred to as solid-state lasers. Neodymium is a common dopant in various solid-state laser crystals, including yttrium orthovanadate (Nd:YVO4), yttrium lithium fluoride (Nd:YLF) and yttrium aluminium garnet (Nd:YAG). All these lasers can produce high powers in the infrared spectrum at 1064 nm. They are used for cutting, welding, and marking of metals and other materials, and also in spectroscopy and for pumping dye lasers. These lasers are also commonly doubled, tripled or quadrupled in frequency to produce 532 nm (green, visible), 355 nm and 266 nm (UV) beams, respectively. Frequency-doubled diode-pumped solid-state (DPSS) lasers are used to make bright green laser pointers. Ytterbium, holmium, thulium, and erbium are other common "dopants" in solid-state lasers. Ytterbium is used in crystals such as Yb:YAG, Yb:KGW, Yb:KYW, Yb:SYS, Yb:BOYS, Yb:CaF2, typically operating around 1020–1050 nm. They are potentially very efficient and high-powered due to a small quantum defect. 
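The quantum defect mentioned above is simply the fraction of each pump photon's energy that cannot reappear in the laser output; a rough comparison, using typical pump and output wavelengths assumed here for illustration:

```python
# Sketch of the quantum-defect comparison; wavelengths are typical textbook
# values assumed for illustration, not tied to a specific device.
def quantum_defect(pump_nm, laser_nm):
    # photon energy scales as 1/wavelength, so the fractional energy lost is 1 - pump/laser
    return 1.0 - pump_nm / laser_nm

print(f"Yb:YAG, 940 nm pump -> 1030 nm output: {quantum_defect(940, 1030):.1%}")  # about 9%
print(f"Nd:YAG, 808 nm pump -> 1064 nm output: {quantum_defect(808, 1064):.1%}")  # about 24%
```

The smaller defect means less of the pump power ends up as heat in the crystal, which is one reason ytterbium-doped media scale well to high power.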
Extremely high powers in ultrashort pulses can be achieved with Yb:YAG. Holmium-doped YAG crystals emit at 2097 nm and form an efficient laser operating at infrared wavelengths strongly absorbed by water-bearing tissues. The Ho-YAG is usually operated in a pulsed mode and passed through optical fiber surgical devices to resurface joints, remove rot from teeth, vaporize cancers, and pulverize kidney and gall stones. Titanium-doped sapphire (Ti:sapphire) produces a highly tunable infrared laser, commonly used for spectroscopy. It is also notable for use as a mode-locked laser producing ultrashort pulses of extremely high peak power. Thermal limitations in solid-state lasers arise from unconverted pump power that heats the medium. This heat, when coupled with a high thermo-optic coefficient (dn/dT) can cause thermal lensing and reduce the quantum efficiency. Diode-pumped thin disk lasers overcome these issues by having a gain medium that is much thinner than the diameter of the pump beam. This allows for a more uniform temperature in the material. Thin disk lasers have been shown to produce beams of up to one kilowatt. Fiber lasers Solid-state lasers or laser amplifiers where the light is guided due to the total internal reflection in a single mode optical fiber are instead called fiber lasers. Guiding of light allows extremely long gain regions, providing good cooling conditions; fibers have a high surface area to volume ratio which allows efficient cooling. In addition, the fiber's waveguiding properties tend to reduce thermal distortion of the beam. Erbium and ytterbium ions are common active species in such lasers. Quite often, the fiber laser is designed as a double-clad fiber. This type of fiber consists of a fiber core, an inner cladding, and an outer cladding. The index of the three concentric layers is chosen so that the fiber core acts as a single-mode fiber for the laser emission while the outer cladding acts as a highly multimode core for the pump laser. This lets the pump propagate a large amount of power into and through the active inner core region while still having a high numerical aperture (NA) to have easy launching conditions. Pump light can be used more efficiently by creating a fiber disk laser, or a stack of such lasers. Fiber lasers, like other optical media, can suffer from the effects of photodarkening when they are exposed to radiation of certain wavelengths. In particular, this can lead to degradation of the material and loss in laser functionality over time. The exact causes and effects of this phenomenon vary from material to material, although it often involves the formation of color centers. Photonic crystal lasers Photonic crystal lasers are lasers based on nano-structures that provide the mode confinement and the density of optical states (DOS) structure required for the feedback to take place. They are typical micrometer-sized and tunable on the bands of the photonic crystals. Semiconductor lasers Semiconductor lasers are diodes that are electrically pumped. Recombination of electrons and holes created by the applied current introduces optical gain. Reflection from the ends of the crystal forms an optical resonator, although the resonator can be external to the semiconductor in some designs. Commercial laser diodes emit at wavelengths from 375 nm to 3500 nm. Low to medium power laser diodes are used in laser pointers, laser printers and CD/DVD players. Laser diodes are also frequently used to optically pump other lasers with high efficiency. 
The highest-power industrial laser diodes, with power of up to 20 kW, are used in industry for cutting and welding. External-cavity semiconductor lasers have a semiconductor active medium in a larger cavity. These devices can generate high power outputs with good beam quality, wavelength-tunable narrow-linewidth radiation, or ultrashort laser pulses. In 2012, Nichia and OSRAM developed and manufactured commercial high-power green laser diodes (515/520 nm), which compete with traditional diode-pumped solid-state lasers. Vertical cavity surface-emitting lasers (VCSELs) are semiconductor lasers whose emission direction is perpendicular to the surface of the wafer. VCSEL devices typically have a more circular output beam than conventional laser diodes. As of 2005, only 850 nm VCSELs are widely available, with 1300 nm VCSELs beginning to be commercialized and 1550 nm devices being an area of research. VECSELs are external-cavity VCSELs. Quantum cascade lasers are semiconductor lasers that have an active transition between energy sub-bands of an electron in a structure containing several quantum wells. The development of a silicon laser is important in the field of optical computing. Silicon is the material of choice for integrated circuits, and so electronic and silicon photonic components (such as optical interconnects) could be fabricated on the same chip. Unfortunately, silicon is a difficult lasing material to deal with, since it has certain properties which block lasing. However, recently teams have produced silicon lasers through methods such as fabricating the lasing material from silicon and other semiconductor materials, such as indium(III) phosphide or gallium(III) arsenide, materials that allow coherent light to be produced from silicon. These are called hybrid silicon lasers. Recent developments have also shown the use of monolithically integrated nanowire lasers directly on silicon for optical interconnects, paving the way for chip-level applications. These heterostructure nanowire lasers capable of optical interconnects in silicon are also capable of emitting pairs of phase-locked picosecond pulses with a repetition frequency up to 200 GHz, allowing for on-chip optical signal processing. Another type is a Raman laser, which takes advantage of Raman scattering to produce a laser from materials such as silicon. Dye lasers Dye lasers use an organic dye as the gain medium. The wide gain spectrum of available dyes, or mixtures of dyes, allows these lasers to be highly tunable, or to produce very short-duration pulses (on the order of a few femtoseconds). Although these tunable lasers are mainly known in their liquid form, researchers have also demonstrated narrow-linewidth tunable emission in dispersive oscillator configurations incorporating solid-state dye gain media. In their most prevalent form, these solid-state dye lasers use dye-doped polymers as laser media. Bubble lasers are dye lasers that use a bubble as the optical resonator. Whispering gallery modes in the bubble produce an output spectrum composed of hundreds of evenly spaced peaks: a frequency comb. The spacing of the whispering gallery modes is directly related to the bubble circumference, allowing bubble lasers to be used as highly sensitive pressure sensors. Free-electron lasers Free-electron lasers (FEL) generate coherent, high-power radiation that is widely tunable, currently ranging in wavelength from microwaves through terahertz radiation and infrared to the visible spectrum, to soft X-rays. 
They have the widest frequency range of any laser type. While FEL beams share the same optical traits as other lasers, such as coherent radiation, FEL operation is quite different. Unlike gas, liquid, or solid-state lasers, which rely on bound atomic or molecular states, FELs use a relativistic electron beam as the lasing medium, hence the term free-electron. Exotic media The pursuit of a high-quantum-energy laser using transitions between isomeric states of an atomic nucleus has been the subject of wide-ranging academic research since the early 1970s. Much of this is summarized in three review articles. This research has been international in scope but mainly based in the former Soviet Union and the United States. While many scientists remain optimistic that a breakthrough is near, an operational gamma-ray laser is yet to be realized. Some of the early studies were directed toward short pulses of neutrons exciting the upper isomer state in a solid so the gamma-ray transition could benefit from the line-narrowing of Mössbauer effect. In conjunction, several advantages were expected from two-stage pumping of a three-level system. It was conjectured that the nucleus of an atom embedded in the near field of a laser-driven coherently-oscillating electron cloud would experience a larger dipole field than that of the driving laser. Furthermore, the nonlinearity of the oscillating cloud would produce both spatial and temporal harmonics, so nuclear transitions of higher multipolarity could also be driven at multiples of the laser frequency. In September 2007, the BBC News reported that there was speculation about the possibility of using positronium annihilation to drive a very powerful gamma ray laser. David Cassidy of the University of California, Riverside proposed that a single such laser could be used to ignite a nuclear fusion reaction, replacing the banks of hundreds of lasers currently employed in inertial confinement fusion experiments. Space-based X-ray lasers pumped by nuclear explosions have also been proposed as antimissile weapons. Such devices would be one-shot weapons. Living cells have been used to produce laser light. The cells were genetically engineered to produce green fluorescent protein, which served as the laser's gain medium. The cells were then placed between two 20-micrometer-wide mirrors, which acted as the laser cavity. When the cell was illuminated with blue light, it emitted intensely directed green laser light. Natural lasers Like astrophysical masers, irradiated planetary or stellar gases may amplify light producing a natural laser. Mars, Venus, and MWC 349 exhibit this phenomenon. Uses When the laser was first invented, it was called "a solution looking for a problem", although Gould noted numerous possible applications in his notebook and patent applications. Since then, they have become ubiquitous, finding utility in thousands of highly varied applications in every section of modern society, including consumer electronics, information technology, science, medicine, industry, law enforcement, entertainment, and the military. Fiber-optic communication relies on multiplexed lasers in dense wave-division multiplexing (WDM) systems to transmit large amounts of data over long distances. The first widely noticeable use of lasers was the supermarket barcode scanner, introduced in 1974. 
The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become common, commercialized in 1982, followed shortly by laser printers. Some other uses are: Communications: besides fiber-optic communication, lasers are used for free-space optical communication, including laser communication in space Medicine: see below Industry: cutting including converting thin materials, welding, material heat treatment, marking parts (engraving and bonding), additive manufacturing or 3D printing processes such as selective laser sintering and selective laser melting, laser metal deposition, and non-contact measurement of parts and 3D scanning, and laser cleaning. Military: marking targets, guiding munitions, missile defense, electro-optical countermeasures (EOCM), lidar, blinding troops, firearms sights. See below Law enforcement: LIDAR traffic enforcement. Lasers are used for latent fingerprint detection in the forensic identification field Research: spectroscopy, laser ablation, laser annealing, laser scattering, laser interferometry, lidar, laser capture microdissection, fluorescence microscopy, metrology, laser cooling Commercial products: laser printers, barcode scanners, thermometers, laser pointers, holograms, bubblegrams Entertainment: optical discs, laser lighting displays, laser turntables. Informational markings: Laser lighting display technology can be used to project informational markings onto surfaces such as playing fields, roads, runways, or warehouse floors. In 2004, excluding diode lasers, approximately 131,000 lasers were sold ,with a value of  billion. In the same year, approximately 733 million diode lasers, valued at  billion, were sold. Global Industrial laser sales in 2023 reached $21.85 billion. In medicine Lasers have many uses in medicine, including laser surgery (particularly eye surgery), laser healing (photobiomodulation therapy), kidney stone treatment, ophthalmoscopy, and cosmetic skin treatments such as acne treatment, cellulite and striae reduction, and hair removal. Lasers are used to treat cancer by shrinking or destroying tumors or precancerous growths. They are most commonly used to treat superficial cancers that are on the surface of the body or the lining of internal organs. They are used to treat basal cell skin cancer and the very early stages of others like cervical, penile, vaginal, vulvar, and non-small cell lung cancer. Laser therapy is often combined with other treatments, such as surgery, chemotherapy, or radiation therapy. Laser-induced interstitial thermotherapy (LITT), or interstitial laser photocoagulation, uses lasers to treat some cancers using hyperthermia, which uses heat to shrink tumors by damaging or killing cancer cells. Lasers are more precise than traditional surgery methods and cause less damage, pain, bleeding, swelling, and scarring. A disadvantage is that surgeons must acquire specialized training, and thus it will likely be more expensive than other treatments. As weapons A laser weapon is a type of directed-energy weapon that uses lasers to inflict damage. Whether they will be deployed as practical, high-performance military weapons remains to be seen. One of the major issues with laser weapons is atmospheric thermal blooming, which is still largely unsolved. This issue is exacerbated when there is fog, smoke, dust, rain, snow, smog, foam, or purposely dispersed obscurant chemicals present. 
The United States Navy has tested the very short range (1 mile), 30-kW Laser Weapon System or LaWS to be used against targets like small UAVs, rocket-propelled grenades, and visible motorboat or helicopter engines. It has been described as "six welding lasers strapped together." A 60 kW system, HELIOS, is being developed for destroyer-class ships. Lasers can be used as incapacitating non-lethal weapons. They can cause temporary or permanent vision loss when directed at the eyes. Even lasers with a power output of less than one watt can cause immediate and permanent vision loss under certain conditions, making them potentially non-lethal but incapacitating weapons. The use of such lasers is morally controversial due to the extreme handicap that laser-induced blindness represents. The Protocol on Blinding Laser Weapons bans the use of weapons designed to cause permanent blindness. Weapons designed to cause temporary blindness, known as dazzlers, are used by military and sometimes law enforcement organizations. Hobbies In recent years, some hobbyists have taken an interest in lasers. Lasers used by hobbyists are generally of class IIIa or IIIb, although some have made their own class IV types. However, due to the cost and potential dangers, this is an uncommon hobby. Some hobbyists salvage laser diodes from broken DVD players (red), Blu-ray players (violet), or even higher power laser diodes from CD or DVD burners. Hobbyists have also used surplus lasers taken from retired military applications and modified them for holography. Pulsed ruby and YAG lasers work well for this application. Examples by power Different applications need lasers with different output powers. Lasers that produce a continuous beam or a series of short pulses can be compared on the basis of their average power. Lasers that produce pulses can also be characterized based on the peak power of each pulse. The peak power of a pulsed laser is many orders of magnitude greater than its average power. The average output power is always less than the power consumed. Examples of pulsed systems with high peak power: 700 TW (700×10^12 W): National Ignition Facility, a 192-beam, 1.8-megajoule laser system adjoining a 10-meter-diameter target chamber. 10 PW (10×10^15 W): world's most powerful laser as of 2019, located at the ELI-NP facility in Măgurele, Romania. Safety Even the first laser was recognized as being potentially dangerous. Theodore Maiman characterized the first laser as having the power of one "Gillette", as it could burn through one Gillette razor blade. Today, it is accepted that even low-power lasers with only a few milliwatts of output power can be hazardous to human eyesight when the beam hits the eye directly or after reflection from a shiny surface. At wavelengths which the cornea and the lens can focus well, the coherence and low divergence of laser light means that it can be focused by the eye into an extremely small spot on the retina, resulting in localized burning and permanent damage in seconds or even less time. Lasers are usually labeled with a safety class number, which identifies how dangerous the laser is: Class 1 is inherently safe, usually because the light is contained in an enclosure, for example in CD players. Class 2 is safe during normal use; the blink reflex of the eye will prevent damage. Usually up to 1 mW power, for example, laser pointers. Class 3R (formerly IIIa) lasers are usually up to 5 mW and involve a small risk of eye damage within the time of the blink reflex.
Staring into such a beam for several seconds is likely to cause damage to a spot on the retina. Class 3B lasers (5–499 mW) can cause immediate eye damage upon exposure. Class 4 lasers (≥ 500 mW) can burn skin, and in some cases, even scattered light from these lasers can cause eye and/or skin damage. Many industrial and scientific lasers are in this class. The indicated powers are for visible-light, continuous-wave lasers. For pulsed lasers and invisible wavelengths, other power limits apply. People working with class 3B and class 4 lasers can protect their eyes with safety goggles which are designed to absorb light of a particular wavelength. Infrared lasers with wavelengths longer than about 1.4 micrometers are often referred to as "eye-safe", because the cornea tends to absorb light at these wavelengths, protecting the retina from damage. The label "eye-safe" can be misleading, however, as it applies only to relatively low-power continuous wave beams; a high-power or Q-switched laser at these wavelengths can burn the cornea, causing severe eye damage, and even moderate-power lasers can injure the eye. Lasers can be a hazard to both civil and military aviation, due to the potential to temporarily distract or blind pilots. See Lasers and aviation safety for more on this topic. Cameras based on charge-coupled devices may be more sensitive to laser damage than biological eyes. See also Coherent perfect absorber Homogeneous broadening Laser linewidth List of laser articles List of light sources Nanolaser Sound amplification by stimulated emission of radiation Spaser Fabry–Pérot interferometer Ultrashort pulse laser References Further reading Books Bertolotti, Mario (1999, trans. 2004). The History of the Laser. Institute of Physics. Bromberg, Joan Lisa (1991). The Laser in America, 1950–1970. MIT Press. Csele, Mark (2004). Fundamentals of Light Sources and Lasers. Wiley. Koechner, Walter (1992). Solid-State Laser Engineering. 3rd ed. Springer-Verlag. Siegman, Anthony E. (1986). Lasers. University Science Books. Silfvast, William T. (1996). Laser Fundamentals. Cambridge University Press. Svelto, Orazio (1998). Principles of Lasers. 4th ed. Trans. David Hanna. Springer. Wilson, J. & Hawkes, J.F.B. (1987). Lasers: Principles and Applications. Prentice Hall International Series in Optoelectronics, Prentice Hall. Yariv, Amnon (1989). Quantum Electronics. 3rd ed. Wiley. Periodicals Applied Physics B: Lasers and Optics IEEE Journal of Lightwave Technology IEEE Journal of Quantum Electronics IEEE Journal of Selected Topics in Quantum Electronics IEEE Photonics Technology Letters Journal of the Optical Society of America B: Optical Physics Laser Focus World Optics Letters Photonics Spectra External links Encyclopedia of laser physics and technology by Rüdiger Paschotta A Practical Guide to Lasers for Experimenters and Hobbyists by Samuel M. Goldwasser Homebuilt Lasers Page by Professor Mark Csele Powerful laser is 'brightest light in the universe' The world's most powerful laser as of 2008 might create supernova-like shock waves and possibly even antimatter "Laser Fundamentals" an online course by F. Balembois and S. Forget. Northrop Grumman's Press Release on the Firestrike 15 kW tactical laser product Website on Lasers 50th anniversary by APS, OSA, SPIE Advancing the Laser anniversary site by SPIE: Video interviews, open-access articles, posters, DVDs Bright Idea: The First Lasers history of the invention, with audio interview clips.
Free software for Simulation of random laser dynamics Video Demonstrations in Lasers and Optics Produced by the Massachusetts Institute of Technology (MIT). Real-time effects are demonstrated in a way that would be difficult to see in a classroom setting. MIT Video Lecture: Understanding Lasers and Fiberoptics Virtual Museum of Laser History, from the touring exhibit by SPIE website with animations, applications and research about laser and other quantum based phenomena Universite Paris Sud 1960 introductions American inventions Articles containing video clips Photonics Quantum optics Russian inventions Soviet inventions
Laser
[ "Physics" ]
12,256
[ "Quantum optics", "Quantum mechanics" ]
17,561
https://en.wikipedia.org/wiki/Lithium
Lithium is a chemical element; it has symbol Li and atomic number 3. It is a soft, silvery-white alkali metal. Under standard conditions, it is the least dense metal and the least dense solid element. Like all alkali metals, lithium is highly reactive and flammable, and must be stored in vacuum, inert atmosphere, or inert liquid such as purified kerosene or mineral oil. It exhibits a metallic luster. It corrodes quickly in air to a dull silvery gray, then black tarnish. It does not occur freely in nature, but occurs mainly as pegmatitic minerals, which were once the main source of lithium. Due to its solubility as an ion, it is present in ocean water and is commonly obtained from brines. Lithium metal is isolated electrolytically from a mixture of lithium chloride and potassium chloride. The nucleus of the lithium atom verges on instability, since the two stable lithium isotopes found in nature have among the lowest binding energies per nucleon of all stable nuclides. Because of its relative nuclear instability, lithium is less common in the solar system than 25 of the first 32 chemical elements even though its nuclei are very light: it is an exception to the trend that heavier nuclei are less common. For related reasons, lithium has important uses in nuclear physics. The transmutation of lithium atoms to helium in 1932 was the first fully human-made nuclear reaction, and lithium deuteride serves as a fusion fuel in staged thermonuclear weapons. Lithium and its compounds have several industrial applications, including heat-resistant glass and ceramics, lithium grease lubricants, flux additives for iron, steel and aluminium production, lithium metal batteries, and lithium-ion batteries. These uses consume more than three-quarters of lithium production. Lithium is present in biological systems in trace amounts. It has no established metabolic function in humans. Lithium-based drugs are useful as a mood stabilizer and antidepressant in the treatment of mental illness such as bipolar disorder. Properties Atomic and physical The alkali metals are also called the lithium family, after its leading element. Like the other alkali metals (which are sodium (Na), potassium (K), rubidium (Rb), caesium (Cs), and francium (Fr)), lithium has a single valence electron that, in the presence of solvents, is easily released to form Li+. Because of this, lithium is a good conductor of heat and electricity as well as a highly reactive element, though it is the least reactive of the alkali metals. Lithium's lower reactivity is due to the proximity of its valence electron to its nucleus (the remaining two electrons are in the 1s orbital, much lower in energy, and do not participate in chemical bonds). Molten lithium is significantly more reactive than its solid form. Lithium metal is soft enough to be cut with a knife. It is silvery-white. In air it oxidizes to lithium oxide. Its melting point and boiling point are each the highest of all the alkali metals, while its density of 0.534 g/cm3 is the lowest. Lithium has a very low density (0.534 g/cm3), comparable with pine wood. It is the least dense of all elements that are solids at room temperature; the next lightest solid element (potassium, at 0.862 g/cm3) is more than 60% denser. Apart from helium and hydrogen, as a solid it is less dense than any other element as a liquid, being only two-thirds as dense as liquid nitrogen (0.808 g/cm3).
Lithium can float on the lightest hydrocarbon oils and is one of only three metals that can float on water, the other two being sodium and potassium. Lithium's coefficient of thermal expansion is twice that of aluminium and almost four times that of iron. Lithium is superconductive below 400 μK at standard pressure and at higher temperatures (more than 9 K) at very high pressures (>20 GPa). At temperatures below 70 K, lithium, like sodium, undergoes diffusionless phase change transformations. At 4.2 K it has a rhombohedral crystal system (with a nine-layer repeat spacing); at higher temperatures it transforms to face-centered cubic and then body-centered cubic. At liquid-helium temperatures (4 K) the rhombohedral structure is prevalent. Multiple allotropic forms have been identified for lithium at high pressures. Lithium has a mass specific heat capacity of 3.58 kilojoules per kilogram-kelvin, the highest of all solids. Because of this, lithium metal is often used in coolants for heat transfer applications. Isotopes Naturally occurring lithium is composed of two stable isotopes, 6Li and 7Li, the latter being the more abundant (95.15% natural abundance). Both natural isotopes have anomalously low nuclear binding energy per nucleon (compared to the neighboring elements on the periodic table, helium and beryllium); lithium is the only low numbered element that can produce net energy through nuclear fission. The two lithium nuclei have lower binding energies per nucleon than any other stable nuclides other than hydrogen-1, deuterium and helium-3. As a result of this, though very light in atomic weight, lithium is less common in the Solar System than 25 of the first 32 chemical elements. Seven radioisotopes have been characterized, the most stable being 8Li with a half-life of 838 ms and 9Li with a half-life of 178 ms. All of the remaining radioactive isotopes have half-lives that are shorter than 8.6 ms. The shortest-lived isotope of lithium is 4Li, which decays through proton emission and has a half-life of 7.6 × 10−23 s. The 6Li isotope is one of only five stable nuclides to have both an odd number of protons and an odd number of neutrons, the other four stable odd-odd nuclides being hydrogen-2, boron-10, nitrogen-14, and tantalum-180m. 7Li is one of the primordial elements (or, more properly, primordial nuclides) produced in Big Bang nucleosynthesis. A small amount of both 6Li and 7Li are produced in stars during stellar nucleosynthesis, but it is further "burned" as fast as produced. 7Li can also be generated in carbon stars. Additional small amounts of both 6Li and 7Li may be generated from solar wind, cosmic rays hitting heavier atoms, and from early solar system 7Be radioactive decay. Lithium isotopes fractionate substantially during a wide variety of natural processes, including mineral formation (chemical precipitation), metabolism, and ion exchange. Lithium ions substitute for magnesium and iron in octahedral sites in clay minerals, where 6Li is preferred to 7Li, resulting in enrichment of the light isotope in processes of hyperfiltration and rock alteration. The exotic 11Li is known to exhibit a neutron halo, with 2 neutrons orbiting around its nucleus of 3 protons and 6 neutrons. The process known as laser isotope separation can be used to separate lithium isotopes, in particular 7Li from 6Li. 
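Because the two stable isotopes differ in mass by roughly one atomic mass unit, even modest fractionation or separation of the kind just described noticeably shifts the average mass of a lithium sample. The short sketch below is an editor-added illustration, not part of the article: the isotopic masses are rounded literature values, and the two abundance figures are assumptions chosen only to show the size of the effect.

# Sketch: how the average atomic weight of lithium depends on the 6Li/7Li ratio.
# Rounded isotopic masses; the abundance figures are illustrative assumptions only.
M_LI6 = 6.0151  # mass of lithium-6 in unified atomic mass units (u)
M_LI7 = 7.0160  # mass of lithium-7 in u

def average_atomic_weight(fraction_li6):
    """Abundance-weighted mean mass for a given 6Li mole fraction."""
    return fraction_li6 * M_LI6 + (1.0 - fraction_li6) * M_LI7

for label, x6 in [("natural-like (7.5% 6Li)", 0.075), ("6Li-depleted (2% 6Li)", 0.020)]:
    print(label, round(average_atomic_weight(x6), 4), "u")

Depleting 6Li from about 7.5% to 2% raises the computed average by roughly 0.06 u, which is the scale of shift that isotopically altered commercial sources can introduce.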
Nuclear weapons manufacture and other nuclear physics applications are a major source of artificial lithium fractionation, with the light isotope 6Li being retained by industry and military stockpiles to such an extent that it has caused slight but measurable change in the 6Li to 7Li ratios in natural sources, such as rivers. This has led to unusual uncertainty in the standardized atomic weight of lithium, since this quantity depends on the natural abundance ratios of these naturally-occurring stable lithium isotopes, as they are available in commercial lithium mineral sources. Both stable isotopes of lithium can be laser cooled and were used to produce the first quantum degenerate Bose–Fermi mixture. Occurrence Astronomical Although it was synthesized in the Big Bang, lithium (together with beryllium and boron) is markedly less abundant in the universe than other elements. This is a result of the comparatively low stellar temperatures necessary to destroy lithium, along with a lack of common processes to produce it. According to modern cosmological theory, lithium—in both stable isotopes (lithium-6 and lithium-7)—was one of the three elements synthesized in the Big Bang. Though the amount of lithium generated in Big Bang nucleosynthesis is dependent upon the number of photons per baryon, for accepted values the lithium abundance can be calculated, and there is a "cosmological lithium discrepancy" in the universe: older stars seem to have less lithium than they should, and some younger stars have much more. The lack of lithium in older stars is apparently caused by the "mixing" of lithium into the interior of stars, where it is destroyed, while lithium is produced in younger stars. Although it transmutes into two atoms of helium due to collision with a proton at temperatures above 2.4 million degrees Celsius (most stars easily attain this temperature in their interiors), lithium is more abundant than computations would predict in later-generation stars. Lithium is also found in brown dwarf substellar objects and certain anomalous orange stars. Because lithium is present in cooler, less-massive brown dwarfs, but is destroyed in hotter red dwarf stars, its presence in the stars' spectra can be used in the "lithium test" to differentiate the two, as both are smaller than the Sun. Certain orange stars can also contain a high concentration of lithium. Those orange stars found to have a higher than usual concentration of lithium (such as Centaurus X-4) orbit massive objects—neutron stars or black holes—whose gravity evidently pulls heavier lithium to the surface of a hydrogen-helium star, causing more lithium to be observed. On 27 May 2020, astronomers reported that classical nova explosions are galactic producers of lithium-7. Terrestrial Although lithium is widely distributed on Earth, it does not naturally occur in elemental form due to its high reactivity. The total lithium content of seawater is very large and is estimated as 230 billion tonnes, where the element exists at a relatively constant concentration of 0.14 to 0.25 parts per million (ppm), or 25 micromolar; higher concentrations approaching 7 ppm are found near hydrothermal vents. Estimates for the Earth's crustal content range from 20 to 70 ppm by weight. In keeping with its name, lithium forms a minor part of igneous rocks, with the largest concentrations in granites. Granitic pegmatites also provide the greatest abundance of lithium-containing minerals, with spodumene and petalite being the most commercially viable sources. 
Another significant mineral of lithium is lepidolite, which is now an obsolete name for a series formed by polylithionite and trilithionite. Another source for lithium is hectorite clay, the only active development of which is through the Western Lithium Corporation in the United States. At 20 mg lithium per kg of Earth's crust, lithium is the 31st most abundant element. According to the Handbook of Lithium and Natural Calcium, "Lithium is a comparatively rare element, although it is found in many rocks and some brines, but always in very low concentrations. There are a fairly large number of both lithium mineral and brine deposits but only comparatively few of them are of actual or potential commercial value. Many are very small, others are too low in grade." Chile is estimated (2020) to have the largest reserves by far (9.2 million tonnes), and Australia the highest annual production (40,000 tonnes). One of the largest reserve bases of lithium is in the Salar de Uyuni area of Bolivia, which has 5.4 million tonnes. Other major suppliers include Australia, Argentina and China. As of 2015, the Czech Geological Survey considered the entire Ore Mountains in the Czech Republic as a lithium province. Five deposits are registered; one of them is considered a potentially economical deposit, with 160,000 tonnes of lithium. In December 2019, the Finnish mining company Keliber Oy reported that its Rapasaari lithium deposit has estimated proven and probable ore reserves of 5.280 million tonnes. In June 2010, The New York Times reported that American geologists were conducting ground surveys on dry salt lakes in western Afghanistan, believing that large deposits of lithium are located there. These estimates are "based principally on old data, which was gathered mainly by the Soviets during their occupation of Afghanistan from 1979–1989". The Department of Defense estimated the lithium reserves in Afghanistan to be comparable to those of Bolivia and dubbed the country a potential "Saudi Arabia of lithium". In Cornwall, England, the presence of brine rich in lithium was well known due to the region's historic mining industry, and private investors have conducted tests to investigate potential lithium extraction in this area. Biological Lithium is found in trace amounts in numerous plants, plankton, and invertebrates, at concentrations of 69 to 5,760 parts per billion (ppb). In vertebrates the concentration is slightly lower, and nearly all vertebrate tissue and body fluids contain lithium ranging from 21 to 763 ppb. Marine organisms tend to bioaccumulate lithium more than terrestrial organisms. Whether lithium has a physiological role in any of these organisms is unknown. Lithium concentrations in human tissue average about 24 ppb (4 ppb in blood, and 1.3 ppm in bone). Lithium is easily absorbed by plants, and lithium concentration in plant tissue is typically around 1 ppm. Some plant families bioaccumulate more lithium than others. Dry weight lithium concentrations for members of the family Solanaceae (which includes potatoes and tomatoes), for instance, can be as high as 30 ppm, while this can be as low as 0.05 ppb for corn grains. Studies of lithium concentrations in mineral-rich soil give ranges between around 0.1 and 50−100 ppm, with some concentrations as high as 100−400 ppm, although it is unlikely that all of it is available for uptake by plants. Lithium accumulation does not appear to affect the essential nutrient composition of plants. 
Tolerance to lithium varies by plant species and typically parallels sodium tolerance; maize and Rhodes grass, for example, are highly tolerant to lithium injury while avocado and soybean are very sensitive. Similarly, lithium at concentrations of 5 ppm reduces seed germination in some species (e.g. Asian rice and chickpea) but not in others (e.g. barley and wheat). Many of lithium's major biological effects can be explained by its competition with other ions. The monovalent lithium ion competes with other ions such as sodium (immediately below lithium on the periodic table), which like lithium is also a monovalent alkali metal. Lithium also competes with bivalent magnesium ions, whose ionic radius (86 pm) is approximately that of the lithium ion (90 pm). Mechanisms that transport sodium across cellular membranes also transport lithium. For instance, sodium channels (both voltage-gated and epithelial) are particularly major pathways of entry for lithium. Lithium ions can also permeate through ligand-gated ion channels as well as cross both nuclear and mitochondrial membranes. Like sodium, lithium can enter and partially block (although not permeate) potassium channels and calcium channels. The biological effects of lithium are many and varied but its mechanisms of action are only partially understood. For instance, studies of lithium-treated patients with bipolar disorder show that, among many other effects, lithium partially reverses telomere shortening in these patients and also increases mitochondrial function, although how lithium produces these pharmacological effects is not understood. Even the exact mechanisms involved in lithium toxicity are not fully understood. History Petalite (LiAlSi4O10) was discovered in 1800 by the Brazilian chemist and statesman José Bonifácio de Andrada e Silva in a mine on the island of Utö, Sweden. However, it was not until 1817 that Johan August Arfwedson, then working in the laboratory of the chemist Jöns Jakob Berzelius, detected the presence of a new element while analyzing petalite ore. This element formed compounds similar to those of sodium and potassium, though its carbonate and hydroxide were less soluble in water and less alkaline. Berzelius gave the alkaline material the name "lithion/lithina", from the Greek word λιθoς (transliterated as lithos, meaning "stone"), to reflect its discovery in a solid mineral, as opposed to potassium, which had been discovered in plant ashes, and sodium, which was known partly for its high abundance in animal blood. He named the new element "lithium". Arfwedson later showed that this same element was present in the minerals spodumene and lepidolite. In 1818, Christian Gmelin was the first to observe that lithium salts give a bright red color to flame. However, both Arfwedson and Gmelin tried and failed to isolate the pure element from its salts. It was not isolated until 1821, when William Thomas Brande obtained it by electrolysis of lithium oxide, a process that had previously been employed by the chemist Sir Humphry Davy to isolate the alkali metals potassium and sodium. Brande also described some pure salts of lithium, such as the chloride, and, estimating that lithia (lithium oxide) contained about 55% metal, estimated the atomic weight of lithium to be around 9.8 g/mol (modern value ~6.94 g/mol). In 1855, larger quantities of lithium were produced through the electrolysis of lithium chloride by Robert Bunsen and Augustus Matthiessen. 
The discovery of this procedure led to commercial production of lithium in 1923 by the German company Metallgesellschaft AG, which performed an electrolysis of a liquid mixture of lithium chloride and potassium chloride. Australian psychiatrist John Cade is credited with reintroducing and popularizing the use of lithium to treat mania in 1949. Shortly after, throughout the mid 20th century, lithium's mood stabilizing applicability for mania and depression took off in Europe and the United States. The production and use of lithium underwent several drastic changes in history. The first major application of lithium was in high-temperature lithium greases for aircraft engines and similar applications in World War II and shortly after. This use was supported by the fact that lithium-based soaps have a higher melting point than other alkali soaps, and are less corrosive than calcium based soaps. The small demand for lithium soaps and lubricating greases was supported by several small mining operations, mostly in the US. The demand for lithium increased dramatically during the Cold War with the production of nuclear fusion weapons. Both lithium-6 and lithium-7 produce tritium when irradiated by neutrons, and are thus useful for the production of tritium by itself, as well as a form of solid fusion fuel used inside hydrogen bombs in the form of lithium deuteride. The US became the prime producer of lithium between the late 1950s and the mid-1980s. At the end, the stockpile of lithium was roughly 42,000 tonnes of lithium hydroxide. The stockpiled lithium was depleted in lithium-6 by 75%, which was enough to affect the measured atomic weight of lithium in many standardized chemicals, and even the atomic weight of lithium in some "natural sources" of lithium ion which had been "contaminated" by lithium salts discharged from isotope separation facilities, which had found its way into ground water. Lithium is used to decrease the melting temperature of glass and to improve the melting behavior of aluminium oxide in the Hall-Héroult process. These two uses dominated the market until the middle of the 1990s. After the end of the nuclear arms race, the demand for lithium decreased and the sale of department of energy stockpiles on the open market further reduced prices. In the mid-1990s, several companies started to isolate lithium from brine which proved to be a less expensive option than underground or open-pit mining. Most of the mines closed or shifted their focus to other materials because only the ore from zoned pegmatites could be mined for a competitive price. For example, the US mines near Kings Mountain, North Carolina, closed before the beginning of the 21st century. The development of lithium-ion batteries increased the demand for lithium and became the dominant use in 2007. With the surge of lithium demand in batteries in the 2000s, new companies have expanded brine isolation efforts to meet the rising demand. Chemistry Of lithium metal Lithium reacts with water easily, but with noticeably less vigor than other alkali metals. The reaction forms hydrogen gas and lithium hydroxide. When placed over a flame, lithium compounds give off a striking crimson color, but when the metal burns strongly, the flame becomes a brilliant silver. Lithium will ignite and burn in oxygen when exposed to water or water vapor. 
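As a rough quantitative companion to the water reaction just described (2 Li + 2 H2O → 2 LiOH + H2), the sketch below estimates the hydrogen released per gram of metal. It is an editor-added, back-of-the-envelope illustration assuming complete reaction and ideal-gas behaviour, not text from the article.

# Hydrogen released by lithium in water: 2 Li + 2 H2O -> 2 LiOH + H2.
# Idealized estimate: complete reaction, ideal-gas molar volume near room temperature.
M_LI = 6.94          # g/mol, molar mass of lithium
MOLAR_VOLUME = 24.5  # L/mol for an ideal gas at roughly 25 degrees C and 1 atm

def hydrogen_volume_litres(grams_lithium):
    moles_li = grams_lithium / M_LI
    moles_h2 = moles_li / 2.0        # two Li atoms yield one H2 molecule
    return moles_h2 * MOLAR_VOLUME

print(round(hydrogen_volume_litres(1.0), 2), "L of H2 per gram of Li")  # about 1.8 L

Each gram of lithium therefore liberates on the order of 1.8 litres of hydrogen gas in this idealized picture, along with the heat of a strongly exothermic reaction.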
In moist air, lithium rapidly tarnishes to form a black coating of lithium hydroxide (LiOH and LiOH·H2O), lithium nitride (Li3N) and lithium carbonate (Li2CO3, the result of a secondary reaction between LiOH and CO2). Lithium is one of the few metals that react with nitrogen gas. Because of its reactivity with water, and especially nitrogen, lithium metal is usually stored in a hydrocarbon sealant, often petroleum jelly. Although the heavier alkali metals can be stored under mineral oil, lithium is not dense enough to fully submerge itself in these liquids. Lithium has a diagonal relationship with magnesium, an element of similar atomic and ionic radius. Chemical resemblances between the two metals include the formation of a nitride by reaction with N2, the formation of an oxide (Li2O) and peroxide (Li2O2) when burnt in O2, salts with similar solubilities, and thermal instability of the carbonates and nitrides. The metal reacts with hydrogen gas at high temperatures to produce lithium hydride (LiH). Lithium forms a variety of binary and ternary materials by direct reaction with the main group elements. These Zintl phases, although highly covalent, can be viewed as salts of polyatomic anions such as [Si4]4−, [P7]3−, and [Te5]2−. With graphite, lithium forms a variety of intercalation compounds. It dissolves in ammonia (and amines) to give [Li(NH3)4]+ and the solvated electron. Inorganic compounds Lithium forms salt-like derivatives with all halides and pseudohalides. Some examples include the halides LiF, LiCl, LiBr, LiI, as well as the pseudohalides and related anions. Lithium carbonate has been described as the most important compound of lithium. This white solid is the principal product of beneficiation of lithium ores. It is a precursor to other salts including ceramics and materials for lithium batteries. The complex hydrides LiBH4 and LiAlH4 are useful reagents. These salts and many other lithium salts exhibit distinctively high solubility in ethers, in contrast with salts of heavier alkali metals. In aqueous solution, the coordination complex [Li(H2O)4]+ predominates for many lithium salts. Related complexes are known with amines and ethers. Organic chemistry Organolithium compounds are numerous and useful. They are defined by the presence of a bond between carbon and lithium. They serve as metal-stabilized carbanions, although their solution and solid-state structures are more complex than this simplistic view. Thus, these are extremely powerful bases and nucleophiles. They have also been applied in asymmetric synthesis in the pharmaceutical industry. For laboratory organic synthesis, many organolithium reagents are commercially available in solution form. These reagents are highly reactive, and are sometimes pyrophoric. Like its inorganic compounds, almost all organic compounds of lithium formally follow the duet rule (e.g., BuLi, MeLi). However, it is important to note that in the absence of coordinating solvents or ligands, organolithium compounds form dimeric, tetrameric, and hexameric clusters (e.g., BuLi is actually [BuLi]6 and MeLi is actually [MeLi]4) which feature multi-center bonding and increase the coordination number around lithium. These clusters are broken down into smaller or monomeric units in the presence of solvents like dimethoxyethane (DME) or ligands like tetramethylethylenediamine (TMEDA). As an exception to the duet rule, a two-coordinate lithate complex with four electrons around lithium, [Li(thf)4]+[((Me3Si)3C)2Li]−, has been characterized crystallographically. 
Production Lithium production has greatly increased since the end of World War II. The main sources of lithium are brines and ores. Lithium metal is produced through electrolysis applied to a mixture of fused 55% lithium chloride and 45% potassium chloride at about 450 °C. Lithium is one of the elements critical in a world running on renewable energy and dependent on batteries. This suggests that lithium will be one of the main objects of geopolitical competition, but this perspective has also been criticised for underestimating the power of economic incentives for expanded production. Reserves and occurrence The small ionic size makes it difficult for lithium to be included in early stages of mineral crystallization. As a result, lithium remains in the molten phases, where it gets enriched, until it gets solidified in the final stages. Such lithium enrichment is responsible for all commercially promising lithium ore deposits. Brines (and dry salt) are another important source of Li+. Although the number of known lithium-containing deposits and brines is large, most of them are either small or have too low Li+ concentrations. Thus, only a few appear to be of commercial value. The US Geological Survey (USGS) estimated worldwide identified lithium reserves in 2020 and 2021 to be 17 million and 21 million tonnes, respectively. An accurate estimate of world lithium reserves is difficult. One reason for this is that most lithium classification schemes are developed for solid ore deposits, whereas brine is a fluid that is problematic to treat with the same classification scheme due to varying concentrations and pumping effects. In 2019, world production of lithium from spodumene was around 80,000t per annum, primarily from the Greenbushes pegmatite and from some Chinese and Chilean sources. The Talison mine in Greenbushes is reported to be the largest and to have the highest grade of ore at 2.4% Li2O (2012 figures). Lithium triangle and other brine sources The world's top four lithium-producing countries from 2019, as reported by the US Geological Survey, are Australia, Chile, China and Argentina. The three countries of Chile, Bolivia, and Argentina contain a region known as the Lithium Triangle. The Lithium Triangle is known for its high-quality salt flats, which include Bolivia's Salar de Uyuni, Chile's Salar de Atacama, and Argentina's Salar de Arizaro. The Lithium Triangle is believed to contain over 75% of existing known lithium reserves. Deposits are also found in South America throughout the Andes mountain chain. Chile is the leading producer, followed by Argentina. Both countries recover lithium from brine pools. According to USGS, Bolivia's Uyuni Desert has 5.4 million tonnes of lithium. Half the world's known reserves are located in Bolivia along the central eastern slope of the Andes. The Bolivian government has invested US$900 million in lithium production and in 2021 successfully produced 540 tons. The brines in the salt pans of the Lithium Triangle vary widely in lithium content. Concentrations can also vary in time as brines are fluids that are changeable and mobile. In the US, lithium is recovered from brine pools in Nevada. Projects are also under development in Lithium Valley in California. Hard-rock deposits Since 2018 the Democratic Republic of Congo is known to have the largest lithium spodumene hard-rock deposit in the world. The deposit located in Manono, DRC, may hold up to 1.5 billion tons of lithium spodumene hard-rock. 
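For a sense of scale of the electrolytic route described at the start of this production section, Faraday's law fixes the minimum electric charge needed per kilogram of metal. The sketch below is an editor-added, idealized estimate assuming 100% current efficiency; industrial cells consume considerably more.

# Idealized charge requirement for electrowinning lithium (Li+ + e- -> Li),
# from Faraday's law, assuming 100% current efficiency.
FARADAY = 96485.0  # coulombs per mole of electrons
M_LI = 6.94        # g/mol

def charge_per_kg_lithium():
    moles = 1000.0 / M_LI     # moles of Li in one kilogram
    return moles * FARADAY    # one electron per Li+ ion, in coulombs

coulombs = charge_per_kg_lithium()
print(round(coulombs / 3600), "ampere-hours per kilogram of lithium")  # roughly 3900 Ah

Multiplied by an operating cell voltage of a few volts, this already corresponds to tens of kilowatt-hours of electricity per kilogram before any inefficiencies, which is one reason energy costs weigh heavily on lithium metal production.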
The two largest pegmatites (known as the Carriere de l'Este Pegmatite and the Roche Dure Pegmatite) are each similar in size to, or larger than, the famous Greenbushes Pegmatite in Western Australia. Thus, the Democratic Republic of Congo is expected to be a significant supplier of lithium to the world with its high grade and low impurities. On 16 July 2018, 2.5 million tonnes of high-grade lithium resources and 124 million pounds of uranium resources were reported in the Falchani hard rock deposit in the Puno region of Peru. In 2020, Australia granted Major Project Status (MPS) to the Finniss Lithium Project for a strategically important lithium deposit: an estimated 3.45 million tonnes (Mt) of mineral resource at 1.4 percent lithium oxide. Operational mining began in 2022. A deposit discovered in 2013 in Wyoming's Rock Springs Uplift is estimated to contain 228,000 tons. Additional deposits in the same formation were estimated to be as much as 18 million tons. Similarly in Nevada, the McDermitt Caldera hosts lithium-bearing volcanic muds that constitute the largest known deposits of lithium within the United States. The Pampean Pegmatite Province in Argentina is known to have a total of at least 200,000 tons of spodumene with lithium oxide (Li2O) grades varying between 5 and 8 wt %. In Russia, the largest lithium deposit, Kolmozerskoye, is located in the Murmansk region. In 2023, Polar Lithium, a joint venture between Nornickel and Rosatom, was granted the right to develop the deposit. The project aims to produce 45,000 tonnes of lithium carbonate and hydroxide per year and plans to reach full design capacity by 2030. Sources Another potential source of lithium was identified as the leachates of geothermal wells, which are carried to the surface. Recovery of this type of lithium has been demonstrated in the field; the lithium is separated by simple filtration. Reserves are more limited than those of brine reservoirs and hard rock. Pricing In 1998, the price of lithium metal was about US$95/kg (or US$43/lb). After the 2007 financial crisis, major suppliers, such as Sociedad Química y Minera (SQM), dropped lithium carbonate pricing by 20%. Prices rose in 2012. A 2012 Business Week article outlined an oligopoly in the lithium space: "SQM, controlled by billionaire Julio Ponce, is the second-largest, followed by Rockwood, which is backed by Henry Kravis's KKR & Co., and Philadelphia-based FMC", with Talison mentioned as the biggest producer. Global consumption may jump to 300,000 metric tons a year by 2020 from about 150,000 tons in 2012, to match the demand for lithium batteries that has been growing at about 25% a year, outpacing the 4% to 5% overall gain in lithium production. The price information service ISE (Institute of Rare Earths Elements and Strategic Metals) gives the following average kilogram prices for various lithium substances over March to August 2022, during which prices remained stable: lithium carbonate, purity 99.5% min, from various producers, between 63 and 72 EUR/kg; lithium hydroxide monohydrate, LiOH 56.5% min, China, at 66 to 72 EUR/kg, delivered South Korea 73 EUR/kg; lithium metal, 99.9% min, delivered China, 42 EUR/kg. Extraction Lithium and its compounds were historically isolated and extracted from hard rock but by the 1990s mineral springs, brine pools, and brine deposits had become the dominant source. Most of these were in Chile, Argentina and Bolivia. 
Large lithium-clay deposits under development in the McDermitt caldera (Nevada, United States) require concentrated sulfuric acid to leach lithium from the clay ore. By early 2021, much of the lithium mined globally comes from either "spodumene, the mineral contained in hard rocks found in places such as Australia and North Carolina" or from the salty brine pumped directly out of the ground, as it is in locations in Chile. In Chile's Salar de Atacama, the lithium concentration in the brine is raised by solar evaporation in a system of ponds. The enrichment by evaporation process may require up to one-and-a-half years, when the brine reaches a lithium content of 6%. The final processing in this example is done near the city of Antofagasta on the coast where pure lithium carbonate, lithium hydroxide, and lithium chloride are produced from the brine. Low-cobalt cathodes for lithium batteries are expected to require lithium hydroxide rather than lithium carbonate as a feedstock, and this trend favors rock as a source. One method for lithium extraction, as well as other valuable minerals, is to process geothermal brine water through an electrolytic cell, located within a membrane. The use of electrodialysis and electrochemical intercalation has been proposed to extract lithium compounds from seawater (which contains lithium at 0.2 parts per million). Ion-selective cells within a membrane in principle could collect lithium either by use of electric field or a concentration difference. In 2024, a redox/electrodialysis system was claimed to offer enormous cost savings, shorter timelines, and less environmental damage than traditional evaporation-based systems. Environmental issues The manufacturing processes of lithium, including the solvent and mining waste, presents significant environmental and health hazards. Lithium extraction can be fatal to aquatic life due to water pollution. It is known to cause surface water contamination, drinking water contamination, respiratory problems, ecosystem degradation and landscape damage. It also leads to unsustainable water consumption in arid regions (1.9 million liters per ton of lithium). Massive byproduct generation of lithium extraction also presents unsolved problems, such as large amounts of magnesium and lime waste. In the United States, open-pit mining and mountaintop removal mining compete with brine extraction mining. Environmental concerns include wildlife habitat degradation, potable water pollution including arsenic and antimony contamination, unsustainable water table reduction, and massive mining waste, including radioactive uranium byproduct and sulfuric acid discharge. Human rights issues A study of relationships between lithium extraction companies and indigenous peoples in Argentina indicated that the state may not have protected indigenous peoples' right to free prior and informed consent, and that extraction companies generally controlled community access to information and set the terms for discussion of the projects and benefit sharing. Development of the Thacker Pass lithium mine in Nevada, United States, has met with protests and lawsuits from several indigenous tribes who have said they were not provided free prior and informed consent and that the project threatens cultural and sacred sites. They have also expressed concerns that development of the project will create risks to indigenous women, because resource extraction is linked to missing and murdered indigenous women. 
Protestors have been occupying the site of the proposed mine since January 2021. Applications Batteries In 2021, most lithium is used to make lithium-ion batteries for electric cars and mobile devices. Ceramics and glass Lithium oxide is widely used as a flux for processing silica, reducing the melting point and viscosity of the material and leading to glazes with improved physical properties including low coefficients of thermal expansion. Worldwide, this is one of the largest use for lithium compounds. Glazes containing lithium oxides are used for ovenware. Lithium carbonate (Li2CO3) is generally used in this application because it converts to the oxide upon heating. Electrical and electronic Late in the 20th century, lithium became an important component of battery electrolytes and electrodes, because of its high electrode potential. Because of its low atomic mass, it has a high charge- and power-to-weight ratio. A typical lithium-ion battery can generate approximately 3 volts per cell, compared with 2.1 volts for lead-acid and 1.5 volts for zinc-carbon. Lithium-ion batteries, which are rechargeable and have a high energy density, differ from lithium metal batteries, which are disposable (primary) batteries with lithium or its compounds as the anode. Other rechargeable batteries that use lithium include the lithium-ion polymer battery, lithium iron phosphate battery, and the nanowire battery. Over the years opinions have been differing about potential growth. A 2008 study concluded that "realistically achievable lithium carbonate production would be sufficient for only a small fraction of future PHEV and EV global market requirements", that "demand from the portable electronics sector will absorb much of the planned production increases in the next decade", and that "mass production of lithium carbonate is not environmentally sound, it will cause irreparable ecological damage to ecosystems that should be protected and that LiIon propulsion is incompatible with the notion of the 'Green Car'". Lubricating greases The third most common use of lithium is in greases. Lithium hydroxide is a strong base, and when heated with a fat, it produces a soap, such as lithium stearate from stearic acid. Lithium soap has the ability to thicken oils, and it is used to manufacture all-purpose, high-temperature lubricating greases. Metallurgy Lithium (e.g. as lithium carbonate) is used as an additive to continuous casting mould flux slags where it increases fluidity, a use which accounts for 5% of global lithium use (2011). Lithium compounds are also used as additives (fluxes) to foundry sand for iron casting to reduce veining. Lithium (as lithium fluoride) is used as an additive to aluminium smelters (Hall–Héroult process), reducing melting temperature and increasing electrical resistance, a use which accounts for 3% of production (2011). When used as a flux for welding or soldering, metallic lithium promotes the fusing of metals during the process and eliminates the formation of oxides by absorbing impurities. Alloys of the metal with aluminium, cadmium, copper and manganese are used to make high-performance, low density aircraft parts (see also Lithium-aluminium alloys). Silicon nano-welding Lithium has been found effective in assisting the perfection of silicon nano-welds in electronic components for electric batteries and other devices. Pyrotechnics Lithium compounds are used as pyrotechnic colorants and oxidizers in red fireworks and flares. 
Air purification Lithium chloride and lithium bromide are hygroscopic and are used as desiccants for gas streams. Lithium hydroxide and lithium peroxide are the salts most commonly used in confined areas, such as aboard spacecraft and submarines, for carbon dioxide removal and air purification. Lithium hydroxide absorbs carbon dioxide from the air by forming lithium carbonate, and is preferred over other alkaline hydroxides for its low weight. Lithium peroxide (Li2O2) in presence of moisture not only reacts with carbon dioxide to form lithium carbonate, but also releases oxygen. The reaction is as follows: 2 Li2O2 + 2 CO2 → 2 Li2CO3 + O2 Some of the aforementioned compounds, as well as lithium perchlorate, are used in oxygen candles that supply submarines with oxygen. These can also include small amounts of boron, magnesium, aluminium, silicon, titanium, manganese, and iron. Optics Lithium fluoride, artificially grown as crystal, is clear and transparent and often used in specialist optics for IR, UV and VUV (vacuum UV) applications. It has one of the lowest refractive indices and the furthest transmission range in the deep UV of most common materials. Finely divided lithium fluoride powder has been used for thermoluminescent radiation dosimetry (TLD): when a sample of such is exposed to radiation, it accumulates crystal defects which, when heated, resolve via a release of bluish light whose intensity is proportional to the absorbed dose, thus allowing this to be quantified. Lithium fluoride is sometimes used in focal lenses of telescopes. The high non-linearity of lithium niobate also makes it useful in non-linear optics applications. It is used extensively in telecommunication products such as mobile phones and optical modulators, for such components as resonant crystals. Lithium applications are used in more than 60% of mobile phones. Organic and polymer chemistry Organolithium compounds are widely used in the production of polymer and fine-chemicals. In the polymer industry, which is the dominant consumer of these reagents, alkyl lithium compounds are catalysts/initiators in anionic polymerization of unfunctionalized olefins. For the production of fine chemicals, organolithium compounds function as strong bases and as reagents for the formation of carbon-carbon bonds. Organolithium compounds are prepared from lithium metal and alkyl halides. Many other lithium compounds are used as reagents to prepare organic compounds. Some popular compounds include lithium aluminium hydride (LiAlH4), lithium triethylborohydride, n-butyllithium and tert-butyllithium. Military Metallic lithium and its complex hydrides, such as lithium aluminium hydride (LiAlH4), are used as high-energy additives to rocket propellants. LiAlH4 can also be used by itself as a solid fuel. The Mark 50 torpedo stored chemical energy propulsion system (SCEPS) uses a small tank of sulfur hexafluoride, which is sprayed over a block of solid lithium. The reaction generates heat, creating steam to propel the torpedo in a closed Rankine cycle. Lithium hydride containing lithium-6 is used in thermonuclear weapons, where it serves as fuel for the fusion stage of the bomb. Nuclear Lithium-6 is valued as a source material for tritium production and as a neutron absorber in nuclear fusion. Natural lithium contains about 7.5% lithium-6 from which large amounts of lithium-6 have been produced by isotope separation for use in nuclear weapons. Lithium-7 gained interest for use in nuclear reactor coolants. 
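The energy released when lithium-6 captures a neutron to breed tritium, as noted in the nuclear paragraph above, can be estimated from the mass defect. The sketch below is an editor-added illustration using rounded literature masses.

# Q-value of the tritium-breeding reaction 6Li + n -> 4He + 3H,
# estimated from the mass defect (rounded atomic masses in u).
U_TO_MEV = 931.494   # MeV per unified atomic mass unit
M_LI6 = 6.015123     # lithium-6
M_N   = 1.008665     # free neutron
M_HE4 = 4.002602     # helium-4
M_H3  = 3.016049     # tritium

mass_defect = (M_LI6 + M_N) - (M_HE4 + M_H3)
print("Q =", round(mass_defect * U_TO_MEV, 2), "MeV")  # roughly 4.8 MeV per capture

Each capture therefore yields a tritium nucleus together with a few MeV of energy.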
Lithium deuteride was the fusion fuel of choice in early versions of the hydrogen bomb. When bombarded by neutrons, both 6Li and 7Li produce tritium — this reaction, which was not fully understood when hydrogen bombs were first tested, was responsible for the runaway yield of the Castle Bravo nuclear test. Tritium fuses with deuterium in a fusion reaction that is relatively easy to achieve. Although details remain secret, lithium-6 deuteride apparently still plays a role in modern nuclear weapons as a fusion material. Lithium fluoride, when highly enriched in the lithium-7 isotope, forms the basic constituent of the fluoride salt mixture LiF-BeF2 used in liquid fluoride nuclear reactors. Lithium fluoride is exceptionally chemically stable and LiF-BeF2 mixtures have low melting points. In addition, 7Li, Be, and F are among the few nuclides with low enough thermal neutron capture cross-sections not to poison the fission reactions inside a nuclear fission reactor. In conceptualized (hypothetical) nuclear fusion power plants, lithium will be used to produce tritium in magnetically confined reactors using deuterium and tritium as the fuel. Naturally occurring tritium is extremely rare and must be synthetically produced by surrounding the reacting plasma with a 'blanket' containing lithium, where neutrons from the deuterium-tritium reaction in the plasma will fission the lithium to produce more tritium: 6Li + n → 4He + 3H. Lithium is also used as a source for alpha particles, or helium nuclei. When 7Li is bombarded by accelerated protons, 8Be is formed, which almost immediately undergoes fission to form two alpha particles. This feat, called "splitting the atom" at the time, was the first fully human-made nuclear reaction. It was achieved by Cockcroft and Walton in 1932. Injection of lithium powders is used in fusion reactors to manipulate plasma-material interactions and dissipate energy in the hot thermonuclear fusion plasma boundary. In 2013, the US Government Accountability Office said a shortage of lithium-7, which is critical to the operation of 65 out of 100 American nuclear reactors, "places their ability to continue to provide electricity at some risk." The problem stems from the decline of US nuclear infrastructure. The equipment needed to separate lithium-6 from lithium-7 is mostly a Cold War leftover. The US shut down most of this machinery in 1963, when it had a huge surplus of separated lithium, mostly consumed during the twentieth century. The report said it would take five years and $10 million to $12 million to reestablish the ability to separate lithium-6 from lithium-7. Reactors that use lithium-7 heat water under high pressure and transfer heat through heat exchangers that are prone to corrosion. The reactors use lithium to counteract the corrosive effects of boric acid, which is added to the water to absorb excess neutrons. Medicine Lithium is useful in the treatment of bipolar disorder. Lithium salts may also be helpful for related diagnoses, such as schizoaffective disorder and cyclic major depressive disorder. The active part of these salts is the lithium ion Li+. Lithium may increase the risk of developing Ebstein's cardiac anomaly in infants born to women who take lithium during the first trimester of pregnancy. Precautions Lithium metal is corrosive and requires special handling to avoid skin contact. 
Breathing lithium dust or lithium compounds (which are often alkaline) initially irritates the nose and throat, while higher exposure can cause a buildup of fluid in the lungs, leading to pulmonary edema. The metal itself is a handling hazard because contact with moisture produces the caustic lithium hydroxide. Lithium is safely stored in non-reactive compounds such as naphtha. See also Cosmological lithium problem Dilithium Halo nucleus Isotopes of lithium List of countries by lithium production Lithia water Lithium–air battery Lithium burning Lithium compounds (category) Lithium-ion battery Lithium Tokamak Experiment Notes References External links McKinsey review of 2018 (PDF) Lithium at The Periodic Table of Videos (University of Nottingham) International Lithium Alliance (archived, August 2009) USGS: Lithium Statistics and Information Lithium Supply & Markets 2009 IM Conference 2009 Sustainable lithium supplies through 2020 in the face of sustainable market growth University of Southampton, Mountbatten Centre for International Studies, Nuclear History Working Paper No5. (PDF) (archived 26 February 2008) Lithium reserves by country at investingnews.com Chemical elements Alkali metals Reducing agents Chemical elements with body-centered cubic structure Least dense things
Lithium
[ "Physics", "Chemistry" ]
9,719
[ "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
17,570
https://en.wikipedia.org/wiki/Linear%20equation
In mathematics, a linear equation is an equation that may be put in the form a_1x_1 + ... + a_nx_n + b = 0, where x_1, ..., x_n are the variables (or unknowns), and b, a_1, ..., a_n are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a_1, ..., a_n are required to not all be zero. Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken. The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true. In the case of just one variable, there is exactly one solution (provided that a_1 ≠ 0). Often, the term linear equation refers implicitly to this particular case, in which the variable is sensibly called the unknown. In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations. One variable A linear equation in one variable x can be written as ax + b = 0, with a ≠ 0. The solution is x = −b/a. Two variables A linear equation in two variables x and y can be written as ax + by + c = 0, where a and b are not both 0. If a and b are real numbers, it has infinitely many solutions. Linear function If b ≠ 0, the equation ax + by + c = 0 is a linear equation in the single variable y for every value of x. It has therefore a unique solution for y, which is given by y = −(a/b)x − c/b. This defines a function. The graph of this function is a line with slope −a/b and y-intercept −c/b. The functions whose graph is a line are generally called linear functions in the context of calculus. However, in linear algebra, a linear function is a function that maps a sum to the sum of the images of the summands. So, for this definition, the above function is linear only when c = 0, that is when the line passes through the origin. To avoid confusion, the functions whose graph is an arbitrary line are often called affine functions, and the linear functions such that c = 0 are often called linear maps. Geometric interpretation Each solution (x, y) of a linear equation ax + by + c = 0 may be viewed as the Cartesian coordinates of a point in the Euclidean plane. With this interpretation, all solutions of the equation form a line, provided that a and b are not both zero. Conversely, every line is the set of all solutions of a linear equation. The phrase "linear equation" takes its origin in this correspondence between lines and equations: a linear equation in two variables is an equation whose solutions form a line. If b ≠ 0, the line is the graph of the function of x that has been defined in the preceding section. 
If b = 0, the line is a vertical line (that is a line parallel to the y-axis) of equation x = −c/a, which is not the graph of a function of x. Similarly, if a ≠ 0, the line is the graph of a function of y, and, if a = 0, one has a horizontal line of equation y = −c/b. Equation of a line There are various ways of defining a line. In the following subsections, a linear equation of the line is given in each case. Slope–intercept form or Gradient-intercept form A non-vertical line can be defined by its slope m, and its y-intercept y_0 (the y coordinate of its intersection with the y-axis). In this case, its linear equation can be written y = mx + y_0. If, moreover, the line is not horizontal, it can be defined by its slope and its x-intercept x_0. In this case, its equation can be written y = m(x − x_0), or, equivalently, y = mx − mx_0. These forms rely on the habit of considering a nonvertical line as the graph of a function. For a line given by an equation ax + by + c = 0 with b ≠ 0, these forms can be easily deduced from the relations m = −a/b, x_0 = −c/a, y_0 = −c/b. Point–slope form or Point-gradient form A non-vertical line can be defined by its slope m, and the coordinates x_1, y_1 of any point of the line. In this case, a linear equation of the line is y = y_1 + m(x − x_1), or y = mx + y_1 − mx_1. This equation can also be written y − y_1 = m(x − x_1) for emphasizing that the slope of a line can be computed from the coordinates of any two points. Intercept form A line that is not parallel to an axis and does not pass through the origin cuts the axes into two different points. The intercept values x_0 and y_0 of these two points are nonzero, and an equation of the line is x/x_0 + y/y_0 = 1. (It is easy to verify that the line defined by this equation has x_0 and y_0 as intercept values). Two-point form Given two different points (x_1, y_1) and (x_2, y_2), there is exactly one line that passes through them. There are several ways to write a linear equation of this line. If x_1 ≠ x_2, the slope of the line is (y_2 − y_1)/(x_2 − x_1). Thus, a point-slope form is y − y_1 = ((y_2 − y_1)/(x_2 − x_1))(x − x_1). By clearing denominators, one gets the equation (x_2 − x_1)(y − y_1) − (y_2 − y_1)(x − x_1) = 0, which is valid also when x_1 = x_2 (for verifying this, it suffices to verify that the two given points satisfy the equation). This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms: (x_2 − x_1)y − (y_2 − y_1)x + (x_1y_2 − x_2y_1) = 0 (exchanging the two points changes the sign of the left-hand side of the equation). Determinant form The two-point form of the equation of a line can be expressed simply in terms of a determinant. There are two common ways for that. The equation (x_2 − x_1)(y − y_1) − (y_2 − y_1)(x − x_1) = 0 is the result of expanding the determinant in the equation det [[x_2 − x_1, y_2 − y_1], [x − x_1, y − y_1]] = 0. The equation (x_2 − x_1)y − (y_2 − y_1)x + (x_1y_2 − x_2y_1) = 0 can be obtained by expanding with respect to its first row the determinant in the equation det [[x, y, 1], [x_1, y_1, 1], [x_2, y_2, 1]] = 0. Besides being very simple and mnemonic, this form has the advantage of being a special case of the more general equation of a hyperplane passing through n points in a space of dimension n. These equations rely on the condition of linear dependence of points in a projective space. More than two variables A linear equation with more than two variables may always be assumed to have the form a_1x_1 + a_2x_2 + ... + a_nx_n + b = 0. The coefficient b, often denoted a_0, is called the constant term (sometimes the absolute term in old books). Depending on the context, the term coefficient can be reserved for the a_i with i > 0. When dealing with three variables, it is common to use x, y and z instead of indexed variables. A solution of such an equation is an n-tuple such that substituting each element of the tuple for the corresponding variable transforms the equation into a true equality. For an equation to be meaningful, the coefficient of at least one variable must be non-zero. 
If every variable has a zero coefficient, then, as mentioned for one variable, the equation is either inconsistent (for b ≠ 0) as having no solution, or all n-tuples are solutions. The n-tuples that are solutions of a linear equation in x_1, ..., x_n are the Cartesian coordinates of the points of an (n − 1)-dimensional hyperplane in a Euclidean space (or affine space if the coefficients are complex numbers or belong to any field). In the case of three variables, this hyperplane is a plane. If a linear equation is given with a_j ≠ 0, then the equation can be solved for x_j, yielding x_j = −(1/a_j)(b + a_1x_1 + ... + a_{j−1}x_{j−1} + a_{j+1}x_{j+1} + ... + a_nx_n). If the coefficients are real numbers, this defines a real-valued function of n real variables. See also Linear equation over a ring Algebraic equation Line coordinates Linear inequality Nonlinear equation Notes References External links Elementary algebra Equations
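To make the two-point and determinant forms above concrete, the following editor-added sketch builds the coefficients a, b, c of ax + by + c = 0 from two points and checks them; the specific points are arbitrary examples, not drawn from the article.

# Line through two points, using the regrouped two-point form
# (y2 - y1)x - (x2 - x1)y + (x2*y1 - x1*y2) = 0, i.e. a*x + b*y + c = 0
# (the same line as the determinant form, up to an overall sign).
def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = -(x2 - x1)
    c = x2 * y1 - x1 * y2
    return a, b, c

a, b, c = line_through((1.0, 2.0), (3.0, 8.0))
print(a, b, c)                                  # 6.0 -2.0 -2.0, i.e. 6x - 2y - 2 = 0
for x, y in [(1.0, 2.0), (3.0, 8.0)]:
    assert abs(a * x + b * y + c) < 1e-9        # both points satisfy the equation
if b != 0:
    print("slope:", -a / b, "y-intercept:", -c / b)   # 3.0 and -1.0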
Linear equation
[ "Mathematics" ]
1,561
[ "Mathematical objects", "Elementary algebra", "Equations", "Elementary mathematics", "Algebra" ]
17,692
https://en.wikipedia.org/wiki/Sampling%20bias
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others. It results in a biased sample of a population (or non-human factors) in which all individuals, or instances, were not equally likely to have been selected. If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling. Medical sources sometimes refer to sampling bias as ascertainment bias. Ascertainment bias has basically the same definition, but is still sometimes classified as a separate type of bias. Distinction from selection bias Sampling bias is usually classified as a subtype of selection bias, sometimes specifically termed sample selection bias, but some classify it as a separate type of bias. A distinction, albeit not universally accepted, of sampling bias is that it undermines the external validity of a test (the ability of its results to be generalized to the entire population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias. However, selection bias and sampling bias are often used synonymously. Types Selection from a specific real area. For example, a survey of high school students to measure teenage use of illegal drugs will be a biased sample because it does not include home-schooled students or dropouts. A sample is also biased if certain members are underrepresented or overrepresented relative to others in the population. For example, a "man on the street" interview which selects people who walk by a certain location is going to have an overrepresentation of healthy individuals who are more likely to be out of the home than individuals with a chronic illness. This may be an extreme form of biased sampling, because certain members of the population are totally excluded from the sample (that is, they have zero probability of being selected). Self-selection bias (see also Non-response bias), which is possible whenever the group of people being studied has any form of control over whether to participate (as current standards of human-subject research ethics require for many real-time and some longitudinal forms of study). Participants' decision to participate may be correlated with traits that affect the study, making the participants a non-representative sample. For example, people who have strong opinions or substantial knowledge may be more willing to spend time answering a survey than those who do not. Another example is online and phone-in polls, which are biased samples because the respondents are self-selected. Those individuals who are highly motivated to respond, typically individuals who have strong opinions, are overrepresented, and individuals that are indifferent or apathetic are less likely to respond. This often leads to a polarization of responses with extreme perspectives being given a disproportionate weight in the summary. As a result, these types of polls are regarded as unscientific. Exclusion bias results from exclusion of particular groups from the sample, e.g. exclusion of subjects who have recently migrated into the study area (this may occur when newcomers are not available in a register used to identify the source population). 
Excluding subjects who move out of the study area during follow-up is rather equivalent of dropout or nonresponse, a selection bias in that it rather affects the internal validity of the study. Healthy user bias, when the study population is likely healthier than the general population. For example, someone in poor health is unlikely to have a job as manual laborer, so if a study is conducted on manual laborers, the health of the general population will likely be overestimated. Berkson's fallacy, when the study population is selected from a hospital and so is less healthy than the general population. This can result in a spurious negative correlation between diseases: a hospital patient without diabetes is more likely to have another given disease such as cholecystitis, since they must have had some reason to enter the hospital in the first place. Overmatching, matching for an apparent confounder that actually is a result of the exposure. The control group becomes more similar to the cases in regard to exposure than does the general population. Survivorship bias, in which only "surviving" subjects are selected, ignoring those that fell out of view. For example, using the record of current companies as an indicator of business climate or economy ignores the businesses that failed and no longer exist. Malmquist bias, an effect in observational astronomy which leads to the preferential detection of intrinsically bright objects. Spotlight fallacy, the uncritical assumption that all members or cases of a certain class or type are like those that receive the most attention or coverage in the media. Symptom-based sampling The study of medical conditions begins with anecdotal reports. By their nature, such reports only include those referred for diagnosis and treatment. A child who can't function in school is more likely to be diagnosed with dyslexia than a child who struggles but passes. A child examined for one condition is more likely to be tested for and diagnosed with other conditions, skewing comorbidity statistics. As certain diagnoses become associated with behavior problems or intellectual disability, parents try to prevent their children from being stigmatized with those diagnoses, introducing further bias. Studies carefully selected from whole populations are showing that many conditions are much more common and usually much milder than formerly believed. Truncate selection in pedigree studies Geneticists are limited in how they can obtain data from human populations. As an example, consider a human characteristic. We are interested in deciding if the characteristic is inherited as a simple Mendelian trait. Following the laws of Mendelian inheritance, if the parents in a family do not have the characteristic, but carry the allele for it, they are carriers (e.g. a non-expressive heterozygote). In this case their children will each have a 25% chance of showing the characteristic. The problem arises because we can't tell which families have both parents as carriers (heterozygous) unless they have a child who exhibits the characteristic. The description follows the textbook by Sutton. The figure shows the pedigrees of all the possible families with two children when the parents are carriers (Aa). Nontruncate selection. In a perfect world we should be able to discover all such families with a gene including those who are simply carriers. 
In this situation the analysis would be free from ascertainment bias and the pedigrees would be under "nontruncate selection". In practice, most studies identify, and include, families in a study based upon them having affected individuals. Truncate selection. When afflicted individuals have an equal chance of being included in a study, this is called truncate selection, signifying the inadvertent exclusion (truncation) of families who are carriers for a gene. Because selection is performed on the individual level, families with two or more affected children would have a higher probability of becoming included in the study. Complete truncate selection is a special case where each family with an affected child has an equal chance of being selected for the study. The probabilities of each of the families being selected are given in the figure, with the sample frequency of affected children also given. In this simple case, the researcher will look for a frequency of or for the characteristic, depending on the type of truncate selection used. The caveman effect An example of selection bias is called the "caveman effect". Much of our understanding of prehistoric peoples comes from caves, such as cave paintings made nearly 40,000 years ago. If there had been contemporary paintings on trees, animal skins or hillsides, they would have been washed away long ago. Similarly, evidence of fire pits, middens, burial sites, etc. is most likely to remain intact to the modern era in caves. Prehistoric people are associated with caves because that is where the data still exists, not necessarily because most of them lived in caves for most of their lives. Problems due to sampling bias Sampling bias is problematic because it is possible that a statistic computed from the sample is systematically erroneous. Sampling bias can lead to a systematic over- or under-estimation of the corresponding parameter in the population. Sampling bias occurs in practice as it is practically impossible to ensure perfect randomness in sampling. If the degree of misrepresentation is small, then the sample can be treated as a reasonable approximation to a random sample. Also, if the sample does not differ markedly in the quantity being measured, then a biased sample can still be a reasonable estimate. The word bias has a strong negative connotation. Indeed, biases sometimes come from deliberate intent to mislead or other scientific fraud. In statistical usage, bias merely represents a mathematical property, no matter if it is deliberate or unconscious or due to imperfections in the instruments used for observation. While some individuals might deliberately use a biased sample to produce misleading results, more often, a biased sample is just a reflection of the difficulty in obtaining a truly representative sample, or ignorance of the bias in their process of measurement or analysis. An example of how ignorance of a bias can exist is in the widespread use of a ratio (a.k.a. fold change) as a measure of difference in biology. Because it is easier to achieve a large ratio with two small numbers with a given difference, and relatively more difficult to achieve a large ratio with two large numbers with a larger difference, large significant differences may be missed when comparing relatively large numeric measurements. Some have called this a 'demarcation bias' because the use of a ratio (division) instead of a difference (subtraction) removes the results of the analysis from science into pseudoscience (See Demarcation Problem). 
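The ratio-versus-difference point can be made concrete with a small, purely illustrative calculation; the measurement pairs below are invented and do not come from any study. Pairs with the same absolute difference give very different fold changes depending on the baseline, so a fixed fold-change cutoff can miss large absolute effects at high baselines.

```python
# Illustrative only: invented measurement pairs, not data from any experiment.
pairs = [
    ("low baseline", 2.0, 1.0),         # difference 1,  fold change 2.0
    ("high baseline", 1001.0, 1000.0),  # difference 1,  fold change ~1.001
    ("low baseline", 50.0, 10.0),       # difference 40, fold change 5.0
    ("high baseline", 1040.0, 1000.0),  # difference 40, fold change 1.04
]

CUTOFF = 2.0  # an arbitrary fold-change threshold of the kind often used

for label, treated, control in pairs:
    difference = treated - control
    fold_change = treated / control
    verdict = "kept" if fold_change >= CUTOFF else "missed"
    print(f"{label:13s} difference={difference:6.1f} "
          f"fold change={fold_change:8.3f} -> {verdict}")
```

With a 2-fold cutoff, the two high-baseline pairs are discarded even though their absolute differences equal or exceed those of the retained pairs, which is exactly the distortion described above.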
Some samples use a biased statistical design which nevertheless allows the estimation of parameters. The U.S. National Center for Health Statistics, for example, deliberately oversamples from minority populations in many of its nationwide surveys in order to gain sufficient precision for estimates within these groups. These surveys require the use of sample weights (see later on) to produce proper estimates across all ethnic groups. Provided that certain conditions are met (chiefly that the weights are calculated and used correctly) these samples permit accurate estimation of population parameters. Historical examples A classic example of a biased sample and the misleading results it produced occurred in 1936. In the early days of opinion polling, the American Literary Digest magazine collected over two million postal surveys and predicted that the Republican candidate in the U.S. presidential election, Alf Landon, would beat the incumbent president, Franklin Roosevelt, by a large margin. The result was the exact opposite. The Literary Digest survey represented a sample collected from readers of the magazine, supplemented by records of registered automobile owners and telephone users. This sample included an over-representation of wealthy individuals, who, as a group, were more likely to vote for the Republican candidate. In contrast, a poll of only 50 thousand citizens selected by George Gallup's organization successfully predicted the result, leading to the popularity of the Gallup poll. Another classic example occurred in the 1948 presidential election. On election night, the Chicago Tribune printed the headline DEWEY DEFEATS TRUMAN, which turned out to be mistaken. In the morning the grinning president-elect, Harry S. Truman, was photographed holding a newspaper bearing this headline. The reason the Tribune was mistaken is that their editor trusted the results of a phone survey. Survey research was then in its infancy, and few academics realized that a sample of telephone users was not representative of the general population. Telephones were not yet widespread, and those who had them tended to be prosperous and have stable addresses. (In many cities, the Bell System telephone directory contained the same names as the Social Register). In addition, the Gallup poll that the Tribune based its headline on was over two weeks old at the time of the printing. In air quality data, pollutants (such as carbon monoxide, nitrogen monoxide, nitrogen dioxide, or ozone) frequently show high correlations, as they stem from the same chemical process(es). These correlations depend on space (i.e., location) and time (i.e., period). Therefore, a pollutant distribution is not necessarily representative for every location and every period. If a low-cost measurement instrument is calibrated with field data in a multivariate manner, more precisely by collocation next to a reference instrument, the relationships between the different compounds are incorporated into the calibration model. By relocation of the measurement instrument, erroneous results can be produced. A twenty-first century example is the COVID-19 pandemic, where variations in sampling bias in COVID-19 testing have been shown to account for wide variations in both case fatality rates and the age distribution of cases across countries. 
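The COVID-19 example can be sketched numerically. The simulation below uses entirely hypothetical rates (the severity share, fatality risks, and testing probabilities are assumptions chosen only for illustration) to show how symptom-weighted testing inflates the naive case fatality rate, computed as deaths among detected cases divided by detected cases, relative to the true infection fatality rate.

```python
import random

random.seed(0)

N = 200_000            # simulated infections (hypothetical)
P_SEVERE = 0.05        # assumed share of severe infections
P_DIE_SEVERE = 0.10    # assumed fatality risk if severe
P_DIE_MILD = 0.001     # assumed fatality risk if mild
P_TEST_SEVERE = 0.90   # severe cases are very likely to be tested
P_TEST_MILD = 0.05     # mild cases are rarely tested

deaths = detected = detected_deaths = 0
for _ in range(N):
    severe = random.random() < P_SEVERE
    died = random.random() < (P_DIE_SEVERE if severe else P_DIE_MILD)
    tested = random.random() < (P_TEST_SEVERE if severe else P_TEST_MILD)
    deaths += died
    if tested:
        detected += 1
        detected_deaths += died

print(f"true infection fatality rate: {deaths / N:.4f}")                 # about 0.006
print(f"naive case fatality rate:     {detected_deaths / detected:.4f}") # about 0.05
```

Because severe infections are far more likely to enter the tested sample, the detected cases are not representative of all infections, and the naive rate comes out several times larger than the true one.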
Statistical corrections for a biased sample If entire segments of the population are excluded from a sample, then there are no adjustments that can produce estimates that are representative of the entire population. But if some groups are underrepresented and the degree of underrepresentation can be quantified, then sample weights can correct the bias. However, the success of the correction is limited to the selection model chosen. If certain variables are missing, the methods used to correct the bias could be inaccurate. For example, a hypothetical population might include 10 million men and 10 million women. Suppose that a biased sample of 100 patients included 20 men and 80 women. A researcher could correct for this imbalance by attaching a weight of 2.5 for each male and 0.625 for each female. This would adjust any estimates to achieve the same expected value as a sample that included exactly 50 men and 50 women, unless men and women differed in their likelihood of taking part in the survey. (A short numerical sketch of this weighting appears below.) See also Censored regression model Cherry picking (fallacy) File drawer problem Friendship paradox Reporting bias Sampling probability Selection bias Common source bias Spectrum bias Truncated regression model References Sampling (statistics) Misuse of statistics Experimental bias
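A minimal sketch of the weighting correction described in the example above. The outcome values are invented for illustration only; assume the measured quantity averages 170 among men and 160 among women.

```python
# Invented outcome values for the 20 men / 80 women sample described above.
men = [170.0] * 20     # weight 2.5 each (population share 0.5 / sample share 0.2)
women = [160.0] * 80   # weight 0.625 each (population share 0.5 / sample share 0.8)

sample = [(v, 2.5) for v in men] + [(v, 0.625) for v in women]

unweighted_mean = sum(v for v, _ in sample) / len(sample)
weighted_mean = sum(v * w for v, w in sample) / sum(w for _, w in sample)

print(f"unweighted sample mean: {unweighted_mean:.1f}")  # 162.0, pulled toward women
print(f"weighted sample mean:   {weighted_mean:.1f}")    # 165.0, as in a 50/50 sample
```

The weighted mean reproduces what a balanced 50/50 sample would have given, which is the point of the correction; it cannot, however, repair bias from groups that were excluded from the sample entirely.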
Sampling bias
[ "Mathematics" ]
2,911
[ "Experimental bias", "Statistical concepts" ]
17,740
https://en.wikipedia.org/wiki/Louis%20Pasteur
Louis Pasteur (, ; 27 December 1822 – 28 September 1895) was a French chemist, pharmacist, and microbiologist renowned for his discoveries of the principles of vaccination, microbial fermentation, and pasteurization, the last of which was named after him. His research in chemistry led to remarkable breakthroughs in the understanding of the causes and preventions of diseases, which laid down the foundations of hygiene, public health and much of modern medicine. Pasteur's works are credited with saving millions of lives through the developments of vaccines for rabies and anthrax. He is regarded as one of the founders of modern bacteriology and has been honored as the "father of bacteriology" and the "father of microbiology" (together with Robert Koch; the latter epithet also attributed to Antonie van Leeuwenhoek). Pasteur was responsible for disproving the doctrine of spontaneous generation. Under the auspices of the French Academy of Sciences, his experiment demonstrated that in sterilized and sealed flasks, nothing ever developed; conversely, in sterilized but open flasks, microorganisms could grow. For this experiment, the academy awarded him the Alhumbert Prize carrying 2,500 francs in 1862. Pasteur is also regarded as one of the fathers of germ theory of diseases, which was a minor medical concept at the time. His many experiments showed that diseases could be prevented by killing or stopping germs, thereby directly supporting the germ theory and its application in clinical medicine. He is best known to the general public for his invention of the technique of treating milk and wine to stop bacterial contamination, a process now called pasteurization. Pasteur also made significant discoveries in chemistry, most notably on the molecular basis for the asymmetry of certain crystals and racemization. Early in his career, his investigation of sodium ammonium tartrate initiated the field of optical isomerism. This work had a profound effect on structural chemistry, with eventual implications for many areas including medicinal chemistry. He was the director of the Pasteur Institute, established in 1887, until his death, and his body was interred in a vault beneath the institute. Although Pasteur made groundbreaking experiments, his reputation became associated with various controversies. Historical reassessment of his notebook revealed that he practiced deception to overcome his rivals. Early life and education Louis Pasteur was born on 27 December 1822, in Dole, Jura, France, to a Catholic family of a poor tanner. He was the third child of Jean-Joseph Pasteur and Jeanne-Etiennette Roqui. The family moved to Marnoz in 1826 and then to Arbois in 1827. Pasteur entered primary school in 1831. He was dyslexic and dysgraphic. He was an average student in his early years, and not particularly academic, as his interests were fishing and sketching. He drew many pastels and portraits of his parents, friends and neighbors. Pasteur attended secondary school at the Collège d'Arbois. In October 1838, he left for Paris to enroll in a boarding school, but became homesick and returned in November. In 1839, he entered the Collège Royal at Besançon to study philosophy and earned his Bachelor of Letters degree in 1840. He was appointed a tutor at the Besançon college while continuing a degree science course with special mathematics. He failed his first examination in 1841. 
He managed to pass the baccalauréat scientifique (general science) degree from Dijon, where he earned his Bachelor of Science in Mathematics degree (Bachelier ès Sciences Mathématiques) in 1842, but with a mediocre grade in chemistry. Later in 1842, Pasteur took the entrance test for the École Normale Supérieure. During the test, he had to fight fatigue and only felt comfortable with physics and mathematics. He passed the first set of tests, but because his ranking was low, Pasteur decided not to continue and try again next year. He went back to the Parisian boarding school to prepare for the test. He also attended classes at the Lycée Saint-Louis and lectures of Jean-Baptiste Dumas at the Sorbonne. In 1843, he passed the test with a high ranking and entered the École Normale Supérieure. In 1845 he received the licencié ès sciences degree. In 1846, he was appointed professor of physics at the Collège de Tournon (now called Lycée Gabriel-Faure) in Ardèche. But the chemist Antoine Jérôme Balard wanted him back at the École Normale Supérieure as a graduate laboratory assistant (agrégé préparateur). He joined Balard and simultaneously started his research in crystallography and in 1847, he submitted his two theses, one in chemistry and the other in physics: (a) Chemistry Thesis: "Recherches sur la capacité de saturation de l'acide arsénieux. Etudes des arsénites de potasse, de soude et d'ammoniaque."; (b) Physics Thesis: "1. Études des phénomènes relatifs à la polarisation rotatoire des liquides. 2. Application de la polarisation rotatoire des liquides à la solution de diverses questions de chimie." After serving briefly as professor of physics at the Dijon Lycée in 1848, he became professor of chemistry at the University of Strasbourg, where he met and courted Marie Laurent, daughter of the university's rector in 1849. They were married on 29 May 1849, and together had five children, only two of whom survived to adulthood; the other three died of typhoid. Career Pasteur was appointed professor of chemistry at the University of Strasbourg in 1848, and became the chair of chemistry in 1852. In February 1854, so that he would have time to carry out work that could earn him the title of correspondent of the Institute, he got three months' paid leave with the help of a medical certificate of convenience. He extended the leave until 1 August, the date of the start of the exams. "I tell the Minister that I will go and do the examinations so as not to increase the embarrassment of the service. It is also so as not to leave to another a sum of 6 or 700 francs". In this same year 1854, he was named dean of the new faculty of sciences at University of Lille, where he began his studies on fermentation. It was on this occasion that Pasteur uttered his oft-quoted remark: "dans les champs de l'observation, le hasard ne favorise que les esprits préparés" ("In the field of observation, chance favors only the prepared mind"). In 1857, he moved to Paris as the director of scientific studies at the École Normale Supérieure where he took control from 1858 to 1867 and introduced a series of reforms to improve the standard of scientific work. The examinations became more rigid, which led to better results, greater competition, and increased prestige. Many of his decrees, however, were rigid and authoritarian, leading to two serious student revolts. During "the bean revolt" he decreed that a mutton stew, which students had refused to eat, would be served and eaten every Monday. 
On another occasion he threatened to expel any student caught smoking, and 73 of the 80 students in the school resigned. In 1863, he was appointed professor of geology, physics, and chemistry at the École nationale supérieure des Beaux-Arts, a position he held until his resignation in 1867. In 1867, he became the chair of organic chemistry at the Sorbonne, but he later gave up the position because of poor health. In 1867, the École Normale's laboratory of physiological chemistry was created at Pasteur's request, and he was the laboratory's director from 1867 to 1888. In Paris, he established the Pasteur Institute in 1887, in which he was its director for the rest of his life. Research Molecular asymmetry In Pasteur's early work as a chemist, beginning at the École Normale Supérieure, and continuing at Strasbourg and Lille, he examined the chemical, optical and crystallographic properties of a group of compounds known as tartrates. He resolved a problem concerning the nature of tartaric acid in 1848. A solution of this compound derived from living things rotated the plane of polarization of light passing through it. The problem was that tartaric acid derived by chemical synthesis had no such effect, even though its chemical reactions were identical and its elemental composition was the same. Pasteur noticed that crystals of tartrates had small faces. Then he observed that, in racemic mixtures of tartrates, half of the crystals were right-handed and half were left-handed. In solution, the right-handed compound was dextrorotatory, and the left-handed one was levorotatory. Pasteur determined that optical activity related to the shape of the crystals, and that an asymmetric internal arrangement of the molecules of the compound was responsible for twisting the light. The (2R,3R)- and (2S,3S)- tartrates were isometric, non-superposable mirror images of each other. This was the first time anyone had demonstrated molecular chirality, and also the first explanation of isomerism. Some historians consider Pasteur's work in this area to be his "most profound and most original contributions to science", and his "greatest scientific discovery." Fermentation and germ theory of diseases Pasteur was motivated to investigate fermentation while working at Lille. In 1856 a local wine manufacturer, M. Bigot, whose son was one of Pasteur's students, sought for his advice on the problems of making beetroot alcohol and souring. Pasteur began his research in the topic by repeating and confirming works of Theodor Schwann, who demonstrated a decade earlier that yeast were alive. According to his son-in-law, René Vallery-Radot, in August 1857 Pasteur sent a paper about lactic acid fermentation to the Société des Sciences de Lille, but the paper was read three months later. A memoire was subsequently published on 30 November 1857. In the memoir, he developed his ideas stating that: "I intend to establish that, just as there is an alcoholic ferment, the yeast of beer, which is found everywhere that sugar is decomposed into alcohol and carbonic acid, so also there is a particular ferment, a lactic yeast, always present when sugar becomes lactic acid." Pasteur also wrote about alcoholic fermentation. It was published in full form in 1858. Jöns Jacob Berzelius and Justus von Liebig had proposed the theory that fermentation was caused by decomposition. Pasteur demonstrated that this theory was incorrect, and that yeast was responsible for fermentation to produce alcohol from sugar. 
He also demonstrated that, when a different microorganism contaminated the wine, lactic acid was produced, making the wine sour. In 1861, Pasteur observed that less sugar fermented per part of yeast when the yeast was exposed to air. The lower rate of fermentation aerobically became known as the Pasteur effect. Pasteur's research also showed that the growth of micro-organisms was responsible for spoiling beverages, such as beer, wine and milk. With this established, he invented a process in which liquids such as milk were heated to a temperature between 60 and 100 °C. This killed most bacteria and moulds already present within them. Pasteur and Claude Bernard completed tests on blood and urine on 20 April 1862. Pasteur patented the process, to fight the "diseases" of wine, in 1865. The method became known as pasteurization, and was soon applied to beer and milk. Beverage contamination led Pasteur to the idea that micro-organisms infecting animals and humans cause disease. He proposed preventing the entry of micro-organisms into the human body, leading Joseph Lister to develop antiseptic methods in surgery. In 1866, Pasteur published Études sur le Vin, about the diseases of wine, and he published Études sur la Bière in 1876, concerning the diseases of beer. In the early 19th century, Agostino Bassi had shown that muscardine was caused by a fungus that infected silkworms. Since 1853, two diseases called pébrine and flacherie had been infecting great numbers of silkworms in southern France, and by 1865 they were causing huge losses to farmers. In 1865, Pasteur went to Alès and worked for five years until 1870. Silkworms with pébrine were covered in corpuscles. In the first three years, Pasteur thought that the corpuscles were a symptom of the disease. In 1870, he concluded that the corpuscles were the cause of pébrine (it is now known that the cause is a microsporidian). Pasteur also showed that the disease was hereditary. Pasteur developed a system to prevent pébrine: after the female moths laid their eggs, the moths were turned into a pulp. The pulp was examined with a microscope, and if corpuscles were observed, the eggs were destroyed. Pasteur concluded that bacteria caused flacherie. The primary cause is currently thought to be viruses. The spread of flacherie could be accidental or hereditary. Hygiene could be used to prevent accidental flacherie. Moths whose digestive cavities did not contain the microorganisms causing flacherie were used to lay eggs, preventing hereditary flacherie. Spontaneous generation Following his fermentation experiments, Pasteur demonstrated that the skin of grapes was the natural source of yeasts, and that sterilized grapes and grape juice never fermented. He drew grape juice from under the skin with sterilized needles, and also covered grapes with sterilized cloth. Both experiments could not produce wine in sterilized containers. His findings and ideas were against the prevailing notion of spontaneous generation. He received a particularly stern criticism from Félix Archimède Pouchet, who was director of the Rouen Museum of Natural History. To settle the debate between the eminent scientists, the French Academy of Sciences offered the Alhumbert Prize carrying 2,500 francs to whoever could experimentally demonstrate for or against the doctrine. Pouchet stated that air everywhere could cause spontaneous generation of living organisms in liquids. In the late 1850s, he performed experiments and claimed that they were evidence of spontaneous generation. 
Francesco Redi and Lazzaro Spallanzani had provided some evidence against spontaneous generation in the 17th and 18th centuries, respectively. Spallanzani's experiments in 1765 suggested that air contaminated broths with bacteria. In the 1860s, Pasteur repeated Spallanzani's experiments, but Pouchet reported a different result using a different broth. Pasteur performed several experiments to disprove spontaneous generation. He placed boiled liquid in a flask and let hot air enter the flask. Then he closed the flask, and no organisms grew in it. In another experiment, when he opened flasks containing boiled liquid, dust entered the flasks, causing organisms to grow in some of them. The number of flasks in which organisms grew was lower at higher altitudes, showing that air at high altitudes contained less dust and fewer organisms. Pasteur also used swan neck flasks containing a fermentable liquid. Air was allowed to enter the flask via a long curving tube that made dust particles stick to it. Nothing grew in the broths unless the flasks were tilted, making the liquid touch the contaminated walls of the neck. This showed that the living organisms that grew in such broths came from outside, on dust, rather than spontaneously generating within the liquid or from the action of pure air. These were some of the most important experiments disproving the theory of spontaneous generation. Pasteur gave a series of five presentations of his findings before the French Academy of Sciences in 1881, which were published in 1882 as Mémoire Sur les corpuscules organisés qui existent dans l'atmosphère: Examen de la doctrine des générations spontanées (Account of Organized Corpuscles Existing in the Atmosphere: Examining the Doctrine of Spontaneous Generation). Pasteur won the Alhumbert Prize in 1862. He concluded that: Silkworm disease In 1865, Jean-Baptiste Dumas, chemist, senator and former Minister of Agriculture and Commerce, asked Pasteur to study a new disease that was decimating silkworm farms from the south of France and Europe, the pébrine, characterized on a macroscopic scale by black spots and on a microscopic scale by the "Cornalia corpuscles". Pasteur accepted and made five long stays in Alès, between 7 June 1865 and 1869. Initial errors Arriving in Alès, Pasteur familiarized himself with pébrine and also with another disease of the silkworm, known earlier than pebrine: flacherie or dead-flat disease. Contrary, for example, to Quatrefages, who coined the new word pébrine, Pasteur made the mistake of believing that the two diseases were the same and even that most of the diseases of silkworms known up to that time were identical with each other and with pébrine. It was in letters of 30 April and 21 May 1867 to Dumas that he first made the distinction between pébrine and flacherie. He made another mistake: he began by denying the "parasitic" (microbial) nature of pébrine, which several scholars (notably Antoine Béchamp) considered well established. Even a note published on 27 August 1866 by Balbiani, which Pasteur at first seemed to welcome favourably had no effect, at least immediately. "Pasteur is mistaken. He would only change his mind in the course of 1867". 
Victory over pébrine At a time when Pasteur had not yet understood the cause of pébrine, he promoted an effective process to stop infections: a sample of chrysalises was chosen, they were crushed and the corpuscles were searched for in the crushed material; if the proportion of corpuscular pupae in the sample was very low, the chamber was considered good for reproduction. This method of sorting "seeds" (eggs) is close to a method that Osimo had proposed a few years earlier, but whose trials had not been conclusive. By this process, Pasteur curbed pébrine and saved much of the silk industry in the Cévennes. Flacherie resists In 1878, at the Congrès international séricicole, Pasteur admitted that "if pébrine is overcome, flacherie still exerts its ravages". He attributed the persistence of flacherie to the fact that the farmers had not followed his advice. In 1884, Balbiani, who disregarded the theoretical value of Pasteur's work on silkworm diseases, acknowledged that his practical process had remedied the ravages of pébrine, but added that this result tended to be counterbalanced by the development of flacherie, which was less well known and more difficult to prevent. Despite Pasteur's success against pébrine, French sericulture had not been saved from damage. (See :fr:Sériciculture in the French Wikipedia.) Immunology and vaccination Chicken cholera Pasteur's first work on vaccine development was on chicken cholera. He received the bacteria samples (later called Pasteurella multocida after him) from Henry Toussaint. Being unable to conduct the experiments himself due to a stroke in 1868, Pasteur relied heavily on his assistants Emile Roux and Charles Chamberland. The work with chicken cholera was initiated in 1877, and by the next year, Roux was able to maintain a stable culture using broths. As documented later by Pasteur in his notebook in March of 1880, in October of 1879, being delayed in returning to the laboratory due to his daughter's wedding and ill health, he instructed Roux to start a new chicken cholera culture using bacteria from a culture that had sat since July. The two chickens inoculated with this new culture showed some symptoms of infection, but instead of the infections being fatal, as they usually were, the chickens recovered completely. After further incubation of the culture for an additional 8 days, Roux again inoculated the same two chickens. As was also noted by Pasteur in his notebook in March of 1880, and contrary to some accounts, this time the chickens died. Thus, although the attenuated bacteria did not provide immunity, these experiments provided important clues as to how bacteria could be artificially attenuated in the laboratory. As a result, upon Pasteur's return to the laboratory, the focus of the research was directed at creating a vaccine through attenuation. In February of 1880, Pasteur presented his results to the French Academy of Sciences as "Sur les maladies virulentes et en particulier sur la maladie appelée vulgairement choléra des poules (On virulent diseases, and in particular on the disease commonly called chicken cholera)" and published it in the academy's journal (Comptes-Rendus hebdomadaires des séances de l'Académie des Sciences). He suggested that the bacteria had been weakened by contact with oxygen. He explained that bacteria kept in sealed containers never lost their virulence, and only those exposed to air in culture media could be used as vaccine. 
Pasteur introduced the term "attenuation" for this weakening of virulence as he presented before the academy, saying: In fact, Pasteur's vaccine against chicken cholera did not consistently produce immunity, and has subsequently been proven to be ineffective. Anthrax Following the results with chicken cholera, Pasteur eventually utilized the immunization method developed for chicken cholera to create a vaccine for anthrax, which affected cattle. In 1877, Pasteur had earlier directed his laboratory to culture the bacteria from the blood of infected animals, following the discovery of the bacterium by Robert Koch. When animals were infected with the bacteria, anthrax occurred, proving that the bacteria was the cause of the disease. Many cattle were dying of anthrax in "cursed fields". Pasteur was told that sheep that died from anthrax were buried in the field. Pasteur thought that earthworms might have brought the bacteria to the surface. He found anthrax bacteria in earthworms' excrement, showing that he was correct. He told the farmers not to bury dead animals in the fields. Pasteur's interest in creating a vaccine for anthrax was greatly stimulated when on 12 July 1880, Henri Bouley read before the French Academy of Sciences a report from Henry Toussaint, a veterinary surgeon, who was not a member of the academy. Toussaint had developed anthrax vaccine by killing the bacilli by heating at 55 °C for 10  minutes. He tested his vaccine on eight dogs and 11 sheep, half of which died after inoculation. It was not a great success. Upon hearing the news, Pasteur immediately wrote to the academy that he could not believe that dead vaccine would work and that Toussaint's claim "overturns all the ideas I had on viruses, vaccines, etc." Following Pasteur's criticism, Toussaint switched to carbolic acid (phenol) to kill anthrax bacilli and tested the vaccine on sheep in August 1880. Pasteur thought that this type of killed vaccine should not work because he believed that attenuated bacteria used up nutrients that the bacteria needed to grow. He thought oxidizing bacteria when sitting in culture broth for prolonged periods made them less virulent. However, Pasteur's laboratory found that anthrax bacillus was not easily weakened by culturing in air as it formed spores – unlike chicken cholera bacillus. In early 1881, his laboratory discovered that growing anthrax bacilli at about 42 °C made them unable to produce spores, and he described this method in a speech to the French Academy of Sciences on 28 February. On 21 March, despite inconsistent results, he announced successful vaccination of sheep. To this news, veterinarian Hippolyte Rossignol proposed that the Société d'agriculture de Melun organize an experiment to test Pasteur's vaccine. Pasteur signed an agreement accepting the challenge on 28 April. Pasteur's assistants, Roux and Chamberland, who were assigned the task of conducting the trial, were concerned about the unreliability of the attenuated vaccine, and therefore Chamberland secretly prepared an alternative vaccine using chemical inactivation. Without divulging their method of preparing the vaccine to anyone but Pasteur, Roux and Chamberland performed the public experiment on May at Pouilly-le-Fort. 58 sheep, 2 goats and 10 cattle were used, half of which were given the vaccine on 5 and 17 May; while the other half was untreated. On 31 May, Roux and Chamberland next injected the animals with the fresh virulent culture of anthrax bacillus. 
The official result was observed and analyzed on 2 June in the presence of over 200 spectators, with Pasteur himself in attendance. The results were as Pasteur had bravely predicted: "I hypothesized that the six vaccinated cows would not become very ill, while the four unvaccinated cows would perish or at least become very ill." However, all vaccinated sheep and goats survived, while unvaccinated ones had died or were dying before the viewers. His report to the French Academy of Sciences on 13 June concludes: Pasteur did not directly disclose how he prepared the vaccines used at Pouilly-le-Fort. Although his report indicated it as a "live vaccine", his laboratory notebooks show that he actually used potassium dichromate-killed vaccine, as developed by Chamberland, quite similar to Toussaint's method. The notion of a weak form of a disease causing immunity to the virulent version was not new; this had been known for a long time for smallpox. Inoculation with smallpox (variolation) was known to result in a much less severe disease, and greatly reduced mortality, in comparison with the naturally acquired disease. Edward Jenner had also studied vaccination using cowpox (vaccinia) to give cross-immunity to smallpox in the late 1790s, and by the early 1800s vaccination had spread to most of Europe. The difference between smallpox vaccination and anthrax or chicken cholera vaccination was that the latter two disease organisms had been artificially weakened, so a naturally weak form of the disease organism did not need to be found. This discovery revolutionized work in infectious diseases, and Pasteur gave these artificially weakened diseases the generic name of "vaccines", in honour of Jenner's discovery. In 1876, Robert Koch had shown that Bacillus anthracis caused anthrax. In his papers published between 1878 and 1880, Pasteur only mentioned Koch's work in a footnote. Koch met Pasteur at the Seventh International Medical Congress in 1881. A few months later, Koch wrote that Pasteur had used impure cultures and made errors. In 1882, Pasteur replied to Koch in a speech, to which Koch responded aggressively. Koch stated that Pasteur tested his vaccine on unsuitable animals and that Pasteur's research was not properly scientific. In 1882, Koch wrote "On the Anthrax Inoculation", in which he refuted several of Pasteur's conclusions about anthrax and criticized Pasteur for keeping his methods secret, jumping to conclusions, and being imprecise. In 1883, Pasteur wrote that he used cultures prepared in a similar way to his successful fermentation experiments and that Koch misinterpreted statistics and ignored Pasteur's work on silkworms. Swine erysipelas In 1882, Pasteur sent his assistant Louis Thuillier to southern France because of an epizootic of swine erysipelas. Thuillier identified the bacillus that caused the disease in March 1883. Pasteur and Thuillier increased the bacillus's virulence after passing it through pigeons. Then they passed the bacillus through rabbits, weakening it and obtaining a vaccine. Pasteur and Thuillier incorrectly described the bacterium as a figure-eight shape. Roux described the bacterium as stick-shaped in 1884. Rabies Pasteur's laboratory produced the first vaccine for rabies using a method developed by his assistant Roux, which involved growing the virus in rabbits, and then weakening it by drying the affected nerve tissue. 
The rabies vaccine was initially created by Emile Roux, a French doctor and a colleague of Pasteur, who had produced a killed vaccine using this method. The vaccine had been tested in 50 dogs before its first human trial. This vaccine was used on 9-year-old Joseph Meister, on 6 July 1885, after the boy was badly mauled by a rabid dog. This was done at some personal risk for Pasteur, since he was not a licensed physician and could have faced prosecution for treating the boy. After consulting with physicians, he decided to go ahead with the treatment. Over 11 days, Meister received 13 inoculations, each inoculation using viruses that had been weakened for a shorter period of time. Three months later he examined Meister and found that he was in good health. Pasteur was hailed as a hero and the legal matter was not pursued. Analysis of his laboratory notebooks shows that Pasteur had treated two people before his vaccination of Meister. One survived but may not actually have had rabies, and the other died of rabies. Pasteur began treatment of Jean-Baptiste Jupille on 20 October 1885, and the treatment was successful. Later in 1885, people, including four children from the United States, went to Pasteur's laboratory to be inoculated. In 1886, he treated 350 people, of which only one developed rabies. The treatment's success laid the foundations for the manufacture of many other vaccines. The first of the Pasteur Institutes was also built on the basis of this achievement. In The Story of San Michele, Axel Munthe writes of some risks Pasteur undertook in the rabies vaccine research: Because of his study in germs, Pasteur encouraged doctors to sanitize their hands and equipment before surgery. Prior to this, few doctors or their assistants practiced these procedures. Ignaz Semmelweis and Joseph Lister had earlier practiced hand sanitizing in medical contexts in the 1860s. Controversies A French national hero at age 55, in 1878 Pasteur discreetly told his family to never reveal his laboratory notebooks to anyone. His family obeyed, and all his documents were held and inherited in secrecy. Being that Pasteur did not allow others in his laboratory to keep notebooks, this secrecy kept many aspects of Pasteur's research unknown until relatively recently. Finally, in 1964 Pasteur's grandson and last surviving male descendant, Pasteur Vallery-Radot, donated the papers to the French national library. Yet the papers were restricted for historical studies until the death of Vallery-Radot in 1971. The documents were given a catalogue number only in 1985. In 1995, the centennial of the death of Louis Pasteur, a historian of science Gerald L. Geison published an analysis of Pasteur's private notebooks in his The Private Science of Louis Pasteur, and declared that Pasteur had given several misleading accounts and played deceptions in his most important discoveries. Max Perutz published a defense of Pasteur in The New York Review of Books. Based on further examinations of Pasteur's documents, French immunologist Patrice Debré concluded in his book Louis Pasteur (1998) that, in spite of his genius, Pasteur had some faults. A book review states that Debré "sometimes finds him unfair, combative, arrogant, unattractive in attitude, inflexible and even dogmatic". Fermentation Scientists before Pasteur had studied fermentation. In the 1830s, Charles Cagniard-Latour, Friedrich Traugott Kützing and Theodor Schwann used microscopes to study yeasts and concluded that yeasts were living organisms. 
In 1839, Justus von Liebig, Friedrich Wöhler and Jöns Jacob Berzelius stated that yeast was not an organism and was produced when air acted on plant juice. In 1855, Antoine Béchamp, Professor of Chemistry at the University of Montpellier, conducted experiments with sucrose solutions and concluded that water was the factor for fermentation. He changed his conclusion in 1858, stating that fermentation was directly related to the growth of moulds, which required air for growth. He regarded himself as the first to show the role of microorganisms in fermentation. Pasteur started his experiments in 1857 and published his findings in 1858 (April issue of Comptes Rendus Chimie; Béchamp's paper appeared in the January issue). Béchamp noted that Pasteur did not bring any novel idea or experiments. On the other hand, Béchamp was probably aware of Pasteur's 1857 preliminary works. With both scientists claiming priority on the discovery, a dispute, extending to several areas, lasted throughout their lives. However, Béchamp was on the losing side, as the BMJ obituary remarked: His name was "associated with bygone controversies as to priority which it would be unprofitable to recall". Béchamp proposed the incorrect theory of microzymes. According to K. L. Manchester, anti-vivisectionists and proponents of alternative medicine promoted Béchamp and microzymes, unjustifiably claiming that Pasteur plagiarized Béchamp. Pasteur thought that succinic acid inverted sucrose. In 1860, Marcellin Berthelot isolated invertase and showed that succinic acid did not invert sucrose. Pasteur believed that fermentation was only due to living cells. He and Berthelot engaged in a long argument on the subject of vitalism, in which Berthelot was vehemently opposed to any idea of vitalism. Hans Buchner discovered that zymase (not an enzyme, but a mixture of enzymes) catalyzed fermentation, showing that fermentation was catalyzed by enzymes within cells. Eduard Buchner also discovered that fermentation could take place outside living cells. Anthrax vaccine Pasteur publicly claimed his success in developing the anthrax vaccine in 1881. However, his admirer-turned-rival Henry Toussaint was the one who developed the first vaccine. Toussaint isolated the bacteria that caused chicken cholera (later named Pasteurella in honour of Pasteur) in 1879 and gave samples to Pasteur who used them for his own works. On 12 July 1880, Toussaint presented his successful result to the French Academy of Sciences, using an attenuated vaccine against anthrax in dogs and sheep. Pasteur, on grounds of jealousy, contested the discovery by publicly displaying his vaccination method at Pouilly-le-Fort on 5 May 1881. Pasteur then gave a misleading account of the preparation of the anthrax vaccine used in the experiment. He claimed that he made a "live vaccine", but used potassium dichromate to inactivate anthrax spores, a method similar to Toussaint's. The promotional experiment was a success and helped Pasteur sell his products, getting the benefits and glory. Experimental ethics Pasteur's experiments are often cited as being against medical ethics, especially on his vaccination of Meister. He did not have any experience in medical practice, and more importantly, lacked a medical license. This is often cited as a serious threat to his professional and personal reputation. His closest partner Émile Roux, who had medical qualifications, refused to participate in the clinical trial, likely because he considered it unjust. 
However, Pasteur executed vaccination of the boy under the close watch of practising physicians Jacques-Joseph Grancher, head of the Paris Children's Hospital's paediatric clinic, and Alfred Vulpian, a member of the Commission on Rabies. He was not allowed to hold the syringe, although the inoculations were entirely under his supervision. It was Grancher who was responsible for the injections, and he defended Pasteur before the French National Academy of Medicine in the issue. Pasteur has also been criticized for keeping secrecy of his procedure and not giving proper pre-clinical trials on animals. Pasteur stated that he kept his procedure secret in order to control its quality. He later disclosed his procedures to a small group of scientists. Pasteur wrote that he had successfully vaccinated 50 rabid dogs before using it on Meister. According to Geison, Pasteur's laboratory notebooks show that he had vaccinated only 11 dogs. Meister never showed any symptoms of rabies, but the vaccination has not been proved to be the reason. One source estimates the probability of Meister contracting rabies at 10%. Awards and honours Pasteur was awarded 1,500 francs in 1853 by the Pharmaceutical Society for the synthesis of racemic acid. In 1856 the Royal Society of London presented him the Rumford Medal for his discovery of the nature of racemic acid and its relations to polarized light, and the Copley Medal in 1874 for his work on fermentation. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1869. The French Academy of Sciences awarded Pasteur the 1859 Montyon Prize for experimental physiology in 1860, and the Jecker Prize in 1861 and the Alhumbert Prize in 1862 for his experimental refutation of spontaneous generation. Though he lost elections in 1857 and 1861 for membership to the French Academy of Sciences, he won the 1862 election for membership to the mineralogy section. He was elected to permanent secretary of the physical science section of the academy in 1887 and held the position until 1889. In 1873, Pasteur was elected to the Académie Nationale de Médecine and was made the commander in the Brazilian Order of the Rose. In 1881 he was elected to a seat at the Académie française left vacant by Émile Littré. Pasteur received the Albert Medal from the Royal Society of Arts in 1882. In 1883 he became foreign member of the Royal Netherlands Academy of Arts and Sciences. In 1885, he was elected as a member to the American Philosophical Society. On 8 June 1886, the Ottoman Sultan Abdul Hamid II awarded Pasteur with the Order of the Medjidie (I Class) and 10000 Ottoman liras. He was awarded the Cameron Prize for Therapeutics of the University of Edinburgh in 1889. Pasteur won the Leeuwenhoek Medal from the Royal Netherlands Academy of Arts and Sciences for his contributions to microbiology in 1895. Pasteur was made a Chevalier of the Legion of Honour in 1853, promoted to Officer in 1863, to Commander in 1868, to Grand Officer in 1878 and made a Grand Cross of the Legion of Honor in 1881. Legacy In many localities worldwide, streets are named in his honor. 
For example, in the US: Palo Alto and Irvine, California, Boston and Polk, Florida, adjacent to the University of Texas Health Science Center at San Antonio; Jonquière, Québec; San Salvador de Jujuy and Buenos Aires (Argentina), Great Yarmouth in Norfolk, in the United Kingdom, Jericho and Wulguru in Queensland, Australia; Phnom Penh in Cambodia; Ho Chi Minh City and Da Nang, Vietnam; Batna in Algeria; Bandung in Indonesia, Tehran in Iran, near the central campus of the Warsaw University in Warsaw, Poland; adjacent to the Odesa State Medical University in Odesa, Ukraine; Milan in Italy and Bucharest, Cluj-Napoca and Timișoara in Romania. The Avenue Pasteur in Saigon, Vietnam, is one of the few streets in that city to retain its French name. Avenue Louis Pasteur in the Longwood Medical and Academic Area in Boston was named in his honor in the French manner with "Avenue" preceding the name of the dedicatee. Both the Institut Pasteur and Université Louis Pasteur were named after Pasteur. The schools Lycée Pasteur in Neuilly-sur-Seine, France, and Lycée Louis Pasteur in Calgary, Alberta, Canada, are named after him. In South Africa, the Louis Pasteur Private Hospital in Pretoria, and Life Louis Pasteur Private Hospital, Bloemfontein, are named after him. Louis Pasteur University Hospital in Košice, Slovakia is also named after Pasteur. A statue of Pasteur is erected at San Rafael High School in San Rafael, California. A bronze bust of him resides on the French Campus of Kaiser Permanente's San Francisco Medical Center in San Francisco. The sculpture was designed by Harriet G. Moore and cast in 1984 by Artworks Foundry. The UNESCO/Institut Pasteur Medal was created on the centenary of Pasteur's death, and is given every two years in his name, "in recognition of outstanding research contributing to a beneficial impact on human health". The French Academician Henri Mondor stated: "Louis Pasteur was neither a physician nor a surgeon, but no one has done as much for medicine and surgery as he has." Pasteur Institute After developing the rabies vaccine, Pasteur proposed an institute for the vaccine. In 1887, fundraising for the Pasteur Institute began, with donations from many countries. The official statute was registered in 1887, stating that the institute's purposes were "the treatment of rabies according to the method developed by M. Pasteur" and "the study of virulent and contagious diseases". The institute was inaugurated on 14 November 1888. He brought together scientists with various specialties. The first five departments were directed by two graduates of the École Normale Supérieure: Émile Duclaux (general microbiology research) and Charles Chamberland (microbe research applied to hygiene), as well as a biologist, Élie Metchnikoff (morphological microbe research) and two physicians, Jacques-Joseph Grancher (rabies) and Émile Roux (technical microbe research). One year after the inauguration of the institute, Roux set up the first course of microbiology ever taught in the world, then entitled Cours de Microbie Technique (Course of microbe research techniques). Since 1891 the Pasteur Institute had been extended to different countries, and currently there are 32 institutes in 29 countries in various parts of the world. Personal life Pasteur married Marie Pasteur (née Laurent) in 1849. She was the daughter of the rector of the University of Strasbourg, and was Pasteur's scientific assistant. They had five children together, three of whom died as children. 
Their eldest daughter, Jeanne, was born in 1850. She died from typhoid fever, aged 9, whilst at the boarding school Arbois in 1859. In 1865, 2-year-old Camille died of a liver tumour. Shortly after they decided to bring Cécile home from boarding school, but she too died of typhoid fever on 23 May 1866 at the age of 12. Only Jean Baptiste (b. 1851) and Marie Louise (b. 1858) survived to adulthood. Jean Baptiste would be a soldier in the Franco-Prussian War between France and Prussia. Faith and spirituality His grandson, Louis Pasteur Vallery-Radot, wrote that Pasteur had kept from his Catholic background only a spiritualism without religious practice. However, Catholic observers often said that Pasteur remained an ardent Christian throughout his whole life, and his son-in-law wrote, in a biography of him: The Literary Digest of 18 October 1902 gives this statement from Pasteur that he prayed while he worked: Maurice Vallery-Radot, grandson of the brother of the son-in-law of Pasteur and outspoken Catholic, also holds that Pasteur fundamentally remained Catholic. According to both Pasteur Vallery-Radot and Maurice Vallery-Radot, the following well-known quotation attributed to Pasteur is apocryphal: "The more I know, the more nearly is my faith that of the Breton peasant. Could I but know all I would have the faith of a Breton peasant's wife". According to Maurice Vallery-Radot, the false quotation appeared for the first time shortly after the death of Pasteur. However, despite his belief in God, it has been said that his views were that of a freethinker rather than a Catholic, a spiritual more than a religious man. He was also against mixing science with religion. Death In 1868, Pasteur suffered a severe brain stroke that paralysed the left side of his body, but he recovered. A stroke or uremia in 1894 severely impaired his health. Failing to fully recover, he died on 28 September 1895, near Paris. He was given a state funeral and was buried in the Cathedral of Notre Dame, but his remains were reinterred in the Pasteur Institute in Paris, in a vault covered in depictions of his accomplishments in Byzantine mosaics. Publications Pasteur's principal published works are: See also Infection control Infectious disease Pasteur Institute Pasteurization The Story of Louis Pasteur (a 1936 biographical film) List of things named after Louis Pasteur Statue of Louis Pasteur, Mexico City References Further reading Cédric Grimoult, Pasteur: Le mythe au coeur de l'action (ou le combattant), Paris, Ellipses, coll. "Biographies et mythes historiques", 2021, 332 p. , chapters III (PASTEUR: Microbes are a Menace!) and V (PASTEUR: And the Mad Dog) Reynolds, Moira Davison. How Pasteur Changed History: The Story of Louis Pasteur and the Pasteur Institute (1994) External links The Institut Pasteur – Foundation dedicated to the prevention and treatment of diseases through biological research, education and public health activities The Pasteur Foundation – A US nonprofit organization dedicated to promoting the mission of the Institut Pasteur in Paris. Full archive of newsletters available online containing examples of US Tributes to Louis Pasteur. 
Pasteur's Papers on the Germ Theory The Life and Work of Louis Pasteur, Pasteur Brewing The Pasteur Galaxy Germ Theory and Its Applications to Medicine and Surgery, 1878 Louis Pasteur (1822–1895) profile, AccessExcellence.org Comptes rendus de l'Académie des sciences Articles published by Pasteur 1822 births 1895 deaths People from Dole, Jura 19th-century French biologists French microbiologists 19th-century French chemists Vaccinologists École Normale Supérieure alumni Conservatoire national des arts et métiers alumni Academic staff of the Lille University of Science and Technology Academic staff of the University of Strasbourg Members of the Académie Française Members of the French Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Honorary members of the Saint Petersburg Academy of Sciences Grand Cross of the Legion of Honour Recipients of the Order of the Medjidie, 1st class Recipients of the Copley Medal Recipients of the Order of Agricultural Merit Leeuwenhoek Medal winners French Roman Catholics Academic staff of the École des Beaux-Arts Members of the Serbian Academy of Sciences and Arts Lycée Saint-Louis alumni Members of the American Philosophical Society Stereochemists Scientists with dyslexia French scientists with disabilities
Louis Pasteur
[ "Chemistry", "Biology" ]
9,854
[ "Vaccination", "Stereochemistry", "Vaccinologists", "Stereochemists" ]
17,744
https://en.wikipedia.org/wiki/Lanthanum
Lanthanum is a chemical element with the symbol La and the atomic number 57. It is a soft, ductile, silvery-white metal that tarnishes slowly when exposed to air. It is the eponym of the lanthanide series, a group of 15 similar elements between lanthanum and lutetium in the periodic table, of which lanthanum is the first and the prototype. Lanthanum is traditionally counted among the rare earth elements. Like most other rare earth elements, its usual oxidation state is +3, although some compounds are known with an oxidation state of +2. Lanthanum has no biological role in humans but is used by some bacteria. It is not particularly toxic to humans but does show some antimicrobial activity. Lanthanum usually occurs together with cerium and the other rare earth elements. Lanthanum was first found by the Swedish chemist Carl Gustaf Mosander in 1839 as an impurity in cerium nitrate – hence the name lanthanum, from the ancient Greek λανθάνειν (lanthanein), meaning 'to lie hidden'. Although it is classified as a rare earth element, lanthanum is the 28th most abundant element in the Earth's crust, almost three times as abundant as lead. In minerals such as monazite and bastnäsite, lanthanum composes about a quarter of the lanthanide content. It is extracted from those minerals by a process of such complexity that pure lanthanum metal was not isolated until 1923. Lanthanum compounds have numerous applications including catalysts, additives in glass, carbon arc lamps for studio lights and projectors, ignition elements in lighters and torches, electron cathodes, scintillators, and gas tungsten arc welding electrodes. Lanthanum carbonate is used as a phosphate binder to treat high levels of phosphate in the blood accompanied by kidney failure. Characteristics Physical Lanthanum is the first element and prototype of the lanthanide series. In the periodic table, it appears to the right of the alkaline earth metal barium and to the left of the lanthanide cerium. Lanthanum is generally considered the first of the f-block elements by authors writing on the subject. The 57 electrons of a lanthanum atom are arranged in the configuration [Xe]5d¹6s², with three valence electrons outside the noble gas core. In chemical reactions, lanthanum almost always gives up these three valence electrons from the 5d and 6s subshells to form the +3 oxidation state, achieving the stable configuration of the preceding noble gas xenon. Some lanthanum(II) compounds are also known, but they are usually much less stable. Lanthanum monoxide (LaO) produces strong absorption bands in some stellar spectra. Among the lanthanides, lanthanum is exceptional as it has no 4f electrons as a single gas-phase atom. Thus it is only very weakly paramagnetic, unlike the strongly paramagnetic later lanthanides (with the exceptions of the last two, ytterbium and lutetium, where the 4f shell is completely full). However, the 4f shell of lanthanum can become partially occupied in chemical environments and participate in chemical bonding. For example, the melting points of the trivalent lanthanides (all but europium and ytterbium) are related to the extent of hybridisation of the 6s, 5d, and 4f electrons (lowering with increasing 4f involvement), and lanthanum has the second-lowest melting point among them: 920 °C. (Europium and ytterbium have lower melting points because they delocalise about two electrons per atom rather than three.) 
This chemical availability of f orbitals justifies lanthanum's placement in the f-block despite its anomalous ground-state configuration (which is merely the result of strong interelectronic repulsion making it less profitable to occupy the 4f shell, as it is small and close to the core electrons). The lanthanides become harder as the series is traversed: as expected, lanthanum is a soft metal. Lanthanum has a relatively high resistivity of 615 nΩm at room temperature; in comparison, the value for the good conductor aluminium is only 26.50 nΩm. Lanthanum is the least volatile of the lanthanides. Like most of the lanthanides, lanthanum has a hexagonal crystal structure at room temperature (-La). At 310 °C, lanthanum changes to a face-centered cubic structure (-La), and at 865 °C, it changes to a body-centered cubic structure (-La). Chemical As expected from periodic trends, lanthanum has the largest atomic radius of the lanthanides. Hence, it is the most reactive among them, tarnishing quite rapidly in air, turning completely dark after several hours and can readily burn to form lanthanum(III) oxide, , which is almost as basic as calcium oxide. A centimeter-sized sample of lanthanum will corrode completely in a year as its oxide spalls off like iron rust, instead of forming a protective oxide coating like aluminium, scandium, yttrium, and lutetium. Lanthanum reacts with the halogens at room temperature to form the trihalides, and upon warming will form binary compounds with the nonmetals nitrogen, carbon, sulfur, phosphorus, boron, selenium, silicon and arsenic. Lanthanum reacts slowly with water to form lanthanum(III) hydroxide, . In dilute sulfuric acid, lanthanum readily forms the aquated tripositive ion : This is colorless in aqueous solution since has no d or f electrons. Lanthanum is the strongest and hardest base among the rare earth elements, which is again expected from its being the largest of them. Some lanthanum(II) compounds are also known, but they are much less stable. Therefore, in officially naming compounds of lanthanum its oxidation number always is to be mentioned. Isotopes Naturally occurring lanthanum is made up of two isotopes, the stable and the primordial long-lived radioisotope . is by far the most abundant, making up 99.910% of natural lanthanum: it is produced in the s-process (slow neutron capture, which occurs in low- to medium-mass stars) and the r-process (rapid neutron capture, which occurs in core-collapse supernovae). It is the only stable isotope of lanthanum. The very rare isotope is one of the few primordial odd–odd nuclei, with a long half-life of It is one of the proton-rich p-nuclei which cannot be produced in the s- or r-processes. , along with the even rarer , is produced in the ν-process, where neutrinos interact with stable nuclei. All other lanthanum isotopes are synthetic: With the exception of with a half-life of about 60,000 years, all of them have half-lives less than two days, and most have half-lives less than a minute. The isotopes and occur as fission products of uranium. Compounds Lanthanum oxide is a white solid that can be prepared by direct reaction of its constituent elements. Due to the large size of the ion, adopts a hexagonal 7-coordinate structure that changes to the 6-coordinate structure of scandium oxide () and yttrium oxide () at high temperature. When it reacts with water, lanthanum hydroxide is formed: a lot of heat is evolved in the reaction and a hissing sound is heard. 
Lanthanum hydroxide will react with atmospheric carbon dioxide to form the basic carbonate. Lanthanum fluoride is insoluble in water and can be used as a qualitative test for the presence of . The heavier halides are all very soluble deliquescent compounds. The anhydrous halides are produced by direct reaction of their elements, as heating the hydrates causes hydrolysis: for example, heating hydrated produces . Lanthanum reacts exothermically with hydrogen to produce the dihydride , a black, pyrophoric, brittle, conducting compound with the calcium fluoride structure. This is a non-stoichiometric compound, and further absorption of hydrogen is possible, with a concomitant loss of electrical conductivity, until the more salt-like is reached. Like and , is probably an electride compound. Due to the large ionic radius and great electropositivity of , there is not much covalent contribution to its bonding and hence it has a limited coordination chemistry, like yttrium and the other lanthanides. Lanthanum oxalate does not dissolve very much in alkali-metal oxalate solutions, and decomposes around 500 °C. Oxygen is the most common donor atom in lanthanum complexes, which are mostly ionic and often have high coordination numbers over is the most characteristic, forming square antiprismatic and dodecadeltahedral structures. These high-coordinate species, reaching up to coordination number 12 with the use of chelating ligands such as in , often have a low degree of symmetry because of stereo-chemical factors. Lanthanum chemistry tends not to involve due to the electron configuration of the element: thus its organometallic chemistry is quite limited. The best characterized organolanthanum compounds are the cyclopentadienyl complex , which is produced by reacting anhydrous with in tetrahydrofuran, and its methyl-substituted derivatives. History In 1751, the Swedish mineralogist Axel Fredrik Cronstedt discovered a heavy mineral from the mine at Bastnäs, later named cerite. Thirty years later, the fifteen-year-old Wilhelm Hisinger, from the family owning the mine, sent a sample of it to Carl Scheele, who did not find any new elements within. In 1803, after Hisinger had become an ironmaster, he returned to the mineral with Jöns Jacob Berzelius and isolated a new oxide which they named ceria after the dwarf planet Ceres, which had been discovered two years earlier. Ceria was simultaneously independently isolated in Germany by Martin Heinrich Klaproth. Between 1839 and 1843, ceria was shown to be a mixture of oxides by the Swedish surgeon and chemist Carl Gustaf Mosander, who lived in the same house as Berzelius and studied under him: he separated out two other oxides which he named lanthana and didymia. He partially decomposed a sample of cerium nitrate by roasting it in air and then treating the resulting oxide with dilute nitric acid. That same year, Axel Erdmann, a student also at the Karolinska Institute, discovered lanthanum in a new mineral from Låven island located in a Norwegian fjord. Finally, Mosander explained his delay, saying that he had extracted a second element from cerium, and this he called didymium. Although he did not realise it, didymium too was a mixture, and in 1885 it was separated into praseodymium and neodymium. Since lanthanum's properties differed only slightly from those of cerium, and occurred along with it in its salts, he named it from the Ancient Greek [] (lit. to lie hidden). Relatively pure lanthanum metal was first isolated in 1923. 
Occurrence and production Lanthanum makes up 39 mg/kg of the Earth's crust, behind neodymium at 41.5 mg/kg and cerium at 66.5 mg/kg. Despite being among the so-called "rare earth metals", lanthanum is thus not rare at all, but it is historically so-named because it is rarer than "common earths" such as lime and magnesia, and because at the time it was recognized only a few deposits were known. Lanthanum is also considered a 'rare earth' metal in the sense that the process to mine it is difficult, time-consuming, and expensive. Lanthanum is rarely the dominant lanthanide found in the rare earth minerals, and in their chemical formulae it is usually preceded by cerium. Rare examples of La-dominant minerals are monazite-(La) and lanthanite-(La). The La³⁺ ion is similarly sized to the early lanthanides of the cerium group (those up to samarium and europium) that immediately follow in the periodic table, and hence it tends to occur along with them in phosphate, silicate and carbonate minerals, such as monazite (MPO₄) and bastnäsite (MCO₃F), where M refers to all the rare earth metals except scandium and the radioactive promethium (mostly Ce, La, and Y). Bastnäsite is usually lacking in thorium and the heavy lanthanides, and the purification of the light lanthanides from it is less involved. The ore, after being crushed and ground, is first treated with hot concentrated sulfuric acid, evolving carbon dioxide, hydrogen fluoride, and silicon tetrafluoride: the product is then dried and leached with water, leaving the early lanthanide ions, including lanthanum, in solution. The procedure for monazite, which usually contains all the rare earths as well as thorium, is more involved. Monazite, because of its magnetic properties, can be separated by repeated electromagnetic separation. After separation, it is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert rare earths to their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in nitric acid. Lanthanum is separated as a double salt with ammonium nitrate by crystallization. This salt is relatively less soluble than other rare earth double salts and therefore stays in the residue. Care must be taken when handling some of the residues as they contain ²²⁸Ra, the daughter of ²³²Th, which is a strong gamma emitter. Lanthanum is relatively easy to extract as it has only one neighbouring lanthanide, cerium, which can be removed by making use of its ability to be oxidised to the +4 state; thereafter, lanthanum may be separated out by the historical method of fractional crystallization of the ammonium nitrate double salt, or by ion-exchange techniques when higher purity is desired. Lanthanum metal is obtained from its oxide by heating it with ammonium chloride or fluoride and hydrofluoric acid at 300–400 °C to produce the chloride or fluoride: La₂O₃ + 6 NH₄Cl → 2 LaCl₃ + 6 NH₃ + 3 H₂O. This is followed by reduction with alkali or alkaline earth metals in vacuum or argon atmosphere: LaCl₃ + 3 Li → La + 3 LiCl. Also, pure lanthanum can be produced by electrolysis of a molten mixture of anhydrous LaCl₃ and NaCl or KCl at elevated temperatures. Applications The first historical application of lanthanum was in gas lantern mantles. 
Carl Auer von Welsbach used a mixture of lanthanum oxide and zirconium oxide, which he called Actinophor and patented in 1886. The original mantles gave a green-tinted light and were not very successful, and his first company, which established a factory in Atzgersdorf in 1887, failed in 1889. Modern uses of lanthanum include: One material used for anodic material of nickel–metal hydride batteries is . Due to high cost to extract the other lanthanides, a mischmetal with more than 50% of lanthanum is used instead of pure lanthanum. The compound is an intermetallic component of the type. NiMH batteries can be found in many models of the Toyota Prius sold in the US. These larger nickel-metal hydride batteries require massive quantities of lanthanum for the production. The 2008 Toyota Prius NiMH battery requires of lanthanum. As engineers push the technology to increase fuel efficiency, twice that amount of lanthanum could be required per vehicle. Hydrogen sponge alloys can contain lanthanum. These alloys are capable of storing up to 400 times their own volume of hydrogen gas in a reversible adsorption process. Heat energy is released every time they do so; therefore these alloys have possibilities in energy conservation systems. Mischmetal, a pyrophoric alloy used in lighter flints, contains 25% to 45% lanthanum. Lanthanum oxide and the boride are used in electronic vacuum tubes as hot cathode materials with strong emissivity of electrons. Crystals of are used in high-brightness, extended-life, thermionic electron emission sources for electron microscopes and Hall-effect thrusters. Lanthanum trifluoride () is an essential component of a heavy fluoride glass named ZBLAN. This glass has superior transmittance in the infrared range and is therefore used for fiber-optical communication systems. Cerium-doped lanthanum bromide and lanthanum chloride are the recent inorganic scintillators, which have a combination of high light yield, best energy resolution, and fast response. Their high yield converts into superior energy resolution; moreover, the light output is very stable and quite high over a very wide range of temperatures, making it particularly attractive for high-temperature applications. These scintillators are already widely used commercially in detectors of neutrons or gamma rays. Carbon arc lamps use a mixture of rare earth elements to improve the light quality. This application, especially by the motion picture industry for studio lighting and projection, consumed about 25% of the rare-earth compounds produced until the phase out of carbon arc lamps. Lanthanum(III) oxide () improves the alkali resistance of glass and is used in making special optical glasses, such as infrared-absorbing glass, as well as camera and telescope lenses, because of the high refractive index and low dispersion of rare-earth glasses. Lanthanum oxide is also used as a grain-growth additive during the liquid-phase sintering of silicon nitride and zirconium diboride. Small amounts of lanthanum added to steel improves its malleability, resistance to impact, and ductility, whereas addition of lanthanum to molybdenum decreases its hardness and sensitivity to temperature variations. Small amounts of lanthanum are present in many pool products to remove the phosphates that feed algae. Lanthanum oxide additive to tungsten is used in gas tungsten arc welding electrodes, as a substitute for radioactive thorium. Various compounds of lanthanum and other rare-earth elements (oxides, chlorides, triflates, etc.) 
are components of various catalysis, such as petroleum cracking catalysts. Lanthanum-barium radiometric dating is used to estimate age of rocks and ores, though the technique has limited popularity. Lanthanum carbonate was approved as a medication (Fosrenol, Shire Pharmaceuticals) to absorb excess phosphate in cases of hyperphosphatemia seen in end-stage kidney disease. Lanthanum fluoride is used in phosphor lamp coatings. Mixed with europium fluoride, it is also applied in the crystal membrane of fluoride ion-selective electrodes. Like horseradish peroxidase, lanthanum is used as an electron-dense tracer in molecular biology. Lanthanum-modified bentonite (or phoslock) is used to remove phosphates from water in lake treatments. Lanthanum telluride () is considered to be applied in the field of radioisotope power system (nuclear power plant) due to its significant conversion capabilities. The transmuted elements and isotopes in the segment will not react with the material itself, thus presenting no harm to the safety of the power plant. Though iodine, which can be generated during transmutation, is suspected to react with segment, the quantity of iodine is small enough to pose no threat to the power system. Biological role Lanthanum has no known biological role in humans. The element is very poorly absorbed after oral administration and when injected its elimination is very slow. Lanthanum carbonate (Fosrenol) was approved as a phosphate binder to absorb excess phosphate in cases of end stage renal disease. While lanthanum has pharmacological effects on several receptors and ion channels, its specificity for the GABA receptor is unique among trivalent cations. Lanthanum acts at the same modulatory site on the GABA receptor as zinc, a known negative allosteric modulator. The lanthanum cation is a positive allosteric modulator at native and recombinant GABA receptors, increasing open channel time and decreasing desensitization in a subunit configuration dependent manner. Lanthanum is a cofactor for the methanol dehydrogenase of the methanotrophic bacterium Methylacidiphilum fumariolicum SolV, although the great chemical similarity of the lanthanides means that it may be substituted with cerium, praseodymium, or neodymium without ill effects, and with the smaller samarium, europium, or gadolinium giving no side effects other than slower growth. Precautions Lanthanum has a low to moderate level of toxicity and should be handled with care. The injection of lanthanum solutions produces hyperglycemia, low blood pressure, degeneration of the spleen and hepatic alterations. The application in carbon arc light led to the exposure of people to rare earth element oxides and fluorides, which sometimes led to pneumoconiosis. As the ion is similar in size to the ion, it is sometimes used as an easily traced substitute for the latter in medical studies. Lanthanum, like the other lanthanides, is known to affect human metabolism, lowering cholesterol levels, blood pressure, appetite, and risk of blood coagulation. When injected into the brain, it acts as a painkiller, similarly to morphine and other opiates, though the mechanism behind this is still unknown. 
Lanthanum meant for ingestion, typically as a chewable tablet or oral powder, can interfere with gastrointestinal (GI) imaging by creating opacities throughout the GI tract; if chewable tablets are swallowed whole, they will dissolve but present initially as coin-shaped opacities in the stomach, potentially confused with ingested metal objects such as coins or batteries. Prices The price for a (metric) ton [1000 kg] of Lanthanum oxide 99% (FOB China in USD/Mt) is given by the Institute of Rare Earths Elements and Strategic Metals (IREESM) as below $2,000 for most of the period from early 2001 to September 2010 (at $10,000 in the short term in 2008); it rose steeply to $140,000 in mid-2011 and fell back just as rapidly to $38,000 by early 2012. The average price for the last six months (April–September 2022) is given by the IREESM as follows: Lanthanum Oxide - 99.9%min FOB China - 1308 EUR/mt and for Lanthanum Metal - 99%min FOB China - 3706 EUR/mt. Notes References Bibliography Further reading Chemical elements Chemical elements with double hexagonal close-packed structure Lanthanides Reducing agents GABAA receptor positive allosteric modulators
Lanthanum
[ "Physics", "Chemistry" ]
4,962
[ "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
17,895
https://en.wikipedia.org/wiki/Leap%20year
A leap year (also known as an intercalary year or bissextile year) is a calendar year that contains an additional day (or, in the case of a lunisolar calendar, a month) compared to a common year. The 366th day (or 13th month) is added to keep the calendar year synchronised with the astronomical year or seasonal year. Since astronomical events and seasons do not repeat in a whole number of days, calendars having a constant number of days each year will unavoidably drift over time with respect to the event that the year is supposed to track, such as seasons. By inserting ("intercalating") an additional day—a leap day—or month—a leap month—into some years, the drift between a civilization's dating system and the physical properties of the Solar System can be corrected. An astronomical year lasts slightly less than 365¼ days. The historic Julian calendar has three common years of 365 days followed by a leap year of 366 days, by extending February to 29 days rather than the common 28. The Gregorian calendar, the world's most widely used civil calendar, makes a further adjustment for the small error in the Julian algorithm. Each leap year has 366 days instead of 365. This extra leap day occurs in each year that is a multiple of 4, except for years evenly divisible by 100 but not by 400. In the lunisolar Hebrew calendar, Adar Aleph, a 13th lunar month, is added seven times every 19 years to the twelve lunar months in its common years to keep its calendar year from drifting through the seasons. In the Solar Hijri and Bahá'í calendars, a leap day is added when needed to ensure that the following year begins on the March equinox. The term leap year probably comes from the fact that a fixed date in the Gregorian calendar normally advances one day of the week from one year to the next, but the day of the week in the 12 months following the leap day (from 1 March through 28 February of the following year) will advance two days due to the extra day, thus leaping over one day in the week. For example, since 1 March was a Friday in 2024, it will be a Saturday in 2025, a Sunday in 2026, and a Monday in 2027, but will then "leap" over Tuesday to fall on a Wednesday in 2028. The length of a day is also occasionally corrected by inserting a leap second into Coordinated Universal Time (UTC) because of variations in Earth's rotation period. Unlike leap days, leap seconds are not introduced on a regular schedule because variations in the length of the day are not entirely predictable. Leap years can present a problem in computing, known as the leap year bug, when a year is not correctly identified as a leap year or when 29 February is not handled correctly in logic that accepts or manipulates dates. Julian calendar On 1 January 45 BC, by edict, Julius Caesar reformed the historic Roman calendar to make it a consistent solar calendar (rather than one which was neither strictly lunar nor strictly solar), thus removing the need for frequent intercalary months. His rule for leap years was a simple one: add a leap day every 4 years. This algorithm is close to reality: a Julian year lasts 365.25 days, a mean tropical year about 365.2422 days, a difference of only about 11 minutes per year. Consequently, even this Julian calendar drifts out of 'true' by about 3 days every 400 years. The Julian calendar continued in use unaltered for about 1600 years until the Catholic Church became concerned about the widening divergence between the March Equinox and 21 March, as explained at Gregorian calendar, below. 
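The rules described above translate directly into code. The following Python sketch is illustrative and not part of the article; it implements the simple Julian rule and the Gregorian refinement, and reproduces the average-length and drift figures quoted above:

```python
def is_julian_leap(year: int) -> bool:
    # Julian rule: add a leap day every 4 years.
    return year % 4 == 0

def is_gregorian_leap(year: int) -> bool:
    # Gregorian rule: every multiple of 4, except years divisible by 100
    # unless they are also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average calendar-year length implied by each rule over a full 400-year cycle:
julian_avg = 365 + sum(is_julian_leap(y) for y in range(1, 401)) / 400        # 365.25
gregorian_avg = 365 + sum(is_gregorian_leap(y) for y in range(1, 401)) / 400  # 365.2425

# Drift of the Julian calendar against a mean tropical year of ~365.2422 days:
print((julian_avg - 365.2422) * 400)   # ≈ 3.1 days every 400 years, as stated above
```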
Prior to Caesar's creation of what would be the Julian calendar, February was already the shortest month of the year for Romans. In the Roman calendar (after the reform of Numa Pompilius that added January and February), all months except February had an odd number of days, 29 or 31. This was because of a Roman superstition that even numbers were unlucky. When Caesar changed the calendar to follow the solar year closely, he made all months have 30 or 31 days, leaving February unchanged except in leap years. Gregorian calendar In the Gregorian calendar, the standard calendar in most of the world, almost every fourth year is a leap year. Each leap year, the month of February has 29 days instead of 28. Adding one extra day in the calendar every four years compensates for the fact that a period of 365 days is shorter than a tropical year by almost six hours. However, this correction is excessive and the Gregorian reform modified the Julian calendar's scheme of leap years as follows: Every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100, but these centurial years are leap years if they are exactly divisible by 400. For example, the years 1700, 1800, and 1900 are not leap years, but the years 1600 and 2000 are. Whereas the Julian calendar year incorrectly summarised Earth's tropical year as 365.25 days, the Gregorian calendar makes these exceptions to follow a calendar year of 365.2425 days. This more closely resembles a mean tropical year of 365.2422 days. Over a period of four centuries, the accumulated error of adding a leap day every four years amounts to about three extra days. The Gregorian calendar therefore omits three leap days every 400 years, which is the length of its leap cycle. This is done by omitting 29 February in the three century years (multiples of 100) that are not multiples of 400. The years 2000 and 2400 are leap years, but not 1700, 1800, 1900, 2100, 2200, and 2300. By this rule, an entire leap cycle is 400 years, which totals 146,097 days, and the average number of days per year is 365 + 1/4 − 1/100 + 1/400 = 365 + 97/400 = 365.2425. This rule could be applied to years before the Gregorian reform to create a proleptic Gregorian calendar, though the result would not match any historical records. The Gregorian calendar was designed to keep the vernal equinox on or close to 21 March, so that the date of Easter (celebrated on the Sunday after the ecclesiastical full moon that falls on or after 21 March) remains close to the vernal equinox. The "Accuracy" section of the "Gregorian calendar" article discusses how well the Gregorian calendar achieves this objective, and how well it approximates the tropical year. Leap day in the Julian and Gregorian calendars The intercalary day that usually occurs every 4 years is called leap day and is created by adding an extra day to February. This day is added to the calendar in leap years as a corrective measure because the Earth does not orbit the Sun in precisely 365 days. Since about the 15th century, this extra day has been 29 February, but when the Julian calendar was introduced, the leap day was handled differently in two respects. First, leap day fell within February and not at the end: 24 February was doubled to create, strangely to modern eyes, two days both dated 24 February. Second, the leap day was simply not counted so that a leap year still had 365 days. Early Roman practice The early Roman calendar was a lunisolar one that consisted of 12 months, for a total of 355 days. 
In addition, a 27- or 28-day intercalary month, the , was sometimes inserted into February, at the first or second day after the (23 February), to resynchronise the lunar and solar cycles. The remaining days of Februarius were discarded. This intercalary month, named or , contained 27 days. The religious festivals that were normally celebrated in the last 5 days of February were moved to the last 5 days of Intercalaris. The lunisolar calendar was abandoned about 450 BC by the , who implemented the Roman Republican calendar, used until 46 BC. The days of these calendars were counted down (inclusively) to the next named day, so 24 February was ["the sixth day before the calends of March"] often abbreviated The Romans counted days inclusively in their calendars, so this was the fifth day before 1 March when counted in the modern exclusive manner (i.e., not including both the starting and ending day). Because only 22 or 23 days were effectively added, not a full lunation, the calends and ides of the Roman Republican calendar were no longer associated with the new moon and full moon. Julian reform In Caesar's revised calendar, there was just one intercalary daynowadays called the leap dayto be inserted every fourth year, and this too was done after 23 February. To create the intercalary day, the existing (sixth day (inclusive: i.e. what we would call the fifth day before) before the (first day) of March, i.e. what we would call 24 February) was doubled, producing [a second sixth day before the Kalends. This ("twice sixth") was rendered in later languages as "bissextile": the "bissextile day" is the leap day, and a "bissextile year" is a year which includes a leap day. This second instance of the sixth day before the Kalends of March was inserted in calendars between the "normal" fifth and sixth days. By legal fiction, the Romans treated both the first "sixth day" and the additional "sixth day" before the Kalends of March as one day. Thus a child born on either of those days in a leap year would have its first birthday on the following sixth day before the Kalends of March. In a leap year in the original Julian calendar, there were indeed two days both numbered 24 February. This practice continued for another fifteen to seventeen centuries, even after most countries had adopted the Gregorian calendar. For legal purposes, the two days of the were considered to be a single day, with the second sixth being intercalated; but in common practice by the year 238, when Censorinus wrote, the intercalary day was followed by the last five days of February, a. d. VI, V, IV, III, and (the days numbered 24, 25, 26, 27, and 28 from the beginning of February in a common year), so that the intercalated day was the first of the doubled pair. Thus the intercalated day was effectively inserted between the 23rd and 24th days of February. All later writers, including Macrobius about 430, Bede in 725, and other medieval computists (calculators of Easter), continued to state that the bissextum (bissextile day) occurred before the last five days of February. In England, the Church and civil society continued the Roman practice whereby the leap day was simply not counted, so that a leap year was only reckoned as 365 days. Henry III's 1236 instructed magistrates to treat the leap day and the day before as one day. The practical application of the rule is obscure. It was regarded as in force in the time of the famous lawyer Sir Edward Coke (1552–1634) because he cites it in his Institutes of the Lawes of England. 
However, Coke merely quotes the Act with a short translation and does not give practical examples. 29 February Replacement (by 29 February) of the awkward practice of having two days with the same date appears to have evolved by custom and practice; the etymological origin of the term "bissextile" seems to have been lost. In England in the fifteenth century, "29 February" appears increasingly often in legal documentsalthough the records of the proceedings of the House of Commons of England continued to use the old system until the middle of the sixteenth century. It was not until the passage of the Calendar (New Style) Act 1750 that 29 February was formally recognised in British law. Liturgical practices In the liturgical calendar of the Christian churches, the placement of the leap day is significant because of the date of the feast of Saint Matthias, which is defined as the sixth day before 1 March (counting inclusively). The Church of England's Book of Common Prayer was still using the "two days with the same date" system in its 1542 edition; it first included a calendar which used entirely consecutive day counting from 1662 and showed leap day as falling on 29 February. In the 1680s, the Church of England declared 25 February to be the feast of St Matthias. Until 1970, the Roman Catholic Church always celebrated the feast of Saint Matthias on , so if the days were numbered from the beginning of the month, it was named 24 February in common years, but the presence of the in a bissextile year immediately before shifted the latter day to 25 February in leap years, with the Vigil of St. Matthias shifting from 23 February to the leap day of 24 February. This shift did not take place in pre-Reformation Norway and Iceland; Pope Alexander III ruled that either practice was lawful. Other feasts normally falling on 25–28 February in common years are also shifted to the following day in a leap year (although they would be on the same day according to the Roman notation). The practice is still observed by those who use the older calendars. In the Eastern Orthodox Church, the feast of St. John Cassian is celebrated on 29 February, but he is instead commemorated at Compline on 28 February in non-leap years. The feast of St. Matthias is celebrated in August, so leap years do not affect his commemoration, and, while the feast of the First and Second Findings of the Head of John the Baptist is celebrated on 24 February, the Orthodox church calculates days from the beginning of the current month, rather than counting down days to the Kalends of the following month, this is not affected. Thus, only the feast of St. John Cassian and any movable feasts associated with the Lenten or Pre-Lenten cycles are affected. Folk traditions In Ireland and Britain, it is a tradition that women may propose marriage only in leap years. While it has been claimed that the tradition was initiated by Saint Patrick or Brigid of Kildare in 5th century Ireland, this is dubious, as the tradition has not been attested before the 19th century. Supposedly, a 1288 law by Queen Margaret of Scotland (then age five and living in Norway), required that fines be levied if a marriage proposal was refused by the man; compensation was deemed to be a pair of leather gloves, a single rose, £1, and a kiss. In some places the tradition was tightened to restricting female proposals to the modern leap day, 29 February, or to the medieval (bissextile) leap day, 24 February. 
According to Felten: "A play from the turn of the 17th century, 'The Maydes Metamorphosis,' has it that 'this is leape year/women wear breeches.' A few hundred years later, breeches wouldn't do at all: Women looking to take advantage of their opportunity to pitch woo were expected to wear a scarlet petticoatfair warning, if you will." In Finland, the tradition is that if a man refuses a woman's proposal on leap day, he should buy her the fabrics for a skirt. In France, since 1980, a satirical newspaper titled La Bougie du Sapeur is published only on leap year, on 29 February. In Greece, marriage in a leap year is considered unlucky. One in five engaged couples in Greece will plan to avoid getting married in a leap year. In February 1988 the town of Anthony, Texas, declared itself the "leap year capital of the world", and an international leapling birthday club was started. Birthdays A person born on February 29 may be called a "leapling" or a "leaper". In common years, they celebrate their birthdays on 28 February or 1 March. Technically, a leapling will have fewer birthday anniversaries than their age in years. This phenomenon may be exploited for dramatic effect when a person is declared to be only a quarter of their actual age, by counting their leap-year birthday anniversaries only. For example, in Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic (the pirate apprentice) discovers that he is bound to serve the pirates until his 21st birthday (that is, when he turns 88 years old, since 1900 was not a leap year) rather than until his 21st year. For legal purposes, legal birthdays depend on how local laws count time intervals. Taiwan The Civil Code of Taiwan since 10 October 1929, implies that the legal birthday of a leapling is 28 February in common years: Hong Kong Since 1990 non-retroactively, Hong Kong considers the legal birthday of a leapling 1 March in common years: UK In the UK 1 March is considered to be a leapling's legal birthday. Revised Julian calendar The Revised Julian calendar adds an extra day to February in years that are multiples of four, except for years that are multiples of 100 that do not leave a remainder of 200 or 600 when divided by 900. This rule agrees with the rule for the Gregorian calendar until 2799. The first year that dates in the Revised Julian calendar will not agree with those in the Gregorian calendar will be 2800, because it will be a leap year in the Gregorian calendar but not in the Revised Julian calendar. This rule gives an average year length of 365.242222 days. This is a very good approximation to the mean tropical year, but because the vernal equinox year is slightly longer, the Revised Julian calendar, for the time being, does not do as good a job as the Gregorian calendar at keeping the vernal equinox on or close to 21 March. Baháʼí calendar The Baháʼí calendar is a solar calendar composed of 19 months of 19 days each (361 days). Years begin at Naw-Rúz, on the vernal equinox, on or about 21 March. A period of "Intercalary Days", called Ayyam-i-Ha, is inserted before the 19th month. This period normally has 4 days, but an extra day is added when needed to ensure that the following year starts on the vernal equinox. This is calculated and known years in advance. Bengali, Indian and Thai calendars The Revised Bengali Calendar of Bangladesh and the Indian National Calendar organise their leap years so that every leap day is close to 29 February in the Gregorian calendar and vice versa. 
This makes it easy to convert dates to or from Gregorian. The Thai solar calendar uses the Buddhist Era (BE) but has been synchronised with the Gregorian since AD 1941. Chinese calendar The Chinese calendar is lunisolar, so a leap year has an extra month, often called an embolismic month after the Greek word for it. In the Chinese calendar, the leap month is added according to a rule which ensures that month 11 is always the month that contains the northern winter solstice. The intercalary month takes the same number as the preceding month; for example, if it follows the second month (二月) then it is simply called "leap second month" i.e. . Hebrew calendar The Hebrew calendar is lunisolar with an embolismic month. This extra month is called Adar Rishon (first Adar) and is added before Adar, which then becomes Adar Sheini (second Adar). According to the Metonic cycle, this is done seven times every nineteen years (specifically, in years 3, 6, 8, 11, 14, 17, and 19). This is to ensure that Passover () is always in the spring as required by the Torah (Pentateuch) in many verses relating to Passover. In addition, the Hebrew calendar has postponement rules that postpone the start of the year by one or two days. These postponement rules reduce the number of different combinations of year length and starting days of the week from 28 to 14, and regulate the location of certain religious holidays in relation to the Sabbath. In particular, the first day of the Hebrew year can never be Sunday, Wednesday, or Friday. This rule is known in Hebrew as "" (), i.e., "Rosh [ha-Shanah, first day of the year] is not Sunday, Wednesday, or Friday" (as the Hebrew word is written by three Hebrew letters signifying Sunday, Wednesday, and Friday). Accordingly, the first day of Passover is never Monday, Wednesday, or Friday. This rule is known in Hebrew as "" (), which has a double meaning — "Passover is not a legend", but also "Passover is not Monday, Wednesday, or Friday" (as the Hebrew word is written by three Hebrew letters signifying Monday, Wednesday, and Friday). One reason for this rule is that Yom Kippur, the holiest day in the Hebrew calendar and the tenth day of the Hebrew year, now must never be adjacent to the weekly Sabbath (which is Saturday), i.e., it must never fall on Friday or Sunday, in order not to have two adjacent Sabbath days. However, Yom Kippur can still be on Saturday. A second reason is that Hoshana Rabbah, the 21st day of the Hebrew year, will never be on Saturday. These rules for the Feasts do not apply to the years from the Creation to the deliverance of the Hebrews from Egypt under Moses. It was at that time (cf. Exodus 13) that the God of Abraham, Isaac and Jacob gave the Hebrews their "Law" including the days to be kept holy and the feast days and Sabbaths. Years consisting of 12 months have between 353 and 355 days. In a ("in order") 354-day year, months have alternating 30 and 29 day lengths. In a ("lacking") year, the month of Kislev is reduced to 29 days. In a ("filled") year, the month of Marcheshvan is increased to 30 days. 13-month years follow the same pattern, with the addition of the 30-day Adar Alef, giving them between 383 and 385 days. Islamic calendars The observed and calculated versions of the lunar Islamic calendar do not have regular leap days, even though both have lunar months containing 29 or 30 days, generally in alternating order. 
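The seven-in-nineteen leap pattern described above can be expressed compactly. The following Python sketch is illustrative and not part of the article; the mapping of an absolute Hebrew year to its position in the 19-year Metonic cycle assumes the common convention that year 1 of the Hebrew era begins a cycle.

```python
# Leap months are added in years 3, 6, 8, 11, 14, 17 and 19 of each 19-year cycle.
LEAP_POSITIONS = {3, 6, 8, 11, 14, 17, 19}

def is_hebrew_leap(hebrew_year: int) -> bool:
    # Assumed convention (not stated in the text): Hebrew year 1 starts a cycle,
    # so the position within the current cycle is ((year - 1) mod 19) + 1.
    position = (hebrew_year - 1) % 19 + 1
    return position in LEAP_POSITIONS

# Sanity check: exactly 7 of any 19 consecutive cycle years are leap years.
assert sum(is_hebrew_leap(y) for y in range(1, 20)) == 7
```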
However, the tabular Islamic calendar used by Islamic astronomers during the Middle Ages and still used by some Muslims does have a regular leap day added to the last month of the lunar year in 11 years of a 30-year cycle. This additional day is found at the end of the last month, Dhu al-Hijjah, which is also the month of the Hajj. The Solar Hijri calendar is the modern Iranian calendar. It is an observational calendar that starts on the spring equinox (Northern Hemisphere) and adds a single intercalated day to the last month (Esfand) once every 4 or 5 years; the first leap year occurs as the fifth year of the typical 33-year cycle and the remaining leap years occur every 4 years through the remainder of the 33-year cycle. This system has less periodic deviation or jitter from its mean year than the Gregorian calendar and operates on the simple rule that New Year's Day must fall in the 24 hours of the vernal equinox. The 33-year period is not completely regular; every so often the 33-year cycle will be broken by a cycle of 29 years. The Hijri-Shamsi calendar, also adopted by the Ahmadiyya Community, is based on solar calculations and is similar to the Gregorian calendar in its structure with the exception that its epoch is the Hijra. Coptic and Ethiopian calendars The Coptic calendar has 13 months, 12 of 30 days each, and one at the end of the year of 5 days, or 6 days in leap years. The Coptic Leap Year follows the same rules as the Julian Calendar so that the extra month always has 6 days in the year before a Julian Leap Year. The Ethiopian calendar has 12 months of 30 days plus 5 or 6 epagomenal days, which comprise a 13th month. See also Century leap year Calendar reform includes proposals that have not (yet) been adopted. Leap second Leap week calendar Leap year bug Sansculottides Zeller's congruence 30 February Leap year starting on Monday Leap year starting on Tuesday Leap year starting on Wednesday Leap year starting on Thursday Leap year starting on Friday Leap year starting on Saturday Leap year starting on Sunday Orbital period Notes Sources References External links Famous Leapers Leap Day Campaign: Galileo Day History Behind Leap Year National Geographic Society Calendars Types of year Units of time
Leap year
[ "Physics", "Mathematics" ]
5,144
[ "Calendars", "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
17,939
https://en.wikipedia.org/wiki/Light
Light, visible light, or visible radiation is electromagnetic radiation that can be perceived by the human eye. Visible light spans the visible spectrum and is usually defined as having wavelengths in the range of 400–700 nanometres (nm), corresponding to frequencies of 750–420 terahertz. The visible band sits adjacent to the infrared (with longer wavelengths and lower frequencies) and the ultraviolet (with shorter wavelengths and higher frequencies), called collectively optical radiation. In physics, the term "light" may refer more broadly to electromagnetic radiation of any wavelength, whether visible or not. In this sense, gamma rays, X-rays, microwaves and radio waves are also light. The primary properties of light are intensity, propagation direction, frequency or wavelength spectrum, and polarization. Its speed in vacuum, 299,792,458 metres per second, is one of the fundamental constants of nature. Like all types of electromagnetic radiation, visible light propagates by massless elementary particles called photons that represent the quanta of the electromagnetic field, and can be analyzed as both waves and particles. The study of light, known as optics, is an important research area in modern physics. The main source of natural light on Earth is the Sun. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight. Electromagnetic spectrum and visible light Generally, electromagnetic radiation (EMR) is classified by wavelength into radio waves, microwaves, infrared, the visible spectrum that we perceive as light, ultraviolet, X-rays and gamma rays. The designation "radiation" excludes static electric, magnetic and near fields. The behavior of EMR depends on its wavelength. Higher frequencies have shorter wavelengths and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries. EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies that are capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. At the lower end of the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in the visual molecule retinal in the human retina, which change triggers the sensation of vision. There exist animals that are sensitive to various types of infrared, but not by means of quantum-absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, which is how these animals detect it. Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nm and the internal lens below 400 nm. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short (below 360 nm) ultraviolet wavelengths and are in fact damaged by ultraviolet. 
Many animals with eyes that do not require lenses (such as insects and shrimp) are able to detect ultraviolet, by quantum photon-absorption mechanisms, in much the same chemical way that humans detect visible light. Various sources define visible light as narrowly as 420–680 nm to as broadly as 380–800 nm. Under ideal laboratory conditions, people can see infrared up to at least 1,050 nm; children and young adults may perceive ultraviolet wavelengths down to about 310–313 nm. Plant growth is also affected by the colour spectrum of light, a process known as photomorphogenesis. Speed of light The speed of light in vacuum is defined to be exactly 299,792,458 metres per second (approximately 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the metre is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum. Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. An early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 22 minutes to traverse the diameter of Earth's orbit. However, its size was not known at that time. If Rømer had known the diameter of the Earth's orbit, he could have converted this timing into a value for the speed of light. Another more accurate measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometers away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel and the rate of rotation, Fizeau was able to calculate the speed of light as about 313,000,000 m/s. Léon Foucault carried out an experiment which used rotating mirrors to obtain a value of 298,000,000 m/s in 1862. Albert A. Michelson conducted experiments on the speed of light from 1877 until his death in 1931. He refined Foucault's methods in 1926 using improved rotating mirrors to measure the time it took light to make a round trip from Mount Wilson to Mount San Antonio in California. The precise measurements yielded a speed of 299,796,000 m/s. The effective velocity of light in various transparent substances containing ordinary matter is less than in vacuum. For example, the speed of light in water is about 3/4 of that in vacuum. Two independent teams of physicists were said to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium, one team at Harvard University and the Rowland Institute for Science in Cambridge, Massachusetts and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge. However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrary later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light. Optics The study of light and the interaction of light and matter is termed optics. 
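The factor by which light is slowed in a transparent substance is its refractive index n, the same quantity that appears in the refraction law discussed below. The following Python sketch is illustrative only and uses an assumed textbook index for water (n ≈ 1.33, a value not given in the text); it reproduces the statement above that light in water travels at about 3/4 of its vacuum speed.

```python
c = 299_792_458          # speed of light in vacuum, m/s
n_water = 1.33           # assumed textbook refractive index of water (not from the article)

v_water = c / n_water    # effective speed of light in water
print(v_water / c)       # ≈ 0.75, i.e. about 3/4 of the vacuum speed
```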
The observation and study of optical phenomena such as rainbows and the aurora borealis offer many clues as to the nature of light. A transparent object allows light to transmit or pass through. Conversely, an opaque object does not allow light to transmit through and instead reflects or absorbs the light it receives. Most objects do not reflect or transmit light specularly and to some degree scatter the incoming light; this property is referred to as glossiness. Surface scatterance is caused by the surface roughness of the reflecting surfaces, and internal scatterance is caused by the difference of refractive index between the particles and medium inside the object. Like transparent objects, translucent objects allow light to transmit through, but translucent objects also scatter certain wavelengths of light via internal scatterance. Refraction Refraction is the bending of light rays when passing through a surface between one transparent material and another. It is described by Snell's law, n1 sin θ1 = n2 sin θ2, where θ1 is the angle between the ray and the surface normal in the first medium, θ2 is the angle between the ray and the surface normal in the second medium and n1 and n2 are the indices of refraction, n = 1 in a vacuum and n > 1 in a transparent substance. When a beam of light crosses the boundary between a vacuum and another medium, or between two different media, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or rather normal) to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction. The refractive quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation. Light sources There are many sources of light. A body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight, the radiation emitted by the chromosphere of the Sun at around 6,000 K. Solar radiation peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units, and roughly 44% of the radiation that reaches the ground is visible. Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history is the glowing solid particles in flames, but these also emit most of their radiation in the infrared and only a fraction in the visible spectrum. The peak of the black-body spectrum is in the deep infrared, at about 10 micrometre wavelength, for relatively cool objects like human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a red glow, then a white one and finally a blue-white colour as the peak moves out of the visible part of the spectrum and into the ultraviolet. These colours can be seen when metal is heated to "red hot" or "white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-blue colour in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH radicals emitting a wavelength band around 425 nm and is not seen in stars or pure thermal radiation). Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. 
Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.) and flames (light from the hot gas itself—so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser. Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron radiation, synchrotron radiation and bremsstrahlung radiation are all examples of this. Particles moving through a medium faster than the speed of light in that medium can produce visible Cherenkov radiation. Certain chemicals produce visible radiation by chemoluminescence. In living things, this process is called bioluminescence. For example, fireflies produce light by this means and boats moving through water can disturb plankton which produce a glowing wake. Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. Some substances emit light slowly after excitation by more energetic radiation. This is known as phosphorescence. Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example. This mechanism is used in cathode-ray tube television sets and computer monitors. Certain other mechanisms can produce light: Electroluminescence Scintillation Sonoluminescence Triboluminescence When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include: Particle–antiparticle annihilation Radioactive decay Measurement Light is measured with two main alternative sets of units: radiometry consists of measurements of light power at all wavelengths, while photometry measures light with wavelength weighted with respect to a standardized model of human brightness perception. Photometry is useful, for example, to quantify Illumination (lighting) intended for human use. The photometry units are different from most systems of physical units in that they take into account how the human eye responds to light. The cone cells in the human eye are of three types which respond differently across the visible spectrum and the cumulative response peaks at a wavelength of around 555 nm. Therefore, two sources of light which produce the same intensity (W/m2) of visible light do not necessarily appear equally bright. The photometry units are designed to take this into account and therefore are a better representation of how "bright" a light appears to be than raw intensity. They relate to raw power by a quantity called luminous efficacy and are used for purposes like determining how to best achieve sufficient illumination for various tasks in indoor and outdoor settings. The illumination measured by a photocell sensor does not necessarily correspond to what is perceived by the human eye and without filters which may be costly, photocells and charge-coupled devices (CCD) tend to respond to some infrared, ultraviolet or both. Light pressure Light exerts physical pressure on objects in its path, a phenomenon which can be deduced by Maxwell's equations, but can be more easily explained by the particle nature of light: photons strike and transfer their momentum. Light pressure is equal to the power of the light beam divided by c, the speed of light. Due to the magnitude of c, the effect of light pressure is negligible for everyday objects. 
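As a quick check of the relation just stated, the force on an illuminated absorbing object is the beam power divided by the speed of light; the illustrative Python sketch below (not part of the article) reproduces the laser-pointer figure quoted in the next paragraph.

```python
c = 299_792_458     # speed of light in vacuum, m/s
power_w = 1e-3      # a one-milliwatt beam, as in the laser-pointer example that follows

force_n = power_w / c
print(f"{force_n:.2e} N")   # ≈ 3.3e-12 N, i.e. about 3.3 piconewtons
```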
For example, a one-milliwatt laser pointer exerts a force of about 3.3 piconewtons on the object being illuminated; thus, one could lift a U.S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers. However, in nanometre-scale applications such as nanoelectromechanical systems (NEMS), the effect of light pressure is more significant and exploiting light pressure to drive NEMS mechanisms and to flip nanometre-scale physical switches in integrated circuits is an active area of research. At larger scales, light pressure can cause asteroids to spin faster, acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation. Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum. This should not be confused with the Nichols radiometer, in which the (slight) motion caused by torque (though not enough for full rotation against friction) is directly caused by light pressure. As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backwardacting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief." Usually light momentum is aligned with its direction of motion. However, for example in evanescent waves momentum is transverse to direction of propagation. Historical theories about light, in chronological order Classical Greece and Hellenism In the fifth century BC, Empedocles postulated that everything was composed of four elements; fire, air, earth and water. He believed that goddess Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun. In about 300 BC, Euclid wrote Optica, in which he studied the properties of light. Euclid postulated that light travelled in straight lines and he described the laws of reflection and studied them mathematically. He questioned that sight is the result of a beam from the eye, for he asks how one sees the stars immediately, if one closes one's eyes, then opens them at night. If the beam from the eye travels infinitely fast this is not a problem. In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote that "The light & heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove." (from On the nature of the Universe). Despite being similar to later particle theories, Lucretius's views were not generally accepted. 
Ptolemy (c. second century) wrote about the refraction of light in his book Optics. Classical India In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the early centuries AD developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned and it appears that they were actually taken to be continuous. The Vishnu Purana refers to sunlight as "the seven rays of the sun". The Indian Buddhists, such as Dignāga in the fifth century and Dharmakirti in the seventh century, developed a type of atomism that is a philosophy about reality being composed of atomic entities that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent to energy. Descartes René Descartes (1596–1650) held that light was a mechanical property of the luminous body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Roger Bacon, Robert Grosseteste and Johannes Kepler. In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves. Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media. Descartes is not the first to use the mechanical analogies but because he clearly asserts that light is only a mechanical property of the luminous body and the transmitting medium, Descartes's theory of light is regarded as the start of modern physical optics. Particle theory Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age and preferred his view to Descartes's theory of the plenum. He stated in his Hypothesis of Light of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether. Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater. Newton published the final version of his theory in his Opticks of 1704. His reputation helped the particle theory of light to hold sway during the eighteenth century. The particle theory of light led Pierre-Simon Laplace to argue that a body could be so massive that light could not escape from it. In other words, it would become what is now called a black hole. Laplace withdrew his suggestion later, after a wave theory of light became firmly established as the model for light (as has been explained, neither a particle or wave theory is fully correct). A translation of Newton's essay on light appears in The large scale structure of space-time, by Stephen Hawking and George F. R. Ellis. 
The fact that light could be polarized was for the first time qualitatively explained by Newton using the particle theory. Étienne-Louis Malus in 1810 created a mathematical particle theory of polarization. Jean-Baptiste Biot in 1812 showed that this theory explained all known phenomena of light polarization. At that time the polarization was considered as the proof of the particle theory. Wave theory To explain the origin of colours, Robert Hooke (1635–1703) developed a "pulse theory" and compared the spreading of light to that of waves in water in his 1665 work Micrographia ("Observation IX"). In 1672 Hooke suggested that light's vibrations could be perpendicular to the direction of propagation. Christiaan Huygens (1629–1695) worked out a mathematical wave theory of light in 1678 and published it in his Treatise on Light in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the luminiferous aether. As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium. The wave theory predicted that light waves could interfere with each other like sound waves (as noted around 1800 by Thomas Young). Young showed by means of a diffraction experiment that light behaved as waves. He also proposed that different colours were caused by different wavelengths of light and explained colour vision in terms of three-coloured receptors in the eye. Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis et colorum (1746) that diffraction could more easily be explained by a wave theory. In 1816 André-Marie Ampère gave Augustin-Jean Fresnel an idea that the polarization of light can be explained by the wave theory if light were a transverse wave. Later, Fresnel independently worked out his own wave theory of light and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favor of the wave theory, helping to overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical methods that polarization could be explained by the wave theory of light if and only if light was entirely transverse, with no longitudinal vibration whatsoever. The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. The existence of the hypothetical substance luminiferous aether proposed by Huygens in 1678 was cast into strong doubt in the late nineteenth century by the Michelson–Morley experiment. Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850. His result supported the wave theory, and the classical particle theory was finally abandoned (only to partly re-emerge in the twentieth century as photons in quantum theory). Electromagnetic theory In 1845, Michael Faraday discovered that the plane of polarization of linearly polarized light is rotated when the light rays travel along the magnetic field direction in the presence of a transparent dielectric, an effect now known as Faraday rotation. This was the first evidence that light was related to electromagnetism. 
In 1846 he speculated that light might be some form of disturbance propagating along magnetic field lines. Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether. Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light. From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism, which contained a full mathematical description of the behavior of electric and magnetic fields, still known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging and wireless communications. In the quantum theory, photons are seen as wave packets of the waves described in the classical theory of Maxwell. The quantum theory was needed to explain effects even with visual light that Maxwell's classical theory could not (such as spectral lines). Quantum theory In 1900 Max Planck, attempting to explain black-body radiation, suggested that although light was a wave, these waves could gain or lose energy only in finite amounts related to their frequency. Planck called these "lumps" of light energy "quanta" (from a Latin word for "how much"). In 1905, Albert Einstein used the idea of light quanta to explain the photoelectric effect and suggested that these light quanta had a "real" existence. In 1923 Arthur Holly Compton showed that the wavelength shift seen when low intensity X-rays scattered from electrons (so called Compton scattering) could be explained by a particle-theory of X-rays, but not a wave theory. In 1926 Gilbert N. Lewis named these light quanta particles photons. Eventually quantum mechanics came to picture light as (in some sense) both a particle and a wave, and (in another sense) as a phenomenon which is neither a particle nor a wave (which actually are macroscopic phenomena, such as baseballs or ocean waves). Instead, under some approximations light can be described sometimes with mathematics appropriate to one type of macroscopic metaphor (particles) and sometimes another macroscopic metaphor (waves). As in the case for radio waves and the X-rays involved in Compton scattering, physicists have noted that electromagnetic radiation tends to behave more like a classical wave at lower frequencies, but more like a classical particle at higher frequencies, but never completely loses all qualities of one or the other. Visible light, which occupies a middle ground in frequency, can easily be shown in experiments to be describable using either a wave or particle model, or sometimes both. In 1924–1925, Satyendra Nath Bose showed that light followed different statistics from that of classical particles. With Einstein, they generalized this result for a whole set of integer spin particles called bosons (after Bose) that follow Bose–Einstein statistics. 
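Planck's relation can be made concrete with a short back-of-the-envelope calculation. The Python sketch below is purely illustrative and is not drawn from any source cited here: the 532 nm wavelength is an arbitrary choice for green light, and the constants are rounded standard values. It evaluates the energy E = hf of a single quantum and estimates how many such quanta a 1 mW source emits per second.

h = 6.626e-34                # Planck constant, J*s (rounded)
c = 2.998e8                  # speed of light in vacuum, m/s (rounded)
wavelength = 532e-9          # green light, metres (illustrative choice)
f = c / wavelength           # frequency in Hz
E_photon = h * f             # energy of one quantum, in joules (~3.7e-19 J)
power = 1e-3                 # a 1 mW source
photons_per_second = power / E_photon   # roughly 2.7e15 quanta per second
print(E_photon, photons_per_second)

The enormous number of quanta per second in even a weak source is one reason the granularity of light is invisible in everyday experience and only shows up in experiments such as the photoelectric effect.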
The photon is a massless boson of spin 1. In 1927, Paul Dirac quantized the electromagnetic field. Pascual Jordan and Vladimir Fock generalized this process to treat many-body systems as excitations of quantum fields, a process with the misnomer of second quantization. And at the end of the 1940s a full theory of quantum electrodynamics was developed using quantum fields based on the works of Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. Quantum optics John R. Klauder, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light (see degree of coherence). This led to the introduction of the coherent state as a concept which addressed variations between laser light, thermal light, exotic squeezed states, etc. as it became understood that light cannot be fully described just referring to the electromagnetic fields describing the waves in the classical picture. In 1977, H. Jeff Kimble et al. demonstrated a single atom emitting one photon at a time, further compelling evidence that light consists of photons. Previously unknown quantum states of light with characteristics unlike classical states, such as squeezed light were subsequently discovered. Development of short and ultrashort laser pulses—created by Q switching and modelocking techniques—opened the way to the study of what became known as ultrafast processes. Applications for solid state research (e.g. Raman spectroscopy) were found, and mechanical forces of light on matter were studied. The latter led to levitating and positioning clouds of atoms or even small biological samples in an optical trap or optical tweezers by laser beam. This, along with Doppler cooling and Sisyphus cooling, was the crucial technology needed to achieve the celebrated Bose–Einstein condensation. Other remarkable results are the demonstration of quantum entanglement, quantum teleportation, and quantum logic gates. The latter are of much interest in quantum information theory, a subject which partly emerged from quantum optics, partly from theoretical computer science. Use for light on Earth Sunlight provides the energy that green plants use to create sugars mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Some species of animals generate their own light, a process called bioluminescence. For example, fireflies use light to locate mates and vampire squid use it to hide themselves from prey. See also Ballistic photon Colour temperature Fermat's principle Huygens' principle Journal of Luminescence Light beam – in particular about light beams visible from the side Light Fantastic (TV series) Light mill List of light sources Luminescence: The Journal of Biological and Chemical Luminescence Spectroscopy Notes References External links Electromagnetic radiation
Light
[ "Physics" ]
6,241
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic radiation", "Electromagnetic spectrum", "Waves", "Radiation", "Light" ]
17,945
https://en.wikipedia.org/wiki/Lie%20group
In mathematics, a Lie group (pronounced ) is a group that is also a differentiable manifold, such that group multiplication and taking inverses are both differentiable. A manifold is a space that locally resembles Euclidean space, whereas groups define the abstract concept of a binary operation along with the additional properties it must have to be thought of as a "transformation" in the abstract sense, for instance multiplication and the taking of inverses (to allow division), or equivalently, the concept of addition and subtraction. Combining these two ideas, one obtains a continuous group where multiplying points and their inverses is continuous. If the multiplication and taking of inverses are smooth (differentiable) as well, one obtains a Lie group. Lie groups provide a natural model for the concept of continuous symmetry, a celebrated example of which is the circle group. Rotating a circle is an example of a continuous symmetry. For any rotation of the circle, there exists the same symmetry, and concatenation of such rotations makes them into the circle group, an archetypal example of a Lie group. Lie groups are widely used in many parts of modern mathematics and physics. Lie groups were first found by studying matrix subgroups contained in or , the groups of invertible matrices over or . These are now called the classical groups, as the concept has been extended far beyond these origins. Lie groups are named after Norwegian mathematician Sophus Lie (1842–1899), who laid the foundations of the theory of continuous transformation groups. Lie's original motivation for introducing Lie groups was to model the continuous symmetries of differential equations, in much the same way that finite groups are used in Galois theory to model the discrete symmetries of algebraic equations. History Sophus Lie considered the winter of 1873–1874 as the birth date of his theory of continuous groups. Thomas Hawkins, however, suggests that it was "Lie's prodigious research activity during the four-year period from the fall of 1869 to the fall of 1873" that led to the theory's creation. Some of Lie's early ideas were developed in close collaboration with Felix Klein. Lie met with Klein every day from October 1869 through 1872: in Berlin from the end of October 1869 to the end of February 1870, and in Paris, Göttingen and Erlangen in the subsequent two years. Lie stated that all of the principal results were obtained by 1884. But during the 1870s all his papers (except the very first note) were published in Norwegian journals, which impeded recognition of the work throughout the rest of Europe. In 1884 a young German mathematician, Friedrich Engel, came to work with Lie on a systematic treatise to expose his theory of continuous groups. From this effort resulted the three-volume Theorie der Transformationsgruppen, published in 1888, 1890, and 1893. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse. Lie's ideas did not stand in isolation from the rest of mathematics. In fact, his interest in the geometry of differential equations was first motivated by the work of Carl Gustav Jacobi, on the theory of partial differential equations of first order and on the equations of classical mechanics. Much of Jacobi's work was published posthumously in the 1860s, generating enormous interest in France and Germany. 
Lie's idée fixe was to develop a theory of symmetries of differential equations that would accomplish for them what Évariste Galois had done for algebraic equations: namely, to classify them in terms of group theory. Lie and other mathematicians showed that the most important equations for special functions and orthogonal polynomials tend to arise from group theoretical symmetries. In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations. However, the hope that Lie theory would unify the entire field of ordinary differential equations was not fulfilled. Symmetry methods for ODEs continue to be studied, but do not dominate the subject. There is a differential Galois theory, but it was developed by others, such as Picard and Vessiot, and it provides a theory of quadratures, the indefinite integrals required to express solutions. Additional impetus to consider continuous groups came from ideas of Bernhard Riemann, on the foundations of geometry, and their further development in the hands of Klein. Thus three major themes in 19th century mathematics were combined by Lie in creating his new theory: The idea of symmetry, as exemplified by Galois through the algebraic notion of a group; Geometric theory and the explicit solutions of differential equations of mechanics, worked out by Poisson and Jacobi; The new understanding of geometry that emerged in the works of Plücker, Möbius, Grassmann and others, and culminated in Riemann's revolutionary vision of the subject. Although today Sophus Lie is rightfully recognized as the creator of the theory of continuous groups, a major stride in the development of their structure theory, which was to have a profound influence on subsequent development of mathematics, was made by Wilhelm Killing, who in 1888 published the first paper in a series entitled Die Zusammensetzung der stetigen endlichen Transformationsgruppen (The composition of continuous finite transformation groups). The work of Killing, later refined and generalized by Élie Cartan, led to classification of semisimple Lie algebras, Cartan's theory of symmetric spaces, and Hermann Weyl's description of representations of compact and semisimple Lie groups using highest weights. In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris. Weyl brought the early period of the development of the theory of Lie groups to fruition, for not only did he classify irreducible representations of semisimple Lie groups and connect the theory of groups with quantum mechanics, but he also put Lie's theory itself on firmer footing by clearly enunciating the distinction between Lie's infinitesimal groups (i.e., Lie algebras) and the Lie groups proper, and began investigations of topology of Lie groups. The theory of Lie groups was systematically reworked in modern mathematical language in a monograph by Claude Chevalley. Overview Lie groups are smooth differentiable manifolds and as such can be studied using differential calculus, in contrast with the case of more general topological groups. 
One of the key ideas in the theory of Lie groups is to replace the global object, the group, with its local or linearized version, which Lie himself called its "infinitesimal group" and which has since become known as its Lie algebra. Lie groups play an enormous role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various "geometries" by specifying an appropriate transformation group that leaves certain geometric properties invariant. Thus Euclidean geometry corresponds to the choice of the group E(3) of distance-preserving transformations of the Euclidean space , conformal geometry corresponds to enlarging the group to the conformal group, whereas in projective geometry one is interested in the properties invariant under the projective group. This idea later led to the notion of a G-structure, where G is a Lie group of "local" symmetries of a manifold. Lie groups (and their associated Lie algebras) play a major role in modern physics, with the Lie group typically playing the role of a symmetry of a physical system. Here, the representations of the Lie group (or of its Lie algebra) are especially important. Representation theory is used extensively in particle physics. Groups whose representations are of particular importance include the rotation group SO(3) (or its double cover SU(2)), the special unitary group SU(3) and the Poincaré group. On a "global" level, whenever a Lie group acts on a geometric object, such as a Riemannian or a symplectic manifold, this action provides a measure of rigidity and yields a rich algebraic structure. The presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry and facilitates analysis on the manifold. Linear actions of Lie groups are especially important, and are studied in representation theory. In the 1940s–1950s, Ellis Kolchin, Armand Borel, and Claude Chevalley realised that many foundational results concerning Lie groups can be developed completely algebraically, giving rise to the theory of algebraic groups defined over an arbitrary field. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups, as well as in algebraic geometry. The theory of automorphic forms, an important branch of modern number theory, deals extensively with analogues of Lie groups over adele rings; p-adic Lie groups play an important role, via their connections with Galois representations in number theory. Definitions and examples A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication means that μ is a smooth mapping of the product manifold into G. The two requirements can be combined to the single requirement that the mapping be a smooth mapping of the product manifold into G. First examples The 2×2 real invertible matrices form a group under multiplication, called general linear group of degree 2 and denoted by or by : This is a four-dimensional noncompact real Lie group; it is an open subset of . This group is disconnected; it has two connected components corresponding to the positive and negative values of the determinant. The rotation matrices form a subgroup of , denoted by . It is a Lie group in its own right: specifically, a one-dimensional compact connected Lie group which is diffeomorphic to the circle. 
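The rotation-matrix example can be checked numerically. The following Python sketch (assuming only NumPy; the particular angles are arbitrary) uses the angle parametrization described in the next paragraph and verifies that multiplying group elements corresponds to adding angles, and that inverting an element corresponds to negating its angle.

import numpy as np

def R(theta):
    # A 2x2 rotation matrix: an element of the circle group sitting inside GL(2, R).
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.7, 1.9                                     # arbitrary angles
assert np.allclose(R(a) @ R(b), R(a + b))           # group law: angles add
assert np.allclose(np.linalg.inv(R(a)), R(-a))      # inversion: negate the angle
assert np.isclose(np.linalg.det(R(a)), 1.0)         # determinant 1

Because R depends smoothly on the parameter, the group multiplication and inversion expressed this way are differentiable maps, which is exactly the Lie group requirement.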
Using the rotation angle as a parameter, this group can be parametrized as follows: Addition of the angles corresponds to multiplication of the elements of , and taking the opposite angle corresponds to inversion. Thus both multiplication and inversion are differentiable maps. The affine group of one dimension is a two-dimensional matrix Lie group, consisting of real, upper-triangular matrices, with the first diagonal entry being positive and the second diagonal entry being 1. Thus, the group consists of matrices of the form Non-example We now present an example of a group with an uncountable number of elements that is not a Lie group under a certain topology. The group given by with a fixed irrational number, is a subgroup of the torus that is not a Lie group when given the subspace topology. If we take any small neighborhood of a point in , for example, the portion of in is disconnected. The group winds repeatedly around the torus without ever reaching a previous point of the spiral and thus forms a dense subgroup of . The group can, however, be given a different topology, in which the distance between two points is defined as the length of the shortest path in the group joining to . In this topology, is identified homeomorphically with the real line by identifying each element with the number in the definition of . With this topology, is just the group of real numbers under addition and is therefore a Lie group. The group is an example of a "Lie subgroup" of a Lie group that is not closed. See the discussion below of Lie subgroups in the section on basic concepts. Matrix Lie groups Let denote the group of invertible matrices with entries in . Any closed subgroup of is a Lie group; Lie groups of this sort are called matrix Lie groups. Since most of the interesting examples of Lie groups can be realized as matrix Lie groups, some textbooks restrict attention to this class, including those of Hall, Rossmann, and Stillwell. Restricting attention to matrix Lie groups simplifies the definition of the Lie algebra and the exponential map. The following are standard examples of matrix Lie groups. The special linear groups over and , and , consisting of matrices with determinant one and entries in or The unitary groups and special unitary groups, and , consisting of complex matrices satisfying (and also in the case of ) The orthogonal groups and special orthogonal groups, and , consisting of real matrices satisfying (and also in the case of ) All of the preceding examples fall under the heading of the classical groups. Related concepts A complex Lie group is defined in the same way using complex manifolds rather than real ones (example: ), and holomorphic maps. Similarly, using an alternate metric completion of , one can define a p-adic Lie group over the p-adic numbers, a topological group which is also an analytic p-adic manifold, such that the group operations are analytic. In particular, each point has a p-adic neighborhood. Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples. The answer to this question turned out to be negative: in 1952, Gleason, Montgomery and Zippin showed that if G is a topological manifold with continuous group operations, then there exists exactly one analytic structure on G which turns it into a Lie group (see also Hilbert–Smith conjecture). 
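To make the matrix Lie groups listed above slightly more tangible, the Python sketch below (assuming NumPy; the seed and matrix size are arbitrary, and the QR factorization is simply a convenient way to manufacture examples, not a construction taken from this article) checks the defining conditions of the orthogonal and unitary groups and confirms that a product of two orthogonal matrices is again orthogonal, illustrating closure under the group operation.

import numpy as np

rng = np.random.default_rng(0)
n = 4

# The Q factor of a QR factorization of a random real matrix is orthogonal.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
assert np.allclose(Q1.T @ Q1, np.eye(n))                  # Q1 lies in O(n)
assert np.allclose((Q1 @ Q2).T @ (Q1 @ Q2), np.eye(n))    # products stay in O(n)

# The same construction over the complex numbers produces a unitary matrix.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(A)
assert np.allclose(U.conj().T @ U, np.eye(n))             # U lies in U(n)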
If the underlying manifold is allowed to be infinite-dimensional (for example, a Hilbert manifold), then one arrives at the notion of an infinite-dimensional Lie group. It is possible to define analogues of many Lie groups over finite fields, and these give most of the examples of finite simple groups. The language of category theory provides a concise definition for Lie groups: a Lie group is a group object in the category of smooth manifolds. This is important, because it allows generalization of the notion of a Lie group to Lie supergroups. This categorical point of view leads also to a different generalization of Lie groups, namely Lie groupoids, which are groupoid objects in the category of smooth manifolds with a further requirement. Topological definition A Lie group can be defined as a (Hausdorff) topological group that, near the identity element, looks like a transformation group, with no reference to differentiable manifolds. First, we define an immersely linear Lie group to be a subgroup G of the general linear group such that for some neighborhood V of the identity element e in G, the topology on V is the subspace topology of and V is closed in . G has at most countably many connected components. (For example, a closed subgroup of ; that is, a matrix Lie group satisfies the above conditions.) Then a Lie group is defined as a topological group that (1) is locally isomorphic near the identities to an immersely linear Lie group and (2) has at most countably many connected components. Showing the topological definition is equivalent to the usual one is technical (and the beginning readers should skip the following) but is done roughly as follows: Given a Lie group G in the usual manifold sense, the Lie group–Lie algebra correspondence (or a version of Lie's third theorem) constructs an immersed Lie subgroup such that share the same Lie algebra; thus, they are locally isomorphic. Hence, satisfies the above topological definition. Conversely, let be a topological group that is a Lie group in the above topological sense and choose an immersely linear Lie group that is locally isomorphic to . Then, by a version of the closed subgroup theorem, is a real-analytic manifold and then, through the local isomorphism, G acquires a structure of a manifold near the identity element. One then shows that the group law on G can be given by formal power series; so the group operations are real-analytic and itself is a real-analytic manifold. The topological definition implies the statement that if two Lie groups are isomorphic as topological groups, then they are isomorphic as Lie groups. In fact, it states the general principle that, to a large extent, the topology of a Lie group together with the group law determines the geometry of the group. More examples of Lie groups Lie groups occur in abundance throughout mathematics and physics. Matrix groups or algebraic groups are (roughly) groups of matrices (for example, orthogonal and symplectic groups), and these give most of the more common examples of Lie groups. Dimensions one and two The only connected Lie groups with dimension one are the real line (with the group operation being addition) and the circle group of complex numbers with absolute value one (with the group operation being multiplication). The group is often denoted as , the group of unitary matrices. In two dimensions, if we restrict attention to simply connected groups, then they are classified by their Lie algebras. 
There are (up to isomorphism) only two Lie algebras of dimension two. The associated simply connected Lie groups are (with the group operation being vector addition) and the affine group in dimension one, described in the previous subsection under "first examples". Additional examples The group SU(2) is the group of unitary matrices with determinant . Topologically, is the -sphere ; as a group, it may be identified with the group of unit quaternions. The Heisenberg group is a connected nilpotent Lie group of dimension , playing a key role in quantum mechanics. The Lorentz group is a 6-dimensional Lie group of linear isometries of the Minkowski space. The Poincaré group is a 10-dimensional Lie group of affine isometries of the Minkowski space. The exceptional Lie groups of types G2, F4, E6, E7, E8 have dimensions 14, 52, 78, 133, and 248. Along with the A–B–C–D series of simple Lie groups, the exceptional groups complete the list of simple Lie groups. The symplectic group consists of all matrices preserving a symplectic form on . It is a connected Lie group of dimension . Constructions There are several standard ways to form new Lie groups from old ones: The product of two Lie groups is a Lie group. Any topologically closed subgroup of a Lie group is a Lie group. This is known as the closed subgroup theorem or Cartan's theorem. The quotient of a Lie group by a closed normal subgroup is a Lie group. The universal cover of a connected Lie group is a Lie group. For example, the group is the universal cover of the circle group . In fact any covering of a differentiable manifold is also a differentiable manifold, but by specifying universal cover, one guarantees a group structure (compatible with its other structures). Related notions Some examples of groups that are not Lie groups (except in the trivial sense that any group having at most countably many elements can be viewed as a 0-dimensional Lie group, with the discrete topology), are: Infinite-dimensional groups, such as the additive group of an infinite-dimensional real vector space, or the space of smooth functions from a manifold to a Lie group , . These are not Lie groups as they are not finite-dimensional manifolds. Some totally disconnected groups, such as the Galois group of an infinite extension of fields, or the additive group of the p-adic numbers. These are not Lie groups because their underlying spaces are not real manifolds. (Some of these groups are "p-adic Lie groups".) In general, only topological groups having similar local properties to Rn for some positive integer n can be Lie groups (of course they must also have a differentiable structure). Basic concepts The Lie algebra associated with a Lie group To every Lie group we can associate a Lie algebra whose underlying vector space is the tangent space of the Lie group at the identity element and which completely captures the local structure of the group. Informally we can think of elements of the Lie algebra as elements of the group that are "infinitesimally close" to the identity, and the Lie bracket of the Lie algebra is related to the commutator of two such infinitesimal elements. Before giving the abstract definition we give a few examples: The Lie algebra of the vector space Rn is just Rn with the Lie bracket given by     [A, B] = 0. (In general the Lie bracket of a connected Lie group is always 0 if and only if the Lie group is abelian.) 
The Lie algebra of the general linear group GL(n, C) of invertible matrices is the vector space M(n, C) of square matrices with the Lie bracket given by     [A, B] = AB − BA. If G is a closed subgroup of GL(n, C) then the Lie algebra of G can be thought of informally as the matrices m of M(n, C) such that 1 + εm is in G, where ε is an infinitesimal positive number with ε2 = 0 (of course, no such real number ε exists). For example, the orthogonal group O(n, R) consists of matrices A with AAT = 1, so the Lie algebra consists of the matrices m with (1 + εm)(1 + εm)T = 1, which is equivalent to m + mT = 0 because ε2 = 0. The preceding description can be made more rigorous as follows. The Lie algebra of a closed subgroup G of GL(n, C), may be computed as where exp(tX) is defined using the matrix exponential. It can then be shown that the Lie algebra of G is a real vector space that is closed under the bracket operation, . The concrete definition given above for matrix groups is easy to work with, but has some minor problems: to use it we first need to represent a Lie group as a group of matrices, but not all Lie groups can be represented in this way, and it is not even obvious that the Lie algebra is independent of the representation we use. To get around these problems we give the general definition of the Lie algebra of a Lie group (in 4 steps): Vector fields on any smooth manifold M can be thought of as derivations X of the ring of smooth functions on the manifold, and therefore form a Lie algebra under the Lie bracket [X, Y] = XY − YX, because the Lie bracket of any two derivations is a derivation. If G is any group acting smoothly on the manifold M, then it acts on the vector fields, and the vector space of vector fields fixed by the group is closed under the Lie bracket and therefore also forms a Lie algebra. We apply this construction to the case when the manifold M is the underlying space of a Lie group G, with G acting on G = M by left translations Lg(h) = gh. This shows that the space of left invariant vector fields (vector fields satisfying Lg*Xh = Xgh for every h in G, where Lg* denotes the differential of Lg) on a Lie group is a Lie algebra under the Lie bracket of vector fields. Any tangent vector at the identity of a Lie group can be extended to a left invariant vector field by left translating the tangent vector to other points of the manifold. Specifically, the left invariant extension of an element v of the tangent space at the identity is the vector field defined by v^g = Lg*v. This identifies the tangent space TeG at the identity with the space of left invariant vector fields, and therefore makes the tangent space at the identity into a Lie algebra, called the Lie algebra of G, usually denoted by a Fraktur Thus the Lie bracket on is given explicitly by [v, w] = [v^, w^]e. This Lie algebra is finite-dimensional and it has the same dimension as the manifold G. The Lie algebra of G determines G up to "local isomorphism", where two Lie groups are called locally isomorphic if they look the same near the identity element. Problems about Lie groups are often solved by first solving the corresponding problem for the Lie algebras, and the result for groups then usually follows easily. For example, simple Lie groups are usually classified by first classifying the corresponding Lie algebras. We could also define a Lie algebra structure on Te using right invariant vector fields instead of left invariant vector fields. 
This leads to the same Lie algebra, because the inverse map on G can be used to identify left invariant vector fields with right invariant vector fields, and acts as −1 on the tangent space Te. The Lie algebra structure on Te can also be described as follows: the commutator operation (x, y) → xyx−1y−1 on G × G sends (e, e) to e, so its derivative yields a bilinear operation on TeG. This bilinear operation is actually the zero map, but the second derivative, under the proper identification of tangent spaces, yields an operation that satisfies the axioms of a Lie bracket, and it is equal to twice the one defined through left-invariant vector fields. Homomorphisms and isomorphisms If G and H are Lie groups, then a Lie group homomorphism f : G → H is a smooth group homomorphism. In the case of complex Lie groups, such a homomorphism is required to be a holomorphic map. However, these requirements are a bit stringent; every continuous homomorphism between real Lie groups turns out to be (real) analytic. The composition of two Lie homomorphisms is again a homomorphism, and the class of all Lie groups, together with these morphisms, forms a category. Moreover, every Lie group homomorphism induces a homomorphism between the corresponding Lie algebras. Let be a Lie group homomorphism and let be its derivative at the identity. If we identify the Lie algebras of G and H with their tangent spaces at the identity elements, then is a map between the corresponding Lie algebras: which turns out to be a Lie algebra homomorphism (meaning that it is a linear map which preserves the Lie bracket). In the language of category theory, we then have a covariant functor from the category of Lie groups to the category of Lie algebras which sends a Lie group to its Lie algebra and a Lie group homomorphism to its derivative at the identity. Two Lie groups are called isomorphic if there exists a bijective homomorphism between them whose inverse is also a Lie group homomorphism. Equivalently, it is a diffeomorphism which is also a group homomorphism. Observe that, by the above, a continuous homomorphism from a Lie group to a Lie group is an isomorphism of Lie groups if and only if it is bijective. Lie group versus Lie algebra isomorphisms Isomorphic Lie groups necessarily have isomorphic Lie algebras; it is then reasonable to ask how isomorphism classes of Lie groups relate to isomorphism classes of Lie algebras. The first result in this direction is Lie's third theorem, which states that every finite-dimensional, real Lie algebra is the Lie algebra of some (linear) Lie group. One way to prove Lie's third theorem is to use Ado's theorem, which says every finite-dimensional real Lie algebra is isomorphic to a matrix Lie algebra. Meanwhile, for every finite-dimensional matrix Lie algebra, there is a linear group (matrix Lie group) with this algebra as its Lie algebra. On the other hand, Lie groups with isomorphic Lie algebras need not be isomorphic. Furthermore, this result remains true even if we assume the groups are connected. To put it differently, the global structure of a Lie group is not determined by its Lie algebra; for example, if Z is any discrete subgroup of the center of G then G and G/Z have the same Lie algebra (see the table of Lie groups for examples). An example of importance in physics are the groups SU(2) and SO(3). These two groups have isomorphic Lie algebras, but the groups themselves are not isomorphic, because SU(2) is simply connected but SO(3) is not. 
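The relationship between SU(2) and SO(3) can be illustrated numerically. In the Python sketch below (assuming NumPy; the bases used are the conventional ones, not notation from this article), the standard basis of the Lie algebra so(3), given by infinitesimal rotations about the coordinate axes, and a basis of su(2) built from the Pauli matrices are both shown to satisfy the same cyclic bracket relations [e1, e2] = e3, [e2, e3] = e1, [e3, e1] = e2, so the two Lie algebras are isomorphic even though the groups are not.

import numpy as np

def comm(A, B):
    # The matrix commutator, i.e. the Lie bracket for matrix Lie algebras.
    return A @ B - B @ A

# Basis of so(3): skew-symmetric generators of rotations about x, y, z.
L1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

# Basis of su(2): e_j = -i * sigma_j / 2, with sigma_j the Pauli matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
E1, E2, E3 = (-0.5j) * s1, (-0.5j) * s2, (-0.5j) * s3

# Both bases obey the same structure constants.
for (X, Y, Z) in [(L1, L2, L3), (E1, E2, E3)]:
    assert np.allclose(comm(X, Y), Z)
    assert np.allclose(comm(Y, Z), X)
    assert np.allclose(comm(Z, X), Y)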
On the other hand, if we require that the Lie group be simply connected, then the global structure is determined by its Lie algebra: two simply connected Lie groups with isomorphic Lie algebras are isomorphic. (See the next subsection for more information about simply connected Lie groups.) In light of Lie's third theorem, we may therefore say that there is a one-to-one correspondence between isomorphism classes of finite-dimensional real Lie algebras and isomorphism classes of simply connected Lie groups. Simply connected Lie groups A Lie group is said to be simply connected if every loop in can be shrunk continuously to a point in . This notion is important because of the following result that has simple connectedness as a hypothesis: Theorem: Suppose and are Lie groups with Lie algebras and and that is a Lie algebra homomorphism. If is simply connected, then there is a unique Lie group homomorphism such that , where is the differential of at the identity. Lie's third theorem says that every finite-dimensional real Lie algebra is the Lie algebra of a Lie group. It follows from Lie's third theorem and the preceding result that every finite-dimensional real Lie algebra is the Lie algebra of a unique simply connected Lie group. An example of a simply connected group is the special unitary group SU(2), which as a manifold is the 3-sphere. The rotation group SO(3), on the other hand, is not simply connected. (See Topology of SO(3).) The failure of SO(3) to be simply connected is intimately connected to the distinction between integer spin and half-integer spin in quantum mechanics. Other examples of simply connected Lie groups include the special unitary group SU(n), the spin group (double cover of rotation group) Spin(n) for , and the compact symplectic group Sp(n). Methods for determining whether a Lie group is simply connected or not are discussed in the article on fundamental groups of Lie groups. Exponential map The exponential map from the Lie algebra of the general linear group to is defined by the matrix exponential, given by the usual power series: for matrices . If is a closed subgroup of , then the exponential map takes the Lie algebra of into ; thus, we have an exponential map for all matrix groups. Every element of that is sufficiently close to the identity is the exponential of a matrix in the Lie algebra. The definition above is easy to use, but it is not defined for Lie groups that are not matrix groups, and it is not clear that the exponential map of a Lie group does not depend on its representation as a matrix group. We can solve both problems using a more abstract definition of the exponential map that works for all Lie groups, as follows. For each vector in the Lie algebra of (i.e., the tangent space to at the identity), one proves that there is a unique one-parameter subgroup such that . Saying that is a one-parameter subgroup means simply that is a smooth map into and that for all and . The operation on the right hand side is the group multiplication in . The formal similarity of this formula with the one valid for the exponential function justifies the definition This is called the exponential map, and it maps the Lie algebra into the Lie group . It provides a diffeomorphism between a neighborhood of 0 in and a neighborhood of in . 
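The exponential map for matrix groups can likewise be explored numerically. The Python sketch below (assuming NumPy; the truncation order of the series and the particular Lie algebra element are arbitrary choices) sums the power series for a skew-symmetric matrix X and confirms that the result is an orthogonal matrix of determinant 1; that is, the exponential map carries the Lie algebra so(3) into the group SO(3).

import numpy as np

def expm_series(X, terms=30):
    # Truncated power series exp(X) = I + X + X^2/2! + ...; adequate for small X.
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

# A skew-symmetric matrix, i.e. an element of the Lie algebra so(3).
X = np.array([[ 0.0, -0.3,  0.2],
              [ 0.3,  0.0, -0.5],
              [-0.2,  0.5,  0.0]])
assert np.allclose(X + X.T, 0)

g = expm_series(X)
assert np.allclose(g.T @ g, np.eye(3))     # g is orthogonal ...
assert np.isclose(np.linalg.det(g), 1.0)   # ... with determinant 1, so g lies in SO(3)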
This exponential map is a generalization of the exponential function for real numbers (because is the Lie algebra of the Lie group of positive real numbers with multiplication), for complex numbers (because is the Lie algebra of the Lie group of non-zero complex numbers with multiplication) and for matrices (because with the regular commutator is the Lie algebra of the Lie group of all invertible matrices). Because the exponential map is surjective on some neighbourhood of , it is common to call elements of the Lie algebra infinitesimal generators of the group . The subgroup of generated by is the identity component of . The exponential map and the Lie algebra determine the local group structure of every connected Lie group, because of the Baker–Campbell–Hausdorff formula: there exists a neighborhood of the zero element of , such that for we have where the omitted terms are known and involve Lie brackets of four or more elements. In case and commute, this formula reduces to the familiar exponential law . The exponential map relates Lie group homomorphisms. That is, if is a Lie group homomorphism and the induced map on the corresponding Lie algebras, then for all we have In other words, the following diagram commutes, (In short, exp is a natural transformation from the functor Lie to the identity functor on the category of Lie groups.) The exponential map from the Lie algebra to the Lie group is not always onto, even if the group is connected (though it does map onto the Lie group for connected groups that are either compact or nilpotent). For example, the exponential map of is not surjective. Also, the exponential map is neither surjective nor injective for infinite-dimensional (see below) Lie groups modelled on C∞ Fréchet space, even from arbitrary small neighborhood of 0 to corresponding neighborhood of 1. Lie subgroup A Lie subgroup of a Lie group is a Lie group that is a subset of and such that the inclusion map from to is an injective immersion and group homomorphism. According to Cartan's theorem, a closed subgroup of admits a unique smooth structure which makes it an embedded Lie subgroup of —i.e. a Lie subgroup such that the inclusion map is a smooth embedding. Examples of non-closed subgroups are plentiful; for example take to be a torus of dimension 2 or greater, and let be a one-parameter subgroup of irrational slope, i.e. one that winds around in G. Then there is a Lie group homomorphism with . The closure of will be a sub-torus in . The exponential map gives a one-to-one correspondence between the connected Lie subgroups of a connected Lie group and the subalgebras of the Lie algebra of . Typically, the subgroup corresponding to a subalgebra is not a closed subgroup. There is no criterion solely based on the structure of which determines which subalgebras correspond to closed subgroups. Representations One important aspect of the study of Lie groups is their representations, that is, the way they can act (linearly) on vector spaces. In physics, Lie groups often encode the symmetries of a physical system. The way one makes use of this symmetry to help analyze the system is often through representation theory. Consider, for example, the time-independent Schrödinger equation in quantum mechanics, . Assume the system in question has the rotation group SO(3) as a symmetry, meaning that the Hamiltonian operator commutes with the action of SO(3) on the wave function . (One important example of such a system is the hydrogen atom, which has a spherically symmetric potential.) 
This assumption does not necessarily mean that the solutions are rotationally invariant functions. Rather, it means that the space of solutions to is invariant under rotations (for each fixed value of ). This space, therefore, constitutes a representation of SO(3). These representations have been classified and the classification leads to a substantial simplification of the problem, essentially converting a three-dimensional partial differential equation to a one-dimensional ordinary differential equation. The case of a connected compact Lie group K (including the just-mentioned case of SO(3)) is particularly tractable. In that case, every finite-dimensional representation of K decomposes as a direct sum of irreducible representations. The irreducible representations, in turn, were classified by Hermann Weyl. The classification is in terms of the "highest weight" of the representation. The classification is closely related to the classification of representations of a semisimple Lie algebra. One can also study (in general infinite-dimensional) unitary representations of an arbitrary Lie group (not necessarily compact). For example, it is possible to give a relatively simple explicit description of the representations of the group SL(2, R) and the representations of the Poincaré group. Classification Lie groups may be thought of as smoothly varying families of symmetries. Examples of symmetries include rotation about an axis. What must be understood is the nature of 'small' transformations, for example, rotations through tiny angles, that link nearby transformations. The mathematical object capturing this structure is called a Lie algebra (Lie himself called them "infinitesimal groups"). It can be defined because Lie groups are smooth manifolds, so have tangent spaces at each point. The Lie algebra of any compact Lie group (very roughly: one for which the symmetries form a bounded set) can be decomposed as a direct sum of an abelian Lie algebra and some number of simple ones. The structure of an abelian Lie algebra is mathematically uninteresting (since the Lie bracket is identically zero); the interest is in the simple summands. Hence the question arises: what are the simple Lie algebras of compact groups? It turns out that they mostly fall into four infinite families, the "classical Lie algebras" An, Bn, Cn and Dn, which have simple descriptions in terms of symmetries of Euclidean space. But there are also just five "exceptional Lie algebras" that do not fall into any of these families. E8 is the largest of these. Lie groups are classified according to their algebraic properties (simple, semisimple, solvable, nilpotent, abelian), their connectedness (connected or simply connected) and their compactness. A first key result is the Levi decomposition, which says that every simply connected Lie group is the semidirect product of a solvable normal subgroup and a semisimple subgroup. Connected compact Lie groups are all known: they are finite central quotients of a product of copies of the circle group S1 and simple compact Lie groups (which correspond to connected Dynkin diagrams). Any simply connected solvable Lie group is isomorphic to a closed subgroup of the group of invertible upper triangular matrices of some rank, and any finite-dimensional irreducible representation of such a group is 1-dimensional. Solvable groups are too messy to classify except in a few small dimensions. 
Any simply connected nilpotent Lie group is isomorphic to a closed subgroup of the group of invertible upper triangular matrices with 1s on the diagonal of some rank, and any finite-dimensional irreducible representation of such a group is 1-dimensional. Like solvable groups, nilpotent groups are too messy to classify except in a few small dimensions. Simple Lie groups are sometimes defined to be those that are simple as abstract groups, and sometimes defined to be connected Lie groups with a simple Lie algebra. For example, SL(2, R) is simple according to the second definition but not according to the first. They have all been classified (for either definition). Semisimple Lie groups are Lie groups whose Lie algebra is a product of simple Lie algebras. They are central extensions of products of simple Lie groups. The identity component of any Lie group is an open normal subgroup, and the quotient group is a discrete group. The universal cover of any connected Lie group is a simply connected Lie group, and conversely any connected Lie group is a quotient of a simply connected Lie group by a discrete normal subgroup of the center. Any Lie group G can be decomposed into discrete, simple, and abelian groups in a canonical way as follows. Write Gcon for the connected component of the identity Gsol for the largest connected normal solvable subgroup Gnil for the largest connected normal nilpotent subgroup so that we have a sequence of normal subgroups . Then G/Gcon is discrete Gcon/Gsol is a central extension of a product of simple connected Lie groups. Gsol/Gnil is abelian. A connected abelian Lie group is isomorphic to a product of copies of R and the circle group S1. Gnil/1 is nilpotent, and therefore its ascending central series has all quotients abelian. This can be used to reduce some problems about Lie groups (such as finding their unitary representations) to the same problems for connected simple groups and nilpotent and solvable subgroups of smaller dimension. The diffeomorphism group of a Lie group acts transitively on the Lie group Every Lie group is parallelizable, and hence an orientable manifold (there is a bundle isomorphism between its tangent bundle and the product of itself with the tangent space at the identity) Infinite-dimensional Lie groups Lie groups are often defined to be finite-dimensional, but there are many groups that resemble Lie groups, except for being infinite-dimensional. The simplest way to define infinite-dimensional Lie groups is to model them locally on Banach spaces (as opposed to Euclidean space in the finite-dimensional case), and in this case much of the basic theory is similar to that of finite-dimensional Lie groups. However this is inadequate for many applications, because many natural examples of infinite-dimensional Lie groups are not Banach manifolds. Instead one needs to define Lie groups modeled on more general locally convex topological vector spaces. In this case the relation between the Lie algebra and the Lie group becomes rather subtle, and several results about finite-dimensional Lie groups no longer hold. The literature is not entirely uniform in its terminology as to exactly which properties of infinite-dimensional groups qualify the group for the prefix Lie in Lie group. On the Lie algebra side of affairs, things are simpler since the qualifying criteria for the prefix Lie in Lie algebra are purely algebraic. For example, an infinite-dimensional Lie algebra may or may not have a corresponding Lie group. 
That is, there may be a group corresponding to the Lie algebra, but it might not be nice enough to be called a Lie group, or the connection between the group and the Lie algebra might not be nice enough (for example, failure of the exponential map to be onto a neighborhood of the identity). It is the "nice enough" that is not universally defined. Some of the examples that have been studied include: The group of diffeomorphisms of a manifold. Quite a lot is known about the group of diffeomorphisms of the circle. Its Lie algebra is (more or less) the Witt algebra, whose central extension the Virasoro algebra (see Virasoro algebra from Witt algebra for a derivation of this fact) is the symmetry algebra of two-dimensional conformal field theory. Diffeomorphism groups of compact manifolds of larger dimension are regular Fréchet Lie groups; very little about their structure is known. The diffeomorphism group of spacetime sometimes appears in attempts to quantize gravity. The group of smooth maps from a manifold to a finite-dimensional Lie group is an example of a gauge group (with operation of pointwise multiplication), and is used in quantum field theory and Donaldson theory. If the manifold is a circle these are called loop groups, and have central extensions whose Lie algebras are (more or less) Kac–Moody algebras. There are infinite-dimensional analogues of general linear groups, orthogonal groups, and so on. One important aspect is that these may have simpler topological properties: see for example Kuiper's theorem. In M-theory, for example, a 10-dimensional SU(N) gauge theory becomes an 11-dimensional theory when N becomes infinite. See also Adjoint representation of a Lie group Haar measure Homogeneous space List of Lie group topics Representations of Lie groups Symmetry in quantum mechanics Lie point symmetry, about the application of Lie groups to the study of differential equations. Notes Explanatory notes Citations References . . Chapters 1–3 , Chapters 4–6 , Chapters 7–9 . . Borel's review . . . The 2003 reprint corrects several typographical mistakes. . . External links Journal of Lie Theory Manifolds Symmetry
Lie group
[ "Physics", "Mathematics" ]
9,098
[ "Lie groups", "Mathematical structures", "Space (mathematics)", "Topological spaces", "Topology", "Algebraic structures", "Manifolds", "Geometry", "Symmetry" ]
17,973
https://en.wikipedia.org/wiki/Liquid%20crystal
Liquid crystal (LC) is a state of matter whose properties are between those of conventional liquids and those of solid crystals. For example, a liquid crystal can flow like a liquid, but its molecules may be oriented in a common direction as in a solid. There are many types of LC phases, which can be distinguished by their optical properties (such as textures). The contrasting textures arise due to molecules within one area of material ("domain") being oriented in the same direction but different areas having different orientations. An LC material may not always be in an LC state of matter (just as water may be ice or water vapor). Liquid crystals can be divided into three main types: thermotropic, lyotropic, and metallotropic. Thermotropic and lyotropic liquid crystals consist mostly of organic molecules, although a few minerals are also known. Thermotropic LCs exhibit a phase transition into the LC phase as temperature changes. Lyotropic LCs exhibit phase transitions as a function of both temperature and concentration of molecules in a solvent (typically water). Metallotropic LCs are composed of both organic and inorganic molecules; their LC transition additionally depends on the inorganic-organic composition ratio. Examples of LCs exist both in the natural world and in technological applications. Lyotropic LCs abound in living systems; many proteins and cell membranes are LCs, as well as the tobacco mosaic virus. LCs in the mineral world include solutions of soap and various related detergents, and some clays. Widespread liquid-crystal displays (LCD) use liquid crystals. History In 1888, Austrian botanical physiologist Friedrich Reinitzer, working at the Karl-Ferdinands-Universität, examined the physico-chemical properties of various derivatives of cholesterol which now belong to the class of materials known as cholesteric liquid crystals. Previously, other researchers had observed distinct color effects when cooling cholesterol derivatives just above the freezing point, but had not associated them with a new phenomenon. Reinitzer perceived that color changes in a derivative cholesteryl benzoate were not the most peculiar feature. He found that cholesteryl benzoate does not melt in the same manner as other compounds, but has two melting points. At the lower of these it melts into a cloudy liquid, and at the higher it melts again and the cloudy liquid becomes clear. The phenomenon is reversible. Seeking help from a physicist, on March 14, 1888, he wrote to Otto Lehmann, at that time working in Aachen. They exchanged letters and samples. Lehmann examined the intermediate cloudy fluid, and reported seeing crystallites. Reinitzer's Viennese colleague von Zepharovich also indicated that the intermediate "fluid" was crystalline. The exchange of letters with Lehmann ended on April 24, with many questions unanswered. Reinitzer presented his results, with credits to Lehmann and von Zepharovich, at a meeting of the Vienna Chemical Society on May 3, 1888. By that time, Reinitzer had discovered and described three important features of cholesteric liquid crystals (the name coined by Otto Lehmann in 1904): the existence of two melting points, the reflection of circularly polarized light, and the ability to rotate the polarization direction of light. After his accidental discovery, Reinitzer did not pursue studying liquid crystals further. 
The research was continued by Lehmann, who realized that he had encountered a new phenomenon and was in a position to investigate it: In his postdoctoral years he had acquired expertise in crystallography and microscopy. Lehmann started a systematic study, first of cholesteryl benzoate, and then of related compounds which exhibited the double-melting phenomenon. He was able to make observations in polarized light, and his microscope was equipped with a hot stage (sample holder equipped with a heater) enabling high temperature observations. The intermediate cloudy phase clearly sustained flow, but other features, particularly the signature under a microscope, convinced Lehmann that he was dealing with a solid. By the end of August 1889 he had published his results in the Zeitschrift für Physikalische Chemie. Lehmann's work was continued and significantly expanded by the German chemist Daniel Vorländer, who from the beginning of the 20th century until he retired in 1935, had synthesized most of the liquid crystals known. However, liquid crystals were not popular among scientists and the material remained a pure scientific curiosity for about 80 years. After World War II, work on the synthesis of liquid crystals was restarted at university research laboratories in Europe. George William Gray, a prominent researcher of liquid crystals, began investigating these materials in England in the late 1940s. His group synthesized many new materials that exhibited the liquid crystalline state and developed a better understanding of how to design molecules that exhibit the state. His book Molecular Structure and the Properties of Liquid Crystals became a guidebook on the subject. One of the first U.S. chemists to study liquid crystals was Glenn H. Brown, starting in 1953 at the University of Cincinnati and later at Kent State University. In 1965, he organized the first international conference on liquid crystals, in Kent, Ohio, with about 100 of the world's top liquid crystal scientists in attendance. This conference marked the beginning of a worldwide effort to perform research in this field, which soon led to the development of practical applications for these unique materials. Liquid crystal materials became a focus of research in the development of flat panel electronic displays beginning in 1962 at RCA Laboratories. When physical chemist Richard Williams applied an electric field to a thin layer of a nematic liquid crystal at 125 °C, he observed the formation of a regular pattern that he called domains (now known as Williams Domains). This led his colleague George H. Heilmeier to perform research on a liquid crystal-based flat panel display to replace the cathode ray vacuum tube used in televisions. But the para-azoxyanisole that Williams and Heilmeier used exhibits the nematic liquid crystal state only above 116 °C, which made it impractical to use in a commercial display product. A material that could be operated at room temperature was clearly needed. In 1966, Joel E. Goldmacher and Joseph A. Castellano, research chemists in Heilmeier group at RCA, discovered that mixtures made exclusively of nematic compounds that differed only in the number of carbon atoms in the terminal side chains could yield room-temperature nematic liquid crystals. A ternary mixture of Schiff base compounds resulted in a material that had a nematic range of 22–105 °C. Operation at room temperature enabled the first practical display device to be made. 
The team then proceeded to prepare numerous mixtures of nematic compounds many of which had much lower melting points. This technique of mixing nematic compounds to obtain wide operating temperature range eventually became the industry standard and is still used to tailor materials to meet specific applications. In 1969, Hans Keller succeeded in synthesizing a substance that had a nematic phase at room temperature, N-(4-methoxybenzylidene)-4-butylaniline (MBBA), which is one of the most popular subjects of liquid crystal research. The next step to commercialization of liquid-crystal displays was the synthesis of further chemically stable substances (cyanobiphenyls) with low melting temperatures by George Gray. That work with Ken Harrison and the UK MOD (RRE Malvern), in 1973, led to design of new materials resulting in rapid adoption of small area LCDs within electronic products. These molecules are rod-shaped, some created in the laboratory and some appearing spontaneously in nature. Since then, two new types of LC molecules have been synthesized: disc-shaped (by Sivaramakrishna Chandrasekhar in India in 1977) and cone or bowl shaped (predicted by Lui Lam in China in 1982 and synthesized in Europe in 1985). In 1991, when liquid crystal displays were already well established, Pierre-Gilles de Gennes working at the Université Paris-Sud received the Nobel Prize in physics "for discovering that methods developed for studying order phenomena in simple systems can be generalized to more complex forms of matter, in particular to liquid crystals and polymers". Design of liquid crystalline materials A large number of chemical compounds are known to exhibit one or several liquid crystalline phases. Despite significant differences in chemical composition, these molecules have some common features in chemical and physical properties. There are three types of thermotropic liquid crystals: discotic, conic (bowlic), and rod-shaped molecules. Discotics are disc-like molecules consisting of a flat core of adjacent aromatic rings, whereas the core in a conic LC is not flat, but is shaped like a rice bowl (a three-dimensional object). This allows for two dimensional columnar ordering, for both discotic and conic LCs. Rod-shaped molecules have an elongated, anisotropic geometry which allows for preferential alignment along one spatial direction. The molecular shape should be relatively thin, flat or conic, especially within rigid molecular frameworks. The molecular length should be at least 1.3 nm, consistent with the presence of long alkyl group on many room-temperature liquid crystals. The structure should not be branched or angular, except for the conic LC. A low melting point is preferable in order to avoid metastable, monotropic liquid crystalline phases. Low-temperature mesomorphic behavior in general is technologically more useful, and alkyl terminal groups promote this. An extended, structurally rigid, highly anisotropic shape seems to be the main criterion for liquid crystalline behavior, and as a result many liquid crystalline materials are based on benzene rings. Liquid-crystal phases The various liquid-crystal phases (called mesophases together with plastic crystal phases) can be characterized by the type of ordering. One can distinguish positional order (whether molecules are arranged in any sort of ordered lattice) and orientational order (whether molecules are mostly pointing in the same direction). 
Liquid crystals are characterized by orientational order, but only partial or completely absent positional order. In contrast, materials with positional order but no orientational order are known as plastic crystals. Most thermotropic LCs will have an isotropic phase at high temperature: heating will eventually drive them into a conventional liquid phase characterized by random and isotropic molecular ordering and fluid-like flow behavior. Under other conditions (for instance, lower temperature), a LC might inhabit one or more phases with significant anisotropic orientational structure and short-range orientational order while still having an ability to flow. The ordering of liquid crystals extends up to the entire domain size, which may be on the order of micrometers, but usually not to the macroscopic scale as often occurs in classical crystalline solids. However some techniques, such as the use of boundaries or an applied electric field, can be used to enforce a single ordered domain in a macroscopic liquid crystal sample. The orientational ordering in a liquid crystal might extend along only one dimension, with the material being essentially disordered in the other two directions. Thermotropic liquid crystals Thermotropic phases are those that occur in a certain temperature range. If the temperature rise is too high, thermal motion will destroy the delicate cooperative ordering of the LC phase, pushing the material into a conventional isotropic liquid phase. At too low temperature, most LC materials will form a conventional crystal. Many thermotropic LCs exhibit a variety of phases as temperature is changed. For instance, a particular type of LC molecule (called a mesogen) may exhibit various smectic phases followed by the nematic phase and finally the isotropic phase as temperature is increased. An example of a compound displaying thermotropic LC behavior is para-azoxyanisole. Nematic phase The simplest liquid crystal phase is the nematic. In a nematic phase, organic molecules lack a crystalline positional order, but do self-align with their long axes roughly parallel. The molecules are free to flow and their center of mass positions are randomly distributed as in a liquid, but their orientation is constrained to form a long-range directional order. The word nematic comes from the Greek (), which means "thread". This term originates from the disclinations: thread-like topological defects observed in nematic phases. Nematics also exhibit so-called "hedgehog" topological defects. In two dimensions, there are topological defects with topological charges and . Due to hydrodynamics, the defect moves considerably faster than the defect. When placed close to each other, the defects attract; upon collision, they annihilate. Most nematic phases are uniaxial: they have one axis (called a directrix) that is longer and preferred, with the other two being equivalent (can be approximated as cylinders or rods). However, some liquid crystals are biaxial nematic, meaning that in addition to orienting their long axis, they also orient along a secondary axis. Nematic crystals have fluidity similar to that of ordinary (isotropic) liquids but they can be easily aligned by an external magnetic or electric field. Aligned nematics have the optical properties of uniaxial crystals and this makes them extremely useful in liquid-crystal displays (LCD). 
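The degree of this common alignment can be quantified by the scalar order parameter S = ⟨(3 cos²θ − 1)/2⟩ introduced later in the article (under "Order parameter"). As a purely illustrative aside added by the editor, not taken from the article, a minimal Python sketch with invented molecular tilt angles shows how S behaves:
import numpy as np

# Hypothetical tilt angles (given in degrees, converted to radians) of molecular long axes
# measured away from the director.
theta = np.deg2rad([5, 12, 8, 20, 3, 15, 30, 7])

# Scalar nematic order parameter: S = <(3*cos^2(theta) - 1)/2>
S = np.mean((3 * np.cos(theta) ** 2 - 1) / 2)
print(f"S = {S:.2f}")  # about 0.90 here; S = 0 means isotropic, S = 1 means perfect alignment
A strongly aligned nematic gives S close to 1, while heating toward the isotropic phase drives S toward 0.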
Nematic phases are also known in non-molecular systems: at high magnetic fields, electrons flow in bundles or stripes to create an "electronic nematic" form of matter. Smectic phases The smectic phases, which are found at lower temperatures than the nematic, form well-defined layers that can slide over one another in a manner similar to that of soap. The word "smectic" originates from the Latin word "smecticus", meaning cleaning, or having soap-like properties. The smectics are thus positionally ordered along one direction. In the Smectic A phase, the molecules are oriented along the layer normal, while in the Smectic C phase they are tilted away from it. These phases are liquid-like within the layers. There are many different smectic phases, all characterized by different types and degrees of positional and orientational order. Beyond organic molecules, Smectic ordering has also been reported to occur within colloidal suspensions of 2-D materials or nanosheets. One example of smectic LCs is [[p,p-dinonylazobenzene|p,p-dinonylazobenzene]]. Chiral phases or twisted nematics The chiral nematic phase exhibits chirality (handedness). This phase is often called the cholesteric phase because it was first observed for cholesterol derivatives. Only chiral molecules can give rise to such a phase. This phase exhibits a twisting of the molecules perpendicular to the director, with the molecular axis parallel to the director. The finite twist angle between adjacent molecules is due to their asymmetric packing, which results in longer-range chiral order. In the smectic C* phase (an asterisk denotes a chiral phase), the molecules have positional ordering in a layered structure (as in the other smectic phases), with the molecules tilted by a finite angle with respect to the layer normal. The chirality induces a finite azimuthal twist from one layer to the next, producing a spiral twisting of the molecular axis along the layer normal, hence they are also called twisted nematics. The chiral pitch, p, refers to the distance over which the LC molecules undergo a full 360° twist (but note that the structure of the chiral nematic phase repeats itself every half-pitch, since in this phase directors at 0° and ±180° are equivalent). The pitch, p, typically changes when the temperature is altered or when other molecules are added to the LC host (an achiral LC host material will form a chiral phase if doped with a chiral material), allowing the pitch of a given material to be tuned accordingly. In some liquid crystal systems, the pitch is of the same order as the wavelength of visible light. This causes these systems to exhibit unique optical properties, such as Bragg reflection and low-threshold laser emission, and these properties are exploited in a number of optical applications. For the case of Bragg reflection only the lowest-order reflection is allowed if the light is incident along the helical axis, whereas for oblique incidence higher-order reflections become permitted. Cholesteric liquid crystals also exhibit the unique property that they reflect circularly polarized light when it is incident along the helical axis and elliptically polarized if it comes in obliquely. Blue phases Blue phases are liquid crystal phases that appear in the temperature range between a chiral nematic phase and an isotropic liquid phase. 
Blue phases have a regular three-dimensional cubic structure of defects with lattice periods of several hundred nanometers, and thus they exhibit selective Bragg reflections in the wavelength range of visible light corresponding to the cubic lattice. It was theoretically predicted in 1981 that these phases can possess icosahedral symmetry similar to quasicrystals. Although blue phases are of interest for fast light modulators or tunable photonic crystals, they exist in a very narrow temperature range, usually less than a few kelvins. Recently the stabilization of blue phases over a temperature range of more than 60 K including room temperature (260–326 K) has been demonstrated. Blue phases stabilized at room temperature allow electro-optical switching with response times of the order of 10−4 s. In May 2008, the first blue phase mode LCD panel had been developed. Blue phase crystals, being a periodic cubic structure with a bandgap in the visible wavelength range, can be considered as 3D photonic crystals. Producing ideal blue phase crystals in large volumes is still problematic, since the produced crystals are usually polycrystalline (platelet structure) or the single crystal size is limited (in the micrometer range). Recently, blue phases obtained as ideal 3D photonic crystals in large volumes have been stabilized and produced with different controlled crystal lattice orientations. Discotic phases Disk-shaped LC molecules can orient themselves in a layer-like fashion known as the discotic nematic phase. If the disks pack into stacks, the phase is called a discotic columnar. The columns themselves may be organized into rectangular or hexagonal arrays. Chiral discotic phases, similar to the chiral nematic phase, are also known. Conic phases Conic LC molecules, like in discotics, can form columnar phases. Other phases, such as nonpolar nematic, polar nematic, stringbean, donut and onion phases, have been predicted. Conic phases, except nonpolar nematic, are polar phases. Lyotropic liquid crystals A lyotropic liquid crystal consists of two or more components that exhibit liquid-crystalline properties in certain concentration ranges. In the lyotropic phases, solvent molecules fill the space around the compounds to provide fluidity to the system. In contrast to thermotropic liquid crystals, these lyotropics have another degree of freedom of concentration that enables them to induce a variety of different phases. A compound that has two immiscible hydrophilic and hydrophobic parts within the same molecule is called an amphiphilic molecule. Many amphiphilic molecules show lyotropic liquid-crystalline phase sequences depending on the volume balances between the hydrophilic part and hydrophobic part. These structures are formed through the micro-phase segregation of two incompatible components on a nanometer scale. Soap is an everyday example of a lyotropic liquid crystal. The content of water or other solvent molecules changes the self-assembled structures. At very low amphiphile concentration, the molecules will be dispersed randomly without any ordering. At slightly higher (but still low) concentration, amphiphilic molecules will spontaneously assemble into micelles or vesicles. This is done so as to 'hide' the hydrophobic tail of the amphiphile inside the micelle core, exposing a hydrophilic (water-soluble) surface to aqueous solution. These spherical objects do not order themselves in solution, however. At higher concentration, the assemblies will become ordered. 
A typical phase is a hexagonal columnar phase, where the amphiphiles form long cylinders (again with a hydrophilic surface) that arrange themselves into a roughly hexagonal lattice. This is called the middle soap phase. At still higher concentration, a lamellar phase (neat soap phase) may form, wherein extended sheets of amphiphiles are separated by thin layers of water. For some systems, a cubic (also called viscous isotropic) phase may exist between the hexagonal and lamellar phases, wherein spheres are formed that create a dense cubic lattice. These spheres may also be connected to one another, forming a bicontinuous cubic phase. The objects created by amphiphiles are usually spherical (as in the case of micelles), but may also be disc-like (bicelles), rod-like, or biaxial (all three micelle axes are distinct). These anisotropic self-assembled nano-structures can then order themselves in much the same way as thermotropic liquid crystals do, forming large-scale versions of all the thermotropic phases (such as a nematic phase of rod-shaped micelles). For some systems, at high concentrations, inverse phases are observed. That is, one may generate an inverse hexagonal columnar phase (columns of water encapsulated by amphiphiles) or an inverse micellar phase (a bulk liquid crystal sample with spherical water cavities). A generic progression of phases, going from low to high amphiphile concentration, is: Discontinuous cubic phase (micellar cubic phase) Hexagonal phase (hexagonal columnar phase) (middle phase) Lamellar phase Bicontinuous cubic phase Reverse hexagonal columnar phase Inverse cubic phase (Inverse micellar phase) Even within the same phases, their self-assembled structures are tunable by the concentration: for example, in lamellar phases, the layer distances increase with the solvent volume. Since lyotropic liquid crystals rely on a subtle balance of intermolecular interactions, it is more difficult to analyze their structures and properties than those of thermotropic liquid crystals. Similar phases and characteristics can be observed in immiscible diblock copolymers. Metallotropic liquid crystals Liquid crystal phases can also be based on low-melting inorganic phases like ZnCl2 that have a structure formed of linked tetrahedra and easily form glasses. The addition of long chain soap-like molecules leads to a series of new phases that show a variety of liquid crystalline behavior both as a function of the inorganic-organic composition ratio and of temperature. This class of materials has been named metallotropic. Laboratory analysis of mesophases Thermotropic mesophases are detected and characterized by two major methods, the original method was use of thermal optical microscopy, in which a small sample of the material was placed between two crossed polarizers; the sample was then heated and cooled. As the isotropic phase would not significantly affect the polarization of the light, it would appear very dark, whereas the crystal and liquid crystal phases will both polarize the light in a uniform way, leading to brightness and color gradients. This method allows for the characterization of the particular phase, as the different phases are defined by their particular order, which must be observed. The second method, differential scanning calorimetry (DSC), allows for more precise determination of phase transitions and transition enthalpies. In DSC, a small sample is heated in a way that generates a very precise change in temperature with respect to time. 
During phase transitions, the heat flow required to maintain this heating or cooling rate will change. These changes can be observed and attributed to various phase transitions, such as key liquid crystal transitions. Lyotropic mesophases are analyzed in a similar fashion, though these experiments are somewhat more complex, as the concentration of mesogen is a key factor. These experiments are run at various concentrations of mesogen in order to analyze that impact. Biological liquid crystals Lyotropic liquid-crystalline phases are abundant in living systems, the study of which is referred to as lipid polymorphism. Accordingly, lyotropic liquid crystals attract particular attention in the field of biomimetic chemistry. In particular, biological membranes and cell membranes are a form of liquid crystal. Their constituent molecules (e.g. phospholipids) are perpendicular to the membrane surface, yet the membrane is flexible. These lipids vary in shape (see page on lipid polymorphism). The constituent molecules can inter-mingle easily, but tend not to leave the membrane due to the high energy requirement of this process. Lipid molecules can flip from one side of the membrane to the other, this process being catalyzed by flippases and floppases (depending on the direction of movement). These liquid crystal membrane phases can also host important proteins such as receptors freely "floating" inside, or partly outside, the membrane, e.g. CTP:phosphocholine cytidylyltransferase (CCT). Many other biological structures exhibit liquid-crystal behavior. For instance, the concentrated protein solution that is extruded by a spider to generate silk is, in fact, a liquid crystal phase. The precise ordering of molecules in silk is critical to its renowned strength. DNA and many polypeptides, including actively-driven cytoskeletal filaments, can also form liquid crystal phases. Monolayers of elongated cells have also been described to exhibit liquid-crystal behavior, and the associated topological defects have been associated with biological consequences, including cell death and extrusion. Together, these biological applications of liquid crystals form an important part of current academic research. Mineral liquid crystals Examples of liquid crystals can also be found in the mineral world, most of them being lyotropic. The first discovered was vanadium(V) oxide, by Zocher in 1925. Since then, few others have been discovered and studied in detail. The existence of a true nematic phase in the case of the smectite clays family was raised by Langmuir in 1938, but remained an open question for a very long time and was only confirmed recently. With the rapid development of nanosciences, and the synthesis of many new anisotropic nanoparticles, the number of such mineral liquid crystals is increasing quickly, with, for example, carbon nanotubes and graphene. A lamellar phase was even discovered, H3Sb3P2O14, which exhibits hyperswelling up to ~250 nm for the interlamellar distance. Pattern formation in liquid crystals Anisotropy of liquid crystals is a property not observed in other fluids. This anisotropy makes flows of liquid crystals behave more differentially than those of ordinary fluids. For example, injection of a flux of a liquid crystal between two close parallel plates (viscous fingering) causes orientation of the molecules to couple with the flow, with the resulting emergence of dendritic patterns. 
This anisotropy is also manifested in the interfacial energy (surface tension) between different liquid crystal phases. This anisotropy determines the equilibrium shape at the coexistence temperature, and is so strong that usually facets appear. When temperature is changed one of the phases grows, forming different morphologies depending on the temperature change. Since growth is controlled by heat diffusion, anisotropy in thermal conductivity favors growth in specific directions, which also has an effect on the final shape. Theoretical treatment of liquid crystals Microscopic theoretical treatment of fluid phases can become quite complicated, owing to the high material density, meaning that strong interactions, hard-core repulsions, and many-body correlations cannot be ignored. In the case of liquid crystals, anisotropy in all of these interactions further complicates analysis. There are a number of fairly simple theories, however, that can at least predict the general behavior of the phase transitions in liquid crystal systems. Director As we already saw above, nematic liquid crystals are composed of rod-like molecules with the long axes of neighboring molecules aligned approximately to one another. To describe this anisotropic structure, a dimensionless unit vector n, called the director, is introduced to represent the direction of preferred orientation of molecules in the neighborhood of any point. Because there is no physical polarity along the director axis, n and −n are fully equivalent. Order parameter The description of liquid crystals involves an analysis of order. A second-rank symmetric traceless tensor order parameter, the Q tensor, is used to describe the orientational order of the most general biaxial nematic liquid crystal. However, to describe the more common case of uniaxial nematic liquid crystals, a scalar order parameter is sufficient. To make this quantitative, an orientational order parameter is usually defined based on the average of the second Legendre polynomial: S = ⟨P2(cos θ)⟩ = ⟨(3 cos²θ − 1)/2⟩, where θ is the angle between the liquid-crystal molecular axis and the local director (which is the 'preferred direction' in a volume element of a liquid crystal sample, also representing its local optical axis). The brackets denote both a temporal and spatial average. This definition is convenient, since for a completely random and isotropic sample, S = 0, whereas for a perfectly aligned sample S = 1. For a typical liquid crystal sample, S is on the order of 0.3 to 0.8, and generally decreases as the temperature is raised. In particular, a sharp drop of the order parameter to 0 is observed when the system undergoes a phase transition from an LC phase into the isotropic phase. The order parameter can be measured experimentally in a number of ways; for instance, diamagnetism, birefringence, Raman scattering, NMR and EPR can be used to determine S. The order of a liquid crystal could also be characterized by using other even Legendre polynomials (all the odd polynomials average to zero since the director can point in either of two antiparallel directions). These higher-order averages are more difficult to measure, but can yield additional information about molecular ordering. A positional order parameter is also used to describe the ordering of a liquid crystal. It is characterized by the variation of the density of the center of mass of the liquid crystal molecules along a given vector. 
In the case of positional variation along the z-axis the density is often given by: ρ(z) = ρ0 + ρ1 cos(qs z − φ) + …. The complex positional order parameter is defined as ψ(z) = ρ1(z) e^(iφ(z)), with ρ0 the average density. Typically only the first two terms are kept and higher order terms are ignored since most phases can be described adequately using sinusoidal functions. For a perfect nematic ψ = 0, and for a smectic phase ψ will take on complex values. The complex nature of this order parameter allows for many parallels between nematic to smectic phase transitions and conductor to superconductor transitions. Onsager hard-rod model A simple model which predicts lyotropic phase transitions is the hard-rod model proposed by Lars Onsager. This theory considers the volume excluded from the center-of-mass of one idealized cylinder as it approaches another. Specifically, if the cylinders are oriented parallel to one another, there is very little volume that is excluded from the center-of-mass of the approaching cylinder (it can come quite close to the other cylinder). If, however, the cylinders are at some angle to one another, then there is a large volume surrounding the cylinder which the approaching cylinder's center-of-mass cannot enter (due to the hard-rod repulsion between the two idealized objects). Thus, this angular arrangement sees a decrease in the net positional entropy of the approaching cylinder (there are fewer states available to it). The fundamental insight here is that, whilst parallel arrangements of anisotropic objects lead to a decrease in orientational entropy, there is an increase in positional entropy. Thus in some cases greater positional order will be entropically favorable. This theory thus predicts that a solution of rod-shaped objects will undergo a phase transition, at sufficient concentration, into a nematic phase. Although this model is conceptually helpful, its mathematical formulation makes several assumptions that limit its applicability to real systems. An extension of Onsager theory was proposed by Flory to account for non-entropic effects. Maier–Saupe mean field theory This statistical theory, proposed by Alfred Saupe and Wilhelm Maier, includes contributions from an attractive intermolecular potential from an induced dipole moment between adjacent rod-like liquid crystal molecules. The anisotropic attraction stabilizes parallel alignment of neighboring molecules, and the theory then considers a mean-field average of the interaction. Solved self-consistently, this theory predicts thermotropic nematic-isotropic phase transitions, consistent with experiment. Maier–Saupe mean field theory is extended to high molecular weight liquid crystals by incorporating the bending stiffness of the molecules and using the method of path integrals in polymer science. McMillan's model McMillan's model, proposed by William McMillan, is an extension of the Maier–Saupe mean field theory used to describe the phase transition of a liquid crystal from a nematic to a smectic A phase. It predicts that the phase transition can be either continuous or discontinuous depending on the strength of the short-range interaction between the molecules. As a result, it allows for a triple critical point where the nematic, isotropic, and smectic A phase meet. Although it predicts the existence of a triple critical point, it does not successfully predict its value. The model utilizes two order parameters that describe the orientational and positional order of the liquid crystal. 
The first is simply the average of the second Legendre polynomial, and the second order parameter is given by: σ = ⟨cos(2πzi/d)·(3 cos²θi − 1)/2⟩. The values zi, θi, and d are the position of the molecule, the angle between the molecular axis and director, and the layer spacing. The postulated potential energy of a single molecule is given by: Ui(θi, zi) = −U0 [S + ασ cos(2πzi/d)]·(3 cos²θi − 1)/2. Here the constant α quantifies the strength of the interaction between adjacent molecules. The potential is then used to derive the thermodynamic properties of the system assuming thermal equilibrium. It results in two self-consistency equations that must be solved numerically, the solutions of which are the three stable phases of the liquid crystal. Elastic continuum theory In this formalism, a liquid crystal material is treated as a continuum; molecular details are entirely ignored. Rather, this theory considers perturbations to a presumed oriented sample. The distortions of the liquid crystal are commonly described by the Frank free energy density. One can identify three types of distortions that could occur in an oriented sample: (1) twists of the material, where neighboring molecules are forced to be angled with respect to one another, rather than aligned; (2) splay of the material, where bending occurs perpendicular to the director; and (3) bend of the material, where the distortion is parallel to the director and molecular axis. All three of these types of distortions incur an energy penalty. They are distortions that are induced by the boundary conditions at domain walls or the enclosing container. The response of the material can then be decomposed into terms based on the elastic constants corresponding to the three types of distortions. Elastic continuum theory is an effective tool for modeling liquid crystal devices and lipid bilayers. External influences on liquid crystals Scientists and engineers are able to use liquid crystals in a variety of applications because external perturbation can cause significant changes in the macroscopic properties of the liquid crystal system. Both electric and magnetic fields can be used to induce these changes. The magnitude of the fields, as well as the speed at which the molecules align, are important characteristics that industry deals with. Special surface treatments can be used in liquid crystal devices to force specific orientations of the director. Electric and magnetic field effects The ability of the director to align along an external field is caused by the electric nature of the molecules. Permanent electric dipoles result when one end of a molecule has a net positive charge while the other end has a net negative charge. When an external electric field is applied to the liquid crystal, the dipole molecules tend to orient themselves along the direction of the field. Even if a molecule does not form a permanent dipole, it can still be influenced by an electric field. In some cases, the field produces slight re-arrangement of electrons and protons in molecules such that an induced electric dipole results. While not as strong as permanent dipoles, orientation with the external field still occurs. The response of any system to an external electric field is Di = ε0Ei + Pi, where Ei, Di and Pi are the components of the electric field, electric displacement field and polarization density. The electric energy per volume stored in the system is fE = −(1/2)EiDi (summation over the doubly appearing index i). In nematic liquid crystals, the polarization and electric displacement both depend linearly on the direction of the electric field. 
The polarization should be even in the director n since liquid crystals are invariant under reflections of n. The most general form to express D is Di = ε0ε⊥Ei + ε0(ε∥ − ε⊥)ninjEj (summation over the index j), with ε∥ and ε⊥ the electric permittivity parallel and perpendicular to the director n. Then the density of energy is (ignoring the constant terms that do not contribute to the dynamics of the system) fE = −(ε0Δε/2)(niEi)² (summation over i). If Δε is positive, then the minimum of the energy is achieved when E and n are parallel. This means that the system will favor aligning the liquid crystal with the externally applied electric field. If Δε is negative, then the minimum of the energy is achieved when E and n are perpendicular (in nematics the perpendicular orientation is degenerate, making possible the emergence of vortices). The difference Δε = ε∥ − ε⊥ is called the dielectric anisotropy and is an important parameter in liquid crystal applications. There are both Δε > 0 and Δε < 0 commercial liquid crystals. 5CB and the E7 liquid crystal mixture are two Δε > 0 liquid crystals commonly used. MBBA is a common Δε < 0 liquid crystal. The effects of magnetic fields on liquid crystal molecules are analogous to electric fields. Because magnetic fields are generated by moving electric charges, permanent magnetic dipoles are produced by electrons moving about atoms. When a magnetic field is applied, the molecules will tend to align with or against the field. Electromagnetic radiation, e.g. UV-Visible light, can influence light-responsive liquid crystals which mainly carry at least a photo-switchable unit. Surface preparations In the absence of an external field, the director of a liquid crystal is free to point in any direction. It is possible, however, to force the director to point in a specific direction by introducing an outside agent to the system. For example, when a thin polymer coating (usually a polyimide) is spread on a glass substrate and rubbed in a single direction with a cloth, it is observed that liquid crystal molecules in contact with that surface align with the rubbing direction. The currently accepted mechanism for this is believed to be an epitaxial growth of the liquid crystal layers on the partially aligned polymer chains in the near surface layers of the polyimide. Several liquid crystal chemicals also align to a 'command surface' which is in turn aligned by the electric field of polarized light. This process is called photoalignment. Fréedericksz transition The competition between orientation produced by surface anchoring and by electric field effects is often exploited in liquid crystal devices. Consider the case in which liquid crystal molecules are aligned parallel to the surface and an electric field is applied perpendicular to the cell. At first, as the electric field increases in magnitude, no change in alignment occurs. However at a threshold magnitude of electric field, deformation occurs. Deformation occurs where the director changes its orientation from one molecule to the next. The occurrence of such a change from an aligned to a deformed state is called a Fréedericksz transition and can also be produced by the application of a magnetic field of sufficient strength. The Fréedericksz transition is fundamental to the operation of many liquid crystal displays because the director orientation (and thus the properties) can be controlled easily by the application of a field. Effect of chirality As already described, chiral liquid-crystal molecules usually give rise to chiral mesophases. This means that the molecule must possess some form of asymmetry, usually a stereogenic center. 
An additional requirement is that the system not be racemic: a mixture of right- and left-handed molecules will cancel the chiral effect. Due to the cooperative nature of liquid crystal ordering, however, a small amount of chiral dopant in an otherwise achiral mesophase is often enough to select out one domain handedness, making the system overall chiral. Chiral phases usually have a helical twisting of the molecules. If the pitch of this twist is on the order of the wavelength of visible light, then interesting optical interference effects can be observed. The chiral twisting that occurs in chiral LC phases also makes the system respond differently from right- and left-handed circularly polarized light. These materials can thus be used as polarization filters. It is possible for chiral LC molecules to produce essentially achiral mesophases. For instance, in certain ranges of concentration and molecular weight, DNA will form an achiral line hexatic phase. An interesting recent observation is of the formation of chiral mesophases from achiral LC molecules. Specifically, bent-core molecules (sometimes called banana liquid crystals) have been shown to form liquid crystal phases that are chiral. In any particular sample, various domains will have opposite handedness, but within any given domain, strong chiral ordering will be present. The appearance mechanism of this macroscopic chirality is not yet entirely clear. It appears that the molecules stack in layers and orient themselves in a tilted fashion inside the layers. These liquid crystals phases may be ferroelectric or anti-ferroelectric, both of which are of interest for applications. Chirality can also be incorporated into a phase by adding a chiral dopant, which may not form LCs itself. Twisted-nematic or super-twisted nematic mixtures often contain a small amount of such dopants. Applications of liquid crystals Liquid crystals find wide use in liquid crystal displays, which rely on the optical properties of certain liquid crystalline substances in the presence or absence of an electric field. In a typical device, a liquid crystal layer (typically 4 μm thick) sits between two polarizers that are crossed (oriented at 90° to one another). The liquid crystal alignment is chosen so that its relaxed phase is a twisted one (see Twisted nematic field effect). This twisted phase reorients light that has passed through the first polarizer, allowing its transmission through the second polarizer (and reflected back to the observer if a reflector is provided). The device thus appears transparent. When an electric field is applied to the LC layer, the long molecular axes tend to align parallel to the electric field thus gradually untwisting in the center of the liquid crystal layer. In this state, the LC molecules do not reorient light, so the light polarized at the first polarizer is absorbed at the second polarizer, and the device loses transparency with increasing voltage. In this way, the electric field can be used to make a pixel switch between transparent or opaque on command. Color LCD systems use the same technique, with color filters used to generate red, green, and blue pixels. Chiral smectic liquid crystals are used in ferroelectric LCDs which are fast-switching binary light modulators. Similar principles can be used to make other liquid crystal based optical devices. Liquid crystal tunable filters are used as electro-optical devices, e.g., in hyperspectral imaging. 
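The crossed-polarizer behavior described above can be illustrated with Jones calculus. The following Python sketch is an editor's simplification, not a model taken from the article: it idealizes the field-off twisted cell as a perfect 90° polarization rotator and the field-on cell as leaving the polarization unchanged.
import numpy as np

E_in = np.array([1.0, 0.0])  # light after the first polarizer, polarized along x

def rotator(deg):
    # Jones matrix of an ideal polarization rotator by the given angle
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

P_y = np.array([[0.0, 0.0], [0.0, 1.0]])  # crossed second polarizer (transmits y only)

for label, cell in [("field off (90 degree twist)", rotator(90)),
                    ("field on (untwisted)", rotator(0))]:
    E_out = P_y @ cell @ E_in
    print(label, "-> transmission", round(float(np.sum(np.abs(E_out) ** 2)), 2))
# field off -> 1.0 (transparent), field on -> 0.0 (opaque), matching the description above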
Thermotropic chiral LCs whose pitch varies strongly with temperature can be used as crude liquid crystal thermometers, since the color of the material will change as the pitch is changed. Liquid crystal color transitions are used on many aquarium and pool thermometers as well as on thermometers for infants or baths. Other liquid crystal materials change color when stretched or stressed. Thus, liquid crystal sheets are often used in industry to look for hot spots, map heat flow, measure stress distribution patterns, and so on. Liquid crystal in fluid form is used to detect electrically generated hot spots for failure analysis in the semiconductor industry. Liquid crystal lenses converge or diverge the incident light by adjusting the refractive index of liquid crystal layer with applied voltage or temperature. Generally, the liquid crystal lenses generate a parabolic refractive index distribution by arranging molecular orientations. Therefore, a plane wave is reshaped into a parabolic wavefront by a liquid crystal lens. The focal length of liquid crystal lenses could be continuously tunable when the external electric field can be properly tuned. Liquid crystal lenses are a kind of adaptive optics. Imaging systems can benefit from focusing correction, image plane adjustment, or changing the range of depth-of-field or depth of focus. The liquid crystal lense is one of the candidates to develop vision correction devices for myopia and presbyopia (e.g., tunable eyeglass and smart contact lenses). Being an optical phase modulator, a liquid crystal lens feature space-variant optical path length (i.e., optical path length as the function of its pupil coordinate). In different imaging system, the required function of optical path length varies from one to another. For example, to converge a plane wave into a diffraction limited spot, for a physically-planar liquid crystal structure, the refractive index of liquid crystal layer should be spherical or paraboloidal under paraxial approximation. As for projecting images or sensing objects, it may be expected to have the liquid crystal lens with aspheric distribution of optical path length across its aperture of interest. Liquid crystal lenses with electrically tunable refractive index (by addressing the different magnitude of electric field on liquid crystal layer) have potentials to achieve arbitrary function of optical path length for modulating incoming wavefront; current liquid crystal freeform optical elements were extended from liquid crystal lens with same optical mechanisms. The applications of liquid crystals lenses includes pico-projectors, prescriptions lenses (eyeglasses or contact lenses), smart phone camera, augmented reality, virtual reality etc. Liquid crystal lasers use a liquid crystal in the lasing medium as a distributed feedback mechanism instead of external mirrors. Emission at a photonic bandgap created by the periodic dielectric structure of the liquid crystal gives a low-threshold high-output device with stable monochromatic emission. Polymer dispersed liquid crystal (PDLC) sheets and rolls are available as adhesive backed Smart film which can be applied to windows and electrically switched between transparent and opaque to provide privacy. Many common fluids, such as soapy water, are in fact liquid crystals. Soap forms a variety of LC phases depending on its concentration in water. Liquid crystal films have revolutionized the world of technology. 
Currently they are used in the most diverse devices, such as digital clocks, mobile phones, calculating machines and televisions. The use of liquid crystal films in optical memory devices, with a process similar to the recording and reading of CDs and DVDs may be possible. Liquid crystals are also used as basic technology to imitate quantum computers, using electric fields to manipulate the orientation of the liquid crystal molecules, to store data and to encode a different value for every different degree of misalignment with other molecules. See also References External links Definitions of basic terms relating to low-molar-mass and polymer liquid crystals (IUPAC Recommendations 2001) An intelligible introduction to liquid crystals from Case Western Reserve University Liquid Crystal Physics tutorial from the Liquid Crystals Group, University of Colorado Liquid Crystals & Photonics Group – Ghent University (Belgium) , good tutorial Simulation of light propagation in liquid crystals, free program Liquid Crystals Interactive Online Liquid Crystal Institute Kent State University Liquid Crystals a journal by Taylor&Francis Molecular Crystals and Liquid Crystals a journal by Taylor & Francis Hot-spot detection techniques for ICs What are liquid crystals? from Chalmers University of Technology, Sweden Progress in liquid crystal chemistry Thematic series in the Open Access Beilstein Journal of Organic Chemistry DoITPoMS Teaching and Learning Package- "Liquid Crystals" Bowlic liquid crystal from San Jose State University Phase calibration of a Spatial Light Modulator Soft matter Optical materials Phase transitions Phases of matter
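A numerical footnote to the pitch-dependent colors discussed above (chiral phases and liquid crystal thermometers): for normal incidence, the centre of the selective reflection band of a cholesteric is approximately the average refractive index multiplied by the pitch. The numbers in this short Python sketch are invented for illustration and are not taken from the article.
# Selective (Bragg) reflection of a cholesteric at normal incidence: lambda ~ n_avg * pitch
n_avg = 1.55      # hypothetical average refractive index
pitch_nm = 350.0  # hypothetical helical pitch in nanometres
print(f"central reflected wavelength ~ {n_avg * pitch_nm:.0f} nm")  # ~543 nm, green light
# Heating or cooling changes the pitch, shifting this wavelength and hence the observed color.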
Liquid crystal
[ "Physics", "Chemistry", "Materials_science" ]
10,071
[ "Physical phenomena", "Phase transitions", "Soft matter", "Critical phenomena", "Phases of matter", "Materials", "Optical materials", "Condensed matter physics", "Statistical mechanics", "Matter" ]
18,009
https://en.wikipedia.org/wiki/Lift%20%28force%29
When a fluid flows around an object, the fluid exerts a force on the object. Lift is the component of this force that is perpendicular to the oncoming flow direction. It contrasts with the drag force, which is the component of the force parallel to the flow direction. Lift conventionally acts in an upward direction in order to counter the force of gravity, but it is defined to act perpendicular to the flow and therefore can act in any direction. If the surrounding fluid is air, the force is called an aerodynamic force. In water or any other liquid, it is called a hydrodynamic force. Dynamic lift is distinguished from other kinds of lift in fluids. Aerostatic lift or buoyancy, in which an internal fluid is lighter than the surrounding fluid, does not require movement and is used by balloons, blimps, dirigibles, boats, and submarines. Planing lift, in which only the lower portion of the body is immersed in a liquid flow, is used by motorboats, surfboards, windsurfers, sailboats, and water-skis. Overview A fluid flowing around the surface of a solid object applies a force on it. It does not matter whether the object is moving through a stationary fluid (e.g. an aircraft flying through the air) or whether the object is stationary and the fluid is moving (e.g. a wing in a wind tunnel) or whether both are moving (e.g. a sailboat using the wind to move forward). Lift is the component of this force that is perpendicular to the oncoming flow direction. Lift is always accompanied by a drag force, which is the component of the surface force parallel to the flow direction. Lift is mostly associated with the wings of fixed-wing aircraft, although it is more widely generated by many other streamlined bodies such as propellers, kites, helicopter rotors, racing car wings, maritime sails, wind turbines, and by sailboat keels, ship's rudders, and hydrofoils in water. Lift is also used by flying and gliding animals, especially by birds, bats, and insects, and even in the plant world by the seeds of certain trees. While the common meaning of the word "lift" assumes that lift opposes weight, lift can be in any direction with respect to gravity, since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is cruising in straight and level flight, the lift opposes gravity. However, when an aircraft is climbing, descending, or banking in a turn the lift is tilted with respect to the vertical. Lift may also act as downforce on the wing of a fixed-wing aircraft at the top of an aerobatic loop, and on the horizontal stabiliser of an aircraft. Lift may also be largely horizontal, for instance on a sailing ship. The lift discussed in this article is mainly in relation to airfoils; marine hydrofoils and propellers share the same physical principles and work in the same way, despite differences between air and water such as density, compressibility, and viscosity. The flow around a lifting airfoil is a fluid mechanics phenomenon that can be understood on essentially two levels: There are mathematical theories, which are based on established laws of physics and represent the flow accurately, but which require solving equations. And there are physical explanations without math, which are less rigorous. Correctly explaining lift in these qualitative terms is difficult because the cause-and-effect relationships involved are subtle. A comprehensive explanation that captures all of the essential aspects is necessarily complex. 
There are also many simplified explanations, but all leave significant parts of the phenomenon unexplained, while some also have elements that are simply incorrect. Simplified physical explanations of lift on an airfoil An airfoil is a streamlined shape that is capable of generating significantly more lift than drag. A flat plate can generate lift, but not as much as a streamlined airfoil, and with somewhat higher drag. Most simplified explanations follow one of two basic approaches, based either on Newton's laws of motion or on Bernoulli's principle. Explanation based on flow deflection and Newton's laws An airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton's third law, the air must exert an equal and opposite (upward) force on the airfoil, which is lift. As the airflow approaches the airfoil it is curving upward, but as it passes the airfoil it changes direction and follows a path that is curved downward. According to Newton's second law, this change in flow direction requires a downward force applied to the air by the airfoil. Then Newton's third law requires the air to exert an upward force on the airfoil; thus a reaction force, lift, is generated opposite to the directional change. In the case of an airplane wing, the wing exerts a downward force on the air and the air exerts an upward force on the wing. The downward turning of the flow is not produced solely by the lower surface of the airfoil, and the air flow above the airfoil accounts for much of the downward-turning action. This explanation is correct but it is incomplete. It does not explain how the airfoil can impart downward turning to a much deeper swath of the flow than it actually touches. Furthermore, it does not mention that the lift force is exerted by pressure differences, and does not explain how those pressure differences are sustained. Controversy regarding the Coandă effect Some versions of the flow-deflection explanation of lift cite the Coandă effect as the reason the flow is able to follow the convex upper surface of the airfoil. The conventional definition in the aerodynamics field is that the Coandă effect refers to the tendency of a fluid jet to stay attached to an adjacent surface that curves away from the flow, and the resultant entrainment of ambient air into the flow. More broadly, some consider the effect to include the tendency of any fluid boundary layer to adhere to a curved surface, not just the boundary layer accompanying a fluid jet. It is in this broader sense that the Coandă effect is used by some popular references to explain why airflow remains attached to the top side of an airfoil. This is a controversial use of the term "Coandă effect"; the flow following the upper surface simply reflects an absence of boundary-layer separation, thus it is not an example of the Coandă effect. Regardless of whether this broader definition of the "Coandă effect" is applicable, calling it the "Coandă effect" does not provide an explanation, it just gives the phenomenon a name. The ability of a fluid flow to follow a curved path is not dependent on shear forces, viscosity of the fluid, or the presence of a boundary layer. Air flowing around an airfoil, adhering to both upper and lower surfaces, and generating lift, is accepted as a phenomenon in inviscid flow. Explanations based on an increase in flow speed and Bernoulli's principle There are two common versions of this explanation, one based on "equal transit time", and one based on "obstruction" of the airflow. 
False explanation based on equal transit-time The "equal transit time" explanation starts by arguing that the flow over the upper surface is faster than the flow over the lower surface because the path length over the upper surface is longer and must be traversed in equal transit time. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure. It is concluded that the reduced pressure over the upper surface results in upward lift. While it is true that the flow speeds up, a serious flaw in this explanation is that it does not correctly explain what causes the flow to speed up. The longer-path-length explanation is incorrect. No difference in path length is needed, and even when there is a difference, it is typically much too small to explain the observed speed difference. This is because the assumption of equal transit time is wrong when applied to a body generating lift. There is no physical principle that requires equal transit time in all situations and experimental results confirm that for a body generating lift the transit times are not equal. In fact, the air moving past the top of an airfoil generating lift moves much faster than equal transit time predicts. The much higher flow speed over the upper surface can be clearly seen in this animated flow visualization. Obstruction of the airflow Like the equal transit time explanation, the "obstruction" or "streamtube pinching" explanation argues that the flow over the upper surface is faster than the flow over the lower surface, but gives a different reason for the difference in speed. It argues that the curved upper surface acts as more of an obstacle to the flow, forcing the streamlines to pinch closer together, making the streamtubes narrower. When streamtubes become narrower, conservation of mass requires that flow speed must increase. Reduced upper-surface pressure and upward lift follow from the higher speed by Bernoulli's principle, just as in the equal transit time explanation. Sometimes an analogy is made to a venturi nozzle, claiming the upper surface of the wing acts like a venturi nozzle to constrict the flow. One serious flaw in the obstruction explanation is that it does not explain how streamtube pinching comes about, or why it is greater over the upper surface than the lower surface. For conventional wings that are flat on the bottom and curved on top this makes some intuitive sense, but it does not explain how flat plates, symmetric airfoils, sailboat sails, or conventional airfoils flying upside down can generate lift, and attempts to calculate lift based on the amount of constriction or obstruction do not predict experimental results. Another flaw is that conservation of mass is not a satisfying physical reason why the flow would speed up. Effectively explaining the acceleration of an object requires identifying the force that accelerates it. Issues common to both versions of the Bernoulli-based explanation A serious flaw common to all the Bernoulli-based explanations is that they imply that a speed difference can arise from causes other than a pressure difference, and that the speed difference then leads to a pressure difference, by Bernoulli's principle. This implied one-way causation is a misconception. The real relationship between pressure and flow speed is a mutual interaction. As explained below under a more comprehensive physical explanation, producing a lift force requires maintaining pressure differences in both the vertical and horizontal directions. 
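Before continuing with what the Bernoulli-based accounts leave out, a quick numerical check makes the equal-transit-time flaw above concrete. The path-length ratio, speed, and density below are assumptions chosen only for illustration.

```python
# Numerical check of the "equal transit time" argument (illustrative numbers).
# If equal transit time held, the speed ratio would equal the path-length ratio.
rho = 1.225            # kg/m^3
v_inf = 70.0           # freestream speed, m/s
path_ratio = 1.02      # assumed: upper-surface path 2% longer than lower

v_upper = v_inf * path_ratio   # speed implied by equal transit time
v_lower = v_inf

# Bernoulli: delta_p = 1/2 * rho * (v_upper^2 - v_lower^2)
delta_p = 0.5 * rho * (v_upper**2 - v_lower**2)
print(f"implied pressure difference: {delta_p:.0f} Pa "
      f"(~{delta_p/9.81:.0f} kgf per square metre of wing)")

# Typical transport-aircraft wing loadings are several thousand N/m^2,
# one to two orders of magnitude larger than this estimate, which is why
# the equal-transit-time argument cannot account for the observed lift.
```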
The Bernoulli-only explanations do not explain how the pressure differences in the vertical direction are sustained. That is, they leave out the flow-deflection part of the interaction. Although the two simple Bernoulli-based explanations above are incorrect, there is nothing incorrect about Bernoulli's principle or the fact that the air goes faster on the top of the wing, and Bernoulli's principle can be used correctly as part of a more complicated explanation of lift. Basic attributes of lift Lift is a result of pressure differences and depends on angle of attack, airfoil shape, air density, and airspeed. Pressure differences Pressure is the normal force per unit area exerted by the air on itself and on surfaces that it touches. The lift force is transmitted through the pressure, which acts perpendicular to the surface of the airfoil. Thus, the net force manifests itself as pressure differences. The direction of the net force implies that the average pressure on the upper surface of the airfoil is lower than the average pressure on the underside. These pressure differences arise in conjunction with the curved airflow. When a fluid follows a curved path, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside. This direct relationship between curved streamlines and pressure differences, sometimes called the streamline curvature theorem, was derived from Newton's second law by Leonhard Euler in 1754:
\frac{\mathrm{d}p}{\mathrm{d}R} = \frac{\rho v^2}{R}
The left side of this equation represents the pressure difference perpendicular to the fluid flow. On the right side of the equation, ρ is the density, v is the velocity, and R is the radius of curvature. This formula shows that higher velocities and tighter curvatures create larger pressure differentials and that for straight flow (R → ∞), the pressure difference is zero. Angle of attack The angle of attack is the angle between the chord line of an airfoil and the oncoming airflow. A symmetrical airfoil generates zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles, a symmetrical airfoil generates a lift force roughly proportional to the angle of attack. As the angle of attack increases, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the wing; there is less deflection downward so the airfoil generates less lift. The airfoil is said to be stalled. Airfoil shape The maximum lift force that can be generated by an airfoil at a given airspeed depends on the shape of the airfoil, especially the amount of camber (curvature such that the upper surface is more convex than the lower surface, as illustrated at right). Increasing the camber generally increases the maximum lift at a given airspeed. Cambered airfoils generate lift at zero angle of attack. When the chord line is horizontal, the trailing edge has a downward direction and since the air follows the trailing edge it is deflected downward. When a cambered airfoil is upside down, the angle of attack can be adjusted so that the lift force is upward. This explains how a plane can fly upside down. Flow conditions The ambient flow conditions which affect lift include the fluid density, viscosity and speed of flow.
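Before looking at those flow conditions in detail, a small numerical sketch of the streamline-curvature relation given above; the flow speed and radii of curvature are assumed values chosen only to show the trend.

```python
# Pressure gradient across curved streamlines: dp/dR = rho * v^2 / R
# (the relation attributed to Euler above). Numbers are illustrative.
rho = 1.225        # kg/m^3
v = 80.0           # local flow speed, m/s
for R in (2.0, 5.0, 20.0, float("inf")):   # radius of curvature, m
    dp_dR = 0.0 if R == float("inf") else rho * v**2 / R
    print(f"R = {R:>6} m  ->  dp/dR = {dp_dR:8.1f} Pa/m")
# Tighter curvature (small R) and higher speed give a steeper pressure
# gradient; straight flow (R -> infinity) gives none, as stated above.
```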
Density is affected by temperature, and by the medium's acoustic velocity – i.e. by compressibility effects. Air speed and density Lift is proportional to the density of the air and approximately proportional to the square of the flow speed. Lift also depends on the size of the wing, being generally proportional to the wing's area projected in the lift direction. In calculations it is convenient to quantify lift in terms of a lift coefficient based on these factors. Boundary layer and profile drag No matter how smooth the surface of an airfoil seems, any surface is rough on the scale of air molecules. Air molecules flying into the surface bounce off the rough surface in random directions relative to their original velocities. The result is that when the air is viewed as a continuous material, it is seen to be unable to slide along the surface, and the air's velocity relative to the airfoil decreases to nearly zero at the surface (i.e., the air molecules "stick" to the surface instead of sliding along it), something known as the no-slip condition. Because the air at the surface has near-zero velocity but the air away from the surface is moving, there is a thin boundary layer in which air close to the surface is subjected to a shearing motion. The air's viscosity resists the shearing, giving rise to a shear stress at the airfoil's surface called skin friction drag. Over most of the surface of most airfoils, the boundary layer is naturally turbulent, which increases skin friction drag. Under usual flight conditions, the boundary layer remains attached to both the upper and lower surfaces all the way to the trailing edge, and its effect on the rest of the flow is modest. Compared to the predictions of inviscid flow theory, in which there is no boundary layer, the attached boundary layer reduces the lift by a modest amount and modifies the pressure distribution somewhat, which results in a viscosity-related pressure drag over and above the skin friction drag. The total of the skin friction drag and the viscosity-related pressure drag is usually called the profile drag. Stalling An airfoil's maximum lift at a given airspeed is limited by boundary-layer separation. As the angle of attack is increased, a point is reached where the boundary layer can no longer remain attached to the upper surface. When the boundary layer separates, it leaves a region of recirculating flow above the upper surface, as illustrated in the flow-visualization photo at right. This is known as the stall, or stalling. At angles of attack above the stall, lift is significantly reduced, though it does not drop to zero. The maximum lift that can be achieved before stall, in terms of the lift coefficient, is generally less than 1.5 for single-element airfoils and can be more than 3.0 for airfoils with high-lift slotted flaps and leading-edge devices deployed. Bluff bodies The flow around bluff bodies – i.e. without a streamlined shape, or stalling airfoils – may also generate lift, in addition to a strong drag force. This lift may be steady, or it may oscillate due to vortex shedding. Interaction of the object's flexibility with the vortex shedding may enhance the effects of fluctuating lift and cause vortex-induced vibrations. For instance, the flow around a circular cylinder generates a Kármán vortex street: vortices being shed in an alternating fashion from the cylinder's sides. The oscillatory nature of the flow produces a fluctuating lift force on the cylinder, even though the net (mean) force is negligible. 
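As a rough numerical sketch of the shedding behind such a bluff body, the example below uses the Strouhal-number relation f = St·V/D that is introduced next; the cylinder diameter, wind speed, and Strouhal value are assumed, chimney-like figures chosen only for illustration.

```python
# Vortex-shedding frequency behind a circular cylinder, using the
# Strouhal relation f = St * V / D (introduced in the following text).
# Values are illustrative assumptions for a chimney-like cylinder.
St = 0.2       # typical Strouhal number for a circular cylinder over a
               # wide range of Reynolds numbers (assumed)
V = 15.0       # wind speed, m/s (assumed)
D = 2.0        # cylinder diameter, m (assumed)

f_shed = St * V / D          # shedding frequency of the alternating vortices, Hz
print(f"shedding frequency ~ {f_shed:.1f} Hz")
# If this frequency is close to a natural frequency of the structure, the
# fluctuating lift can drive large vortex-induced vibrations.
```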
The lift force frequency is characterised by the dimensionless Strouhal number, which depends on the Reynolds number of the flow. For a flexible structure, this oscillatory lift force may induce vortex-induced vibrations. Under certain conditions – for instance resonance or strong spanwise correlation of the lift force – the resulting motion of the structure due to the lift fluctuations may be strongly enhanced. Such vibrations may pose problems and threaten collapse in tall man-made structures like industrial chimneys. In the Magnus effect, a lift force is generated by a spinning cylinder in a freestream. Here the mechanical rotation acts on the boundary layer, causing it to separate at different locations on the two sides of the cylinder. The asymmetric separation changes the effective shape of the cylinder as far as the flow is concerned such that the cylinder acts like a lifting airfoil with circulation in the outer flow. A more comprehensive physical explanation As described above under "Simplified physical explanations of lift on an airfoil", there are two main popular explanations: one based on downward deflection of the flow (Newton's laws), and one based on pressure differences accompanied by changes in flow speed (Bernoulli's principle). Either of these, by itself, correctly identifies some aspects of the lifting flow but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both downward deflection and pressure differences (including changes in flow speed associated with the pressure differences), and requires looking at the flow in more detail. Lift at the airfoil surface The airfoil shape and angle of attack work together so that the airfoil exerts a downward force on the air as it flows past. According to Newton's third law, the air must then exert an equal and opposite (upward) force on the airfoil, which is the lift. The net force exerted by the air occurs as a pressure difference over the airfoil's surfaces. Pressure in a fluid is always positive in an absolute sense, so that pressure must always be thought of as pushing, and never as pulling. The pressure thus pushes inward on the airfoil everywhere on both the upper and lower surfaces. The flowing air reacts to the presence of the wing by reducing the pressure on the wing's upper surface and increasing the pressure on the lower surface. The pressure on the lower surface pushes up harder than the reduced pressure on the upper surface pushes down, and the net result is upward lift. The pressure difference which results in lift acts directly on the airfoil surfaces; however, understanding how the pressure difference is produced requires understanding what the flow does over a wider area. The wider flow around the airfoil An airfoil affects the speed and direction of the flow over a wide area, producing a pattern called a velocity field. When an airfoil produces lift, the flow ahead of the airfoil is deflected upward, the flow above and below the airfoil is deflected downward leaving the air far behind the airfoil in the same state as the oncoming flow far ahead. The flow above the upper surface is sped up, while the flow below the airfoil is slowed down. Together with the upward deflection of air in front and the downward deflection of the air immediately behind, this establishes a net circulatory component of the flow. The downward deflection and the changes in flow speed are pronounced and extend over a wide area, as can be seen in the flow animation on the right. 
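A minimal model reproduces these features of the velocity field: superposing a uniform stream and a single point vortex (a crude stand-in for the airfoil's bound circulation, not the article's own model) gives flow that is faster above, slower below, with upwash ahead and downwash behind. The freestream speed and circulation below are arbitrary illustrative choices.

```python
# Crude model of the lifting velocity field: uniform flow + point vortex.
# This only illustrates the pattern of the flow; it is not an airfoil solution.
import math

V_inf = 70.0      # freestream speed, m/s (to the right), assumed
Gamma = 100.0     # circulation, m^2/s (assumed; clockwise, as for positive lift)

def velocity(x, y):
    """Velocity (u, v) of a uniform flow plus a clockwise point vortex at the origin."""
    r2 = x * x + y * y
    u = V_inf + Gamma / (2 * math.pi) * y / r2
    v = -Gamma / (2 * math.pi) * x / r2
    return u, v

for label, (x, y) in {"above": (0.0, 1.0), "below": (0.0, -1.0),
                      "ahead": (-2.0, 0.0), "behind": (2.0, 0.0)}.items():
    u, v = velocity(x, y)
    print(f"{label:>6}: u = {u:6.1f} m/s, v = {v:6.1f} m/s")
# The output shows speed-up above, slow-down below, upwash ahead, and
# downwash behind, with the perturbations decaying with distance.
```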
These differences in the direction and speed of the flow are greatest close to the airfoil and decrease gradually far above and below. All of these features of the velocity field also appear in theoretical models for lifting flows. The pressure is also affected over a wide area, in a pattern of non-uniform pressure called a pressure field. When an airfoil produces lift, there is a diffuse region of low pressure above the airfoil, and usually a diffuse region of high pressure below, as illustrated by the isobars (curves of constant pressure) in the drawing. The pressure difference that acts on the surface is just part of this pressure field. Mutual interaction of pressure differences and changes in flow velocity The non-uniform pressure exerts forces on the air in the direction from higher pressure to lower pressure. The direction of the force is different at different locations around the airfoil, as indicated by the block arrows in the pressure field around an airfoil figure. Air above the airfoil is pushed toward the center of the low-pressure region, and air below the airfoil is pushed outward from the center of the high-pressure region. According to Newton's second law, a force causes air to accelerate in the direction of the force. Thus the vertical arrows in the accompanying pressure field diagram indicate that air above and below the airfoil is accelerated, or turned downward, and that the non-uniform pressure is thus the cause of the downward deflection of the flow visible in the flow animation. To produce this downward turning, the airfoil must have a positive angle of attack or have sufficient positive camber. Note that the downward turning of the flow over the upper surface is the result of the air being pushed downward by higher pressure above it than below it. Some explanations that refer to the "Coandă effect" suggest that viscosity plays a key role in the downward turning, but this is false. (see above under "Controversy regarding the Coandă effect"). The arrows ahead of the airfoil indicate that the flow ahead of the airfoil is deflected upward, and the arrows behind the airfoil indicate that the flow behind is deflected upward again, after being deflected downward over the airfoil. These deflections are also visible in the flow animation. The arrows ahead of the airfoil and behind also indicate that air passing through the low-pressure region above the airfoil is sped up as it enters, and slowed back down as it leaves. Air passing through the high-pressure region below the airfoil is slowed down as it enters and then sped back up as it leaves. Thus the non-uniform pressure is also the cause of the changes in flow speed visible in the flow animation. The changes in flow speed are consistent with Bernoulli's principle, which states that in a steady flow without viscosity, lower pressure means higher speed, and higher pressure means lower speed. Thus changes in flow direction and speed are directly caused by the non-uniform pressure. But this cause-and-effect relationship is not just one-way; it works in both directions simultaneously. The air's motion is affected by the pressure differences, but the existence of the pressure differences depends on the air's motion. The relationship is thus a mutual, or reciprocal, interaction: Air flow changes speed or direction in response to pressure differences, and the pressure differences are sustained by the air's resistance to changing speed or direction. 
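The speed–pressure side of this interaction can be quantified with Bernoulli's principle; a minimal numerical sketch follows, with the freestream speed and the upper-surface pressure drop assumed for illustration.

```python
# Bernoulli's principle for steady, incompressible, inviscid flow:
# p + 1/2 * rho * v^2 is constant along a streamline.
# Given an assumed pressure drop over the upper surface, find the local speed.
rho = 1.225          # kg/m^3
v_inf = 70.0         # freestream speed, m/s (assumed)
p_drop = 1500.0      # assumed pressure below ambient in the low-pressure region, Pa

v_local = (v_inf**2 + 2 * p_drop / rho) ** 0.5
print(f"local speed in the low-pressure region ~ {v_local:.1f} m/s "
      f"(vs. {v_inf:.0f} m/s freestream)")
# Lower pressure corresponds to higher speed and vice versa; but, as the text
# stresses, neither one "causes" the other one-way -- they evolve together.
```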
A pressure difference can exist only if something is there for it to push against. In aerodynamic flow, the pressure difference pushes against the air's inertia, as the air is accelerated by the pressure difference. This is why the air's mass is part of the calculation, and why lift depends on air density. Sustaining the pressure difference that exerts the lift force on the airfoil surfaces requires sustaining a pattern of non-uniform pressure in a wide area around the airfoil. This requires maintaining pressure differences in both the vertical and horizontal directions, and thus requires both downward turning of the flow and changes in flow speed according to Bernoulli's principle. The pressure differences and the changes in flow direction and speed sustain each other in a mutual interaction. The pressure differences follow naturally from Newton's second law and from the fact that flow along the surface follows the predominantly downward-sloping contours of the airfoil. And the fact that the air has mass is crucial to the interaction. How simpler explanations fall short Producing a lift force requires both downward turning of the flow and changes in flow speed consistent with Bernoulli's principle. Each of the simplified explanations given above in Simplified physical explanations of lift on an airfoil falls short by trying to explain lift in terms of only one or the other, thus explaining only part of the phenomenon and leaving other parts unexplained. Quantifying lift Pressure integration When the pressure distribution on the airfoil surface is known, determining the total lift requires adding up the contributions to the pressure force from local elements of the surface, each with its own local value of pressure. The total lift is thus the integral of the pressure, in the direction perpendicular to the farfield flow, over the airfoil surface:
L = \oint p \, \mathbf{n} \cdot \mathbf{k} \; \mathrm{d}A
where: S is the projected (planform) area of the airfoil, measured normal to the mean airflow; n is the normal unit vector pointing into the wing; k is the vertical unit vector, normal to the freestream direction. The above lift equation neglects the skin friction forces, which are small compared to the pressure forces. By using the streamwise vector i parallel to the freestream in place of k in the integral, we obtain an expression for the pressure drag Dp (which includes the pressure portion of the profile drag and, if the wing is three-dimensional, the induced drag). If we use the spanwise vector j, we obtain the side force Y. The validity of this integration generally requires the airfoil shape to be a closed curve that is piecewise smooth. Lift coefficient Lift depends on the size of the wing, being approximately proportional to the wing area. It is often convenient to quantify the lift of a given airfoil by its lift coefficient C_L, which defines its overall lift in terms of a unit area of the wing. If the value of C_L for a wing at a specified angle of attack is given, then the lift produced for specific flow conditions can be determined:
L = \tfrac{1}{2} \rho v^2 S C_L
where
L is the lift force,
ρ is the air density,
v is the velocity or true airspeed,
S is the planform (projected) wing area, and
C_L is the lift coefficient at the desired angle of attack, Mach number, and Reynolds number.
Mathematical theories of lift Mathematical theories of lift are based on continuum fluid mechanics, assuming that air flows as a continuous fluid.
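Before moving on to those mathematical theories, here is a brief numerical sketch of the lift-coefficient relation above; the wing area, lift coefficient, and flight condition are assumed, illustrative values rather than data for a specific aircraft.

```python
# Lift from the lift-coefficient relation L = 1/2 * rho * v^2 * S * C_L.
# Illustrative values loosely representative of a small transport aircraft.
rho = 1.225      # air density at sea level, kg/m^3
v = 70.0         # true airspeed, m/s (assumed)
S = 95.0         # planform wing area, m^2 (assumed)
C_L = 0.55       # lift coefficient at the assumed angle of attack (assumed)

L = 0.5 * rho * v**2 * S * C_L
print(f"lift ~ {L/1000:.0f} kN  (~{L/9.81/1000:.1f} tonnes supported)")

# Halving the air density (high altitude) or the square of the speed halves
# the lift unless C_L is increased by flying at a higher angle of attack.
```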
Lift is generated in accordance with the fundamental principles of physics, the most relevant being the following three principles: Conservation of momentum, which is a consequence of Newton's laws of motion, especially Newton's second law which relates the net force on an element of air to its rate of momentum change, Conservation of mass, including the assumption that the airfoil's surface is impermeable for the air flowing around, and Conservation of energy, which says that energy is neither created nor destroyed. Because an airfoil affects the flow in a wide area around it, the conservation laws of mechanics are embodied in the form of partial differential equations combined with a set of boundary condition requirements which the flow has to satisfy at the airfoil surface and far away from the airfoil. To predict lift requires solving the equations for a particular airfoil shape and flow condition, which generally requires calculations that are so voluminous that they are practical only on a computer, through the methods of computational fluid dynamics (CFD). Determining the net aerodynamic force from a CFD solution requires "adding up" (integrating) the forces due to pressure and shear determined by the CFD over every surface element of the airfoil as described under "pressure integration". The Navier–Stokes equations (NS) provide the potentially most accurate theory of lift, but in practice, capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy, and requires use of the Reynolds-averaged Navier–Stokes equations (RANS). Simpler but less accurate theories have also been developed. Navier–Stokes (NS) equations These equations represent conservation of mass, Newton's second law (conservation of momentum), conservation of energy, the Newtonian law for the action of viscosity, the Fourier heat conduction law, an equation of state relating density, temperature, and pressure, and formulas for the viscosity and thermal conductivity of the fluid. In principle, the NS equations, combined with boundary conditions of no through-flow and no slip at the airfoil surface, could be used to predict lift with high accuracy in any situation in ordinary atmospheric flight. However, airflows in practical situations always involve turbulence in the boundary layer next to the airfoil surface, at least over the aft portion of the airfoil. Predicting lift by solving the NS equations in their raw form would require the calculations to resolve the details of the turbulence, down to the smallest eddy. This is not yet possible, even on the most powerful computer. So in principle the NS equations provide a complete and very accurate theory of lift, but practical prediction of lift requires that the effects of turbulence be modeled in the RANS equations rather than computed directly. Reynolds-averaged Navier–Stokes (RANS) equations These are the NS equations with the turbulence motions averaged over time, and the effects of the turbulence on the time-averaged flow represented by turbulence modeling (an additional set of equations based on a combination of dimensional analysis and empirical information on how turbulence affects a boundary layer in a time-averaged average sense). A RANS solution consists of the time-averaged velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. 
The amount of computation required is a minuscule fraction (billionths) of what would be required to resolve all of the turbulence motions in a raw NS calculation, and with large computers available it is now practical to carry out RANS calculations for complete airplanes in three dimensions. Because turbulence models are not perfect, the accuracy of RANS calculations is imperfect, but it is adequate for practical aircraft design. Lift predicted by RANS is usually within a few percent of the actual lift. Inviscid-flow equations (Euler or potential) The Euler equations are the NS equations without the viscosity, heat conduction, and turbulence effects. As with a RANS solution, an Euler solution consists of the velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. While the Euler equations are simpler than the NS equations, they do not lend themselves to exact analytic solutions. Further simplification is available through potential flow theory, which reduces the number of unknowns to be determined, and makes analytic solutions possible in some cases, as described below. Either Euler or potential-flow calculations predict the pressure distribution on the airfoil surfaces roughly correctly for angles of attack below stall, where they might miss the total lift by as much as 10–20%. At angles of attack above stall, inviscid calculations do not predict that stall has happened, and as a result they grossly overestimate the lift. In potential-flow theory, the flow is assumed to be irrotational, i.e. that small fluid parcels have no net rate of rotation. Mathematically, this is expressed by the statement that the curl of the velocity vector field is everywhere equal to zero. Irrotational flows have the convenient property that the velocity can be expressed as the gradient of a scalar function called a potential. A flow represented in this way is called potential flow. In potential-flow theory, the flow is assumed to be incompressible. Incompressible potential-flow theory has the advantage that the equation (Laplace's equation) to be solved for the potential is linear, which allows solutions to be constructed by superposition of other known solutions. The incompressible-potential-flow equation can also be solved by conformal mapping, a method based on the theory of functions of a complex variable. In the early 20th century, before computers were available, conformal mapping was used to generate solutions to the incompressible potential-flow equation for a class of idealized airfoil shapes, providing some of the first practical theoretical predictions of the pressure distribution on a lifting airfoil. A solution of the potential equation directly determines only the velocity field. The pressure field is deduced from the velocity field through Bernoulli's equation. Applying potential-flow theory to a lifting flow requires special treatment and an additional assumption. The problem arises because lift on an airfoil in inviscid flow requires circulation in the flow around the airfoil (See "Circulation and the Kutta–Joukowski theorem" below), but a single potential function that is continuous throughout the domain around the airfoil cannot represent a flow with nonzero circulation. The solution to this problem is to introduce a branch cut, a curve or line from some point on the airfoil surface out to infinite distance, and to allow a jump in the value of the potential across the cut. 
The jump in the potential imposes circulation in the flow equal to the potential jump and thus allows nonzero circulation to be represented. However, the potential jump is a free parameter that is not determined by the potential equation or the other boundary conditions, and the solution is thus indeterminate. A potential-flow solution exists for any value of the circulation and any value of the lift. One way to resolve this indeterminacy is to impose the Kutta condition, which is that, of all the possible solutions, the physically reasonable solution is the one in which the flow leaves the trailing edge smoothly. The streamline sketches illustrate one flow pattern with zero lift, in which the flow goes around the trailing edge and leaves the upper surface ahead of the trailing edge, and another flow pattern with positive lift, in which the flow leaves smoothly at the trailing edge in accordance with the Kutta condition. Linearized potential flow This is potential-flow theory with the further assumptions that the airfoil is very thin and the angle of attack is small. The linearized theory predicts the general character of the airfoil pressure distribution and how it is influenced by airfoil shape and angle of attack, but is not accurate enough for design work. For a 2D airfoil, such calculations can be done in a fraction of a second in a spreadsheet on a PC. Circulation and the Kutta–Joukowski theorem When an airfoil generates lift, several components of the overall velocity field contribute to a net circulation of air around it: the upward flow ahead of the airfoil, the accelerated flow above, the decelerated flow below, and the downward flow behind. The circulation can be understood as the total amount of "spinning" (or vorticity) of an inviscid fluid around the airfoil. The Kutta–Joukowski theorem relates the lift per unit width of span of a two-dimensional airfoil to this circulation component of the flow. It is a key element in an explanation of lift that follows the development of the flow around an airfoil as the airfoil starts its motion from rest and a starting vortex is formed and left behind, leading to the formation of circulation around the airfoil. Lift is then inferred from the Kutta-Joukowski theorem. This explanation is largely mathematical, and its general progression is based on logical inference, not physical cause-and-effect. The Kutta–Joukowski model does not predict how much circulation or lift a two-dimensional airfoil produces. Calculating the lift per unit span using Kutta–Joukowski requires a known value for the circulation. In particular, if the Kutta condition is met, in which the rear stagnation point moves to the airfoil trailing edge and attaches there for the duration of flight, the lift can be calculated theoretically through the conformal mapping method. The lift generated by a conventional airfoil is dictated by both its design and the flight conditions, such as forward velocity, angle of attack and air density. Lift can be increased by artificially increasing the circulation, for example by boundary-layer blowing or the use of blown flaps. In the Flettner rotor the entire airfoil is circular and spins about a spanwise axis to create the circulation. Three-dimensional flow The flow around a three-dimensional wing involves significant additional issues, especially relating to the wing tips. For a wing of low aspect ratio, such as a typical delta wing, two-dimensional theories may provide a poor model and three-dimensional flow effects can dominate. 
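Before taking up those three-dimensional effects, a brief two-dimensional sketch ties together the linearized theory and the Kutta–Joukowski theorem discussed above. It uses the standard thin-airfoil lift-curve slope (c_l ≈ 2πα, a textbook result of the linearized theory, not quoted in the text above) together with the Kutta–Joukowski relation L′ = ρVΓ; the chord, speed, and angle of attack are illustrative assumptions.

```python
# Two-dimensional sketch: thin-airfoil theory gives c_l ~ 2*pi*alpha for a
# thin airfoil at small angle of attack, and the Kutta-Joukowski theorem
# gives lift per unit span L' = rho * V * Gamma. All numbers are illustrative.
import math

rho = 1.225               # kg/m^3
V = 70.0                  # freestream speed, m/s (assumed)
chord = 2.0               # airfoil chord, m (assumed)
alpha = math.radians(5)   # angle of attack (assumed)

c_l = 2 * math.pi * alpha                      # thin-airfoil lift coefficient
L_per_span = 0.5 * rho * V**2 * chord * c_l    # lift per unit span, N/m

# Circulation implied by Kutta-Joukowski: Gamma = L' / (rho * V)
Gamma = L_per_span / (rho * V)
print(f"c_l ~ {c_l:.2f}, lift per unit span ~ {L_per_span:.0f} N/m, "
      f"circulation ~ {Gamma:.1f} m^2/s")
```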
Even for wings of high aspect ratio, the three-dimensional effects associated with finite span can affect the whole span, not just close to the tips. Wing tips and spanwise distribution The vertical pressure gradient at the wing tips causes air to flow sideways, out from under the wing then up and back over the upper surface. This reduces the pressure gradient at the wing tip, therefore also reducing lift. The lift tends to decrease in the spanwise direction from root to tip, and the pressure distributions around the airfoil sections change accordingly in the spanwise direction. Pressure distributions in planes perpendicular to the flight direction tend to look like the illustration at right. This spanwise-varying pressure distribution is sustained by a mutual interaction with the velocity field. Flow below the wing is accelerated outboard, flow outboard of the tips is accelerated upward, and flow above the wing is accelerated inboard, which results in the flow pattern illustrated at right. There is more downward turning of the flow than there would be in a two-dimensional flow with the same airfoil shape and sectional lift, and a higher sectional angle of attack is required to achieve the same lift compared to a two-dimensional flow. The wing is effectively flying in a downdraft of its own making, as if the freestream flow were tilted downward, with the result that the total aerodynamic force vector is tilted backward slightly compared to what it would be in two dimensions. The additional backward component of the force vector is called lift-induced drag. The difference in the spanwise component of velocity above and below the wing (between being in the inboard direction above and in the outboard direction below) persists at the trailing edge and into the wake downstream. After the flow leaves the trailing edge, this difference in velocity takes place across a relatively thin shear layer called a vortex sheet. Horseshoe vortex system The wingtip flow leaving the wing creates a tip vortex. As the main vortex sheet passes downstream from the trailing edge, it rolls up at its outer edges, merging with the tip vortices. The combination of the wingtip vortices and the vortex sheets feeding them is called the vortex wake. In addition to the vorticity in the trailing vortex wake there is vorticity in the wing's boundary layer, called 'bound vorticity', which connects the trailing sheets from the two sides of the wing into a vortex system in the general form of a horseshoe. The horseshoe form of the vortex system was recognized by the British aeronautical pioneer Lanchester in 1907. Given the distribution of bound vorticity and the vorticity in the wake, the Biot–Savart law (a vector-calculus relation) can be used to calculate the velocity perturbation anywhere in the field, caused by the lift on the wing. Approximate theories for the lift distribution and lift-induced drag of three-dimensional wings are based on such analysis applied to the wing's horseshoe vortex system. In these theories, the bound vorticity is usually idealized and assumed to reside at the camber surface inside the wing. Because the velocity is deduced from the vorticity in such theories, some authors describe the situation to imply that the vorticity is the cause of the velocity perturbations, using terms such as "the velocity induced by the vortex", for example. But attributing mechanical cause-and-effect between the vorticity and the velocity in this way is not consistent with the physics. 
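As a numerical aside on the finite-wing quantities described above (downwash and lift-induced drag), the sketch below uses the classical lifting-line results for an elliptical spanwise lift distribution; these are standard textbook formulas rather than expressions quoted from the text, and the planform and lift coefficient are assumed values.

```python
# Rough finite-wing estimates from the horseshoe-vortex / lifting-line picture,
# assuming an elliptical spanwise lift distribution. Planform values assumed.
import math

C_L = 0.55        # wing lift coefficient (assumed)
b = 30.0          # span, m (assumed)
S = 95.0          # wing area, m^2 (assumed)
AR = b**2 / S     # aspect ratio

# Classical lifting-line results for elliptical loading:
alpha_induced = C_L / (math.pi * AR)          # induced (downwash) angle, rad
C_Di = C_L**2 / (math.pi * AR)                # induced-drag coefficient

print(f"aspect ratio ~ {AR:.1f}")
print(f"induced (downwash) angle ~ {math.degrees(alpha_induced):.2f} deg")
print(f"induced-drag coefficient ~ {C_Di:.4f}")
# The wing effectively flies in its own downdraft: the lift vector tilts back
# by roughly the induced angle, producing the lift-induced drag described above.
```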
The velocity perturbations in the flow around a wing are in fact produced by the pressure field. Manifestations of lift in the farfield Integrated force/momentum balance in lifting flows The flow around a lifting airfoil must satisfy Newton's second law regarding conservation of momentum, both locally at every point in the flow field, and in an integrated sense over any extended region of the flow. For an extended region, Newton's second law takes the form of the momentum theorem for a control volume, where a control volume can be any region of the flow chosen for analysis. The momentum theorem states that the integrated force exerted at the boundaries of the control volume (a surface integral), is equal to the integrated time rate of change (material derivative) of the momentum of fluid parcels passing through the interior of the control volume. For a steady flow, this can be expressed in the form of the net surface integral of the flux of momentum through the boundary. The lifting flow around a 2D airfoil is usually analyzed in a control volume that completely surrounds the airfoil, so that the inner boundary of the control volume is the airfoil surface, where the downward force per unit span is exerted on the fluid by the airfoil. The outer boundary is usually either a large circle or a large rectangle. At this outer boundary distant from the airfoil, the velocity and pressure are well represented by the velocity and pressure associated with a uniform flow plus a vortex, and viscous stress is negligible, so that the only force that must be integrated over the outer boundary is the pressure. The free-stream velocity is usually assumed to be horizontal, with lift vertically upward, so that the vertical momentum is the component of interest. For the free-air case (no ground plane), the force exerted by the airfoil on the fluid is manifested partly as momentum fluxes and partly as pressure differences at the outer boundary, in proportions that depend on the shape of the outer boundary, as shown in the diagram at right. For a flat horizontal rectangle that is much longer than it is tall, the fluxes of vertical momentum through the front and back are negligible, and the lift is accounted for entirely by the integrated pressure differences on the top and bottom. For a square or circle, the momentum fluxes and pressure differences account for half the lift each. For a vertical rectangle that is much taller than it is wide, the unbalanced pressure forces on the top and bottom are negligible, and lift is accounted for entirely by momentum fluxes, with a flux of upward momentum that enters the control volume through the front accounting for half the lift, and a flux of downward momentum that exits the control volume through the back accounting for the other half. The results of all of the control-volume analyses described above are consistent with the Kutta–Joukowski theorem described above. Both the tall rectangle and circle control volumes have been used in derivations of the theorem. Lift reacted by overpressure on the ground under an airplane An airfoil produces a pressure field in the surrounding air, as explained under "The wider flow around the airfoil" above. The pressure differences associated with this field die off gradually, becoming very small at large distances, but never disappearing altogether. 
Below the airplane, the pressure field persists as a positive pressure disturbance that reaches the ground, forming a pattern of slightly-higher-than-ambient pressure on the ground, as shown on the right. Although the pressure differences are very small far below the airplane, they are spread over a wide area and add up to a substantial force. For steady, level flight, the integrated force due to the pressure differences is equal to the total aerodynamic lift of the airplane and to the airplane's weight. According to Newton's third law, this pressure force exerted on the ground by the air is matched by an equal-and-opposite upward force exerted on the air by the ground, which offsets all of the downward force exerted on the air by the airplane. The net force due to the lift, acting on the atmosphere as a whole, is therefore zero, and thus there is no integrated accumulation of vertical momentum in the atmosphere, as was noted by Lanchester early in the development of modern aerodynamics. See also Drag coefficient Flow separation Fluid dynamics Foil (fluid mechanics) Küssner effect Lift-to-drag ratio Lifting-line theory Spoiler (automotive) Footnotes References Further reading Introduction to Flight, John D. Anderson, Jr., McGraw-Hill, – Dr. Anderson is Curator of Aerodynamics at the Smithsonian Institution's National Air & Space Museum and Professor Emeritus at the University of Maryland. Understanding Flight, by David Anderson and Scott Eberhardt, McGraw-Hill, – A physicist and an aeronautical engineer explain flight in non-technical terms and specifically address the equal-transit-time myth. They attribute airfoil circulation to the Coanda effect, which is controversial. Aerodynamics, Clancy, L. J. (1975), Section 4.8, Pitman Publishing Limited, London . Aerodynamics, Aeronautics, and Flight Mechanics, McCormick, Barnes W., (1979), Chapter 3, John Wiley & Sons, Inc., New York . Fundamentals of Flight, Richard S. Shevell, Prentice-Hall International Editions, – This is a text for a one-semester undergraduate course in mechanical or aeronautical engineering. Its sections on theory of flight are understandable with a passing knowledge of calculus and physics. – Experiments under superfluidity conditions, resulting in the vanishing of lift in inviscid flow since the Kutta condition is no longer satisfied. "Aerodynamics at the Particle Level", Charles A. Crummer (2005, revised 2012) – A treatment of aerodynamics emphasizing the particle nature of air, as opposed to the fluid approximation commonly used. "Flight without Bernoulli" Chris Waltham Vol. 36, Nov. 1998 The Physics Teacher – using a physical model based on Newton's second law, the author presents a rigorous fluid dynamical treatment of flight. Bernoulli, Newton, and Dynamic Lift Norman F. Smith School Science and Mathematics vol 73 Part I: Bernoulli, Newton, and Dynamic Lift Part II* Part II Bernoulli, Newton, and Dynamic Lift Part I* External links Discussion of the apparent "conflict" between the various explanations of lift NASA tutorial, with animation, describing lift NASA FoilSim II 1.5 beta. Lift simulator Explanation of Lift with animation of fluid flow around an airfoil A treatment of why and how wings generate lift that focuses on pressure Physics of Flight – reviewed . Online paper by Prof. Dr. Klaus Weltner How do Wings Work? Holger Babinsky Bernoulli Or Newton: Who's Right About Lift? Plane and Pilot magazine One Minute Physics How Does a Wing actually work? 
(YouTube video) How wings really work, University of Cambridge Holger Babinsky (referred by "One Minute Physics How Does a Wing actually work?" YouTube video) From Summit to Seafloor – Lifted Weight as a Function of Altitude and Depth by Rolf Steinegger Joukowski Transform Interactive WebApp How Planes Fly YouTube video presentation by Krzysztof Fidkowski, associate professor of Aerospace Engineering at the University of Michigan Aerodynamics Classical mechanics Force
Lift (force)
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
9,945
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Aerodynamics", "Mechanics", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Matter", "Fluid dynamics" ]
18,079
https://en.wikipedia.org/wiki/Leonardo%20da%20Vinci
Leonardo di ser Piero da Vinci (15 April 1452 – 2 May 1519) was an Italian polymath of the High Renaissance who was active as a painter, draughtsman, engineer, scientist, theorist, sculptor, and architect. While his fame initially rested on his achievements as a painter, he has also become known for his notebooks, in which he made drawings and notes on a variety of subjects, including anatomy, astronomy, botany, cartography, painting, and palaeontology. Leonardo is widely regarded to have been a genius who epitomised the Renaissance humanist ideal, and his collective works comprise a contribution to later generations of artists matched only by that of his younger contemporary Michelangelo. Born out of wedlock to a successful notary and a lower-class woman in, or near, Vinci, he was educated in Florence by the Italian painter and sculptor Andrea del Verrocchio. He began his career in the city, but then spent much time in the service of Ludovico Sforza in Milan. Later, he worked in Florence and Milan again, as well as briefly in Rome, all while attracting a large following of imitators and students. Upon the invitation of Francis I, he spent his last three years in France, where he died in 1519. Since his death, there has not been a time where his achievements, diverse interests, personal life, and empirical thinking have failed to incite interest and admiration, making him a frequent namesake and subject in culture. Leonardo is identified as one of the greatest painters in the history of Western art and is often credited as the founder of the High Renaissance. Despite having many lost works and fewer than 25 attributed major works – including numerous unfinished works – he created some of the most influential paintings in the Western canon. The Mona Lisa is his best known work and is the world's most famous individual painting. The Last Supper is the most reproduced religious painting of all time and his Vitruvian Man drawing is also regarded as a cultural icon. In 2017, Salvator Mundi, attributed in whole or part to Leonardo, was sold at auction for , setting a new record for the most expensive painting ever sold at public auction. Revered for his technological ingenuity, he conceptualised flying machines, a type of armoured fighting vehicle, concentrated solar power, a ratio machine that could be used in an adding machine, and the double hull. Relatively few of his designs were constructed or were even feasible during his lifetime, as the modern scientific approaches to metallurgy and engineering were only in their infancy during the Renaissance. Some of his smaller inventions, however, entered the world of manufacturing unheralded, such as an automated bobbin winder and a machine for testing the tensile strength of wire. He made substantial discoveries in anatomy, civil engineering, hydrodynamics, geology, optics, and tribology, but he did not publish his findings and they had little to no direct influence on subsequent science. Biography Early life (1452–1472) Birth and background Leonardo da Vinci, properly named Leonardo di ser Piero da Vinci ("Leonardo, son of ser Piero from Vinci"), was born on 15 April 1452 in, or close to, the Tuscan hill town of Vinci, 20 miles from Florence. He was born out of wedlock to Piero da Vinci (Ser Piero da Vinci d'Antonio di ser Piero di ser Guido; 1426–1504), a Florentine legal notary, and Caterina di Meo Lippi (), from the lower class. 
It remains uncertain where Leonardo was born; the traditional account, from a local oral tradition recorded by the historian Emanuele Repetti, is that he was born in Anchiano, a country hamlet that would have offered sufficient privacy for the illegitimate birth, though it is still possible he was born in a house in Florence that Ser Piero almost certainly had. Leonardo's parents both married separately the year after his birth. Caterina – who later appears in Leonardo's notes as only "Caterina" or "Catelina" – is usually identified as the Caterina Buti del Vacca, who married the local artisan Antonio di Piero Buti del Vacca, nicknamed . Having been betrothed to her the previous year, Ser Piero married Albiera Amadori and after her death in 1464, went on to have three subsequent marriages. From all the marriages, Leonardo eventually had 16 half-siblings (of whom 11 survived infancy) who were much younger than he (the last was born when Leonardo was 46 years old) and with whom he had very little contact. Very little is known about Leonardo's childhood and much is shrouded in myth, partially because of his biography in the frequently apocryphal Lives of the Most Excellent Painters, Sculptors, and Architects (1550) by 16th-century art historian Giorgio Vasari. Tax records indicate that by at least 1457 he lived in the household of his paternal grandfather, Antonio da Vinci, but it is possible that he spent the years before then in the care of his mother in Vinci, either Anchiano or Campo Zeppi in the parish of San Pantaleone. He is thought to have been close to his uncle, Francesco da Vinci, but his father was probably in Florence most of the time. Ser Piero, who was the descendant of a long line of notaries, established an official residence in Florence by at least 1469 and had a successful career. Despite his family history, Leonardo only received a basic and informal education in (vernacular) writing, reading, and mathematics; possibly because his artistic talents were recognised early, so his family decided to focus their attention there. Later in life, Leonardo recorded his earliest memory, now in the Codex Atlanticus. While writing on the flight of birds, he recalled as an infant when a kite came to his cradle and opened his mouth with its tail; commentators still debate whether the anecdote was an actual memory or a fantasy. Verrocchio's workshop In the mid-1460s, Leonardo's family moved to Florence, which at the time was the centre of Christian Humanist thought and culture. Around the age of 14, he became a garzone (studio boy) in the workshop of Andrea del Verrocchio, who was the leading Florentine painter and sculptor of his time. This was about the time of the death of Verrocchio's master, the great sculptor Donatello. Leonardo became an apprentice by the age of 17 and remained in training for seven years. Other famous painters apprenticed in the workshop or associated with it include Ghirlandaio, Perugino, Botticelli, and Lorenzo di Credi. Leonardo was exposed to both theoretical training and a wide range of technical skills, including drafting, chemistry, metallurgy, metal working, plaster casting, leather working, mechanics, and woodwork, as well as the artistic skills of drawing, painting, sculpting, and modelling. Leonardo was a contemporary of Botticelli, Ghirlandaio and Perugino, who were all slightly older than he was. He would have met them at the workshop of Verrocchio or at the Platonic Academy of the Medici. 
Florence was ornamented by the works of artists such as Donatello's contemporaries Masaccio, whose figurative frescoes were imbued with realism and emotion, and Ghiberti, whose Gates of Paradise, gleaming with gold leaf, displayed the art of combining complex figure compositions with detailed architectural backgrounds. Piero della Francesca had made a detailed study of perspective, and was the first painter to make a scientific study of light. These studies and Leon Battista Alberti's treatise De pictura were to have a profound effect on younger artists and in particular on Leonardo's own observations and artworks. Much of the painting in Verrocchio's workshop was done by his assistants. According to Vasari, Leonardo collaborated with Verrocchio on his The Baptism of Christ (), painting the young angel holding Jesus's robe with skill so far superior to his master's that Verrocchio purportedly put down his brush and never painted again (the latter claim probably being apocryphal). The new technique of oil paint was applied to areas of the mostly tempera work, including the landscape, the rocks seen through the brown mountain stream, and much of Jesus's figure, indicating Leonardo's hand. Additionally, Leonardo may have been a model for two works by Verrocchio: the bronze statue of David in the Bargello and the archangel Raphael in Tobias and the Angel. Vasari tells a story of Leonardo as a very young man: a local peasant made himself a round buckler shield and requested that Ser Piero have it painted for him. Leonardo, inspired by the story of Medusa, responded with a painting of a monster spitting fire that was so terrifying that his father bought a different shield to give to the peasant and sold Leonardo's to a Florentine art dealer for 100 ducats, who in turn sold it to the Duke of Milan. First Florentine period (1472 – c. 1482) By 1472, at the age of 20, Leonardo qualified as a master in the Guild of Saint Luke, the guild of artists and doctors of medicine, but even after his father set him up in his own workshop, his attachment to Verrocchio was such that he continued to collaborate and live with him. Leonardo's earliest known dated work is a 1473 pen-and-ink drawing of the Arno valley (see below). According to Vasari, the young Leonardo was the first to suggest making the Arno river a navigable channel between Florence and Pisa. In January 1478, Leonardo received an independent commission to paint an altarpiece for the Chapel of Saint Bernard in the Palazzo Vecchio, an indication of his independence from Verrocchio's studio. An anonymous early biographer, known as Anonimo Gaddiano, claims that in 1480 Leonardo was living with the Medici and often worked in the garden of the Piazza San Marco, Florence, where a Neoplatonic academy of artists, poets and philosophers organised by the Medici met. In March 1481, he received a commission from the monks of San Donato in Scopeto for The Adoration of the Magi. Neither of these initial commissions were completed, being abandoned when Leonardo went to offer his services to Duke of Milan Ludovico Sforza. Leonardo wrote Sforza a letter which described the diverse things that he could achieve in the fields of engineering and weapon design, and mentioned that he could paint. He brought with him a silver string instrument – either a lute or lyre – in the form of a horse's head. 
With Alberti, Leonardo visited the home of the Medici and through them came to know the older Humanist philosophers of whom Marsiglio Ficino, proponent of Neoplatonism; Cristoforo Landino, writer of commentaries on Classical writings, and John Argyropoulos, teacher of Greek and translator of Aristotle were the foremost. Also associated with the Platonic Academy of the Medici was Leonardo's contemporary, the brilliant young poet and philosopher Pico della Mirandola. In 1482, Leonardo was sent as an ambassador by Lorenzo de' Medici to Ludovico il Moro, who ruled Milan between 1479 and 1499. First Milanese period (c. 1482–1499) Leonardo worked in Milan from 1482 until 1499. He was commissioned to paint the Virgin of the Rocks for the Confraternity of the Immaculate Conception and The Last Supper for the monastery of Santa Maria delle Grazie. In the spring of 1485, Leonardo travelled to Hungary (on behalf of Sforza) to meet king Matthias Corvinus, and was commissioned by him to paint a Madonna. In 1490 he was called as a consultant, together with Francesco di Giorgio Martini, for the building site of the cathedral of Pavia and was struck by the equestrian statue of Regisole, of which he left a sketch. Leonardo was employed on many other projects for Sforza, such as preparation of floats and pageants for special occasions; a drawing of, and wooden model for, a competition to design the cupola for Milan Cathedral; and a model for a huge equestrian monument to Ludovico's predecessor Francesco Sforza. This would have surpassed in size the only two large equestrian statues of the Renaissance, Donatello's Gattamelata in Padua and Verrocchio's Bartolomeo Colleoni in Venice, and became known as the Gran Cavallo. Leonardo completed a model for the horse and made detailed plans for its casting, but in November 1494, Ludovico gave the metal to his brother-in-law to be used for a cannon to defend the city from Charles VIII of France. Contemporary correspondence records that Leonardo and his assistants were commissioned by the Duke of Milan to paint the Sala delle Asse in the Sforza Castle, 1498. The project became a trompe-l'œil decoration that made the great hall appear to be a pergola created by the interwoven limbs of sixteen mulberry trees, whose canopy included an intricate labyrinth of leaves and knots on the ceiling. Second Florentine period (1500–1508) When Ludovico Sforza was overthrown by France in 1500, Leonardo fled Milan for Venice, accompanied by his assistant Salaì and friend, the mathematician Luca Pacioli. In Venice, Leonardo was employed as a military architect and engineer, devising methods to defend the city from naval attack. On his return to Florence in 1500, he and his household were guests of the Servite monks at the monastery of Santissima Annunziata and were provided with a workshop where, according to Vasari, Leonardo created the cartoon of The Virgin and Child with Saint Anne and Saint John the Baptist, a work that won such admiration that "men [and] women, young and old" flocked to see it "as if they were going to a solemn festival." In Cesena in 1502, Leonardo entered the service of Cesare Borgia, the son of Pope Alexander VI, acting as a military architect and engineer and travelling throughout Italy with his patron. Leonardo created a map of Cesare Borgia's stronghold, a town plan of Imola in order to win his patronage. Upon seeing it, Cesare hired Leonardo as his chief military engineer and architect. 
Later in the year, Leonardo produced another map for his patron, one of Chiana Valley, Tuscany, so as to give his patron a better overlay of the land and greater strategic position. He created this map in conjunction with his other project of constructing a dam from the sea to Florence, in order to allow a supply of water to sustain the canal during all seasons. Leonardo had left Borgia's service and returned to Florence by early 1503, where he rejoined the Guild of Saint Luke on 18 October of that year. By this same month, Leonardo had begun working on a portrait of Lisa del Giocondo, the model for the Mona Lisa, which he would continue working on until his twilight years. In January 1504, he was part of a committee formed to recommend where Michelangelo's statue of David should be placed. He then spent two years in Florence designing and painting a mural of The Battle of Anghiari for the Signoria, with Michelangelo designing its companion piece, The Battle of Cascina. In 1506, Leonardo was summoned to Milan by Charles II d'Amboise, the acting French governor of the city. There, Leonardo took on another pupil, Count Francesco Melzi, the son of a Lombard aristocrat, who is considered to have been his favourite student. The Council of Florence wished Leonardo to return promptly to finish The Battle of Anghiari, but he was given leave at the behest of Louis XII, who considered commissioning the artist to make some portraits. Leonardo may have commenced a project for an equestrian figure of d'Amboise; a wax model attributed to him survives and would be the only extant example of Leonardo's sculpture, but the attribution is not widely accepted. Leonardo was otherwise free to pursue his scientific interests. Many of Leonardo's most prominent pupils either knew or worked with him in Milan, including Bernardino Luini, Giovanni Antonio Boltraffio, and Marco d'Oggiono. In 1507, Leonardo was in Florence sorting out a dispute with his brothers over the estate of his father, who had died in 1504. Second Milanese period (1508–1513) By 1508, Leonardo was back in Milan, living in his own house in Porta Orientale in the parish of Santa Babila. In 1512, Leonardo was working on plans for an equestrian monument for Gian Giacomo Trivulzio, but this was prevented by an invasion of a confederation of Swiss, Spanish and Venetian forces, which drove the French from Milan. Leonardo stayed in the city, spending several months in 1513 at the Medici's Vaprio d'Adda villa. Rome and France (1513–1519) In March 1513, Lorenzo de' Medici's son Giovanni assumed the papacy (as Leo X); Leonardo went to Rome that September, where he was received by the pope's brother Giuliano. From September 1513 to 1516, Leonardo spent much of his time living in the Belvedere Courtyard in the Apostolic Palace, where Michelangelo and Raphael were both active. Leonardo was given an allowance of 33 ducats a month and, according to Vasari, decorated a lizard with scales dipped in quicksilver. The pope gave him a painting commission of unknown subject matter, but cancelled it when the artist set about developing a new kind of varnish. Leonardo became ill, in what may have been the first of multiple strokes leading to his death. He practised botany in the Vatican Gardens, and was commissioned to make plans for the Pope's proposed draining of the Pontine Marshes. He also dissected cadavers, making notes for a treatise on vocal cords; these he gave to an official in hopes of regaining the Pope's favour, but he was unsuccessful. 
In October 1515, King Francis I of France recaptured Milan. On 21 March 1516 Antonio Maria Pallavicini, the French ambassador to the Holy See, received a letter sent from Lyon a week previously by the royal advisor Guillaume Gouffier, seigneur de Bonnivet, containing the French king's instructions to assist Leonardo in his relocation to France and to inform the artist that the King was eagerly awaiting his arrival. Pallavicini was also asked to reassure Leonardo that he would be well received at court, both by the King and by his mother, Louise of Savoy. Leonardo entered Francis's service later that year, and was given the use of the manor house Clos Lucé near the King's residence at the royal Château d'Amboise. He was frequently visited by Francis, and drew plans for an immense castle town the King intended to erect at Romorantin. He also made a mechanical lion, which during a pageant walked towards the King and – upon being struck by a wand – opened its chest to reveal a cluster of lilies. Leonardo was accompanied during this time by his friend and apprentice Francesco Melzi, and was supported by a pension totalling 10,000 scudi. At some point, Melzi drew a portrait of Leonardo; the only others known from his lifetime were a sketch by an unknown assistant on the back of one of Leonardo's studies () and a drawing by Giovanni Ambrogio Figino depicting an elderly Leonardo with his right arm wrapped in clothing. The latter, in addition to the record of an October 1517 visit by Louis d'Aragon, confirms an account of Leonardo's right hand being paralytic when he was 65, which may indicate why he left works such as the Mona Lisa unfinished. He continued to work at some capacity until eventually becoming ill and bedridden for several months. Death Leonardo died at Clos Lucé on 2 May 1519 at the age of 67, possibly of a stroke. Francis I had become a close friend. Vasari describes Leonardo as lamenting on his deathbed, full of repentance, that "he had offended against God and men by failing to practice his art as he should have done." Vasari states that in his last days, Leonardo sent for a priest to make his confession and to receive the Holy Sacrament. Vasari also records that the King held Leonardo's head in his arms as he died, although this story may be legend rather than fact. In accordance with his will, sixty beggars carrying tapers followed Leonardo's casket. Melzi was the principal heir and executor, receiving, as well as money, Leonardo's paintings, tools, library and personal effects. Leonardo's other long-time pupil and companion, Salaì, and his servant Baptista de Vilanis, each received half of Leonardo's vineyards. His brothers received land, and his serving woman received a fur-lined cloak. On 12 August 1519, Leonardo's remains were interred in the Collegiate Church of Saint Florentin at the Château d'Amboise. Some 20 years after Leonardo's death, Francis was reported by the goldsmith and sculptor Benvenuto Cellini as saying: "There had never been another man born in the world who knew as much as Leonardo, not so much about painting, sculpture and architecture, as that he was a very great philosopher." Salaì, or Il Salaino ("The Little Unclean One", i.e., the devil), entered Leonardo's household in 1490 as an assistant. After only a year, Leonardo made a list of his misdemeanours, calling him "a thief, a liar, stubborn, and a glutton," after he had made off with money and valuables on at least five occasions and spent a fortune on clothes. 
Nevertheless, Leonardo treated him with great indulgence, and he remained in Leonardo's household for the next thirty years. Salaì executed a number of paintings under the name of Andrea Salaì, but although Vasari claims that Leonardo "taught him many things about painting," his work is generally considered to be of less artistic merit than others among Leonardo's pupils, such as Marco d'Oggiono and Boltraffio. At the time of his death in 1524, Salaì owned a painting referred to as Joconda in a posthumous inventory of his belongings; it was assessed at 505 lire, an exceptionally high valuation for a small panel portrait. Personal life Despite the thousands of pages Leonardo left in notebooks and manuscripts, he scarcely made reference to his personal life. Within Leonardo's lifetime, his extraordinary powers of invention, his "great physical beauty" and "infinite grace," as described by Vasari, as well as all other aspects of his life, attracted the curiosity of others. One such aspect was his love for animals, likely including vegetarianism and according to Vasari, a habit of purchasing caged birds and releasing them. Leonardo had many friends who are now notable either in their fields or for their historical significance, including mathematician Luca Pacioli, with whom he collaborated on the book Divina proportione in the 1490s. Leonardo appears to have had no close relationships with women except for his friendship with Cecilia Gallerani and the two Este sisters, Beatrice and Isabella. While on a journey that took him through Mantua, he drew a portrait of Isabella that appears to have been used to create a painted portrait, now lost. Beyond friendship, Leonardo kept his private life secret. His sexuality has been the subject of satire, analysis, and speculation. This trend began in the mid-16th century and was revived in the 19th and 20th centuries, most notably by Sigmund Freud in his Leonardo da Vinci, A Memory of His Childhood. Leonardo's most intimate relationships were perhaps with his pupils Salaì and Melzi. Melzi, writing to inform Leonardo's brothers of his death, described Leonardo's feelings for his pupils as both loving and passionate. It has been claimed since the 16th century that these relationships were of a sexual or erotic nature. Walter Isaacson in his biography of Leonardo makes explicit his opinion that the relations with Salaì were intimate and homosexual. Earlier in Leonardo's life, court records of 1476, when he was aged twenty-four, show that Leonardo and three other young men were charged with sodomy in an incident involving a known male prostitute. The charges were dismissed for lack of evidence, and there is speculation that since one of the accused, Lionardo de Tornabuoni, was related to Lorenzo de' Medici, the family exerted its influence to secure the dismissal. Since that date much has been written about his presumed homosexuality and its role in his art, particularly in the androgyny and eroticism manifested in Saint John the Baptist and Bacchus and more explicitly in a number of erotic drawings. Paintings Despite the recent awareness and admiration of Leonardo as a scientist and inventor, for the better part of four hundred years his fame rested on his achievements as a painter. A handful of works that are either authenticated or attributed to him have been regarded as among the great masterpieces. These paintings are famous for a variety of qualities that have been much imitated by students and discussed at great length by connoisseurs and critics. 
By the 1490s Leonardo had already been described as a "Divine" painter. Among the qualities that make Leonardo's work unique are his innovative techniques for laying on the paint; his detailed knowledge of anatomy, light, botany and geology; his interest in physiognomy and the way humans register emotion in expression and gesture; his innovative use of the human form in figurative composition; and his use of subtle gradation of tone. All these qualities come together in his most famous painted works, the Mona Lisa, the Last Supper, and the Virgin of the Rocks. Early works Leonardo first gained attention for his work on the Baptism of Christ, painted in conjunction with Verrocchio. Two other paintings appear to date from his time at Verrocchio's workshop, both of which are Annunciations. One is small, a "predella" to go at the base of a larger composition, a painting by Lorenzo di Credi from which it has become separated. The other is a much larger work. In both Annunciations, Leonardo used a formal arrangement, like two well-known pictures by Fra Angelico of the same subject, of the Virgin Mary sitting or kneeling to the right of the picture, approached from the left by an angel in profile, with a rich flowing garment, raised wings and bearing a lily. Although previously attributed to Ghirlandaio, the larger work is now generally attributed to Leonardo. In the smaller painting, Mary averts her eyes and folds her hands in a gesture that symbolised submission to God's will. Mary is not submissive, however, in the larger piece. The girl, interrupted in her reading by this unexpected messenger, puts a finger in her bible to mark the place and raises her hand in a formal gesture of greeting or surprise. This calm young woman appears to accept her role as the Mother of God, not with resignation but with confidence. In this painting, the young Leonardo presents the humanist face of the Virgin Mary, recognising humanity's role in God's incarnation. Paintings of the 1480s In the 1480s, Leonardo received two very important commissions and commenced another work that was of ground-breaking importance in terms of composition. Two of the three were never finished, and the third took so long that it was subject to lengthy negotiations over completion and payment. One of these paintings was Saint Jerome in the Wilderness, which Bortolon associates with a difficult period of Leonardo's life, as evidenced in his diary: "I thought I was learning to live; I was only learning to die." Although the painting is barely begun, the composition can be seen and is very unusual. Jerome, as a penitent, occupies the middle of the picture, set on a slight diagonal and viewed somewhat from above. His kneeling form takes on a trapezoid shape, with one arm stretched to the outer edge of the painting and his gaze looking in the opposite direction. J. Wasserman points out the link between this painting and Leonardo's anatomical studies. Across the foreground sprawls his symbol, a great lion whose body and tail make a double spiral across the base of the picture space. The other remarkable feature is the sketchy landscape of craggy rocks against which the figure is silhouetted. The daring display of figure composition, the landscape elements and personal drama also appear in the great unfinished masterpiece, the Adoration of the Magi, a commission from the Monks of San Donato a Scopeto. 
It is a large and complex composition; Leonardo did numerous drawings and preparatory studies, including a detailed one in linear perspective of the ruined classical architecture that forms part of the background. In 1482 Leonardo went to Milan at the behest of Lorenzo de' Medici in order to win favour with Ludovico il Moro, and the painting was abandoned. The third important work of this period is the Virgin of the Rocks, commissioned in Milan for the Confraternity of the Immaculate Conception. The painting, to be done with the assistance of the de Predis brothers, was to fill a large complex altarpiece. Leonardo chose to paint an apocryphal moment of the infancy of Christ when the infant John the Baptist, under the protection of an angel, met the Holy Family on the road to Egypt. The painting demonstrates an eerie beauty as the graceful figures kneel in adoration around the infant Christ in a wild landscape of tumbling rock and whirling water. While the painting is quite large, it is not nearly as complex as the painting ordered by the monks of San Donato, having only four figures rather than about fifty and a rocky landscape rather than architectural details. The painting was eventually finished; in fact, two versions of the painting were finished: one remained at the chapel of the Confraternity, while Leonardo took the other to France. The Brothers did not get their painting, however, nor the de Predis their payment, until the next century. Leonardo's most remarkable portrait of this period is the Lady with an Ermine, presumed to be Cecilia Gallerani, lover of Ludovico Sforza. The painting is characterised by the pose of the figure with the head turned at a very different angle to the torso, unusual at a date when many portraits were still rigidly in profile. The ermine plainly carries symbolic meaning, relating either to the sitter, or to Ludovico who belonged to the prestigious Order of the Ermine. Paintings of the 1490s Leonardo's most famous painting of the 1490s is The Last Supper, commissioned for the refectory of the Convent of Santa Maria delle Grazie in Milan. It represents the last meal shared by Jesus with his disciples before his capture and death, and shows the moment when Jesus has just said "one of you will betray me", and the consternation that this statement caused. The writer Matteo Bandello observed Leonardo at work and wrote that some days he would paint from dawn till dusk without stopping to eat and then not paint for three or four days at a time. This was beyond the comprehension of the prior of the convent, who hounded him until Leonardo asked Ludovico to intervene. Vasari describes how Leonardo, troubled over his ability to adequately depict the faces of Christ and the traitor Judas, told the duke that he might be obliged to use the prior as his model. The painting was acclaimed as a masterpiece of design and characterisation, but it deteriorated rapidly, so that within a hundred years it was described by one viewer as "completely ruined." Leonardo, instead of using the reliable technique of fresco, had used tempera over a ground that was mainly gesso, resulting in a surface subject to mould and to flaking. Despite this, the painting remains one of the most reproduced works of art; countless copies have been made in various mediums. Toward the end of this period, in 1498 Leonardo's trompe-l'œil decoration of the Sala delle Asse was painted for the Duke of Milan in the Castello Sforzesco. 
Paintings of the 1500s In 1505, Leonardo was commissioned to paint The Battle of Anghiari in the Salone dei Cinquecento (Hall of the Five Hundred) in the Palazzo Vecchio, Florence. Leonardo devised a dynamic composition depicting four men riding raging war horses engaged in a battle for possession of a standard, at the Battle of Anghiari in 1440. Michelangelo was assigned the opposite wall to depict the Battle of Cascina. Leonardo's painting deteriorated rapidly and is now known from a copy by Rubens. Among the works created by Leonardo in the 16th century is the small portrait known as the Mona Lisa or La Gioconda, the laughing one. In the present era, it is arguably the most famous painting in the world. Its fame rests, in particular, on the elusive smile on the woman's face, its mysterious quality perhaps due to the subtly shadowed corners of the mouth and eyes such that the exact nature of the smile cannot be determined. The shadowy quality for which the work is renowned came to be called "sfumato", or Leonardo's smoke. Vasari wrote that the smile was "so pleasing that it seems more divine than human, and it was considered a wondrous thing that it was as lively as the smile of the living original." Other characteristics of the painting are the unadorned dress, in which the eyes and hands have no competition from other details; the dramatic landscape background, in which the world seems to be in a state of flux; the subdued colouring; and the extremely smooth nature of the painterly technique, employing oils laid on much like tempera, and blended on the surface so that the brushstrokes are indistinguishable. Vasari expressed that the painting's quality would make even "the most confident master ... despair and lose heart." The perfect state of preservation and the fact that there is no sign of repair or overpainting is rare in a panel painting of this date. In the painting Virgin and Child with Saint Anne, the composition again picks up the theme of figures in a landscape, which Wasserman describes as "breathtakingly beautiful" and harkens back to the Saint Jerome with the figure set at an oblique angle. What makes this painting unusual is that there are two obliquely set figures superimposed. Mary is seated on the knee of her mother, Saint Anne. She leans forward to restrain the Christ Child as he plays roughly with a lamb, the sign of his own impending sacrifice. This painting, which was copied many times, influenced Michelangelo, Raphael, and Andrea del Sarto, and through them Pontormo and Correggio. The trends in composition were adopted in particular by the Venetian painters Tintoretto and Veronese. Drawings Leonardo was a prolific draughtsman, keeping journals full of small sketches and detailed drawings recording all manner of things that took his attention. As well as the journals there exist many studies for paintings, some of which can be identified as preparatory to particular works such as The Adoration of the Magi, The Virgin of the Rocks and The Last Supper. His earliest dated drawing is a Landscape of the Arno Valley, 1473, which shows the river, the mountains, Montelupo Castle and the farmlands beyond it in great detail. 
Among his famous drawings are the Vitruvian Man, a study of the proportions of the human body; the Head of an Angel, for The Virgin of the Rocks in the Louvre; a botanical study of Star of Bethlehem; and a large drawing (160×100 cm) in black chalk on coloured paper of The Virgin and Child with Saint Anne and Saint John the Baptist in the National Gallery, London. This drawing employs the subtle sfumato technique of shading, in the manner of the Mona Lisa. It is thought that Leonardo never made a painting from it, the closest similarity being to The Virgin and Child with Saint Anne in the Louvre. Other drawings of interest include numerous studies generally referred to as "caricatures" because, although exaggerated, they appear to be based upon observation of live models. Vasari relates that Leonardo would look for interesting faces in public to use as models for some of his work. There are numerous studies of beautiful young men, often associated with Salaì, with the rare and much admired facial feature, the so-called "Grecian profile". These faces are often contrasted with that of a warrior. Salaì is often depicted in fancy-dress costume. Leonardo is known to have designed sets for pageants with which these may be associated. Other, often meticulous, drawings show studies of drapery. A marked development in Leonardo's ability to draw drapery occurred in his early works. Another often-reproduced drawing is a macabre sketch that was done by Leonardo in Florence in 1479 showing the body of Bernardo Baroncelli, hanged in connection with the murder of Giuliano, brother of Lorenzo de' Medici, in the Pazzi conspiracy. In his notes, Leonardo recorded the colours of the robes that Baroncelli was wearing when he died. Like the two contemporary architects Donato Bramante (who designed the Belvedere Courtyard) and Antonio da Sangallo the Elder, Leonardo experimented with designs for centrally planned churches, a number of which appear in his journals, as both plans and views, although none was ever realised. Journals and notes Renaissance humanism recognised no mutually exclusive polarities between the sciences and the arts, and Leonardo's studies in science and engineering are sometimes considered as impressive and innovative as his artistic work. These studies were recorded in 13,000 pages of notes and drawings, which fuse art and natural philosophy (the forerunner of modern science). They were made and maintained daily throughout Leonardo's life and travels, as he made continual observations of the world around him. Leonardo's notes and drawings display an enormous range of interests and preoccupations, some as mundane as lists of groceries and people who owed him money and some as intriguing as designs for wings and shoes for walking on water. There are compositions for paintings, studies of details and drapery, studies of faces and emotions, of animals, babies, dissections, plant studies, rock formations, whirlpools, war machines, flying machines and architecture. These notebooks – originally loose papers of different types and sizes – were largely entrusted to Leonardo's pupil and heir Francesco Melzi after the master's death. These were to be published, a task of overwhelming difficulty because of its scope and Leonardo's idiosyncratic writing. Some of Leonardo's drawings were copied by an anonymous Milanese artist for a planned treatise on art . After Melzi's death in 1570, the collection passed to his son, the lawyer Orazio, who initially took little interest in the journals. 
In 1587, a Melzi household tutor named Lelio Gavardi took 13 of the manuscripts to Pisa; there, the architect Giovanni Magenta reproached Gavardi for having taken the manuscripts illicitly and returned them to Orazio. Having many more such works in his possession, Orazio gifted the volumes to Magenta. News spread of these lost works of Leonardo's, and Orazio retrieved seven of the 13 manuscripts, which he then gave to Pompeo Leoni for publication in two volumes; one of these was the Codex Atlanticus. The other six works had been distributed to a few others. After Orazio's death, his heirs sold the rest of Leonardo's possessions, and thus began their dispersal. Some works have found their way into major collections such as the Royal Library at Windsor Castle, the Louvre, the , the Victoria and Albert Museum, the Biblioteca Ambrosiana in Milan, which holds the 12-volume Codex Atlanticus, and the British Library in London, which has put a selection from the Codex Arundel (BL Arundel MS 263) online. Works have also been at Holkham Hall, the Metropolitan Museum of Art, and in the private hands of John Nicholas Brown I and Robert Lehman. The Codex Leicester is the only privately owned major scientific work of Leonardo; it is owned by Bill Gates and displayed once a year in different cities around the world. Most of Leonardo's writings are in mirror-image cursive. Since Leonardo wrote with his left hand, it was probably easier for him to write from right to left. Leonardo used a variety of shorthand and symbols, and states in his notes that he intended to prepare them for publication. In many cases a single topic is covered in detail in both words and pictures on a single sheet, together conveying information that would not be lost if the pages were published out of order. Why they were not published during Leonardo's lifetime is unknown. Science and inventions Leonardo's approach to science was observational: he tried to understand a phenomenon by describing and depicting it in utmost detail and did not emphasise experiments or theoretical explanation. Since he lacked formal education in Latin and mathematics, contemporary scholars mostly ignored Leonardo the scientist, although he did teach himself Latin. His keen observations in many areas were noted, such as when he wrote "Il sole non si move." ("The Sun does not move.") In the 1490s he studied mathematics under Luca Pacioli and prepared a series of drawings of regular solids in a skeletal form to be engraved as plates for Pacioli's book Divina proportione, published in 1509. While living in Milan, he studied light from the summit of Monte Rosa. Scientific writings in his notebook on fossils have been considered as influential on early palaeontology. The content of his journals suggest that he was planning a series of treatises on a variety of subjects. A coherent treatise on anatomy is said to have been observed during a visit by Cardinal Louis d'Aragon's secretary in 1517. Aspects of his work on the studies of anatomy, light and the landscape were assembled for publication by Melzi and eventually published as A Treatise on Painting in France and Italy in 1651 and Germany in 1724, with engravings based upon drawings by the Classical painter Nicolas Poussin. According to Arasse, the treatise, which in France went into 62 editions in fifty years, caused Leonardo to be seen as "the precursor of French academic thought on art." 
While Leonardo's experimentation followed scientific methods, a recent and exhaustive analysis of Leonardo as a scientist by Fritjof Capra argues that Leonardo was a fundamentally different kind of scientist from Galileo, Newton and other scientists who followed him in that, as a "Renaissance Man", his theorising and hypothesising integrated the arts and particularly painting. Anatomy and physiology Leonardo started his study in the anatomy of the human body under the apprenticeship of Verrocchio, who demanded that his students develop a deep knowledge of the subject. As an artist, he quickly became master of topographic anatomy, drawing many studies of muscles, tendons and other visible anatomical features. As a successful artist, Leonardo was given permission to dissect human corpses at the Hospital of Santa Maria Nuova in Florence and later at hospitals in Milan and Rome. From 1510 to 1511 he collaborated in his studies with the doctor Marcantonio della Torre, professor of Anatomy at the University of Pavia. Leonardo made over 240 detailed drawings and wrote about 13,000 words toward a treatise on anatomy. Only a small amount of the material on anatomy was published in Leonardo's Treatise on painting. During the time that Melzi was ordering the material into chapters for publication, they were examined by a number of anatomists and artists, including Vasari, Cellini and Albrecht Dürer, who made a number of drawings from them. Leonardo's anatomical drawings include many studies of the human skeleton and its parts, and of muscles and sinews. He studied the mechanical functions of the skeleton and the muscular forces that are applied to it in a manner that prefigured the modern science of biomechanics. He drew the heart and vascular system, the sex organs and other internal organs, making one of the first scientific drawings of a fetus in utero. The drawings and notation are far ahead of their time, and if published would undoubtedly have made a major contribution to medical science. Leonardo also closely observed and recorded the effects of age and of human emotion on the physiology, studying in particular the effects of rage. He drew many figures who had significant facial deformities or signs of illness. Leonardo also studied and drew the anatomy of many animals, dissecting cows, birds, monkeys, bears, and frogs, and comparing in his drawings their anatomical structure with that of humans. He also made a number of studies of horses. Leonardo's dissections and documentation of muscles, nerves, and vessels helped to describe the physiology and mechanics of movement. He attempted to identify the source of 'emotions' and their expression. He found it difficult to incorporate the prevailing system and theories of bodily humours, but eventually he abandoned these physiological explanations of bodily functions. He made the observations that humours were not located in cerebral spaces or ventricles. He documented that the humours were not contained in the heart or the liver, and that it was the heart that defined the circulatory system. He was the first to define atherosclerosis and liver cirrhosis. He created models of the cerebral ventricles with the use of melted wax and constructed a glass aorta to observe the circulation of blood through the aortic valve by using water and grass seed to watch flow patterns. Engineering and inventions During his lifetime, Leonardo was also valued as an engineer. 
With the same rational and analytical approach that moved him to represent the human body and to investigate anatomy, Leonardo studied and designed many machines and devices. He drew their "anatomy" with unparalleled mastery, producing the first form of the modern technical drawing, including a perfected "exploded view" technique, to represent internal components. Those studies and projects collected in his codices fill more than 5,000 pages. In a letter of 1482 to the lord of Milan Ludovico il Moro, he wrote that he could create all sorts of machines both for the protection of a city and for siege. When he fled from Milan to Venice in 1499, he found employment as an engineer and devised a system of moveable barricades to protect the city from attack. In 1502, he created a scheme for diverting the flow of the Arno river, a project on which Niccolò Machiavelli also worked. He continued to contemplate the canalisation of Lombardy's plains while in Louis XII's company and of the Loire and its tributaries in the company of Francis I. Leonardo's journals include a vast number of inventions, both practical and impractical. They include musical instruments, a mechanical knight, hydraulic pumps, reversible crank mechanisms, finned mortar shells, and a steam cannon. Leonardo was fascinated by the phenomenon of flight for much of his life, producing many studies, including Codex on the Flight of Birds (), as well as plans for several flying machines, such as a flapping ornithopter and a machine with a helical rotor. In a 2003 documentary by British television station Channel Four, titled Leonardo's Dream Machines, various designs by Leonardo, such as a parachute and a giant crossbow, were interpreted and constructed. Some of those designs proved successful, whilst others fared less well when tested. Similarly, a team of engineers built ten machines designed by Leonardo in the 2009 American television series Doing DaVinci, including a fighting vehicle and a self-propelled cart. Research performed by Marc van den Broek revealed older prototypes for more than 100 inventions that are ascribed to Leonardo. Similarities between Leonardo's illustrations and drawings from the Middle Ages and from Ancient Greece and Rome, the Chinese and Persian Empires, and Egypt suggest that a large portion of Leonardo's inventions had been conceived before his lifetime. Leonardo's innovation was to combine different functions from existing drafts and set them into scenes that illustrated their utility. By reconstituting technical inventions he created something new. In his notebooks, Leonardo first stated the 'laws' of sliding friction in 1493. His inspiration for investigating friction came about in part from his study of perpetual motion, which he correctly concluded was not possible. His results were never published and the friction laws were not rediscovered until 1699 by Guillaume Amontons, with whose name they are now usually associated. For this contribution, Leonardo was named as the first of the 23 "Men of Tribology" by Duncan Dowson. Legacy Although he had no formal academic training, many historians and scholars regard Leonardo as the prime exemplar of the "Universal Genius" or "Renaissance Man", an individual of "unquenchable curiosity" and "feverishly inventive imagination." He is widely considered one of the most diversely talented individuals ever to have lived. 
According to art historian Helen Gardner, the scope and depth of his interests were without precedent in recorded history, and "his mind and personality seem to us superhuman, while the man himself mysterious and remote." Scholars interpret his view of the world as being based in logic, though the empirical methods he used were unorthodox for his time. Leonardo's fame within his own lifetime was such that the King of France carried him away like a trophy, and was claimed to have supported him in his old age and held him in his arms as he died. Interest in Leonardo and his work has never diminished. Crowds still queue to see his best-known artworks, T-shirts still bear his most famous drawing, and writers continue to hail him as a genius while speculating about his private life, as well as about what one so intelligent actually believed in. The continued admiration that Leonardo commanded from painters, critics and historians is reflected in many other written tributes. Baldassare Castiglione, author of Il Cortegiano (The Courtier), wrote in 1528: "...Another of the greatest painters in this world looks down on this art in which he is unequalled..." while the biographer known as "Anonimo Gaddiano" wrote, : "His genius was so rare and universal that it can be said that nature worked a miracle on his behalf..." Vasari, in his Lives of the Artists (1568), opens his chapter on Leonardo: In the normal course of events many men and women are born with remarkable talents; but occasionally, in a way that transcends nature, a single person is marvellously endowed by Heaven with beauty, grace and talent in such abundance that he leaves other men far behind, all his actions seem inspired and indeed everything he does clearly comes from God rather than from human skill. Everyone acknowledged that this was true of Leonardo da Vinci, an artist of outstanding physical beauty, who displayed infinite grace in everything that he did and who cultivated his genius so brilliantly that all problems he studied he solved with ease. The 19th century brought a particular admiration for Leonardo's genius, causing Henry Fuseli to write in 1801: "Such was the dawn of modern art, when Leonardo da Vinci broke forth with a splendour that distanced former excellence: made up of all the elements that constitute the essence of genius..." This is echoed by A. E. Rio who wrote in 1861: "He towered above all other artists through the strength and the nobility of his talents." By the 19th century, the scope of Leonardo's notebooks was known, as well as his paintings. Hippolyte Taine wrote in 1866: "There may not be in the world an example of another genius so universal, so incapable of fulfilment, so full of yearning for the infinite, so naturally refined, so far ahead of his own century and the following centuries." Art historian Bernard Berenson wrote in 1896: The interest in Leonardo's genius has continued unabated; experts study and translate his writings, analyse his paintings using scientific techniques, argue over attributions and search for works which have been recorded but never found. Liana Bortolon, writing in 1967, said: The Elmer Belt Library of Vinciana is a special collection at the University of California, Los Angeles. Twenty-first-century author Walter Isaacson based much of his biography of Leonardo on thousands of notebook entries, studying the personal notes, sketches, budget notations, and musings of the man whom he considers the greatest of innovators. 
Isaacson was surprised to discover a "fun, joyous" side of Leonardo in addition to his limitless curiosity and creative genius. On the 500th anniversary of Leonardo's death, the Louvre in Paris arranged for the largest ever single exhibit of his work, called Leonardo, between November 2019 and February 2020. The exhibit includes over 100 paintings, drawings and notebooks. Eleven of the paintings that Leonardo completed in his lifetime were included. Five of these are owned by the Louvre, but the Mona Lisa was not included because it is in such great demand among general visitors to the Louvre; it remains on display in its gallery. Vitruvian Man, however, is on display following a legal battle with its owner, the Gallerie dell'Accademia in Venice. Salvator Mundi was also not included because its Saudi owner did not agree to lease the work. The Mona Lisa, considered Leonardo's magnum opus, is often regarded as the most famous portrait ever made. The Last Supper is the most reproduced religious painting of all time, and Leonardo's Vitruvian Man drawing is also considered a cultural icon. More than a decade of analysis of Leonardo's genetic genealogy, conducted by Alessandro Vezzosi and Agnese Sabato, came to a conclusion in mid-2021. It was determined that the artist has 14 living male relatives. The work could also help determine the authenticity of remains thought to belong to Leonardo. Location of remains While Leonardo was certainly buried in the collegiate church of Saint Florentin at the Château d'Amboise on 12 August 1519, the current location of his remains is unclear. Much of Château d'Amboise was damaged during the French Revolution, leading to the church's demolition in 1802. Some of the graves were destroyed in the process, scattering the bones interred there and thereby leaving the whereabouts of Leonardo's remains subject to dispute; a gardener may have even buried some in the corner of the courtyard. In 1863, fine-arts inspector general Arsène Houssaye received an imperial commission to excavate the site and discovered a partially complete skeleton with a bronze ring on one finger, white hair, and stone fragments bearing the inscriptions "EO", "AR", "DUS", and "VINC", interpreted as forming "Leonardus Vinci". The skull's eight teeth correspond to someone of approximately the appropriate age, and a silver shield found near the bones depicts a beardless Francis I, corresponding to the king's appearance during Leonardo's time in France. Houssaye postulated that the unusually large skull was an indicator of Leonardo's intelligence; author Charles Nicholl describes this as a "dubious phrenological deduction". At the same time, Houssaye noted some issues with his observations, including that the feet were turned toward the high altar, a practice generally reserved for laymen, and that the skeleton seemed too short. Art historian Mary Margaret Heaton wrote in 1874 that the height would be appropriate for Leonardo. The skull was allegedly presented to Napoleon III before being returned to the Château d'Amboise, where the remains were placed in the chapel of Saint Hubert in 1874. A plaque above the tomb states that its contents are only presumed to be those of Leonardo. It has since been theorised that the folding of the skeleton's right arm over the head may correspond to the paralysis of Leonardo's right hand. In 2016, it was announced that DNA tests would be conducted to determine whether the attribution is correct. 
The DNA of the remains will be compared to that of samples collected from Leonardo's work and his half-brother Domenico's descendants; it may also be sequenced. In 2019, documents were published revealing that Houssaye had kept the ring and a lock of hair. In 1925, his great-grandson sold these to an American collector. Sixty years later, another American acquired them, leading to their being displayed at the Leonardo Museum in Vinci beginning on 2 May 2019, the 500th anniversary of the artist's death. Notes References Further reading External links Universal Leonardo, a database of Leonardo's life and works maintained by Martin Kemp and Marina Wallace Leonardo da Vinci on the National Gallery website Biblioteca Leonardiana, online bibliography (in Italian) e-Leo: Archivio digitale di storia della tecnica e della scienza, archive of drawings, notes and manuscripts Complete text and images of Richter's translation of the Notebooks The Notebooks of Leonardo da Vinci 1452 births 1519 deaths 15th-century Italian mathematicians 15th-century Italian painters 15th-century Italian scientists 15th-century Italian sculptors 15th-century people from the Republic of Florence 16th-century Italian mathematicians 16th-century Italian painters 16th-century Italian scientists 16th-century Italian sculptors 16th-century people from the Republic of Florence Ambassadors of the Republic of Florence Ballistics experts Fabulists Painters from Florence Italian botanical illustrators Fluid dynamicists History of anatomy Italian anatomists Italian caricaturists Italian civil engineers 16th-century Italian inventors Italian male painters Italian male sculptors Italian military engineers Italian physiologists Italian Renaissance humanists Italian Renaissance painters Italian Renaissance sculptors Italian Roman Catholics Italian LGBTQ painters Italian LGBTQ sculptors Mathematical artists People prosecuted under anti-homosexuality laws Philosophical theists Physiognomists Italian Renaissance architects Writers who illustrated their own writing Historical figures with ambiguous or disputed sexuality
Leonardo da Vinci
[ "Chemistry" ]
12,098
[ "Fluid dynamicists", "Fluid dynamics" ]
18,087
https://en.wikipedia.org/wiki/Lonsdaleite
Lonsdaleite (named in honour of Kathleen Lonsdale), also called hexagonal diamond in reference to the crystal structure, is an allotrope of carbon with a hexagonal lattice, as opposed to the cubical lattice of conventional diamond. It is found in nature in meteorite debris; when meteors containing graphite strike the Earth, the immense heat and stress of the impact transforms the graphite into diamond, but retains graphite's hexagonal crystal lattice. Lonsdaleite was first identified in 1967 from the Canyon Diablo meteorite, where it occurs as microscopic crystals associated with ordinary diamond. It is translucent and brownish-yellow and has an index of refraction of 2.40–2.41 and a specific gravity of 3.2–3.3 . Its hardness is theoretically superior to that of cubic diamond (up to 58% more), according to computational simulations, but natural specimens exhibited somewhat lower hardness through a large range of values (from 7–8 on Mohs hardness scale). The cause is speculated as being due to the samples having been riddled with lattice defects and impurities. In addition to meteorite deposits, hexagonal diamond has been synthesized in the laboratory (1966 or earlier; published in 1967) by compressing and heating graphite either in a static press or using explosives. Hardness According to the conventional interpretation of the results of examining the meagre samples collected from meteorites or manufactured in the lab, lonsdaleite has a hexagonal unit cell, related to the diamond unit cell in the same way that the hexagonal and cubic close packed crystal systems are related. Its diamond structure can be considered to be made up of interlocking rings of six carbon atoms, in the chair conformation. In lonsdaleite, some rings are in the boat conformation instead. At nanoscale dimensions, cubic diamond is represented by diamondoids while hexagonal diamond is represented by wurtzoids. In diamond, all the carbon-to-carbon bonds, both within a layer of rings and between them, are in the staggered conformation, thus causing all four cubic-diagonal directions to be equivalent; whereas in lonsdaleite the bonds between layers are in the eclipsed conformation, which defines the axis of hexagonal symmetry. Mineralogical simulation predicts lonsdaleite to be 58% harder than diamond on the <100> face, and to resist indentation pressures of 152 GPa, whereas diamond would break at 97 GPa. This is yet exceeded by IIa diamond's <111> tip hardness of 162 GPa. The extrapolated properties of lonsdaleite have been questioned, particularly its superior hardness, since specimens under crystallographic inspection have not shown a bulk hexagonal lattice structure, but instead a conventional cubic diamond dominated by structural defects that include hexagonal sequences. A quantitative analysis of the X-ray diffraction data of lonsdaleite has shown that about equal amounts of hexagonal and cubic stacking sequences are present. Consequently, it has been suggested that "stacking disordered diamond" is the most accurate structural description of lonsdaleite. On the other hand, recent shock experiments with in situ X-ray diffraction show strong evidence for creation of relatively pure lonsdaleite in dynamic high-pressure environments comparable to meteorite impacts. Occurrence Lonsdaleite occurs as microscopic crystals associated with diamond in several meteorites: Canyon Diablo, Kenna, and Allan Hills 77283. It is also naturally occurring in non-bolide diamond placer deposits in the Sakha Republic. 
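The relation between cubic (ABCABC...) and hexagonal (ABAB...) stacking of carbon layers described in the structure discussion above can be made concrete with a small sketch. This is an illustrative assumption of this note rather than anything from the source text: it uses Python and the standard convention that a layer is locally hexagonal (h) when the layers directly above and below it carry the same letter, and locally cubic (c) otherwise, so a hexagonality near 0.5 corresponds to the roughly equal mix of stacking sequences reported for natural "stacking disordered" samples.

```python
def hexagonality(stacking: str) -> float:
    """Fraction of locally hexagonal (h) layers in a periodic close-packed
    stacking sequence written with the letters A, B, C. A layer counts as
    'h' if the layers directly above and below it are the same letter,
    and as 'c' (cubic) if they differ."""
    n = len(stacking)
    h_layers = sum(
        1 for i in range(n)
        if stacking[(i - 1) % n] == stacking[(i + 1) % n]  # periodic boundaries
    )
    return h_layers / n

print(hexagonality("ABCABC"))    # 0.0 -> pure cubic diamond stacking
print(hexagonality("ABAB"))      # 1.0 -> pure hexagonal (lonsdaleite) stacking
print(hexagonality("ABCBABCB"))  # 0.5 -> a "stacking disordered" mixture
```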
Material with d-spacings consistent with lonsdaleite has been found in sediments with highly uncertain dates at Lake Cuitzeo, in the state of Guanajuato, Mexico, by proponents of the controversial Younger Dryas impact hypothesis, which is now refuted by earth scientists and planetary impact specialists. Claims of lonsdaleite and other nanodiamonds in a layer of the Greenland ice sheet that could be of Younger Dryas age have not been confirmed and are now disputed. Its presence in local peat deposits is claimed as evidence for the Tunguska event being caused by a meteor rather than by a cometary fragment. Manufacture In addition to laboratory synthesis by compressing and heating graphite either in a static press or using explosives, lonsdaleite has also been produced by chemical vapor deposition, and also by the thermal decomposition of a polymer, poly(hydridocarbyne), at atmospheric pressure under an argon atmosphere. In 2020, researchers at Australian National University found by accident that they were able to produce lonsdaleite at room temperature using a diamond anvil cell. In 2021, Washington State University's Institute for Shock Physics published a paper stating that they created lonsdaleite crystals large enough to measure their stiffness, confirming that they are stiffer than common cubic diamonds. However, the explosion used to create these crystals also destroys them nanoseconds later, providing just enough time to measure stiffness with lasers. Scams Since the characteristics of lonsdaleite are unknown to most people outside of scientists trained in geology and mineralogy, the names "lonsdaleite" and "hexagonal diamond" have frequently been used in the fraudulent sale of worthless ceramic artifacts, passed off as meteorites on online e-commerce sites and at street fairs and street markets, with prices ranging from a few dollars to thousands of dollars. See also References Further reading External links Native element minerals Meteorite minerals Allotropes of carbon Superhard materials Younger Dryas impact hypothesis Minerals in space group 194
Lonsdaleite
[ "Physics", "Chemistry", "Biology" ]
1,157
[ "Allotropes of carbon", "Younger Dryas impact hypothesis", "Allotropes", "Hypothetical impact events", "Materials", "Superhard materials", "Biological hypotheses", "Matter" ]
18,203
https://en.wikipedia.org/wiki/Lambda%20calculus
Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940. Lambda calculus consists of constructing lambda terms and performing reduction operations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules: x: A variable is a character or string representing a parameter. (λx.M): A lambda abstraction is a function definition, taking as input the bound variable x (between the λ and the punctum/dot .) and returning the body M. (M N): An application, applying a function M to an argument N. Both M and N are lambda terms. The reduction operations include: (λx.M[x]) → (λy.M[y]): α-conversion, renaming the bound variables in the expression. Used to avoid name collisions. ((λx.M) N) → (M[x := N]): β-reduction, replacing the bound variables with the argument expression in the body of the abstraction. If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form. Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations. Explanation and applications Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine. Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function. Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictly weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, in simply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate (see below). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus. Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science. Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory. History Lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. 
Rosser developed the Kleene–Rosser paradox. Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus. In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus. Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics and computer science. Origin of the λ symbol There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006): By the way, why did Church choose the notation “λ”? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation “x̂” used for class-abstraction by Whitehead and Russell, by first modifying “x̂” to “∧x” to distinguish function-abstraction from class-abstraction, and then changing “∧” to “λ” for ease of printing. This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen. Dana Scott has also addressed this question in various public lectures. Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard: Dear Professor Church, Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator? According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe". Informal description Motivation Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple. The first simplification is that the lambda calculus treats functions "anonymously;" it does not give them explicit names. For example, the function square_sum(x, y) = x² + y² can be rewritten in anonymous form as (x, y) ↦ x² + y² (which is read as "a tuple of x and y is mapped to x² + y²"). Similarly, the function id(x) = x can be rewritten in anonymous form as x ↦ x, where the input is simply mapped to itself. The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the square_sum function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example, (x, y) ↦ x² + y² can be reworked into x ↦ (y ↦ x² + y²). This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument. Function application of the square_sum function to the arguments (5, 2) yields at once ((x, y) ↦ x² + y²)(5, 2) = 5² + 2² = 29, whereas evaluation of the curried version requires one more step: ((x ↦ (y ↦ x² + y²))(5))(2) = (y ↦ 5² + y²)(2) // the definition of the outer function has been used with x = 5 in the inner expression. This is like β-reduction. = 5² + 2² = 29 // the definition of the inner function has been used with y = 2. Again, similar to β-reduction. to arrive at the same result. 
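As an aside, the currying step just described can be sketched in an ordinary programming language. The snippet below uses Python, and the name square_sum mirrors the two-argument example above purely for illustration.

```python
# Two-argument form: square_sum(x, y) = x**2 + y**2
def square_sum(x, y):
    return x**2 + y**2

# Curried form: a function of x that returns a function of y
def square_sum_curried(x):
    def inner(y):
        return x**2 + y**2
    return inner

print(square_sum(5, 2))          # 29, in a single application
print(square_sum_curried(5)(2))  # 29, via the extra intermediate step
```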
The lambda calculus The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition. As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, so currying is used to implement functions of several variables. Lambda terms The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid computer programs and some are not. A valid lambda calculus expression is called a "lambda term". The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms: a variable x is itself a valid lambda term; if M is a lambda term, and x is a variable, then (λx.M) is a lambda term (called an abstraction); if M and N are lambda terms, then (M N) is a lambda term (called an application). Nothing else is a lambda term. That is, a lambda term is valid if and only if it can be obtained by repeated application of these three rules. For convenience, some parentheses can be omitted when writing a lambda term. For example, the outermost parentheses are usually not written. See § Notation, below, for an explicit description of which parentheses are optional. It is also common to extend the syntax presented here with additional operations, which allows making sense of terms such as λx.x². The focus of this article is the pure lambda calculus without extensions, but lambda terms extended with arithmetic operations are used for explanatory purposes. An abstraction (λx.t) denotes an anonymous function that takes a single input x and returns t. For example, λx.(x² + 2) is an abstraction representing the function f defined by f(x) = x² + 2, using the term x² + 2 for t. The name f is superfluous when using abstraction. The syntax (λx.t) binds the variable x in the term t. The definition of a function with an abstraction merely "sets up" the function but does not invoke it. An application (t s) represents the application of a function t to an input s, that is, it represents the act of calling function t on input s to produce t(s). A lambda term may refer to a variable that has not been bound, such as the term λx.(x + y) (which represents the function definition f(x) = x + y). In this term, the variable y has not been defined and is considered an unknown. The abstraction λx.(x + y) is a syntactically valid term and represents a function that adds its input to the yet-unknown y. Parentheses may be used and might be needed to disambiguate terms. For example, (1) λx.((λx.x) x) is of form λx.B and is therefore an abstraction, while (2) (λx.(λx.x)) x is of form (M N) and is therefore an application. The examples 1 and 2 denote different terms, differing only in where the parentheses are placed. They have different meanings: example 1 is a function definition, while example 2 is a function application. The lambda variable x is a placeholder in both examples. Here, example 1 defines a function λx.B, where B is (λx.x) x, an anonymous function (λx.x), with input x; while example 2, (M N), is M applied to N, where M is the lambda term (λx.(λx.x)) being applied to the input N which is x. Both examples 1 and 2 would evaluate to the identity function λx.x. 
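A minimal sketch of the three term-forming rules as a data structure, assuming Python and class names (Var, Abs, App) chosen here for illustration rather than taken from the article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:        # rule 1: a variable x is itself a term
    name: str

@dataclass(frozen=True)
class Abs:        # rule 2: an abstraction (λx.M)
    param: str    # the bound variable x
    body: object  # the body M, another term

@dataclass(frozen=True)
class App:        # rule 3: an application (M N)
    func: object  # M
    arg: object   # N

# Example 1 from the text, λx.((λx.x) x), is an abstraction:
example1 = Abs("x", App(Abs("x", Var("x")), Var("x")))
# Example 2 from the text, (λx.(λx.x)) x, is an application:
example2 = App(Abs("x", Abs("x", Var("x"))), Var("x"))
```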
Functions that operate on functions In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions. For example, the lambda term represents the identity function, . Further, represents the constant function , the function that always returns , no matter the input. As an example of a function operating on functions, the function composition can be defined as . There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms. Alpha equivalence A basic form of equivalence, definable on lambda terms, is alpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter. For instance, and are alpha-equivalent lambda terms, and they both represent the same function (the identity function). The terms and are not alpha-equivalent, because they are not bound in an abstraction. In many presentations, it is usual to identify alpha-equivalent lambda terms. The following definitions are necessary in order to be able to define β-reduction: Free variables The free variables of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively: The free variables of are just The set of free variables of is the set of free variables of , but with removed The set of free variables of is the union of the set of free variables of and the set of free variables of . For example, the lambda term representing the identity has no free variables, but the function has a single free variable, . Capture-avoiding substitutions Suppose , and are lambda terms, and and are variables. The notation indicates substitution of for in in a capture-avoiding manner. This is defined so that: ; with substituted for , becomes if ; with substituted for , (which is not ) remains ; substitution distributes to both sides of an application ; a variable bound by an abstraction is not subject to substitution; substituting such variable leaves the abstraction unchanged if and does not appear among the free variables of ( is said to be "fresh" for ) ; substituting a variable which is not bound by an abstraction proceeds in the abstraction's body, provided that the abstracted variable is "fresh" for the substitution term . For example, , and . The freshness condition (requiring that is not in the free variables of ) is crucial in order to ensure that substitution does not change the meaning of functions. For example, a substitution that ignores the freshness condition could lead to errors: . This erroneous substitution would turn the constant function into the identity . In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable. For example, switching back to our correct notion of substitution, in the abstraction can be renamed with a fresh variable , to obtain , and the meaning of the function is preserved by substitution. β-reduction The β-reduction rule states that an application of the form reduces to the term . The notation is used to indicate that β-reduces to . For example, for every , . This demonstrates that really is the identity. Similarly, , which demonstrates that is a constant function. The lambda calculus may be seen as an idealized version of a functional programming language, like Haskell or Standard ML. Under this view, β-reduction corresponds to a computational step. 
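In a programming language the same step is simply a function call. For instance, the two β-reductions just mentioned correspond to the following Python evaluations (strings stand in for the free variables y and z; this is only an analogy, not a formal encoding):

assert (lambda x: x)("y") == "y"      # (λx.x) y →β y: the identity
assert (lambda x: "z")("y") == "z"    # (λx.z) y →β z: a constant function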
This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the term . Here . That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate. Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied to truth values, strings, or other non-number objects. Formal definition Definition Lambda expressions are composed of: variables v1, v2, ...; the abstraction symbols λ (lambda) and . (dot); parentheses (). The set of lambda expressions, , can be defined inductively: If x is a variable, then If x is a variable and then If then Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications. See § reducible expression This set of rules may be written in Backus–Naur form as: <expression> :== <abstraction> | <application> | <variable> <abstraction> :== λ <variable> . <expression> <application> :== ( <expression> <expression> ) <variable> :== v1 | v2 | ... Notation To keep the notation of lambda expressions uncluttered, the following conventions are usually applied: Outermost parentheses are dropped: M N instead of (M N). Applications are assumed to be left associative: M N P may be written instead of ((M N) P). When all variables are single-letter, the space in applications may be omitted: MNP instead of M N P. The body of an abstraction extends as far right as possible: λx.M N means λx.(M N) and not (λx.M) N. A sequence of abstractions is contracted: λx.λy.λz.N is abbreviated as λxyz.N. Free and bound variables The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. In an expression λx.M, the part λx is often called binder, as a hint that the variable x is getting bound by prepending λx to M. All other variables are called free. For example, in the expression λy.x x y, y is a bound variable and x is a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: λx.y (λx.z x). The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows: , where x is a variable. . An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic. Reduction The meaning of lambda expressions is defined by how expressions can be reduced. There are three kinds of reduction: α-conversion: changing bound variables; β-reduction: applying functions to their arguments; η-reduction: which captures a notion of extensionality. We also speak of the resulting equivalences: two expressions are α-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly. The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M) N is a β-redex in expressing the substitution of N for x in M. 
The expression to which a redex reduces is called its reduct; the reduct of (λx.M) N is M[x := N]. If x is not free in M, λx.M x is also an η-redex, with a reduct of M. α-conversion α-conversion (alpha-conversion), sometimes known as α-renaming, allows bound variable names to be changed. For example, α-conversion of λx.x might yield λy.y. Terms that differ only by α-conversion are called α-equivalent. Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent. The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.x could result in λy.λx.x, but it could not result in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion of variable shadowing. Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace x with y in λx.λy.x, we get λy.λy.y, which is not at all the same. In programming languages with static scope, α-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see α-renaming to make name resolution trivial). In the De Bruijn index notation, any two α-equivalent terms are syntactically identical. Substitution Substitution, written M[x := N], is the process of replacing all free occurrences of the variable x in the expression M with expression N. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression): x[x := N] = N y[x := N] = y, if x ≠ y (M1 M2)[x := N] = M1[x := N] M2[x := N] (λx.M)[x := N] = λx.M (λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N) See above for the FV To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λx.y)[y := x] to result in λx.x, because the substituted x was supposed to be free but ended up being bound. The correct substitution in this case is λz.x, up to α-equivalence. Substitution is defined uniquely up to α-equivalence. See Capture-avoiding substitutions above β-reduction β-reduction (beta reduction) captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M) N is M[x := N]. For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n × 2) 7 → 7 × 2. β-reduction can be seen to be the same as the concept of local reducibility in natural deduction, via the Curry–Howard isomorphism. η-reduction η-reduction (eta reduction) expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between λx.f x and f whenever x does not appear free in f. η-reduction can be seen to be the same as the concept of local completeness in natural deduction, via the Curry–Howard isomorphism. Normal forms and confluence For the untyped lambda calculus, β-reduction as a rewriting rule is neither strongly normalising nor weakly normalising. However, it can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other). 
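The substitution rules above, together with the free-variable function, are enough to implement a single β-reduction step over the Var/Abs/App representation sketched earlier. This is only an illustration; the fresh-variable scheme is simplified, and a serious implementation (or De Bruijn indices) would need more care:

import itertools

def free_vars(term):
    if isinstance(term, Var):                     # FV(x) = {x}
        return {term.name}
    if isinstance(term, Abs):                     # FV(λx.M) = FV(M) minus {x}
        return free_vars(term.body) - {term.param}
    return free_vars(term.func) | free_vars(term.arg)   # FV(M N) = FV(M) ∪ FV(N)

_fresh = (f"_v{i}" for i in itertools.count())

def subst(term, x, n):
    # M[x := N], renaming bound variables when needed to avoid capture
    if isinstance(term, Var):
        return n if term.name == x else term
    if isinstance(term, App):
        return App(subst(term.func, x, n), subst(term.arg, x, n))
    if term.param == x:                           # (λx.M)[x := N] = λx.M
        return term
    if term.param in free_vars(n):                # α-rename first to avoid capture
        fresh = next(_fresh)
        body = subst(term.body, term.param, Var(fresh))
        return Abs(fresh, subst(body, x, n))
    return Abs(term.param, subst(term.body, x, n))

def beta_step(term):
    # reduce an outermost redex (λx.M) N to M[x := N], if there is one
    if isinstance(term, App) and isinstance(term.func, Abs):
        return subst(term.func.body, term.func.param, term.arg)
    return term

assert beta_step(App(Abs("x", Var("x")), Var("y"))) == Var("y")   # (λx.x) y → y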
Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it. Encoding datatypes The basic lambda calculus may be used to model arithmetic, Booleans, data structures, and recursion, as illustrated in the following sub-sections i, ii, iii, and § iv. Arithmetic in lambda calculus There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows: and so on. Or using the alternative syntax presented above in Notation: A Church numeral is a higher-order function—it takes a single-argument function , and returns another single-argument function. The Church numeral is a function that takes a function as argument and returns the -th composition of , i.e. the function composed with itself times. This is denoted and is in fact the -th power of (considered as an operator); is defined to be the identity function. Such repeated compositions (of a single function ) obey the laws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of impossible.) One way of thinking about the Church numeral , which is often useful when analysing programs, is as an instruction 'repeat n times'. For example, using the and functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term is By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved. We can define a successor function, which takes a Church numeral and returns by adding another application of , where '(mf)x' means the function 'f' is applied 'm' times on 'x': Because the -th composition of composed with the -th composition of gives the -th composition of , addition can be defined as follows: can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that and are β-equivalent lambda expressions. Since adding to a number can be accomplished by adding 1 times, an alternative definition is: Similarly, multiplication can be defined as Alternatively since multiplying and is the same as repeating the add function times and then applying it to zero. Exponentiation has a rather simple rendering in Church numerals, namely The predecessor function defined by for a positive integer and is considerably more difficult. The formula can be validated by showing inductively that if T denotes , then for . Two other definitions of are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining , yields when and otherwise. Logic and predicates By convention, the following two definitions (known as Church Booleans) are used for the Boolean values and : Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct): We are now able to compute some logic functions, for example: and we see that is equivalent to . 
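Both encodings above can be written as closures in any language with first-class functions. A small Python sketch (the helpers to_int and to_bool only inspect results and are not part of the encodings):

zero = lambda f: lambda x: x                     # 0 := λf.λx.x
succ = lambda n: lambda f: lambda x: f(n(f)(x))  # SUCC := λn.λf.λx.f (n f x)
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))

true  = lambda a: lambda b: a                    # TRUE  := λa.λb.a
false = lambda a: lambda b: b                    # FALSE := λa.λb.b
and_  = lambda p: lambda q: p(q)(p)              # AND := λp.λq.p q p

def to_int(n):
    # count how many times the numeral applies its function argument
    return n(lambda k: k + 1)(0)

def to_bool(b):
    return b(True)(False)

two = succ(succ(zero))
three = succ(two)
assert to_int(plus(two)(three)) == 5
assert to_int(mult(two)(three)) == 6
assert to_bool(and_(true)(false)) is False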
A predicate is a function that returns a Boolean value. The most fundamental predicate is , which returns if its argument is the Church numeral , but if its argument were any other Church numeral: The following predicate tests whether the first argument is less-than-or-equal-to the second: , and since , if and , it is straightforward to build a predicate for numerical equality. The availability of predicates and the above definition of and make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as: which can be verified by showing inductively that is the add − 1 function for > 0. Pairs A pair (2-tuple) can be defined in terms of and , by using the Church encoding for pairs. For example, encapsulates the pair (,), returns the first element of the pair, and returns the second. A linked list can be defined as either NIL for the empty list, or the of an element and a smaller list. The predicate tests for the value . (Alternatively, with , the construct obviates the need for an explicit NULL test). As an example of the use of pairs, the shift-and-increment function that maps to can be defined as which allows us to give perhaps the most transparent version of the predecessor function: Additional programming techniques There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign. Named constants In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and apply that abstraction to the intended definition. Thus to use to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say M N Authors often introduce syntactic sugar, such as , to permit writing the above in the more intuitive order NM By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program. A notable restriction of this is that the name be not defined in N, for N to be outside the scope of the abstraction binding ; this means a recursive function definition cannot be used as the N with . The construction would allow writing recursive function definitions. Recursion and fixed points Recursion is the definition of a function invoking itself. A definition containing itself inside itself, by value, leads to the whole value being of infinite size. Other notations which support recursion natively overcome this by referring to the function definition by name. Lambda calculus cannot express this: all functions are anonymous in lambda calculus, so we can't refer by name to a value which is yet to be defined, inside the lambda term defining that same value. 
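The next paragraphs show how this restriction is worked around with a fixed-point combinator. As a preview, here is a hedged Python sketch of the end result; because Python evaluates arguments eagerly, the call-by-value Z combinator is used in place of the Y combinator discussed below, and native integers stand in for Church numerals purely for readability:

# Z := λf.(λx.f (λv.x x v)) (λx.f (λv.x x v)), a call-by-value fixed-point combinator
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# F takes "the recursive call" as an explicit first argument
F = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(F)      # ties the knot: factorial calls itself through Z
assert factorial(4) == 24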
However, a lambda expression can receive itself as its own argument, for example in  . Here E should be an abstraction, applying its parameter to a value to express recursion. Consider the factorial function recursively defined by . In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it – applying it to an argument – will amount to recursion. Thus to achieve recursion, the intended-as-self-referencing argument (called here) must always be passed to itself within the function body, at a call point: with   to hold, so   and The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced and called there. This solves it but requires re-writing each recursive call as self-application. We would like to have a generic solution, without a need for any re-writes: with   to hold, so   and  where  so that  Given a lambda term with first argument representing recursive call (e.g. here), the fixed-point combinator will return a self-replicating lambda expression representing the recursive function (here, ). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression is re-created inside itself, at call-point, achieving self-reference. In fact, there are many possible definitions for this operator, the simplest of them being: In the lambda calculus,   is a fixed-point of , as it expands to: Now, to perform our recursive call to the factorial function, we would simply call ,  where n is the number we are calculating the factorial of. Given n = 4, for example, this gives: Every recursively defined function can be seen as a fixed point of some suitably defined function closing over the recursive call with an extra argument, and therefore, using , every recursively defined function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication and comparison predicate of natural numbers recursively. Standard terms Certain terms have commonly accepted names: is the identity function. and form complete combinator calculus systems that can express any lambda term - see the next section. is , the smallest term that has no normal form. is another such term. is standard and defined above, and can also be defined as , so that . and defined above are commonly abbreviated as and . Abstraction elimination If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(,N) which is equivalent to N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(,N) removes all occurrences of from N, while still allowing argument values to be substituted into the positions where N contains an . The conversion function T can be defined by: T(, ) := I T(, N) := K N if is not free in N. T(, M N) := S T(, M) T(, N) In either case, a term of the form T(,N) P can reduce by having the initial combinator I, K, or S grab the argument P, just like β-reduction of N P would do. I returns that argument. K throws the argument away, just like N would do if has no free occurrence in N. 
S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second. The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to SKI combinator calculus. Typed lambda calculus A typed lambda calculus is a typed formalism that uses the lambda-symbol () to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see Kinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type. Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation. Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of classes of categories, e.g., the simply typed lambda calculus is the language of a Cartesian closed category (CCC). Reduction strategies Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. Common lambda calculus reduction strategies include: Normal order The leftmost outermost redex is reduced first. That is, whenever possible, arguments are substituted into the body of an abstraction before the arguments are reduced. If a term has a beta-normal form, normal order reduction will always reach that normal form. Applicative order The leftmost innermost redex is reduced first. As a consequence, a function's arguments are always reduced before they are substituted into the function. Unlike normal order reduction, applicative order reduction may fail to find the beta-normal form of an expression, even if such a normal form exists. For example, the term is reduced to itself by applicative order, while normal order reduces it to its beta-normal form . Full β-reductions Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy—with regard to reducibility, "all bets are off". Weak reduction strategies do not reduce under lambda abstractions: Call by value Like applicative order, but no reductions are performed inside abstractions. This is similar to the evaluation order of strict languages like C: the arguments to a function are evaluated before calling the function, and function bodies are not even partially evaluated until the arguments are substituted in. 
Call by name Like normal order, but no reductions are performed inside abstractions. For example, is in normal form according to this strategy, although it contains the redex . Strategies with sharing reduce computations that are "the same" in parallel: Optimal reduction As normal order, but computations that have the same label are reduced simultaneously. Call by need As call by name (hence weak), but function applications that would duplicate terms instead name the argument. The argument may be evaluated "when needed," at which point the name binding is updated with the reduced value. This can save time compared to normal order evaluation. Computability There is no algorithm that takes as input any two lambda expressions and outputs or depending on whether one expression reduces to the other. More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven. As usual for such a proof, computable means computable by any model of computation that is Turing complete. In fact computability can itself be defined via the lambda calculus: a function F: N → N of natural numbers is a computable function if and only if there exists a lambda expression f such that for every pair of x, y in N, F(x)=y if and only if f  =β ,  where and are the Church numerals corresponding to x and y, respectively and =β meaning equivalence with β-reduction. See the Church–Turing thesis for other approaches to defining computability and their equivalence. Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has a normal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression that closely follows the proof of Gödel's first incompleteness theorem. If is applied to its own Gödel number, a contradiction results. Complexity The notion of computational complexity for the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented. To be precise, one must somehow find the location of all of the occurrences of the bound variable in the expression , implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations of in is O(n) in the length n of . Director strings were an early approach that traded this time cost for a quadratic space usage. More generally this has led to the study of systems that use explicit substitution. In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is a reasonable time cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps. This was a long-standing open problem, due to size explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be. An unreasonable model does not necessarily mean inefficient. 
Optimal reduction reduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction. It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost. In addition the BOHM prototype implementation of optimal reduction outperformed both Caml Light and Haskell on pure lambda terms. Lambda calculus and programming languages As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation", sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application. Anonymous functions For example, in Python the "square" function can be expressed as a lambda expression as follows: (lambda x: x**2) The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names, x – just a single argument in this case, and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions. For example, Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers are an insufficient condition for functions to be first class datatypes, because a function is a first class datatype if and only if new instances of the function can be created at runtime. Such runtime creation of functions is supported in Smalltalk, JavaScript, Wolfram Language, and more recently in Scala, Eiffel (as agents), C# (as delegates) and C++11, among others. Parallelism and concurrency The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order, even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency. Semantics The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D, of functions on itself. However, no nontrivial such D can exist, by cardinality constraints because the set of all functions from D to D has greater cardinality than D, unless D is a singleton set. 
In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus. This work also formed the basis for the denotational semantics of programming languages. Variations and extensions These extensions are in the lambda cube: Typed lambda calculus – Lambda calculus with typed variables (and functions) System F – A typed lambda calculus with type-variables Calculus of constructions – A typed lambda calculus with types as first-class values These formal systems are extensions of lambda calculus that are not in the lambda cube: Binary lambda calculus – A version of lambda calculus with binary input/output (I/O), a binary encoding of terms, and a designated universal machine. Lambda-mu calculus – An extension of the lambda calculus for treating classical logic These formal systems are variations of lambda calculus: Kappa calculus – A first-order analogue of lambda calculus These formal systems are related to lambda calculus: Combinatory logic – A notation for mathematical logic without variables SKI combinator calculus – A computational system based on the S, K and I''' combinators, equivalent to lambda calculus, but reducible without variable substitutions See also Applicative computing systems – Treatment of objects in the style of the lambda calculus Cartesian closed category – A setting for lambda calculus in category theory Categorical abstract machine – A model of computation applicable to lambda calculus Clojure, programming language Curry–Howard isomorphism – The formal correspondence between programs and proofs De Bruijn index – notation disambiguating alpha conversions De Bruijn notation – notation using postfix modification functions Domain theory – Study of certain posets giving denotational semantics for lambda calculus Evaluation strategy – Rules for the evaluation of expressions in programming languages Explicit substitution – The theory of substitution, as used in β-reduction Harrop formula – A kind of constructive logical formula such that proofs are lambda terms Interaction nets Kleene–Rosser paradox – A demonstration that some form of lambda calculus is inconsistent Knights of the Lambda Calculus – A semi-fictional organization of LISP and Scheme hackers Krivine machine – An abstract machine to interpret call-by-name in lambda calculus Lambda calculus definition – Formal definition of the lambda calculus. Let expression – An expression closely related to an abstraction. Minimalism (computing) Rewriting – Transformation of formulæ in formal systems SECD machine – A virtual machine designed for the lambda calculus Scott–Curry theorem – A theorem about sets of lambda terms To Mock a Mockingbird – An introduction to combinatory logic Universal Turing machine – A formal computing machine equivalent to lambda calculus Unlambda – A functional esoteric programming language based on combinatory logic Further reading Abelson, Harold & Gerald Jay Sussman. Structure and Interpretation of Computer Programs. The MIT Press. . Barendregt, Hendrik Pieter Introduction to Lambda Calculus. Barendregt, Hendrik Pieter, The Impact of the Lambda Calculus in Logic and Computer Science. The Bulletin of Symbolic Logic, Volume 3, Number 2, June 1997. Barendregt, Hendrik Pieter, The Type Free Lambda Calculus pp1091–1132 of Handbook of Mathematical Logic, North-Holland (1977) Cardone, Felice and Hindley, J. Roger, 2006. History of Lambda-calculus and Combinatory Logic . 
In Gabbay and Woods (eds.), Handbook of the History of Logic, vol. 5. Elsevier. Church, Alonzo, An unsolvable problem of elementary number theory, American Journal of Mathematics, 58 (1936), pp. 345–363. This paper contains the proof that the equivalence of lambda expressions is in general not decidable. () Kleene, Stephen, A theory of positive integers in formal logic, American Journal of Mathematics, 57 (1935), pp. 153–173 and 219–244. Contains the lambda calculus definitions of several familiar functions. Landin, Peter, A Correspondence Between ALGOL 60 and Church's Lambda-Notation, Communications of the ACM, vol. 8, no. 2 (1965), pages 89–101. Available from the ACM site. A classic paper highlighting the importance of lambda calculus as a basis for programming languages. Larson, Jim, An Introduction to Lambda Calculus and Scheme. A gentle introduction for programmers. Schalk, A. and Simmons, H. (2005) An introduction to λ-calculi and arithmetic with a decent selection of exercises. Notes for a course in the Mathematical Logic MSc at Manchester University. A paper giving a formal underpinning to the idea of 'meaning-is-use' which, even if based on proofs, it is different from proof-theoretic semantics as in the Dummett–Prawitz tradition since it takes reduction as the rules giving meaning. Hankin, Chris, An Introduction to Lambda Calculi for Computer Scientists, Monographs/textbooks for graduate students Sørensen, Morten Heine and Urzyczyn, Paweł (2006), Lectures on the Curry–Howard isomorphism, Elsevier, is a recent monograph that covers the main topics of lambda calculus from the type-free variety, to most typed lambda calculi, including more recent developments like pure type systems and the lambda cube. It does not cover subtyping extensions. covers lambda calculi from a practical type system perspective; some topics like dependent types are only mentioned, but subtyping is an important topic. Documents A Short Introduction to the Lambda Calculus-(PDF) by Achim Jung A timeline of lambda calculus-(PDF) by Dana Scott A Tutorial Introduction to the Lambda Calculus-(PDF) by Raúl Rojas Lecture Notes on the Lambda Calculus-(PDF) by Peter Selinger Graphic lambda calculus by Marius Buliga Lambda Calculus as a Workflow Model by Peter Kelly, Paul Coddington, and Andrew Wendelborn; mentions graph reduction as a common means of evaluating lambda expressions and discusses the applicability of lambda calculus for distributed computing (due to the Church–Rosser property, which enables parallel graph reduction for lambda expressions). Notes References Some parts of this article are based on material from FOLDOC, used with permission. External links Graham Hutton, Lambda Calculus, a short (12 minutes) Computerphile video on the Lambda Calculus Helmut Brandl, Step by Step Introduction to Lambda Calculus David C. Keenan, To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction L. Allison, Some executable λ-calculus examples Georg P. Loczewski, The Lambda Calculus and A++ Bret Victor, Alligator Eggs: A Puzzle Game Based on Lambda Calculus Lambda Calculus on Safalra's Website LCI Lambda Interpreter a simple yet powerful pure calculus interpreter Lambda Calculus links on Lambda-the-Ultimate Mike Thyer, Lambda Animator, a graphical Java applet demonstrating alternative reduction strategies. 
Implementing the Lambda calculus using C++ Templates Shane Steinert-Threlkeld, "Lambda Calculi", Internet Encyclopedia of Philosophy Anton Salikhmetov, Macro Lambda Calculus 1936 in computing Computability theory Formal methods Models of computation Theoretical computer science Programming language comparisons Articles with example Lisp (programming language) code Articles with example Python (programming language) code
Lambda calculus
[ "Mathematics", "Technology", "Engineering" ]
10,407
[ "Theoretical computer science", "Applied mathematics", "Mathematical logic", "Computing comparisons", "Software engineering", "Computability theory", "Programming language comparisons", "Formal methods" ]
18,285
https://en.wikipedia.org/wiki/Lagrange%20point
In celestial mechanics, the Lagrange points (; also Lagrangian points or libration points) are points of equilibrium for small-mass objects under the gravitational influence of two massive orbiting bodies. Mathematically, this involves the solution of the restricted three-body problem. Normally, the two massive bodies exert an unbalanced gravitational force at a point, altering the orbit of whatever is at that point. At the Lagrange points, the gravitational forces of the two large bodies and the centrifugal force balance each other. This can make Lagrange points an excellent location for satellites, as orbit corrections, and hence fuel requirements, needed to maintain the desired orbit are kept at a minimum. For any combination of two orbital bodies, there are five Lagrange points, L1 to L5, all in the orbital plane of the two large bodies. There are five Lagrange points for the Sun–Earth system, and five different Lagrange points for the Earth–Moon system. L1, L2, and L3 are on the line through the centers of the two large bodies, while L4 and L5 each act as the third vertex of an equilateral triangle formed with the centers of the two large bodies. When the mass ratio of the two bodies is large enough, the L4 and L5 points are stable points, meaning that objects can orbit them and that they have a tendency to pull objects into them. Several planets have trojan asteroids near their L4 and L5 points with respect to the Sun; Jupiter has more than one million of these trojans. Some Lagrange points are being used for space exploration. Two important Lagrange points in the Sun-Earth system are L1, between the Sun and Earth, and L2, on the same line at the opposite side of the Earth; both are well outside the Moon's orbit. Currently, an artificial satellite called the Deep Space Climate Observatory (DSCOVR) is located at L1 to study solar wind coming toward Earth from the Sun and to monitor Earth's climate, by taking images and sending them back. The James Webb Space Telescope, a powerful infrared space observatory, is located at L2. This allows the satellite's large sunshield to protect the telescope from the light and heat of the Sun, Earth and Moon. The L1 and L2 Lagrange points are located about from Earth. The European Space Agency's earlier Gaia telescope, and its newly launched Euclid, also occupy orbits around L2. Gaia keeps a tighter Lissajous orbit around L2, while Euclid follows a halo orbit similar to JWST. Each of the space observatories benefit from being far enough from Earth's shadow to utilize solar panels for power, from not needing much power or propellant for station-keeping, from not being subjected to the Earth's magnetospheric effects, and from having direct line-of-sight to Earth for data transfer. History The three collinear Lagrange points (L1, L2, L3) were discovered by the Swiss mathematician Leonhard Euler around 1750, a decade before the Italian-born Joseph-Louis Lagrange discovered the remaining two. In 1772, Lagrange published an "Essay on the three-body problem". In the first chapter he considered the general three-body problem. From that, in the second chapter, he demonstrated two special constant-pattern solutions, the collinear and the equilateral, for any three masses, with circular orbits. Lagrange points The five Lagrange points are labelled and defined as follows: point The point lies on the line defined between the two large masses M1 and M2. It is the point where the gravitational attraction of M2 and that of M1 combine to produce an equilibrium. 
An object that orbits the Sun more closely than Earth would typically have a shorter orbital period than Earth, but that ignores the effect of Earth's gravitational pull. If the object is directly between Earth and the Sun, then Earth's gravity counteracts some of the Sun's pull on the object, increasing the object's orbital period. The closer to Earth the object is, the greater this effect is. At the point, the object's orbital period becomes exactly equal to Earth's orbital period. is about 1.5 million kilometers, or 0.01 au, from Earth in the direction of the Sun. point The point lies on the line through the two large masses beyond the smaller of the two. Here, the combined gravitational forces of the two large masses balance the centrifugal force on a body at . On the opposite side of Earth from the Sun, the orbital period of an object would normally be greater than Earth's. The extra pull of Earth's gravity decreases the object's orbital period, and at the point, that orbital period becomes equal to Earth's. Like L1, L2 is about 1.5 million kilometers or 0.01 au from Earth (away from the sun). An example of a spacecraft designed to operate near the Earth–Sun L2 is the James Webb Space Telescope. Earlier examples include the Wilkinson Microwave Anisotropy Probe and its successor, Planck. point The point lies on the line defined by the two large masses, beyond the larger of the two. Within the Sun–Earth system, the point exists on the opposite side of the Sun, a little outside Earth's orbit and slightly farther from the center of the Sun than Earth is. This placement occurs because the Sun is also affected by Earth's gravity and so orbits around the two bodies' barycenter, which is well inside the body of the Sun. An object at Earth's distance from the Sun would have an orbital period of one year if only the Sun's gravity is considered. But an object on the opposite side of the Sun from Earth and directly in line with both "feels" Earth's gravity adding slightly to the Sun's and therefore must orbit a little farther from the barycenter of Earth and Sun in order to have the same 1-year period. It is at the point that the combined pull of Earth and Sun causes the object to orbit with the same period as Earth, in effect orbiting an Earth+Sun mass with the Earth-Sun barycenter at one focus of its orbit. and points The and points lie at the third vertices of the two equilateral triangles in the plane of orbit whose common base is the line between the centers of the two masses, such that the point lies 60° ahead of () or behind () the smaller mass with regard to its orbit around the larger mass. Stability The triangular points ( and ) are stable equilibria, provided that the ratio of is greater than 24.96. This is the case for the Sun–Earth system, the Sun–Jupiter system, and, by a smaller margin, the Earth–Moon system. When a body at these points is perturbed, it moves away from the point, but the factor opposite of that which is increased or decreased by the perturbation (either gravity or angular momentum-induced speed) will also increase or decrease, bending the object's path into a stable, kidney bean-shaped orbit around the point (as seen in the corotating frame of reference). The points , , and are positions of unstable equilibrium. Any object orbiting at , , or will tend to fall out of orbit; it is therefore rare to find natural objects there, and spacecraft inhabiting these areas must employ a small but critical amount of station keeping in order to maintain their position. 
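The 24.96 threshold quoted above follows from the classical stability condition of the circular restricted three-body problem, 27 M1 M2 < (M1 + M2)². A small Python check (the mass values are rounded reference figures and the function name is ours):

import math

# critical mass fraction μ = M2/(M1 + M2): solve 27·μ(1 − μ) = 1
mu_crit = (1.0 - math.sqrt(1.0 - 4.0 / 27.0)) / 2.0
ratio_crit = (1.0 - mu_crit) / mu_crit
print(f"critical M1/M2 ratio ≈ {ratio_crit:.2f}")    # ≈ 24.96

def l4_l5_stable(m1, m2):
    return m1 / m2 > ratio_crit

assert l4_l5_stable(1.989e30, 1.898e27)    # Sun–Jupiter, ratio ≈ 1048
assert l4_l5_stable(5.972e24, 7.342e22)    # Earth–Moon, ratio ≈ 81.3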
Natural objects at Lagrange points Due to the natural stability of and , it is common for natural objects to be found orbiting in those Lagrange points of planetary systems. Objects that inhabit those points are generically referred to as 'trojans' or 'trojan asteroids'. The name derives from the names that were given to asteroids discovered orbiting at the Sun–Jupiter and points, which were taken from mythological characters appearing in Homer's Iliad, an epic poem set during the Trojan War. Asteroids at the point, ahead of Jupiter, are named after Greek characters in the Iliad and referred to as the "Greek camp". Those at the point are named after Trojan characters and referred to as the "Trojan camp". Both camps are considered to be types of trojan bodies. As the Sun and Jupiter are the two most massive objects in the Solar System, there are more known Sun–Jupiter trojans than for any other pair of bodies. However, smaller numbers of objects are known at the Lagrange points of other orbital systems: The Sun–Earth and points contain interplanetary dust and at least two asteroids, and . The Earth–Moon and points contain concentrations of interplanetary dust, known as Kordylewski clouds. Stability at these specific points is greatly complicated by solar gravitational influence. The Sun–Neptune and points contain several dozen known objects, the Neptune trojans. Mars has four accepted Mars trojans: 5261 Eureka, , , and . Saturn's moon Tethys has two smaller moons of Saturn in its and points, Telesto and Calypso. Another Saturn moon, Dione also has two Lagrange co-orbitals, Helene at its point and Polydeuces at . The moons wander azimuthally about the Lagrange points, with Polydeuces describing the largest deviations, moving up to 32° away from the Saturn–Dione point. One version of the giant impact hypothesis postulates that an object named Theia formed at the Sun–Earth or point and crashed into Earth after its orbit destabilized, forming the Moon. In binary stars, the Roche lobe has its apex located at ; if one of the stars expands past its Roche lobe, then it will lose matter to its companion star, known as Roche lobe overflow. Objects which are on horseshoe orbits are sometimes erroneously described as trojans, but do not occupy Lagrange points. Known objects on horseshoe orbits include 3753 Cruithne with Earth, and Saturn's moons Epimetheus and Janus. Physical and mathematical details Lagrange points are the constant-pattern solutions of the restricted three-body problem. For example, given two massive bodies in orbits around their common barycenter, there are five positions in space where a third body, of comparatively negligible mass, could be placed so as to maintain its position relative to the two massive bodies. This occurs because the combined gravitational forces of the two massive bodies provide the exact centripetal force required to maintain the circular motion that matches their orbital motion. Alternatively, when seen in a rotating reference frame that matches the angular velocity of the two co-orbiting bodies, at the Lagrange points the combined gravitational fields of two massive bodies balance the centrifugal pseudo-force, allowing the smaller third body to remain stationary (in this frame) with respect to the first two. 
The location of L1 is the solution to the following equation, gravitation providing the centripetal force: where r is the distance of the L1 point from the smaller object, R is the distance between the two main objects, and M1 and M2 are the masses of the large and small object, respectively. The quantity in parentheses on the right is the distance of L1 from the center of mass. The solution for r is the only real root of the following quintic function where is the mass fraction of M2 and is the normalised distance. If the mass of the smaller object (M2) is much smaller than the mass of the larger object (M1) then and are at approximately equal distances r from the smaller object, equal to the radius of the Hill sphere, given by: We may also write this as: Since the tidal effect of a body is proportional to its mass divided by the distance cubed, this means that the tidal effect of the smaller body at the L or at the L point is about three times of that body. We may also write: where ρ and ρ are the average densities of the two bodies and d and d are their diameters. The ratio of diameter to distance gives the angle subtended by the body, showing that viewed from these two Lagrange points, the apparent sizes of the two bodies will be similar, especially if the density of the smaller one is about thrice that of the larger, as in the case of the earth and the sun. This distance can be described as being such that the orbital period, corresponding to a circular orbit with this distance as radius around M2 in the absence of M1, is that of M2 around M1, divided by ≈ 1.73: The location of L2 is the solution to the following equation, gravitation providing the centripetal force: with parameters defined as for the L1 case. The corresponding quintic equation is Again, if the mass of the smaller object (M2) is much smaller than the mass of the larger object (M1) then L2 is at approximately the radius of the Hill sphere, given by: The same remarks about tidal influence and apparent size apply as for the L point. For example, the angular radius of the sun as viewed from L2 is arcsin() ≈ 0.264°, whereas that of the earth is arcsin() ≈ 0.242°. Looking toward the sun from L2 one sees an annular eclipse. It is necessary for a spacecraft, like Gaia, to follow a Lissajous orbit or a halo orbit around L2 in order for its solar panels to get full sun. L3 The location of L3 is the solution to the following equation, gravitation providing the centripetal force: with parameters M1, M2, and R defined as for the L1 and L2 cases, and r being defined such that the distance of L3 from the centre of the larger object is R − r. If the mass of the smaller object (M2) is much smaller than the mass of the larger object (M1), then: Thus the distance from L3 to the larger object is less than the separation of the two objects (although the distance between L3 and the barycentre is greater than the distance between the smaller object and the barycentre). and The reason these points are in balance is that at and the distances to the two masses are equal. Accordingly, the gravitational forces from the two massive bodies are in the same ratio as the masses of the two bodies, and so the resultant force acts through the barycenter of the system. Additionally, the geometry of the triangle ensures that the resultant acceleration is to the distance from the barycenter in the same ratio as for the two massive bodies. 
The barycenter being both the center of mass and center of rotation of the three-body system, this resultant force is exactly that required to keep the smaller body at the Lagrange point in orbital equilibrium with the other two larger bodies of the system (indeed, the third body needs to have negligible mass). The general triangular configuration was discovered by Lagrange working on the three-body problem. Radial acceleration The radial acceleration a of an object in orbit at a point along the line passing through both bodies is given by: where r is the distance from the large body M1, R is the distance between the two main objects, and sgn(x) is the sign function of x. The terms in this function represent respectively: force from M1; force from M2; and centripetal force. The points L3, L1, L2 occur where the acceleration is zero — see chart at right. Positive acceleration is acceleration towards the right of the chart and negative acceleration is towards the left; that is why acceleration has opposite signs on opposite sides of the gravity wells. Stability Although the , , and points are nominally unstable, there are quasi-stable periodic orbits called halo orbits around these points in a three-body system. A full n-body dynamical system such as the Solar System does not contain these periodic orbits, but does contain quasi-periodic (i.e. bounded but not precisely repeating) orbits following Lissajous-curve trajectories. These quasi-periodic Lissajous orbits are what most of Lagrangian-point space missions have used until now. Although they are not perfectly stable, a modest effort of station keeping keeps a spacecraft in a desired Lissajous orbit for a long time. For Sun–Earth- missions, it is preferable for the spacecraft to be in a large-amplitude () Lissajous orbit around than to stay at , because the line between Sun and Earth has increased solar interference on Earth–spacecraft communications. Similarly, a large-amplitude Lissajous orbit around keeps a probe out of Earth's shadow and therefore ensures continuous illumination of its solar panels. The and points are stable provided that the mass of the primary body (e.g. the Earth) is at least 25 times the mass of the secondary body (e.g. the Moon), The Earth is over 81 times the mass of the Moon (the Moon is 1.23% of the mass of the Earth). Although the and points are found at the top of a "hill", as in the effective potential contour plot above, they are nonetheless stable. The reason for the stability is a second-order effect: as a body moves away from the exact Lagrange position, Coriolis acceleration (which depends on the velocity of an orbiting object and cannot be modeled as a contour map) curves the trajectory into a path around (rather than away from) the point. Because the source of stability is the Coriolis force, the resulting orbits can be stable, but generally are not planar, but "three-dimensional": they lie on a warped surface intersecting the ecliptic plane. The kidney-shaped orbits typically shown nested around and are the projections of the orbits on a plane (e.g. the ecliptic) and not the full 3-D orbits. Solar System values This table lists sample values of L1, L2, and L3 within the Solar System. Calculations assume the two bodies orbit in a perfect circle with separation equal to the semimajor axis and no other bodies are nearby. Distances are measured from the larger body's center of mass (but see barycenter especially in the case of Moon and Jupiter) with L3 showing a negative direction. 
The percentage columns show the distance from the orbit compared to the semimajor axis. E.g. for the Moon, L1 is from Earth's center, which is 84.9% of the Earth–Moon distance or 15.1% "in front of" (Earthwards from) the Moon; L2 is located from Earth's center, which is 116.8% of the Earth–Moon distance or 16.8% beyond the Moon; and L3 is located from Earth's center, which is 99.3% of the Earth–Moon distance or 0.7084% inside (Earthward) of the Moon's 'negative' position. Spaceflight applications Sun–Earth Sun–Earth is suited for making observations of the Sun–Earth system. Objects here are never shadowed by Earth or the Moon and, if observing Earth, always view the sunlit hemisphere. The first mission of this type was the 1978 International Sun Earth Explorer 3 (ISEE-3) mission used as an interplanetary early warning storm monitor for solar disturbances. Since June 2015, DSCOVR has orbited the L1 point. Conversely, it is also useful for space-based solar telescopes, because it provides an uninterrupted view of the Sun and any space weather (including the solar wind and coronal mass ejections) reaches L1 up to an hour before Earth. Solar and heliospheric missions currently located around L1 include the Solar and Heliospheric Observatory, Wind, Aditya-L1 Mission and the Advanced Composition Explorer. Planned missions include the Interstellar Mapping and Acceleration Probe(IMAP) and the NEO Surveyor. Sun–Earth is a good spot for space-based observatories. Because an object around will maintain the same relative position with respect to the Sun and Earth, shielding and calibration are much simpler. It is, however, slightly beyond the reach of Earth's umbra, so solar radiation is not completely blocked at L2. Spacecraft generally orbit around L2, avoiding partial eclipses of the Sun to maintain a constant temperature. From locations near L2, the Sun, Earth and Moon are relatively close together in the sky; this means that a large sunshade with the telescope on the dark-side can allow the telescope to cool passively to around 50 K – this is especially helpful for infrared astronomy and observations of the cosmic microwave background. The James Webb Space Telescope was positioned in a halo orbit about L2 on January 24, 2022. Sun–Earth and are saddle points and exponentially unstable with time constant of roughly 23 days. Satellites at these points will wander off in a few months unless course corrections are made. Sun–Earth was a popular place to put a "Counter-Earth" in pulp science fiction and comic books, despite the fact that the existence of a planetary body in this location had been understood as an impossibility once orbital mechanics and the perturbations of planets upon each other's orbits came to be understood, long before the Space Age; the influence of an Earth-sized body on other planets would not have gone undetected, nor would the fact that the foci of Earth's orbital ellipse would not have been in their expected places, due to the mass of the counter-Earth. The Sun–Earth , however, is a weak saddle point and exponentially unstable with time constant of roughly 150 years. Moreover, it could not contain a natural object, large or small, for very long because the gravitational forces of the other planets are stronger than that of Earth (for example, Venus comes within 0.3 AU of this every 20 months). 
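The Moon figures quoted above can be reproduced numerically from the force-balance condition of the earlier section, written about the Earth–Moon barycentre in units where the Earth–Moon distance and G(M1 + M2) are both 1. A hedged Python sketch for the circular restricted problem with rounded masses (bisect is a simple root finder defined here, not a library routine):

def bisect(f, lo, hi, steps=200):
    # assumes f changes sign exactly once on [lo, hi]
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

mu = 7.342e22 / (5.972e24 + 7.342e22)   # Moon's mass fraction (approximate)

# x = r/R, the distance of the point from the Moon; each f is
# (pull of Earth) ± (pull of Moon) − (centrifugal term about the barycentre)
f_l1 = lambda x: (1 - mu) / (1 - x) ** 2 - mu / x ** 2 - (1 - mu - x)   # between the bodies
f_l2 = lambda x: (1 - mu) / (1 + x) ** 2 + mu / x ** 2 - (1 - mu + x)   # beyond the Moon

x1 = bisect(f_l1, 1e-6, 0.5)
x2 = bisect(f_l2, 1e-6, 0.5)
print(f"L1 at {100 * (1 - x1):.1f}% of the Earth–Moon distance")   # ≈ 84.9%
print(f"L2 at {100 * (1 + x2):.1f}%")                              # ≈ 116.8%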
A spacecraft orbiting near Sun–Earth L5 would be able to closely monitor the evolution of active sunspot regions before they rotate into a geoeffective position, so that a seven-day early warning could be issued by the NOAA Space Weather Prediction Center. Moreover, a satellite near Sun–Earth L5 would provide very important observations not only for Earth forecasts, but also for deep space support (Mars predictions and for crewed missions to near-Earth asteroids). In 2010, spacecraft transfer trajectories to Sun–Earth L5 were studied and several designs were considered. Earth–Moon Earth–Moon L1 allows comparatively easy access to Lunar and Earth orbits with minimal change in velocity, and this is an advantage for positioning a habitable space station intended to help transport cargo and personnel to the Moon and back. The SMART-1 mission passed through the L1 Lagrangian point on 11 November 2004 and passed into the area dominated by the Moon's gravitational influence. Earth–Moon L2 has been used for a communications satellite covering the Moon's far side, for example, Queqiao, launched in 2018, and would be "an ideal location" for a propellant depot as part of the proposed depot-based space transportation architecture. Earth–Moon L4 and L5 are the locations for the Kordylewski dust clouds. The L5 Society's name comes from the L4 and L5 Lagrangian points in the Earth–Moon system proposed as locations for their huge rotating space habitats. Both positions are also proposed for communication satellites covering the Moon, much as communication satellites in geosynchronous orbit cover the Earth. Sun–Venus Scientists at the B612 Foundation were planning to use Venus's L3 point to position their planned Sentinel telescope, which aimed to look back towards Earth's orbit and compile a catalogue of near-Earth asteroids. Sun–Mars In 2017, the idea of positioning a magnetic dipole shield at the Sun–Mars L1 point for use as an artificial magnetosphere for Mars was discussed at a NASA conference. The idea is that this would protect the planet's atmosphere from the Sun's radiation and solar winds. See also Co-orbital configuration Euler's three-body problem Gegenschein Interplanetary Transport Network Klemperer rosette L5 Society Lagrange point colonization Lagrangian mechanics List of objects at Lagrange points Lunar space elevator Oberth effect Explanatory notes References External links Joseph-Louis, Comte Lagrange, from Œuvres, Tome 6, « Essai sur le Problème des Trois Corps »—Essai (PDF); source Tome 6 (Viewer) "Essay on the Three-Body Problem" by J.-L. Lagrange, translated from the above, in merlyn.demon.co.uk. Considerationes de motu corporum coelestium—Leonhard Euler—transcription and translation at merlyn.demon.co.uk. ZIP file—J R Stockton - Includes translations of Lagrange's Essai and of two related papers by Euler What are Lagrange points?—European Space Agency page, with good animations Explanation of Lagrange points—Neil J. Cornish A NASA explanation—also attributed to Neil J. Cornish Explanation of Lagrange points—John Baez Locations of Lagrange points, with approximations—David Peter Stern An online calculator to compute the precise positions of the 5 Lagrange points for any 2-body system—Tony Dunn Astronomy Cast—Ep. 76: "Lagrange Points" by Fraser Cain and Pamela L.
Gay The Five Points of Lagrange by Neil deGrasse Tyson Earth, a lone Trojan discovered See the Lagrange Points and Halo Orbits subsection under the section on Geosynchronous Transfer Orbit in NASA: Basics of Space Flight, Chapter 5 Trojans (astronomy) Point
Lagrange point
[ "Physics", "Mathematics" ]
5,361
[ "Lagrangian mechanics", "Classical mechanics", "Dynamical systems" ]
18,339
https://en.wikipedia.org/wiki/Law%20of%20multiple%20proportions
In chemistry, the law of multiple proportions states that in compounds which contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. For instance, the ratio of the hydrogen content in methane (CH4) and ethane (C2H6) per measure of carbon is 4:3. This law is also known as Dalton's Law, named after John Dalton, the chemist who first expressed it. The discovery of this pattern led Dalton to develop the modern theory of atoms, as it suggested that the elements combine with each other in multiples of a basic quantity. Along with the law of definite proportions, the law of multiple proportions forms the basis of stoichiometry. The law of multiple proportions often does not apply when comparing very large molecules. For example, if one tried to demonstrate it using the hydrocarbons decane (C10H22) and undecane (C11H24), one would find that 100 grams of carbon could react with 18.46 grams of hydrogen to produce decane or with 18.31 grams of hydrogen to produce undecane, for a ratio of hydrogen masses of 121:120, which is hardly a ratio of "small" whole numbers. History In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, and it wasn't his only mistake. But in other cases, he got their formulas right. The following examples come from Dalton's own books A New System of Chemical Philosophy (in two volumes, 1808 and 1817): Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. Tin oxides are actually crystals, they don't exist in molecular form. Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron" which is 70.4% iron and 29.6% oxygen. 
Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide (FeO) and iron(III) oxide (Fe2O3). Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". As with tin oxides, iron oxides are crystals. Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there are 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2. The earliest definition of Dalton's observation appears in an 1807 chemistry encyclopedia. The first known writer to refer to this principle as the "doctrine of multiple proportions" was Jöns Jacob Berzelius in 1813. Dalton's atomic theory garnered widespread interest but not universal acceptance shortly after he published it because the law of multiple proportions by itself was not complete proof of the existence of atoms. Over the course of the 19th century, other discoveries in the fields of chemistry and physics would give atomic theory more credence, such that by the end of the 19th century it had found universal acceptance. Footnotes References Bibliography Physical chemistry Stoichiometry
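As a numerical footnote to the tin and iron oxide examples above, the conversion from mass percentages to "grams of oxygen per 100 g of metal" can be sketched in a few lines of Python; the percentages are the ones quoted in those examples.

```python
def oxygen_per_100g_metal(metal_pct, oxygen_pct):
    """Grams of oxygen combined with 100 g of metal, given mass percentages."""
    return 100.0 * oxygen_pct / metal_pct

# Dalton's tin oxides: "protoxide" (SnO) and "deutoxide" (SnO2).
sn_grey  = oxygen_per_100g_metal(88.1, 11.9)   # ~13.5 g O per 100 g Sn
sn_white = oxygen_per_100g_metal(78.7, 21.3)   # ~27.1 g O per 100 g Sn
print(sn_white / sn_grey)                      # ~2.0  ->  ratio 1:2

# Iron oxides: FeO and Fe2O3.
fe_black = oxygen_per_100g_metal(78.1, 21.9)   # ~28 g O per 100 g Fe
fe_red   = oxygen_per_100g_metal(70.4, 29.6)   # ~42 g O per 100 g Fe
print(fe_red / fe_black)                       # ~1.5  ->  ratio 2:3
```

The two printed ratios come out very close to 2 and 1.5, i.e. the small whole-number ratios 1:2 and 2:3 cited in the examples.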
Law of multiple proportions
[ "Physics", "Chemistry" ]
1,288
[ "Stoichiometry", "Chemical reaction engineering", "Applied and interdisciplinary physics", "nan", "Physical chemistry" ]
18,404
https://en.wikipedia.org/wiki/Lorentz%20transformation
In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz. The most common form of the transformation, parametrized by the real constant v representing a velocity confined to the x-direction, is expressed as

t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad x' = \gamma \left( x - v t \right), \qquad y' = y, \qquad z' = z

where (t, x, y, z) and (t', x', y', z') are the coordinates of an event in two frames with the spatial origins coinciding at t = t' = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and \gamma = 1/\sqrt{1 - v^2/c^2} is the Lorentz factor. When speed v is much smaller than c, the Lorentz factor is negligibly different from 1, but as v approaches c, \gamma grows without bound. The value of v must be smaller than c for the transformation to make sense. Expressing the speed as \beta = v/c, an equivalent form of the transformation is

c t' = \gamma \left( c t - \beta x \right), \qquad x' = \gamma \left( x - \beta c t \right), \qquad y' = y, \qquad z' = z

Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity, etc.). The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity. In each reference frame, an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime. The transformations connect the space and time coordinates of an event as measured by an observer in each frame. They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events, but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity. Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The transformations later became a cornerstone for special relativity. The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost. In Minkowski space—the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed. They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.
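A short numerical sketch of these formulas (the speed 0.8c and the sample events are arbitrary illustrative choices) shows the invariance of the speed of light and of the spacetime interval described above.

```python
import math

c = 299_792_458.0                    # speed of light, m/s

def boost(t, x, v):
    """Standard-configuration boost along x: returns (t', x')."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

v = 0.8 * c                          # illustrative relative speed; gamma = 5/3

# A light signal x = c*t still satisfies x' = c*t' in the boosted frame.
t, x = 1.0, c * 1.0
t2, x2 = boost(t, x, v)
print(math.isclose(x2, c * t2))                              # True

# The spacetime interval (c*t)^2 - x^2 is unchanged for any event.
t, x = 2.0, 3.0e8
t2, x2 = boost(t, x, v)
print(math.isclose((c * t)**2 - x**2, (c * t2)**2 - x2**2))  # True
```

With v = 0.8c the Lorentz factor is 5/3, so the same numbers also illustrate the magnitude of the effects discussed later in the article: the interval between ticks of a moving clock is measured to be 5/3 as long, and a moving rod is measured to be 3/5 as long.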
History Many physicists—including Woldemar Voigt, George FitzGerald, Joseph Larmor, and Hendrik Lorentz himself—had been discussing the physics implied by these equations since 1887. Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether. FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905. Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c, the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. Larmor is credited to have been the first to understand the crucial time dilation property inherent in his equations. In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and he named it after Lorentz. Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanistic aether as unnecessary. Derivation of the group of Lorentz transformations An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates to specify position in space in that frame. Subscripts label individual events. From Einstein's second postulate of relativity (invariance of c) it follows that: in all inertial frames for events connected by light signals. The quantity on the left is called the spacetime interval between events and . The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space. The transformation sought after thus must possess the property that: where are the spacetime coordinates used to define events in one frame, and are the coordinates in another frame. First one observes that () is satisfied if an arbitrary -tuple of numbers are added to events and . Such transformations are called spacetime translations and are not dealt with further here. 
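For reference, the light-signal condition and the invariance requirement described above read, in the usual notation,

(c t_2 - c t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2 = 0

for events connected by light signals, and more generally the sought transformation must satisfy

(c t_2' - c t_1')^2 - (x_2' - x_1')^2 - (y_2' - y_1')^2 - (z_2' - z_1')^2 = (c t_2 - c t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2

for any pair of events.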
Then one observes that a linear solution preserving the origin of the simpler problem solves the general problem too: (a solution satisfying the first formula automatically satisfies the second one as well; see polarization identity). Finding the solution to the simpler problem is just a matter of look-up in the theory of classical groups that preserve bilinear forms of various signature. First equation in () can be written more compactly as: where refers to the bilinear form of signature on exposed by the right hand side formula in (). The alternative notation defined on the right is referred to as the relativistic dot product. Spacetime mathematically viewed as endowed with this bilinear form is known as Minkowski space . The Lorentz transformation is thus an element of the group , the Lorentz group or, for those that prefer the other metric signature, (also called the Lorentz group). One has: which is precisely preservation of the bilinear form () which implies (by linearity of and bilinearity of the form) that () is satisfied. The elements of the Lorentz group are rotations and boosts and mixes thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group. Generalities The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations, each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations. Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts, and the relative velocity between the frames is the parameter of the transformation. The other basic type of Lorentz transformation is rotation in the spatial coordinates only, these like boosts are inertial transformations since there is no relative motion, the frames are simply tilted (and not continuously rotating), and in this case quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation, or Euler angles, etc.). A combination of a rotation and boost is a homogeneous transformation, which transforms the origin back to the origin. The full Lorentz group also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin. Two of these can be singled out; spatial inversion in which the spatial coordinates of all events are reversed in sign and temporal inversion in which the time coordinate for each event gets its sign reversed. Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation, an element of the Poincaré group, which is also called the inhomogeneous Lorentz group. Physical formulation of Lorentz boosts Coordinate transformation A "stationary" observer in frame defines events with coordinates . 
Another frame moves with velocity relative to , and an observer in this "moving" frame defines events using the coordinates . The coordinate axes in each frame are parallel (the and axes are parallel, the and axes are parallel, and the and axes are parallel), remain mutually perpendicular, and relative motion is along the coincident axes. At , the origins of both coordinate systems are the same, . In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration, or synchronized. If an observer in records an event , then an observer in records the same event with coordinates where is the relative velocity between frames in the -direction, is the speed of light, and (lowercase gamma) is the Lorentz factor. Here, is the parameter of the transformation, for a given boost it is a constant number, but can take a continuous range of values. In the setup used here, positive relative velocity is motion along the positive directions of the axes, zero relative velocity is no relative motion, while negative relative velocity is relative motion along the negative directions of the axes. The magnitude of relative velocity cannot equal or exceed , so only subluminal speeds are allowed. The corresponding range of is . The transformations are not defined if is outside these limits. At the speed of light () is infinite, and faster than light () is a complex number, each of which make the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers. As an active transformation, an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the axes, because of the in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the axes, while the event does not change and is simply represented in another coordinate system, a passive transformation. The inverse relations ( in terms of ) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here is the "stationary" frame while is the "moving" frame. According to the principle of relativity, there is no privileged frame of reference, so the transformations from to must take exactly the same form as the transformations from to . The only difference is moves with velocity relative to (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in notes an event , then an observer in notes the same event with coordinates and the value of remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction. Sometimes it is more convenient to use (lowercase beta) instead of , so that which shows much more clearly the symmetry in the transformation. From the allowed ranges of and the definition of , it follows . The use of and is standard throughout the literature. When the boost velocity is in an arbitrary vector direction with the boost vector , then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by, where the Lorentz factor is . The determinant of the transformation matrix is +1 and its trace is . The inverse of the transformation is given by reversing the sign of . 
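For reference, the boost with an arbitrary velocity vector \mathbf{v} described above can be written in the usual vector form, with \gamma = 1/\sqrt{1 - v^2/c^2}:

t' = \gamma\left(t - \frac{\mathbf{r}\cdot\mathbf{v}}{c^2}\right), \qquad \mathbf{r}' = \mathbf{r} + \frac{\gamma - 1}{v^2}\,(\mathbf{r}\cdot\mathbf{v})\,\mathbf{v} - \gamma\,\mathbf{v}\,t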
The quantity is invariant under the transformation. The Lorentz transformations can also be derived in a way that resembles circular rotations in 3d space using the hyperbolic functions. For the boost in the direction, the results are where (lowercase zeta) is a parameter called rapidity (many other symbols are used, including ). Given the strong resemblance to rotations of spatial coordinates in 3d space in the Cartesian xy, yz, and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4d Minkowski space. The parameter is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram. The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking or in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying , which parametrizes the curves according to the identity Conversely the and axes can be constructed for varying coordinates but constant . The definition provides the link between a constant value of rapidity, and the slope of the axis in spacetime. A consequence these two hyperbolic formulae is an identity that matches the Lorentz factor Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between , , and are Taking the inverse hyperbolic tangent gives the rapidity Since , it follows . From the relation between and , positive rapidity is motion along the positive directions of the axes, zero rapidity is no relative motion, while negative rapidity is relative motion along the negative directions of the axes. The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity since this is equivalent to negating the relative velocity. Therefore, The inverse transformations can be similarly visualized by considering the cases when and . So far the Lorentz transformations have been applied to one event. If there are two events, there is a spatial separation and time interval between them. It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, then subtracted to get the Lorentz transformations of the differences; with inverse relations where (uppercase delta) indicates a difference of quantities; e.g., for two values of coordinates, and so on. 
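For reference, the rapidity form of the boost discussed above reads

c t' = c t\,\cosh\zeta - x\,\sinh\zeta, \qquad x' = -\,c t\,\sinh\zeta + x\,\cosh\zeta, \qquad y' = y, \qquad z' = z

with \beta = \tanh\zeta, \gamma = \cosh\zeta and \beta\gamma = \sinh\zeta.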
These transformations on differences rather than spatial points or instants of time are useful for a number of reasons: in calculations and experiments, it is lengths between two points or time intervals that are measured or of interest (e.g., the length of a moving vehicle, or time duration it takes to travel from one place to another), the transformations of velocity can be readily derived by making the difference infinitesimally small and dividing the equations, and the process repeated for the transformation of acceleration, if the coordinate systems are never coincident (i.e., not in standard configuration), and if both observers can agree on an event in and in , then they can use that event as the origin, and the spacetime coordinate differences are the differences between their coordinates and this origin, e.g., , , etc. Physical implications A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves. If in the equation for a pulse of light along the direction is , then in the Lorentz transformations give , and vice versa, for any . For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation: in accordance with the correspondence principle. It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance". Three counterintuitive, but correct, predictions of the transformations are: Relativity of simultaneity Suppose two events occur along the x axis simultaneously () in , but separated by a nonzero displacement . Then in , we find that , so the events are no longer simultaneous according to a moving observer. Time dilation Suppose there is a clock at rest in . If a time interval is measured at the same point in that frame, so that , then the transformations give this interval in by . Conversely, suppose there is a clock at rest in . If an interval is measured at the same point in that frame, so that , then the transformations give this interval in F by . Either way, each observer measures the time interval between ticks of a moving clock to be longer by a factor than the time interval between ticks of his own clock. Length contraction Suppose there is a rod at rest in aligned along the x axis, with length . In , the rod moves with velocity , so its length must be measured by taking two simultaneous () measurements at opposite ends. Under these conditions, the inverse Lorentz transform shows that . In the two measurements are no longer simultaneous, but this does not matter because the rod is at rest in . So each observer measures the distance between the end points of a moving rod to be shorter by a factor than the end points of an identical rod at rest in his own frame. Length contraction affects any geometric quantity related to lengths, so from the perspective of a moving observer, areas and volumes will also appear to shrink along the direction of motion. Vector transformations The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector with a magnitude that cannot equal or exceed , so that . Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. 
With this in mind, split the spatial position vector as measured in , and as measured in , each into components perpendicular (⊥) and parallel ( ‖ ) to , then the transformations are where is the dot product. The Lorentz factor retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition with magnitude is also used by some authors. Introducing a unit vector in the direction of relative motion, the relative velocity is with magnitude and direction , and vector projection and rejection give respectively Accumulating the results gives the full transformations, The projection and rejection also applies to . For the inverse transformations, exchange and to switch observed coordinates, and negate the relative velocity (or simply the unit vector since the magnitude is always positive) to obtain The unit vector has the advantage of simplifying equations for a single boost, allows either or to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing and . It is not convenient for multiple boosts. The vectorial relation between relative velocity and rapidity is and the "rapidity vector" can be defined as each of which serves as a useful abbreviation in some contexts. The magnitude of is the absolute value of the rapidity scalar confined to , which agrees with the range . Transformation of velocities Defining the coordinate velocities and Lorentz factor by taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to The velocities and are the velocity of some massive object. They can also be for a third inertial frame (say F′′), in which case they must be constant. Denote either entity by X. Then X moves with velocity relative to F, or equivalently with velocity relative to F′, in turn F′ moves with velocity relative to F. The inverse transformations can be obtained in a similar way, or as with position coordinates exchange and , and change to . The transformation of velocity is useful in stellar aberration, the Fizeau experiment, and the relativistic Doppler effect. The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential. Transformation of other quantities In general, given four quantities and and their Lorentz-boosted counterparts and , a relation of the form implies the quantities transform under Lorentz transformations similar to the transformation of spacetime coordinates; The decomposition of (and ) into components perpendicular and parallel to is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange and to switch observed quantities, and reverse the direction of relative motion by the substitution ). The quantities collectively make up a four-vector, where is the "timelike component", and the "spacelike component". Examples of and are the following: For a given object (e.g., particle, fluid, field, material), if or correspond to properties specific to the object like its charge density, mass density, spin, etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. 
For example, the energy of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum. In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity , however a boosted observer will perceive a nonzero timelike component and an altered spin. Not all quantities are invariant in the form as shown above, for example orbital angular momentum does not have a timelike quantity, and neither does the electric field nor the magnetic field . The definition of angular momentum is , and in a boosted frame the altered angular momentum is . Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out transforms with another vector quantity related to boosts, see relativistic angular momentum for details. For the case of the and fields, the transformations cannot be obtained as directly using vector algebra. The Lorentz force is the definition of these fields, and in it is while in it is . A method of deriving the EM field transformations in an efficient way which also illustrates the unit of the electromagnetic field uses tensor algebra, given below. Mathematical formulation Throughout, italic non-bold capital letters are 4×4 matrices, while non-italic bold letters are 3×3 matrices. Homogeneous Lorentz group Writing the coordinates in column vectors and the Minkowski metric as a square matrix the spacetime interval takes the form (superscript denotes transpose) and is invariant under a Lorentz transformation where is a square matrix which can depend on parameters. The set of all Lorentz transformations in this article is denoted . This set together with matrix multiplication forms a group, in this context known as the Lorentz group. Also, the above expression is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group. In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups. In this context the operation of composition amounts to matrix multiplication. From the invariance of the spacetime interval it follows and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. Taking the determinant of the equation using the product rule gives immediately Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form, carrying out the block matrix multiplications obtains general conditions on to ensure relativistic invariance. Not much information can be directly extracted from all the conditions, however one of the results is useful; always so it follows that The negative inequality may be unexpected, because multiplies the time coordinate and this has an effect on time symmetry. If the positive equality holds, then is the Lorentz factor. 
The determinant and inequality provide four ways to classify Lorentz Transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol meaning "and") of these classifying sets. where "+" and "−" indicate the determinant sign, while "↑" for ≥ and "↓" for ≤ denote the inequalities. The full Lorentz group splits into the union ("u"-shaped symbol meaning "or") of four disjoint sets A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations and from a particular subgroup, the composite Lorentz transformations and must be in the same subgroup as and . This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets , , , and all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. , , ) do not form subgroups. Proper transformations If a Lorentz covariant 4-vector is measured in one inertial frame with result , and the same measurement made in another inertial frame (with the same orientation and origin) gives result , the two results will be related by where the boost matrix represents the rotation-free Lorentz transformation between the unprimed and primed frames and is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by where is the magnitude of the velocity and is the Lorentz factor. This formula represents a passive transformation, as it describes how the coordinates of the measured quantity changes from the unprimed frame to the primed frame. The active transformation is given by . If a frame is boosted with velocity relative to frame , and another frame is boosted with velocity relative to , the separate boosts are and the composition of the two boosts connects the coordinates in and , Successive transformations act on the left. If and are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute: . This composite transformation happens to be another boost, , where is collinear with and . If and are not collinear but in different directions, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: and are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of or . The and are composite velocities, while and are rotation parameters (e.g. axis-angle variables, Euler angles, etc.). The rotation in block matrix form is simply where is a 3d rotation matrix, which rotates any 3d vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation). It is not simple to connect and (or and ) to the original boost parameters and . In a composition of boosts, the matrix is named the Wigner rotation, and gives rise to the Thomas precession. These articles give the explicit formulae for the composite transformation matrices, including expressions for . 
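A small numerical sketch of these statements (Python with NumPy; the speeds 0.6c and 0.8c are arbitrary illustrative choices):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def boost(beta, axis):
    """Passive boost with speed beta*c along spatial axis 1, 2 or 3."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[axis, axis] = gamma
    L[0, axis] = L[axis, 0] = -gamma * beta
    return L

Bx = boost(0.6, 1)
By = boost(0.8, 2)

# Defining property of a Lorentz transformation: it preserves the metric.
print(np.allclose(Bx.T @ eta @ Bx, eta))   # True

# Boosts along different directions do not commute ...
print(np.allclose(Bx @ By, By @ Bx))       # False
# ... yet each composition still preserves the metric: it is a boost
# combined with a spatial rotation (the Wigner rotation).
C = Bx @ By
print(np.allclose(C.T @ eta @ C, eta))     # True
```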
In this article the axis-angle representation is used for . The rotation is about an axis in the direction of a unit vector , through angle (positive anticlockwise, negative clockwise, according to the right-hand rule). The "axis-angle vector" will serve as a useful abbreviation. Spatial rotations alone are also Lorentz transformations since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. Some other similarities and differences between the boost and rotation matrices include: inverses: (relative motion in the opposite direction), and (rotation in the opposite sense about the same axis) identity transformation for no relative motion/rotation: unit determinant: . This property makes them proper transformations. matrix symmetry: is symmetric (equals transpose), while is nonsymmetric but orthogonal (transpose equals inverse, ). The most general proper Lorentz transformation includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, and . An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed form expressions for the transformation matrices will be given below using group theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes and . The Lie group SO+(3,1) The set of transformations with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO+(3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension). For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about , where the higher order terms not shown are negligible because is small, and is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first then evaluated at , For now, is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained where the limit definition of the exponential has been used (see also characterizations of the exponential function). More generally The axis-angle vector and rapidity vector are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are and , each vectors of matrices with the explicit forms These are all defined in an analogous way to above, although the minus signs in the boost generators are conventional. Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: are the rotation generators which correspond to angular momentum, and are the boost generators which correspond to the motion of the system in spacetime. 
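The exponential-map statement above can likewise be checked numerically. Sign conventions for the generators vary between sources; the matrices below are simply chosen to be consistent with the boost matrix defined in the same snippet.

```python
import numpy as np
from scipy.linalg import expm   # matrix exponential

zeta = 0.5                      # rapidity (arbitrary illustrative value)

# Finite boost along x, written with hyperbolic functions of the rapidity.
B = np.array([[ np.cosh(zeta), -np.sinh(zeta), 0.0, 0.0],
              [-np.sinh(zeta),  np.cosh(zeta), 0.0, 0.0],
              [ 0.0,            0.0,           1.0, 0.0],
              [ 0.0,            0.0,           0.0, 1.0]])

# Boost generators K_x, K_y (derivatives of the boosts at zero rapidity)
# and the rotation generator J_z, in the same real-matrix convention.
Kx = np.zeros((4, 4)); Kx[0, 1] = Kx[1, 0] = -1.0
Ky = np.zeros((4, 4)); Ky[0, 2] = Ky[2, 0] = -1.0
Jz = np.zeros((4, 4)); Jz[1, 2] = -1.0; Jz[2, 1] = 1.0

# Exponentiating the generator reproduces the finite boost ...
print(np.allclose(expm(zeta * Kx), B))          # True
# ... and the commutator of two boost generators is a rotation generator.
print(np.allclose(Kx @ Ky - Ky @ Kx, -Jz))      # True
```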
The derivative of any smooth curve with in the group depending on some group parameter with respect to that group parameter, evaluated at , serves as a definition of a corresponding group generator , and this reflects an infinitesimal transformation away from the identity. The smooth curve can always be taken as an exponential as the exponential will always map smoothly back into the group via for all ; this curve will yield again when differentiated at . Expanding the exponentials in their Taylor series obtains which compactly reproduce the boost and rotation matrices as given in the previous section. It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product is commutative because only linear terms are required (products like and count as higher order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. In particular, because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators and ), see Wigner rotation. If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies. The Lie algebra so(3,1) Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators. In other words, the set of all Lorentz generators together with the operations of ordinary matrix addition and multiplication of a matrix by a number, forms a vector space over the real numbers. The generators form a basis set of V, and the components of the axis-angle and rapidity vectors, , are the coordinates of a Lorentz generator with respect to this basis. Three of the commutation relations of the Lorentz generators are where the bracket is known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, repeat). These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra . In summary, a Lie algebra is defined as a vector space V over a field of numbers, and with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity, alternatization, and the Jacobi identity. Here the operation [ , ] is the commutator which satisfies all of these axioms, the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers. Linking terminology used in mathematics and physics: A group generator is any element of the Lie algebra. A group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis. A basis, then, is a set of generators being a basis of the Lie algebra in the usual vector space sense. The exponential map from the Lie algebra to the Lie group, provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential. 
Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra. Improper transformations Lorentz transformations also include parity inversion which negates all the spatial coordinates only, and time reversal which negates the time coordinate only, because these transformations leave the spacetime interval invariant. Here is the 3d identity matrix. These are both symmetric, they are their own inverses (see involution (mathematics)), and each have determinant −1. This latter property makes them improper transformations. If is a proper orthochronous Lorentz transformation, then is improper antichronous, is improper orthochronous, and is proper antichronous. Inhomogeneous Lorentz group Two other spacetime symmetries have not been accounted for. In order for the spacetime interval to be invariant, it can be shown that it is necessary and sufficient for the coordinate transformation to be of the form where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation. If C = 0, this is a homogeneous Lorentz transformation. Poincaré transformations are not dealt further in this article. Tensor formulation Contravariant vectors Writing the general matrix transformation of coordinates as the matrix equation allows the transformation of other physical quantities that cannot be expressed as four-vectors; e.g., tensors or spinors of any order in 4d spacetime, to be defined. In the corresponding tensor index notation, the above matrix expression is where lower and upper indices label covariant and contravariant components respectively, and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index. The second index corresponds to the column index. The transformation matrix is universal for all four-vectors, not just 4-dimensional spacetime coordinates. If is any four-vector, then in tensor index notation Alternatively, one writes in which the primed indices denote the indices of A in the primed frame. For a general -component object one may write where is the appropriate representation of the Lorentz group, an matrix for every . In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from to . E.g., if is a bispinor, then the indices are called Dirac indices. Covariant vectors There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index; e.g., where is the metric tensor. (The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by where, when viewed as matrices, is the inverse of . As it happens, . This is referred to as raising an index. 
To transform a covariant vector , first raise its index, then transform it according to the same rule as for contravariant -vectors, then finally lower the index; But That is, it is the -component of the inverse Lorentz transformation. One defines (as a matter of notation), and may in this notation write Now for a subtlety. The implied summation on the right hand side of is running over a row index of the matrix representing . Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of acting on the column vector . That is, in pure matrix notation, This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations, simply replace with . Tensors If and are linear operators on vector spaces and , then a linear operator may be defined on the tensor product of and , denoted according to From this it is immediately clear that if and are a four-vectors in , then transforms as The second step uses the bilinearity of the tensor product and the last step defines a 2-tensor on component form, or rather, it just renames the tensor . These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity . It is given by where is defined above. This form can generally be reduced to the form for general -component objects given above with a single matrix () operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor. Transformation of the electromagnetic field Lorentz transformations can also be used to illustrate that the magnetic field and electric field are simply different aspects of the same force — the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment. An observer measures a charge at rest in frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer does not observe any magnetic field. The other observer in frame F′ moves at velocity relative to F and the charge. This observer sees a different electric field because the charge moves at velocity in their rest frame. The motion of the charge corresponds to an electric current, and thus the observer in frame F′ also sees a magnetic field. The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector. The electromagnetic field strength tensor is given by in SI units. In relativity, the Gaussian system of units is often preferred over SI units, even in texts whose main choice of units is SI units, because in it the electric field and the magnetic induction have the same units making the appearance of the electromagnetic field tensor more natural. Consider a Lorentz boost in the -direction. It is given by where the field tensor is displayed side by side for easiest possible reference in the manipulations below. The general transformation law becomes For the magnetic field one obtains For the electric field results Here, is used. 
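For reference, for a boost with speed v along the x-direction the component results referred to above are, in SI units,

E_x' = E_x, \qquad E_y' = \gamma\,(E_y - v B_z), \qquad E_z' = \gamma\,(E_z + v B_y)

B_x' = B_x, \qquad B_y' = \gamma\left(B_y + \frac{v}{c^2} E_z\right), \qquad B_z' = \gamma\left(B_z - \frac{v}{c^2} E_y\right)

showing that the components parallel to the boost are unchanged while the perpendicular components of the electric and magnetic fields mix into one another.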
These results can be summarized by and are independent of the metric signature. For SI units, substitute . refer to this last form as the view as opposed to the geometric view represented by the tensor expression and make a strong point of the ease with which results that are difficult to achieve using the view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime as opposed to two interdependent, but separate, 3-vector fields in space and time. The fields (alone) and (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are equations and that immediately yield . One should note that the primed and unprimed tensors refer to the same event in spacetime. Thus the complete equation with spacetime dependence is Length contraction has an effect on charge density and current density , and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. It turns out they transform exactly like the space-time and energy-momentum four-vectors, or, in the simpler geometric view, Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector. The Maxwell equations are invariant under Lorentz transformations. Spinors Equation hold unmodified for any representation of the Lorentz group, including the bispinor representation. In one simply replaces all occurrences of by the bispinor representation , The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons. Transformation of general fields A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to the rule where is the Wigner's little group and is the representation of . See also Footnotes Notes References Websites Papers . See also: English translation. eqn (55). Books Further reading External links Derivation of the Lorentz transformations. This web page contains a more detailed derivation of the Lorentz transformation with special emphasis on group properties. The Paradox of Special Relativity. This webpage poses a problem, the solution of which is the Lorentz transformation, which is presented graphically in its next page. Relativity – a chapter from an online textbook Warp Special Relativity Simulator. A computer program demonstrating the Lorentz transformations on everyday objects. visualizing the Lorentz transformation. MinutePhysics video on YouTube explaining and visualizing the Lorentz transformation with a mechanical Minkowski diagram Interactive graph on Desmos (graphing) showing Lorentz transformations with a virtual Minkowski diagram Interactive graph on Desmos showing Lorentz transformations with points and hyperbolas Lorentz Frames Animated from John de Pillis. Online Flash animations of Galilean and Lorentz frames, various paradoxes, EM wave phenomena, etc. Special relativity Mathematical physics Spacetime Coordinate systems Hendrik Lorentz
Lorentz transformation
[ "Physics", "Mathematics" ]
9,476
[ "Coordinate systems", "Vector spaces", "Applied mathematics", "Theoretical physics", "Space (mathematics)", "Special relativity", "Theory of relativity", "Spacetime", "Mathematical physics" ]
18,420
https://en.wikipedia.org/wiki/Basis%20%28linear%20algebra%29
In mathematics, a set of vectors in a vector space is called a basis (: bases) if every element of may be written in a unique way as a finite linear combination of elements of . The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to . The elements of a basis are called . Equivalently, a set is a basis if its elements are linearly independent and every element of is a linear combination of elements of . In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Basis vectors find applications in the study of crystal structures and frames of reference. Definition A basis of a vector space over a field (such as the real numbers or the complex numbers ) is a linearly independent subset of that spans . This means that a subset of is a basis if it satisfies the two following conditions: linear independence for every finite subset of , if for some in , then spanning property for every vector in , one can choose in and in such that The scalars are called the coordinates of the vector with respect to the basis , and by the first property they are uniquely determined. A vector space that has a finite basis is called finite-dimensional. In this case, the finite subset can be taken as itself to check for linear independence in the above definition. It is often convenient or even necessary to have an ordering on the basis vectors, for example, when discussing orientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of an ordered basis, which is therefore not simply an unstructured set, but a sequence, an indexed family, or similar; see below. Examples The set of the ordered pairs of real numbers is a vector space under the operations of component-wise addition and scalar multiplication where is any real number. A simple basis of this vector space consists of the two vectors and . These vectors form a basis (called the standard basis) because any vector of may be uniquely written as Any other pair of linearly independent vectors of , such as and , forms also a basis of . More generally, if is a field, the set of -tuples of elements of is a vector space for similarly defined addition and scalar multiplication. Let be the -tuple with all components equal to 0, except the th, which is 1. Then is a basis of which is called the standard basis of A different flavor of example is given by polynomial rings. If is a field, the collection of all polynomials in one indeterminate with coefficients in is an -vector space. One basis for this space is the monomial basis , consisting of all monomials: Any set of polynomials such that there is exactly one polynomial of each degree (such as the Bernstein basis polynomials or Chebyshev polynomials) is also a basis. (Such a set of polynomials is called a polynomial sequence.) But there are also many bases for that are not of this form. 
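The uniqueness of the coordinates in the two-dimensional example above can be checked numerically. A short sketch (Python with NumPy; the particular vectors are example values chosen here, not taken from the text):

    import numpy as np

    b1, b2 = np.array([1.0, 1.0]), np.array([-1.0, 2.0])   # candidate basis of R^2
    B = np.column_stack([b1, b2])

    # Two vectors form a basis of R^2 exactly when the matrix with those columns has full rank.
    assert np.linalg.matrix_rank(B) == 2

    v = np.array([3.0, 0.0])
    coords = np.linalg.solve(B, v)          # unique coordinates of v with respect to (b1, b2)
    assert np.allclose(coords[0] * b1 + coords[1] * b2, v)
    print(coords)                           # [ 2. -1.]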
Properties Many properties of finite bases result from the Steinitz exchange lemma, which states that, for any vector space , given a finite spanning set and a linearly independent set of elements of , one may replace well-chosen elements of by the elements of to get a spanning set containing , having its other elements in , and having the same number of elements as . Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require the axiom of choice or a weaker form of it, such as the ultrafilter lemma. If is a vector space over a field , then: If is a linearly independent subset of a spanning set , then there is a basis such that has a basis (this is the preceding property with being the empty set, and ). All bases of have the same cardinality, which is called the dimension of . This is the dimension theorem. A generating set is a basis of if and only if it is minimal, that is, no proper subset of is also a generating set of . A linearly independent set is a basis if and only if it is maximal, that is, it is not a proper subset of any linearly independent set. If is a vector space of dimension , then: A subset of with elements is a basis if and only if it is linearly independent. A subset of with elements is a basis if and only if it is a spanning set of . Coordinates Let be a vector space of finite dimension over a field , and be a basis of . By definition of a basis, every in may be written, in a unique way, as where the coefficients are scalars (that is, elements of ), which are called the coordinates of over . However, if one talks of the set of the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the same set of coefficients. For example, and have the same set of coefficients , and are different. It is therefore often convenient to work with an ordered basis; this is typically done by indexing the basis elements by the first natural numbers. Then, the coordinates of a vector form a sequence similarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis, especially when used in conjunction with an origin, is also called a coordinate frame or simply a frame (for example, a Cartesian frame or an affine frame). Let, as usual, be the set of the -tuples of elements of . This set is an -vector space, with addition and scalar multiplication defined component-wise. The map is a linear isomorphism from the vector space onto . In other words, is the coordinate space of , and the -tuple is the coordinate vector of . The inverse image by of is the -tuple all of whose components are 0, except the th that is 1. The form an ordered basis of , which is called its standard basis or canonical basis. The ordered basis is the image by of the canonical basis of It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis of and that every linear isomorphism from onto may be defined as the isomorphism that maps the canonical basis of onto a given ordered basis of . In other words, it is equivalent to define an ordered basis of , or a linear isomorphism from onto . Change of basis Let be a vector space of dimension over a field . 
Given two (ordered) bases and of , it is often useful to express the coordinates of a vector with respect to in terms of the coordinates with respect to This can be done by the change-of-basis formula, that is described below. The subscripts "old" and "new" have been chosen because it is customary to refer to and as the old basis and the new basis, respectively. It is useful to describe the old coordinates in terms of the new ones, because, in general, one has expressions involving the old coordinates, and if one wants to obtain equivalent expressions in terms of the new coordinates; this is obtained by replacing the old coordinates by their expressions in terms of the new coordinates. Typically, the new basis vectors are given by their coordinates over the old basis, that is, If and are the coordinates of a vector over the old and the new basis respectively, the change-of-basis formula is for . This formula may be concisely written in matrix notation. Let be the matrix of the and be the column vectors of the coordinates of in the old and the new basis respectively, then the formula for changing coordinates is The formula can be proven by considering the decomposition of the vector on the two bases: one has and The change-of-basis formula results then from the uniqueness of the decomposition of a vector over a basis, here that is for . Related notions Free module If one replaces the field occurring in the definition of a vector space by a ring, one gets the definition of a module. For modules, linear independence and spanning sets are defined exactly as for vector spaces, although "generating set" is more commonly used than that of "spanning set". Like for vector spaces, a basis of a module is a linearly independent subset that is also a generating set. A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called a free module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules through free resolutions. A module over the integers is exactly the same thing as an abelian group. Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, if is a subgroup of a finitely generated free abelian group (that is an abelian group that has a finite basis), then there is a basis of and an integer such that is a basis of , for some nonzero integers For details, see . Analysis In the context of infinite-dimensional vector spaces over the real or complex numbers, the term (named after Georg Hamel) or algebraic basis can be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives are orthogonal bases on Hilbert spaces, Schauder bases, and Markushevich bases on normed linear spaces. In the case of the real numbers R viewed as a vector space over the field Q of rational numbers, Hamel bases are uncountable, and have specifically the cardinality of the continuum, which is the cardinal number where (aleph-nought) is the smallest infinite cardinal, the cardinal of the integers. 
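Returning to the change-of-basis formula given earlier in this section, the matrix form x_old = A x_new can be checked directly; in the sketch below (Python with NumPy; the matrices are example values) the columns of A hold the new basis vectors written in old coordinates.

    import numpy as np

    # New basis vectors expressed over the old basis (columns of A); example values.
    A = np.array([[1.0, 1.0],
                  [0.0, 2.0]])

    x_new = np.array([3.0, -1.0])           # coordinates over the new basis
    x_old = A @ x_new                       # change-of-basis formula: old = A * new
    print(x_old)                            # [ 2. -2.]

    # Going back: the new coordinates are recovered with the inverse of A.
    assert np.allclose(np.linalg.solve(A, x_old), x_new)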
The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case for topological vector spaces – a large class of vector spaces including e.g. Hilbert spaces, Banach spaces, or Fréchet spaces. The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: If X is an infinite-dimensional normed vector space that is complete (i.e. X is a Banach space), then any Hamel basis of X is necessarily uncountable. This is a consequence of the Baire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases and there are infinite-dimensional (non-complete) normed spaces that have countable Hamel bases. Consider the space of the sequences of real numbers that have only finitely many non-zero elements, with the norm Its standard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis. Example In the study of Fourier series, one learns that the functions are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functions f satisfying The functions are linearly independent, and every function f that is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense that for suitable (real or complex) coefficients ak, bk. But many square-integrable functions cannot be represented as finite linear combinations of these basis functions, which therefore do not comprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereas orthonormal bases of these spaces are essential in Fourier analysis. Geometry The geometric notions of an affine space, projective space, convex set, and cone have related notions of basis. An affine basis for an n-dimensional affine space is points in general linear position. A is points in general position, in a projective space of dimension n. A of a polytope is the set of the vertices of its convex hull. A consists of one point by edge of a polygonal cone. See also a Hilbert basis (linear programming). Random basis For a probability distribution in with a probability density function, such as the equidistribution in an n-dimensional ball with respect to Lebesgue measure, it can be shown that randomly and independently chosen vectors will form a basis with probability one, which is due to the fact that linearly dependent vectors , ..., in should satisfy the equation (zero determinant of the matrix with columns ), and the set of zeros of a non-trivial polynomial has zero measure. This observation has led to techniques for approximating random bases. It is difficult to check numerically the linear dependence or exact orthogonality. Therefore, the notion of ε-orthogonality is used. For spaces with inner product, x is ε-orthogonal to y if (that is, cosine of the angle between and is less than ). 
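The ε-orthogonality just introduced becomes generic in high dimension, as described next; a quick numerical illustration (Python with NumPy; sample sizes and the threshold are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    eps = 0.1        # two vectors are eps-orthogonal if the cosine of their angle is below eps in absolute value

    for n in (3, 30, 300, 3000):
        u = rng.standard_normal((1000, n))
        v = rng.standard_normal((1000, n))
        cosines = np.abs(np.sum(u * v, axis=1) /
                         (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1)))
        print(n, np.mean(cosines < eps))     # fraction of eps-orthogonal pairs rises towards 1 with n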
In high dimensions, two independent random vectors are with high probability almost orthogonal, and the number of independent random vectors, which all are with given high probability pairwise almost orthogonal, grows exponentially with dimension. More precisely, consider equidistribution in n-dimensional ball. Choose N independent random vectors from a ball (they are independent and identically distributed). Let θ be a small positive number. Then for random vectors are all pairwise ε-orthogonal with probability . This growth exponentially with dimension and for sufficiently big . This property of random bases is a manifestation of the so-called . The figure (right) illustrates distribution of lengths N of pairwise almost orthogonal chains of vectors that are independently randomly sampled from the n-dimensional cube as a function of dimension, n. A point is first randomly selected in the cube. The second point is randomly chosen in the same cube. If the angle between the vectors was within then the vector was retained. At the next step a new vector is generated in the same hypercube, and its angles with the previously generated vectors are evaluated. If these angles are within then the vector is retained. The process is repeated until the chain of almost orthogonality breaks, and the number of such pairwise almost orthogonal vectors (length of the chain) is recorded. For each n, 20 pairwise almost orthogonal chains were constructed numerically for each dimension. Distribution of the length of these chains is presented. Proof that every vector space has a basis Let be any vector space over some field . Let be the set of all linearly independent subsets of . The set is nonempty since the empty set is an independent subset of , and it is partially ordered by inclusion, which is denoted, as usual, by . Let be a subset of that is totally ordered by , and let be the union of all the elements of (which are themselves certain subsets of ). Since is totally ordered, every finite subset of is a subset of an element of , which is a linearly independent subset of , and hence is linearly independent. Thus is an element of . Therefore, is an upper bound for in : it is an element of , that contains every element of . As is nonempty, and every totally ordered subset of has an upper bound in , Zorn's lemma asserts that has a maximal element. In other words, there exists some element of satisfying the condition that whenever for some element of , then . It remains to prove that is a basis of . Since belongs to , we already know that is a linearly independent subset of . If there were some vector of that is not in the span of , then would not be an element of either. Let . This set is an element of , that is, it is a linearly independent subset of (because w is not in the span of , and is independent). As , and (because contains the vector that is not contained in ), this contradicts the maximality of . Thus this shows that spans . Hence is linearly independent and spans . It is thus a basis of , and this proves that every vector space has a basis. This proof relies on Zorn's lemma, which is equivalent to the axiom of choice. Conversely, it has been proved that if every vector space has a basis, then the axiom of choice is true. Thus the two assertions are equivalent. 
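The chain experiment described above is easy to reproduce approximately. The sketch below (Python with NumPy; dimension, threshold and sampling are illustrative choices) grows a chain of unit vectors sampled from the cube [−1, 1]^n, keeping a new vector only while it stays ε-orthogonal to all previously kept ones, and reports the chain length.

    import numpy as np

    def chain_length(n, eps=0.1, max_vectors=10_000):
        """Length of a chain of pairwise eps-orthogonal unit vectors sampled from the cube [-1, 1]^n."""
        rng = np.random.default_rng(0)
        kept = []
        for _ in range(max_vectors):
            v = rng.uniform(-1.0, 1.0, size=n)
            v /= np.linalg.norm(v)
            if all(abs(v @ u) < eps for u in kept):
                kept.append(v)
            else:
                break                        # the chain of almost-orthogonality breaks here
        return len(kept)

    for n in (4, 16, 64, 256):
        print(n, chain_length(n))            # chain lengths tend to grow quickly with the dimension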
See also Basis of a matroid Basis of a linear program Notes References General references Historical references External links Instructional videos from Khan Academy Introduction to bases of subspaces Proof that any subspace basis has same number of elements Articles containing proofs Axiom of choice Linear algebra
Basis (linear algebra)
[ "Mathematics" ]
3,596
[ "Mathematical axioms", "Axiom of choice", "Axioms of set theory", "Linear algebra", "Articles containing proofs", "Algebra" ]
18,531
https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s%20rule
L'Hôpital's rule (, ) or L'Hospital's rule, also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume De l'Hôpital. Although the rule is often attributed to De l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli. L'Hôpital's rule states that for functions and which are defined on an open interval and differentiable on for a (possibly infinite) accumulation point of , if and for all in , and exists, then The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be directly evaluated by continuity. History Guillaume de l'Hôpital (also written l'Hospital) published this rule in his 1696 book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation: Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook on differential calculus. However, it is believed that the rule was discovered by the Swiss mathematician Johann Bernoulli. General form The general form of L'Hôpital's rule covers many cases. Let and be extended real numbers: real numbers, positive or negative infinity. Let be an open interval containing (for a two-sided limit) or an open interval with endpoint (for a one-sided limit, or a limit at infinity if is infinite). On , the real-valued functions and are assumed differentiable with . It is also assumed that , a finite or infinite limit. If eitherorthenAlthough we have written throughout, the limits may also be one-sided limits ( or ), when is a finite endpoint of . In the second case, the hypothesis that diverges to infinity is not necessary; in fact, it is sufficient that The hypothesis that appears most commonly in the literature, but some authors sidestep this hypothesis by adding other hypotheses which imply . For example, one may require in the definition of the limit that the function must be defined everywhere on an interval . Another method is to require that both and be differentiable everywhere on an interval containing . Necessity of conditions: Counterexamples All four conditions for L'Hôpital's rule are necessary: Indeterminacy of form: or ; Differentiability of functions: and are differentiable on an open interval except possibly at the limit point in ; Non-zero derivative of denominator: for all in with ; Existence of limit of the quotient of the derivatives: exists. Where one of the above conditions is not satisfied, L'Hôpital's rule is not valid in general, and its conclusion may be false in certain cases. 1. Form is not indeterminate The necessity of the first condition can be seen by considering the counterexample where the functions are and and the limit is . The first condition is not satisfied for this counterexample because and . This means that the form is not indeterminate. The second and third conditions are satisfied by and . The fourth condition is also satisfied with But the conclusion fails, since 2. Differentiability of functions Differentiability of functions is a requirement because if a function is not differentiable, then the derivative of the function is not guaranteed to exist at each point in . 
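As a quick symbolic check of the statement of the rule above (Python with the SymPy library, assumed to be available; the functions are an example choice made here): for f(x) = 1 − cos x and g(x) = x², the original quotient and the quotient of derivatives have the same limit at 0.

    import sympy as sp

    x = sp.symbols('x')
    f, g = 1 - sp.cos(x), x**2

    print(sp.limit(f / g, x, 0))                          # 1/2
    print(sp.limit(sp.diff(f, x) / sp.diff(g, x), x, 0))  # 1/2, as the rule predicts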
The fact that is an open interval is grandfathered in from the hypothesis of the Cauchy's mean value theorem. The notable exception of the possibility of the functions being not differentiable at exists because L'Hôpital's rule only requires the derivative to exist as the function approaches ; the derivative does not need to be taken at . For example, let , , and . In this case, is not differentiable at . However, since is differentiable everywhere except , then still exists. Thus, since and exists, L'Hôpital's rule still holds. 3. Derivative of denominator is zero The necessity of the condition that near can be seen by the following counterexample due to Otto Stolz. Let and Then there is no limit for as However, which tends to 0 as , although it is undefined at infinitely many points. Further examples of this type were found by Ralph P. Boas Jr. 4. Limit of derivatives does not exist The requirement that the limit exists is essential; if it does not exist, the other limit may nevertheless exist. Indeed, as approaches , the functions or may exhibit many oscillations of small amplitude but steep slope, which do not affect but do prevent the convergence of . For example, if , and , then which does not approach a limit since cosine oscillates infinitely between and . But the ratio of the original functions does approach a limit, since the amplitude of the oscillations of becomes small relative to : In a case such as this, all that can be concluded is that so that if the limit of exists, then it must lie between the inferior and superior limits of . In the example, 1 does indeed lie between 0 and 2.) Note also that by the contrapositive form of the Rule, if does not exist, then also does not exist. Examples In the following computations, we indicate each application of L'Hopital's rule by the symbol . Here is a basic example involving the exponential function, which involves the indeterminate form at : This is a more elaborate example involving . Applying L'Hôpital's rule a single time still results in an indeterminate form. In this case, the limit may be evaluated by applying the rule three times: Here is an example involving : Repeatedly apply L'Hôpital's rule until the exponent is zero (if is an integer) or negative (if is fractional) to conclude that the limit is zero. Here is an example involving the indeterminate form (see below), which is rewritten as the form : Here is an example involving the mortgage repayment formula and . Let be the principal (loan amount), the interest rate per period and the number of periods. When is zero, the repayment amount per period is (since only principal is being repaid); this is consistent with the formula for non-zero interest rates: One can also use L'Hôpital's rule to prove the following theorem. If is twice-differentiable in a neighborhood of and its second derivative is continuous on this neighborhood, then Sometimes L'Hôpital's rule is invoked in a tricky way: suppose converges as and that converges to positive or negative infinity. Then:and so, exists and (This result remains true without the added hypothesis that converges to positive or negative infinity, but the justification is then incomplete.) Complications Sometimes L'Hôpital's rule does not reduce to an obvious limit in a finite number of steps, unless some intermediate simplifications are applied. 
Examples include the following: Two applications can lead to a return to the original expression that was to be evaluated: This situation can be dealt with by substituting and noting that goes to infinity as goes to infinity; with this substitution, this problem can be solved with a single application of the rule: Alternatively, the numerator and denominator can both be multiplied by at which point L'Hôpital's rule can immediately be applied successfully: An arbitrarily large number of applications may never lead to an answer even without repeating:This situation too can be dealt with by a transformation of variables, in this case : Again, an alternative approach is to multiply numerator and denominator by before applying L'Hôpital's rule: A common logical fallacy is to use L'Hôpital's rule to prove the value of a derivative by computing the limit of a difference quotient. Since applying l'Hôpital requires knowing the relevant derivatives, this amounts to circular reasoning or begging the question, assuming what is to be proved. For example, consider the proof of the derivative formula for powers of x: Applying L'Hôpital's rule and finding the derivatives with respect to yields as expected, but this computation requires the use of the very formula that is being proven. Similarly, to prove , applying L'Hôpital requires knowing the derivative of at , which amounts to calculating in the first place; a valid proof requires a different method such as the squeeze theorem. Other indeterminate forms Other indeterminate forms, such as , , , , and , can sometimes be evaluated using L'Hôpital's rule. We again indicate applications of L'Hopital's rule by . For example, to evaluate a limit involving , convert the difference of two functions to a quotient: L'Hôpital's rule can be used on indeterminate forms involving exponents by using logarithms to "move the exponent down". Here is an example involving the indeterminate form : It is valid to move the limit inside the exponential function because this function is continuous. Now the exponent has been "moved down". The limit is of the indeterminate form dealt with in an example above: L'Hôpital may be used to determine that Thus The following table lists the most common indeterminate forms and the transformations which precede applying l'Hôpital's rule: Stolz–Cesàro theorem The Stolz–Cesàro theorem is a similar result involving limits of sequences, but it uses finite difference operators rather than derivatives. Geometric interpretation: parametric curve and velocity vector Consider the parametric curve in the xy-plane with coordinates given by the continuous functions and , the locus of points , and suppose . The slope of the tangent to the curve at is the limit of the ratio as . The tangent to the curve at the point is the velocity vector with slope . L'Hôpital's rule then states that the slope of the curve at the origin () is the limit of the tangent slope at points approaching the origin, provided that this is defined. Proof of L'Hôpital's rule Special case The proof of L'Hôpital's rule is simple in the case where and are continuously differentiable at the point and where a finite limit is found after the first round of differentiation. This is only a special case of L'Hôpital's rule, because it only applies to functions satisfying stronger conditions than required by the general rule. However, many common functions have continuous derivatives (e.g. polynomials, sine and cosine, exponential functions), so this special case covers most applications. 
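The "move the exponent down" device for power-type indeterminate forms described above can be illustrated on the classic 0⁰ limit of x^x as x → 0⁺ (an example chosen here for illustration, not necessarily the one elided in the text). A sketch with SymPy, assumed to be available:

    import sympy as sp

    x = sp.symbols('x', positive=True)

    # Exponent moved down: x*ln(x) is a 0*inf form, rewritten as ln(x) / (1/x).
    print(sp.limit(x * sp.log(x), x, 0, '+'))                              # 0
    print(sp.limit(sp.diff(sp.log(x), x) / sp.diff(1 / x, x), x, 0, '+'))  # 0 after one application of the rule

    # Hence the original limit is exp(0) = 1.
    print(sp.limit(x**x, x, 0, '+'))                                       # 1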
Suppose that and are continuously differentiable at a real number , that , and that . Then This follows from the difference quotient definition of the derivative. The last equality follows from the continuity of the derivatives at . The limit in the conclusion is not indeterminate because . The proof of a more general version of L'Hôpital's rule is given below. General proof The following proof is due to , where a unified proof for the and indeterminate forms is given. Taylor notes that different proofs may be found in and . Let f and g be functions satisfying the hypotheses in the General form section. Let be the open interval in the hypothesis with endpoint c. Considering that on this interval and g is continuous, can be chosen smaller so that g is nonzero on . For each x in the interval, define and as ranges over all values between x and c. (The symbols inf and sup denote the infimum and supremum.) From the differentiability of f and g on , Cauchy's mean value theorem ensures that for any two distinct points x and y in there exists a between x and y such that . Consequently, for all choices of distinct x and y in the interval. The value g(x)-g(y) is always nonzero for distinct x and y in the interval, for if it was not, the mean value theorem would imply the existence of a p between x and y such that g' (p)=0. The definition of m(x) and M(x) will result in an extended real number, and so it is possible for them to take on the values ±∞. In the following two cases, m(x) and M(x) will establish bounds on the ratio . Case 1: For any x in the interval , and point y between x and c, and therefore as y approaches c, and become zero, and so Case 2: For every x in the interval , define . For every point y between x and c, As y approaches c, both and become zero, and therefore The limit superior and limit inferior are necessary since the existence of the limit of has not yet been established. It is also the case that and and In case 1, the squeeze theorem establishes that exists and is equal to L. In the case 2, and the squeeze theorem again asserts that , and so the limit exists and is equal to L. This is the result that was to be proven. In case 2 the assumption that f(x) diverges to infinity was not used within the proof. This means that if |g(x)| diverges to infinity as x approaches c and both f and g satisfy the hypotheses of L'Hôpital's rule, then no additional assumption is needed about the limit of f(x): It could even be the case that the limit of f(x) does not exist. In this case, L'Hopital's theorem is actually a consequence of Cesàro–Stolz. In the case when |g(x)| diverges to infinity as x approaches c and f(x) converges to a finite limit at c, then L'Hôpital's rule would be applicable, but not absolutely necessary, since basic limit calculus will show that the limit of f(x)/g(x) as x approaches c must be zero. Corollary A simple but very useful consequence of L'Hopital's rule is that the derivative of a function cannot have a removable discontinuity. That is, suppose that f is continuous at a, and that exists for all x in some open interval containing a, except perhaps for . Suppose, moreover, that exists. Then also exists and In particular, f''' is also continuous at a. Thus, if a function is not continuously differentiable near a point, the derivative must have an essential discontinuity at that point. Proof Consider the functions and . The continuity of f at a'' tells us that . Moreover, since a polynomial function is always continuous everywhere. 
Applying L'Hôpital's rule shows that . See also L'Hôpital controversy Notes References Sources Articles containing proofs Theorems in calculus Theorems in real analysis Limits (mathematics)
L'Hôpital's rule
[ "Mathematics" ]
3,107
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theorems in real analysis", "Articles containing proofs" ]
18,568
https://en.wikipedia.org/wiki/List%20of%20algorithms
An algorithm is fundamentally a set of rules or defined procedures that is typically designed and used to solve a specific problem or a broad set of problems. Broadly, algorithms define process(es), sets of rules, or methodologies that are to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations. With the increasing automation of services, more and more decisions are being made by algorithms. Some general examples are; risk assessments, anticipatory policing, and pattern recognition technology. The following is a list of well-known algorithms along with one-line descriptions for each. Automated planning Combinatorial algorithms General combinatorial algorithms Brent's algorithm: finds a cycle in function value iterations using only two iterators Floyd's cycle-finding algorithm: finds a cycle in function value iterations Gale–Shapley algorithm: solves the stable marriage problem Pseudorandom number generators (uniformly distributed—see also List of pseudorandom number generators for other PRNGs with varying degrees of convergence and varying statistical quality): ACORN generator Blum Blum Shub Lagged Fibonacci generator Linear congruential generator Mersenne Twister Graph algorithms Coloring algorithm: Graph coloring algorithm. Hopcroft–Karp algorithm: convert a bipartite graph to a maximum cardinality matching Hungarian algorithm: algorithm for finding a perfect matching Prüfer coding: conversion between a labeled tree and its Prüfer sequence Tarjan's off-line lowest common ancestors algorithm: computes lowest common ancestors for pairs of nodes in a tree Topological sort: finds linear order of nodes (e.g. jobs) based on their dependencies. Graph drawing Force-based algorithms (also known as force-directed algorithms or spring-based algorithm) Spectral layout Network theory Network analysis Link analysis Girvan–Newman algorithm: detect communities in complex systems Web link analysis Hyperlink-Induced Topic Search (HITS) (also known as Hubs and authorities) PageRank TrustRank Flow networks Dinic's algorithm: is a strongly polynomial algorithm for computing the maximum flow in a flow network. 
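As a small illustration of the cycle-finding entries above, here is a sketch of Floyd's cycle-finding algorithm ("tortoise and hare") in Python; the iterated function and starting value are arbitrary example choices.

    def floyd_cycle(f, x0):
        """Return (lam, mu): cycle length and start index of the sequence x0, f(x0), f(f(x0)), ..."""
        tortoise, hare = f(x0), f(f(x0))
        while tortoise != hare:                         # advance at speeds 1 and 2 until they meet
            tortoise, hare = f(tortoise), f(f(hare))
        mu, tortoise = 0, x0                            # locate the start of the cycle
        while tortoise != hare:
            tortoise, hare, mu = f(tortoise), f(hare), mu + 1
        lam, hare = 1, f(tortoise)                      # measure the cycle length
        while tortoise != hare:
            hare, lam = f(hare), lam + 1
        return lam, mu

    f = lambda x: (x * x + 1) % 255                     # example function on a finite set, so a cycle must occur
    print(floyd_cycle(f, 3))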
Edmonds–Karp algorithm: implementation of Ford–Fulkerson Ford–Fulkerson algorithm: computes the maximum flow in a graph Karger's algorithm: a Monte Carlo method to compute the minimum cut of a connected graph Push–relabel algorithm: computes a maximum flow in a graph Routing for graphs Edmonds' algorithm (also known as Chu–Liu/Edmonds' algorithm): find maximum or minimum branchings Euclidean minimum spanning tree: algorithms for computing the minimum spanning tree of a set of points in the plane Longest path problem: find a simple path of maximum length in a given graph Minimum spanning tree Borůvka's algorithm Kruskal's algorithm Prim's algorithm Reverse-delete algorithm Nonblocking minimal spanning switch say, for a telephone exchange Shortest path problem Bellman–Ford algorithm: computes shortest paths in a weighted graph (where some of the edge weights may be negative) Dijkstra's algorithm: computes shortest paths in a graph with non-negative edge weights Floyd–Warshall algorithm: solves the all pairs shortest path problem in a weighted, directed graph Johnson's algorithm: all pairs shortest path algorithm in sparse weighted directed graph Transitive closure problem: find the transitive closure of a given binary relation Traveling salesman problem Christofides algorithm Nearest neighbour algorithm Warnsdorff's rule: a heuristic method for solving the Knight's tour problem Graph search A*: special case of best-first search that uses heuristics to improve speed B*: a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals) Backtracking: abandons partial solutions when they are found not to satisfy a complete solution Beam search: is a heuristic search algorithm that is an optimization of best-first search that reduces its memory requirement Beam stack search: integrates backtracking with beam search Best-first search: traverses a graph in the order of likely importance using a priority queue Bidirectional search: find the shortest path from an initial vertex to a goal vertex in a directed graph Breadth-first search: traverses a graph level by level Brute-force search: an exhaustive and reliable search method, but computationally inefficient in many applications D*: an incremental heuristic search algorithm Depth-first search: traverses a graph branch by branch Dijkstra's algorithm: a special case of A* for which no heuristic function is used General Problem Solver: a seminal theorem-proving algorithm intended to work as a universal problem solver machine. 
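A compact sketch of Dijkstra's algorithm from the shortest-path group above (Python, using a binary heap via heapq; the example graph is made up for illustration):

    import heapq

    def dijkstra(graph, source):
        """graph maps node -> list of (neighbour, non-negative weight). Returns shortest distances from source."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                                # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {'a': [('b', 7), ('c', 9), ('f', 14)], 'b': [('c', 10), ('d', 15)],
         'c': [('d', 11), ('f', 2)], 'd': [('e', 6)], 'f': [('e', 9)], 'e': []}
    print(dijkstra(g, 'a'))     # {'a': 0, 'b': 7, 'c': 9, 'f': 11, 'd': 20, 'e': 20}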
Iterative deepening depth-first search (IDDFS): a state space search strategy Jump point search: an optimization to A* which may reduce computation time by an order of magnitude using further heuristics Lexicographic breadth-first search (also known as Lex-BFS): a linear time algorithm for ordering the vertices of a graph Uniform-cost search: a tree search that finds the lowest-cost route where costs vary SSS*: state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm Subgraphs Cliques Bron–Kerbosch algorithm: a technique for finding maximal cliques in an undirected graph MaxCliqueDyn maximum clique algorithm: find a maximum clique in an undirected graph Strongly connected components Path-based strong component algorithm Kosaraju's algorithm Tarjan's strongly connected components algorithm Subgraph isomorphism problem Sequence algorithms Approximate sequence matching Bitap algorithm: fuzzy algorithm that determines if strings are approximately equal. Phonetic algorithms Daitch–Mokotoff Soundex: a Soundex refinement which allows matching of Slavic and Germanic surnames Double Metaphone: an improvement on Metaphone Match rating approach: a phonetic algorithm developed by Western Airlines Metaphone: an algorithm for indexing words by their sound, when pronounced in English NYSIIS: phonetic algorithm, improves on Soundex Soundex: a phonetic algorithm for indexing names by sound, as pronounced in English String metrics: computes a similarity or dissimilarity (distance) score between two pairs of text strings Damerau–Levenshtein distance: computes a distance measure between two strings, improves on Levenshtein distance Dice's coefficient (also known as the Dice coefficient): a similarity measure related to the Jaccard index Hamming distance: sum number of positions which are different Jaro–Winkler distance: is a measure of similarity between two strings Levenshtein edit distance: computes a metric for the amount of difference between two sequences Trigram search: search for text when the exact syntax or spelling of the target object is not precisely known Selection algorithms Quickselect Introselect Sequence search Linear search: locates an item in an unsorted sequence Selection algorithm: finds the kth largest item in a sequence Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa Sorted lists Binary search algorithm: locates an item in a sorted sequence Fibonacci search technique: search a sorted sequence using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers Jump search (or block search): linear search on a smaller subset of the sequence Predictive search: binary-like search which factors in magnitude of search term versus the high and low values in the search. Sometimes called dictionary search or interpolated search. 
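A sketch of the Levenshtein edit distance listed above, using the usual dynamic-programming recurrence (Python; the example words are arbitrary):

    def levenshtein(a, b):
        """Minimum number of single-character insertions, deletions and substitutions turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            cur = [i]
            for j, cb in enumerate(b, start=1):
                cur.append(min(prev[j] + 1,                    # deletion
                               cur[j - 1] + 1,                 # insertion
                               prev[j - 1] + (ca != cb)))      # substitution (free when characters match)
            prev = cur
        return prev[-1]

    print(levenshtein("kitten", "sitting"))   # 3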
Uniform binary search: an optimization of the classic binary search algorithm Eytzinger binary search: cache friendly binary search algorithm Sequence merging Simple merge algorithm k-way merge algorithm Union (merge, with elements on the output not repeated) Sequence permutations Fisher–Yates shuffle (also known as the Knuth shuffle): randomly shuffle a finite set Schensted algorithm: constructs a pair of Young tableaux from a permutation Steinhaus–Johnson–Trotter algorithm (also known as the Johnson–Trotter algorithm): generates permutations by transposing elements Heap's permutation generation algorithm: interchange elements to generate next permutation Sequence combinations Sequence alignment Dynamic time warping: measure similarity between two sequences which may vary in time or speed Hirschberg's algorithm: finds the least cost sequence alignment between two sequences, as measured by their Levenshtein distance Needleman–Wunsch algorithm: find global alignment between two sequences Smith–Waterman algorithm: find local sequence alignment Sequence sorting Exchange sorts Bubble sort: for each pair of indices, swap the items if out of order Cocktail shaker sort or bidirectional bubble sort, a bubble sort traversing the list alternately from front to back and back to front Comb sort Gnome sort Odd–even sort Quicksort: divide list into two, with all items on the first list coming before all items on the second list; then sort the two lists. Often the method of choice Humorous or ineffective Bogosort Slowsort Stooge sort Hybrid Flashsort Introsort: begin with quicksort and switch to heapsort when the recursion depth exceeds a certain level Timsort: adaptive algorithm derived from merge sort and insertion sort. Used in Python 2.3 and up, and Java SE 7.
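The Fisher–Yates shuffle listed above admits a very short in-place implementation (Python; the list being shuffled is an example):

    import random

    def fisher_yates(items):
        """Uniformly shuffle the list in place (Durstenfeld's variant of the Fisher-Yates shuffle)."""
        for i in range(len(items) - 1, 0, -1):
            j = random.randint(0, i)          # pick a position in items[0..i]
            items[i], items[j] = items[j], items[i]
        return items

    print(fisher_yates(list(range(10))))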
Insertion sorts Insertion sort: determine where the current item belongs in the list of sorted ones, and insert it there Library sort Patience sorting Shell sort: an attempt to improve insertion sort Tree sort (binary tree sort): build binary tree, then traverse it to create sorted list Cycle sort: in-place with theoretically optimal number of writes Merge sorts Merge sort: sort the first and second half of the list separately, then merge the sorted lists Slowsort Strand sort Non-comparison sorts Bead sort Bucket sort Burstsort: build a compact, cache efficient burst trie and then traverse it to create sorted output Counting sort Pigeonhole sort Postman sort: variant of Bucket sort which takes advantage of hierarchical structure Radix sort: sorts strings letter by letter Selection sorts Heapsort: convert the list into a heap, keep removing the largest element from the heap and adding it to the end of the list Selection sort: pick the smallest of the remaining elements, add it to the end of the sorted list Smoothsort Other Bitonic sorter Pancake sorting Spaghetti sort Topological sort Unknown class Samplesort Subsequences Longest common subsequence problem: Find the longest subsequence common to all sequences in a set of sequences Longest increasing subsequence problem: Find the longest increasing subsequence of a given sequence Ruzzo–Tompa algorithm: Find all non-overlapping, contiguous, maximal scoring subsequences in a sequence of real numbers Shortest common supersequence problem: Find the shortest supersequence that contains two or more sequences as subsequences Substrings Kadane's algorithm: finds the contiguous subarray with largest sum in an array of numbers Longest common substring problem: find the longest string (or strings) that is a substring (or are substrings) of two or more strings Substring search Aho–Corasick string matching algorithm: trie based algorithm for finding all substring matches to any of a finite set of strings Boyer–Moore string-search algorithm: amortized linear (sublinear in most times) algorithm for substring search Boyer–Moore–Horspool algorithm: Simplification of Boyer–Moore Knuth–Morris–Pratt algorithm: substring search which bypasses reexamination of matched characters Rabin–Karp string search algorithm: searches multiple patterns efficiently Zhu–Takaoka string matching algorithm: a variant of Boyer–Moore Ukkonen's algorithm: a linear-time, online algorithm for constructing suffix trees Matching wildcards Rich Salz' wildmat: a widely used open-source recursive algorithm Krauss matching wildcards algorithm: an open-source non-recursive algorithm Computational mathematics Abstract algebra Chien search: a recursive algorithm for determining roots of polynomials defined over a finite field Schreier–Sims algorithm: computing a base and strong generating set (BSGS) of a permutation group Todd–Coxeter algorithm: Procedure for generating cosets.
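Kadane's algorithm from the substrings group above fits in a few lines (Python; the sample array is arbitrary):

    def kadane(xs):
        """Largest sum of a non-empty contiguous subarray, computed in a single pass."""
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)       # either extend the running subarray or restart at x
            best = max(best, cur)
        return best

    print(kadane([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6, from the subarray 4, -1, 2, 1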
Computer algebra Buchberger's algorithm: finds a Gröbner basis Cantor–Zassenhaus algorithm: factor polynomials over finite fields Faugère F4 algorithm: finds a Gröbner basis (also mentions the F5 algorithm) Gosper's algorithm: find sums of hypergeometric terms that are themselves hypergeometric terms Knuth–Bendix completion algorithm: for rewriting rule systems Multivariate division algorithm: for polynomials in several indeterminates Pollard's kangaroo algorithm (also known as Pollard's lambda algorithm ): an algorithm for solving the discrete logarithm problem Polynomial long division: an algorithm for dividing a polynomial by another polynomial of the same or lower degree Risch algorithm: an algorithm for the calculus operation of indefinite integration (i.e. finding antiderivatives) Geometry Closest pair problem: find the pair of points (from a set of points) with the smallest distance between them Collision detection algorithms: check for the collision or intersection of two given solids Cone algorithm: identify surface points Convex hull algorithms: determining the convex hull of a set of points Graham scan Quickhull Gift wrapping algorithm or Jarvis march Chan's algorithm Kirkpatrick–Seidel algorithm Euclidean distance transform: computes the distance between every point in a grid and a discrete collection of points. Geometric hashing: a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation Gilbert–Johnson–Keerthi distance algorithm: determining the smallest distance between two convex shapes. Jump-and-Walk algorithm: an algorithm for point location in triangulations Laplacian smoothing: an algorithm to smooth a polygonal mesh Line segment intersection: finding whether lines intersect, usually with a sweep line algorithm Bentley–Ottmann algorithm Shamos–Hoey algorithm Minimum bounding box algorithms: find the oriented minimum bounding box enclosing a set of points Nearest neighbor search: find the nearest point or points to a query point Nesting algorithm: make the most efficient use of material or space Point in polygon algorithms: tests whether a given point lies within a given polygon Point set registration algorithms: finds the transformation between two point sets to optimally align them. Rotating calipers: determine all antipodal pairs of points and vertices on a convex polygon or convex hull. Shoelace algorithm: determine the area of a polygon whose vertices are described by ordered pairs in the plane Triangulation Delaunay triangulation Ruppert's algorithm (also known as Delaunay refinement): create quality Delaunay triangulations Chew's second algorithm: create quality constrained Delaunay triangulations Marching triangles: reconstruct two-dimensional surface geometry from an unstructured point cloud Polygon triangulation algorithms: decompose a polygon into a set of triangles Voronoi diagrams, geometric dual of Delaunay triangulation Bowyer–Watson algorithm: create voronoi diagram in any number of dimensions Fortune's Algorithm: create voronoi diagram Quasitriangulation Number theoretic algorithms Binary GCD algorithm: Efficient way of calculating GCD. 
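The binary GCD algorithm that closes the list above uses only shifts and subtraction; a sketch (Python; the test values are arbitrary):

    def binary_gcd(a, b):
        """Greatest common divisor of non-negative integers a and b (Stein's binary GCD)."""
        if a == 0 or b == 0:
            return a | b
        shift = 0
        while (a | b) & 1 == 0:              # factor out common powers of two
            a, b, shift = a >> 1, b >> 1, shift + 1
        while a & 1 == 0:
            a >>= 1
        while b:
            while b & 1 == 0:
                b >>= 1
            if a > b:
                a, b = b, a
            b -= a                           # both are odd here, so the difference is even
        return a << shift

    print(binary_gcd(48, 180))               # 12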
Booth's multiplication algorithm Chakravala method: a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation Discrete logarithm: Baby-step giant-step Index calculus algorithm Pollard's rho algorithm for logarithms Pohlig–Hellman algorithm Euclidean algorithm: computes the greatest common divisor Extended Euclidean algorithm: also solves the equation ax + by = c Integer factorization: breaking an integer into its prime factors Congruence of squares Dixon's algorithm Fermat's factorization method General number field sieve Lenstra elliptic curve factorization Pollard's p − 1 algorithm Pollard's rho algorithm prime factorization algorithm Quadratic sieve Shor's algorithm Special number field sieve Trial division Multiplication algorithms: fast multiplication of two numbers Karatsuba algorithm Schönhage–Strassen algorithm Toom–Cook multiplication Modular square root: computing square roots modulo a prime number Tonelli–Shanks algorithm Cipolla's algorithm Berlekamp's root finding algorithm Odlyzko–Schönhage algorithm: calculates nontrivial zeroes of the Riemann zeta function Lenstra–Lenstra–Lovász algorithm (also known as LLL algorithm): find a short, nearly orthogonal lattice basis in polynomial time Primality tests: determining whether a given number is prime AKS primality test Baillie–PSW primality test Fermat primality test Lucas primality test Miller–Rabin primality test Sieve of Atkin Sieve of Eratosthenes Sieve of Sundaram Numerical algorithms Differential equation solving Euler method Backward Euler method Trapezoidal rule (differential equations) Linear multistep methods Runge–Kutta methods Euler integration Multigrid methods (MG methods), a group of algorithms for solving differential equations using a hierarchy of discretizations Partial differential equation: Finite difference method Crank–Nicolson method for diffusion equations Lax–Wendroff for wave equations Verlet integration (): integrate Newton's equations of motion Elementary and special functions Computation of π: Borwein's algorithm: an algorithm to calculate the value of 1/π Gauss–Legendre algorithm: computes the digits of pi Chudnovsky algorithm: a fast method for calculating the digits of π Bailey–Borwein–Plouffe formula: (BBP formula) a spigot algorithm for the computation of the nth binary digit of π Division algorithms: for computing quotient and/or remainder of two numbers Long division Restoring division Non-restoring division SRT division Newton–Raphson division: uses Newton's method to find the reciprocal of D, and multiply that reciprocal by N to find the final quotient Q. 
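Among the primality-related entries above, the sieve of Eratosthenes has a particularly short form (Python; the bound is an example value):

    def primes_up_to(n):
        """All primes <= n found with the sieve of Eratosthenes."""
        is_prime = [True] * (n + 1)
        is_prime[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):   # strike out multiples of p
                    is_prime[multiple] = False
        return [i for i, flag in enumerate(is_prime) if flag]

    print(primes_up_to(30))    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]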
Goldschmidt division Hyperbolic and Trigonometric Functions: BKM algorithm: computes elementary functions using a table of logarithms CORDIC: computes hyperbolic and trigonometric functions using a table of arctangents Exponentiation: Addition-chain exponentiation: exponentiation by positive integer powers that requires a minimal number of multiplications Exponentiating by squaring: an algorithm used for the fast computation of large integer powers of a number Montgomery reduction: an algorithm that allows modular arithmetic to be performed efficiently when the modulus is large Multiplication algorithms: fast multiplication of two numbers Booth's multiplication algorithm: a multiplication algorithm that multiplies two signed binary numbers in two's complement notation Fürer's algorithm: an integer multiplication algorithm for very large numbers possessing a very low asymptotic complexity Karatsuba algorithm: an efficient procedure for multiplying large numbers Schönhage–Strassen algorithm: an asymptotically fast multiplication algorithm for large integers Toom–Cook multiplication: (Toom3) a multiplication algorithm for large integers Multiplicative inverse Algorithms: for computing a number's multiplicative inverse (reciprocal). Newton's method Rounding functions: the classic ways to round numbers Spigot algorithm: a way to compute the value of a mathematical constant without knowing preceding digits Square and Nth root of a number: Alpha max plus beta min algorithm: an approximation of the square-root of the sum of two squares Methods of computing square roots nth root algorithm Summation: Binary splitting: a divide and conquer technique which speeds up the numerical evaluation of many types of series with rational terms Kahan summation algorithm: a more accurate method of summing floating-point numbers Unrestricted algorithm Geometric Filtered back-projection: efficiently computes the inverse 2-dimensional Radon transform. Level set method (LSM): a numerical technique for tracking interfaces and shapes Interpolation and extrapolation Birkhoff interpolation: an extension of polynomial interpolation Cubic interpolation Hermite interpolation Lagrange interpolation: interpolation using Lagrange polynomials Linear interpolation: a method of curve fitting using linear polynomials Monotone cubic interpolation: a variant of cubic interpolation that preserves monotonicity of the data set being interpolated. Multivariate interpolation Bicubic interpolation: a generalization of cubic interpolation to two dimensions Bilinear interpolation: an extension of linear interpolation for interpolating functions of two variables on a regular grid Lanczos resampling ("Lanzosh"): a multivariate interpolation method used to compute new values for any digitally sampled data Nearest-neighbor interpolation Tricubic interpolation: a generalization of cubic interpolation to three dimensions Pareto interpolation: a method of estimating the median and other properties of a population that follows a Pareto distribution. Polynomial interpolation Neville's algorithm Spline interpolation: Reduces error with Runge's phenomenon. 
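Exponentiation by squaring, listed above, reduces computing base**exp to O(log exp) multiplications; a sketch (Python, with an optional modulus shown purely for illustration):

    def power(base, exp, mod=None):
        """Compute base**exp (optionally modulo mod) by exponentiation by squaring; exp must be >= 0."""
        result = 1
        while exp:
            if exp & 1:                      # low bit set: multiply the current square into the result
                result = result * base if mod is None else (result * base) % mod
            base = base * base if mod is None else (base * base) % mod
            exp >>= 1
        return result

    print(power(3, 13))             # 1594323
    print(power(3, 13, mod=7))      # 3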
De Boor algorithm: B-splines De Casteljau's algorithm: Bézier curves Trigonometric interpolation Linear algebra Krylov methods (for large sparse matrix problems; third most-important numerical method class of the 20th century as ranked by SISC; after fast-fourier and fast-multipole) Eigenvalue algorithms Arnoldi iteration Inverse iteration Jacobi method Lanczos iteration Power iteration QR algorithm Rayleigh quotient iteration Gram–Schmidt process: orthogonalizes a set of vectors Matrix multiplication algorithms Cannon's algorithm: a distributed algorithm for matrix multiplication especially suitable for computers laid out in an N × N mesh Coppersmith–Winograd algorithm: square matrix multiplication Freivalds' algorithm: a randomized algorithm used to verify matrix multiplication Strassen algorithm: faster matrix multiplication Solving systems of linear equations Biconjugate gradient method: solves systems of linear equations Conjugate gradient: an algorithm for the numerical solution of particular systems of linear equations Gaussian elimination Gauss–Jordan elimination: solves systems of linear equations Gauss–Seidel method: solves systems of linear equations iteratively Levinson recursion: solves equation involving a Toeplitz matrix Stone's method: also known as the strongly implicit procedure or SIP, is an algorithm for solving a sparse linear system of equations Successive over-relaxation (SOR): method used to speed up convergence of the Gauss–Seidel method Tridiagonal matrix algorithm (Thomas algorithm): solves systems of tridiagonal equations Sparse matrix algorithms Cuthill–McKee algorithm: reduce the bandwidth of a symmetric sparse matrix Minimum degree algorithm: permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition Symbolic Cholesky decomposition: Efficient way of storing sparse matrix Monte Carlo Gibbs sampling: generates a sequence of samples from the joint probability distribution of two or more random variables Hybrid Monte Carlo: generates a sequence of samples using Hamiltonian weighted Markov chain Monte Carlo, from a probability distribution which is difficult to sample directly. 
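Power iteration from the eigenvalue group above can be sketched in a few lines (Python with NumPy; the matrix and iteration count are example choices):

    import numpy as np

    def power_iteration(A, iters=200):
        """Approximate the dominant eigenvalue and eigenvector of a square matrix A."""
        rng = np.random.default_rng(0)
        v = rng.standard_normal(A.shape[0])
        for _ in range(iters):
            v = A @ v
            v /= np.linalg.norm(v)           # renormalise to avoid overflow
        eigenvalue = v @ A @ v               # Rayleigh quotient for the unit vector v
        return eigenvalue, v

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    val, vec = power_iteration(A)
    print(val)                               # ~3.618, the larger eigenvalue of A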
Metropolis–Hastings algorithm: used to generate a sequence of samples from the probability distribution of one or more variables Wang and Landau algorithm: an extension of Metropolis–Hastings algorithm sampling Numerical integration MISER algorithm: Monte Carlo simulation, numerical integration Root finding Bisection method False position method: and Illinois method: 2-point, bracketing Halley's method: uses first and second derivatives ITP method: minmax optimal and superlinear convergence simultaneously Muller's method: 3-point, quadratic interpolation Newton's method: finds zeros of functions with calculus Ridder's method: 3-point, exponential scaling Secant method: 2-point, 1-sided Optimization algorithms Hybrid Algorithms Alpha–beta pruning: search to reduce number of nodes in minimax algorithm Branch and bound Bruss algorithm: see odds algorithm Chain matrix multiplication Combinatorial optimization: optimization problems where the set of feasible solutions is discrete Greedy randomized adaptive search procedure (GRASP): successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search Hungarian method: a combinatorial optimization algorithm which solves the assignment problem in polynomial time Constraint satisfaction General algorithms for the constraint satisfaction AC-3 algorithm Difference map algorithm Min conflicts algorithm Chaff algorithm: an algorithm for solving instances of the Boolean satisfiability problem Davis–Putnam algorithm: check the validity of a first-order logic formula Davis–Putnam–Logemann–Loveland algorithm (DPLL): an algorithm for deciding the satisfiability of propositional logic formula in conjunctive normal form, i.e. for solving the CNF-SAT problem Exact cover problem Algorithm X: a nondeterministic algorithm Dancing Links: an efficient implementation of Algorithm X Cross-entropy method: a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling Differential evolution Dynamic Programming: problems exhibiting the properties of overlapping subproblems and optimal substructure Ellipsoid method: is an algorithm for solving convex optimization problems Evolutionary computation: optimization inspired by biological mechanisms of evolution Evolution strategy Gene expression programming Genetic algorithms Fitness proportionate selection – also known as roulette-wheel selection Stochastic universal sampling Truncation selection Tournament selection Memetic algorithm Swarm intelligence Ant colony optimization Bees algorithm: a search algorithm which mimics the food foraging behavior of swarms of honey bees Particle swarm Frank-Wolfe algorithm: an iterative first-order optimization algorithm for constrained convex optimization Golden-section search: an algorithm for finding the maximum of a real function Gradient descent Grid Search Harmony search (HS): a metaheuristic algorithm mimicking the improvisation process of musicians Interior point method Linear programming Benson's algorithm: an algorithm for solving linear vector optimization problems Dantzig–Wolfe decomposition: an algorithm for solving linear programming problems with special structure Delayed column generation Integer linear programming: solve linear programming problems where some or all the unknowns are restricted to integer values Branch and cut Cutting-plane method Karmarkar's algorithm: The first reasonably efficient algorithm that solves the linear programming problem in polynomial 
time. Simplex algorithm: an algorithm for solving linear programming problems Line search Local search: a metaheuristic for solving computationally hard optimization problems Random-restart hill climbing Tabu search Minimax used in game programming Nearest neighbor search (NNS): find closest points in a metric space Best Bin First: find an approximate solution to the nearest neighbor search problem in very-high-dimensional spaces Newton's method in optimization Nonlinear optimization BFGS method: a nonlinear optimization algorithm Gauss–Newton algorithm: an algorithm for solving nonlinear least squares problems Levenberg–Marquardt algorithm: an algorithm for solving nonlinear least squares problems Nelder–Mead method (downhill simplex method): a nonlinear optimization algorithm Odds algorithm (Bruss algorithm): Finds the optimal strategy to predict a last specific event in a random sequence event Random Search Simulated annealing Stochastic tunneling Subset sum algorithm A hybrid HS-LS conjugate gradient algorithm (see https://doi.org/10.1016/j.cam.2023.115304) A hybrid BFGS-Like method (see more https://doi.org/10.1016/j.cam.2024.115857) Conjugate gradient methods (see more https://doi.org/10.1016/j.jksus.2022.101923) Computational science Astronomy Doomsday algorithm: day of the week Zeller's congruence is an algorithm to calculate the day of the week for any Julian or Gregorian calendar date various Easter algorithms are used to calculate the day of Easter Bioinformatics Basic Local Alignment Search Tool also known as BLAST: an algorithm for comparing primary biological sequence information Kabsch algorithm: calculate the optimal alignment of two sets of points in order to compute the root mean squared deviation between two protein structures. Velvet: a set of algorithms manipulating de Bruijn graphs for genomic sequence assembly Sorting by signed reversals: an algorithm for understanding genomic evolution. Maximum parsimony (phylogenetics): an algorithm for finding the simplest phylogenetic tree to explain a given character matrix. UPGMA: a distance-based phylogenetic tree construction algorithm. Bloom Filter: probabilistic data structure used to test for the existence of an element within a set. Primarily used in bioinformatics to test for the existence of a k-mer in a sequence or sequences. 
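As a small illustration of the Bloom filter just mentioned, the following is a minimal sketch; the bit-array size, the number of hash functions, and the use of Python's built-in hash are illustrative choices, not a recommendation for real genomic workloads:

```python
# Minimal Bloom filter sketch: a fixed-size bit array probed by k seeded
# hash functions. False positives are possible; false negatives are not.
class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions by hashing the item together with a seed.
        for seed in range(self.num_hashes):
            yield hash((seed, item)) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))


if __name__ == "__main__":
    kmers = BloomFilter()
    kmers.add("ACGTA")
    print(kmers.might_contain("ACGTA"))   # True
    print(kmers.might_contain("TTTTT"))   # False (with high probability)
```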
Geoscience Vincenty's formulae: a fast algorithm to calculate the distance between two latitude/longitude points on an ellipsoid Geohash: a public domain algorithm that encodes a decimal latitude/longitude pair as a hash string Linguistics Lesk algorithm: word sense disambiguation Stemming algorithm: a method of reducing words to their stem, base, or root form Sukhotin's algorithm: a statistical classification algorithm for classifying characters in a text as vowels or consonants Medicine ESC algorithm for the diagnosis of heart failure Manning Criteria for irritable bowel syndrome Pulmonary embolism diagnostic algorithms Texas Medication Algorithm Project Physics Constraint algorithm: a class of algorithms for satisfying constraints for bodies that obey Newton's equations of motion Demon algorithm: a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy Featherstone's algorithm: computes the effects of forces applied to a structure of joints and links Ground state approximation Variational method Ritz method n-body problems Barnes–Hut simulation: Solves the n-body problem in an approximate way that has the order instead of as in a direct-sum simulation. Fast multipole method (FMM): speeds up the calculation of long-ranged forces Rainflow-counting algorithm: Reduces a complex stress history to a count of elementary stress-reversals for use in fatigue analysis Sweep and prune: a broad phase algorithm used during collision detection to limit the number of pairs of solids that need to be checked for collision VEGAS algorithm: a method for reducing error in Monte Carlo simulations Glauber dynamics: a method for simulating the Ising Model on a computer Statistics Algorithms for calculating variance: avoiding instability and numerical overflow Approximate counting algorithm: allows counting large number of events in a small register Bayesian statistics Nested sampling algorithm: a computational approach to the problem of comparing models in Bayesian statistics Clustering algorithms Average-linkage clustering: a simple agglomerative clustering algorithm Canopy clustering algorithm: an unsupervised pre-clustering algorithm related to the K-means algorithm Chinese whispers Complete-linkage clustering: a simple agglomerative clustering algorithm DBSCAN: a density based clustering algorithm Expectation-maximization algorithm Fuzzy clustering: a class of clustering algorithms where each point has a degree of belonging to clusters Fuzzy c-means FLAME clustering (Fuzzy clustering by Local Approximation of MEmberships): define clusters in the dense parts of a dataset and perform cluster assignment solely based on the neighborhood relationships among objects KHOPCA clustering algorithm: a local clustering algorithm, which produces hierarchical multi-hop clusters in static and mobile environments. 
k-means clustering: cluster objects based on attributes into partitions k-means++: a variation of this, using modified random seeds k-medoids: similar to k-means, but chooses datapoints or medoids as centers Linde–Buzo–Gray algorithm: a vector quantization algorithm to derive a good codebook Lloyd's algorithm (Voronoi iteration or relaxation): group data points into a given number of categories, a popular algorithm for k-means clustering OPTICS: a density based clustering algorithm with a visual evaluation method Single-linkage clustering: a simple agglomerative clustering algorithm SUBCLU: a subspace clustering algorithm Ward's method: an agglomerative clustering algorithm, extended to more general Lance–Williams algorithms WACA clustering algorithm: a local clustering algorithm with potentially multi-hop structures; for dynamic networks Estimation Theory Expectation-maximization algorithm A class of related algorithms for finding maximum likelihood estimates of parameters in probabilistic models Ordered subset expectation maximization (OSEM): used in medical imaging for positron emission tomography, single-photon emission computed tomography and X-ray computed tomography. Odds algorithm (Bruss algorithm) Optimal online search for distinguished value in sequential random input Kalman filter: estimate the state of a linear dynamic system from a series of noisy measurements False nearest neighbor algorithm (FNN) estimates fractal dimension Hidden Markov model Baum–Welch algorithm: computes maximum likelihood estimates and posterior mode estimates for the parameters of a hidden Markov model Forward-backward algorithm: a dynamic programming algorithm for computing the probability of a particular observation sequence Viterbi algorithm: find the most likely sequence of hidden states in a hidden Markov model Partial least squares regression: finds a linear model describing some predicted variables in terms of other observable variables Queuing theory Buzen's algorithm: an algorithm for calculating the normalization constant G(K) in the Gordon–Newell theorem RANSAC (an abbreviation for "RANdom SAmple Consensus"): an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers Scoring algorithm: is a form of Newton's method used to solve maximum likelihood equations numerically Yamartino method: calculate an approximation to the standard deviation σθ of wind direction θ during a single pass through the incoming data Ziggurat algorithm: generates random numbers from a non-uniform distribution Computer science Computer architecture Tomasulo algorithm: allows sequential instructions that would normally be stalled due to certain dependencies to execute non-sequentially Computer graphics Clipping Line clipping Cohen–Sutherland Cyrus–Beck Fast-clipping Liang–Barsky Nicholl–Lee–Nicholl Polygon clipping Sutherland–Hodgman Vatti Weiler–Atherton Contour lines and Isosurfaces Marching cubes: extract a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels) Marching squares: generates contour lines for a two-dimensional scalar field Marching tetrahedrons: an alternative to Marching cubes Discrete Green's theorem: is an algorithm for computing double integral over a generalized rectangular domain in constant time. 
It is a natural extension to the summed area table algorithm Flood fill: fills a connected region of a multi-dimensional array with a specified symbol Global illumination algorithms: Considers direct illumination and reflection from other objects. Ambient occlusion Beam tracing Cone tracing Image-based lighting Metropolis light transport Path tracing Photon mapping Radiosity Ray tracing Hidden-surface removal or visual surface determination Newell's algorithm: eliminate polygon cycles in the depth sorting required in hidden-surface removal Painter's algorithm: detects visible parts of a 3-dimensional scenery Scanline rendering: constructs an image by moving an imaginary line over the image Warnock algorithm Line drawing: graphical algorithm for approximating a line segment on discrete graphical media. Bresenham's line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses decision variables) DDA line algorithm: plots points of a 2-dimensional array to form a straight line between specified points Xiaolin Wu's line algorithm: algorithm for line antialiasing. Midpoint circle algorithm: an algorithm used to determine the points needed for drawing a circle Ramer–Douglas–Peucker algorithm: Given a 'curve' composed of line segments to find a curve not too dissimilar but that has fewer points Shading Gouraud shading: an algorithm to simulate the differing effects of light and colour across the surface of an object in 3D computer graphics Phong shading: an algorithm to interpolate surface normal-vectors for surface shading in 3D computer graphics Slerp (spherical linear interpolation): quaternion interpolation for the purpose of animating 3D rotation Summed area table (also known as an integral image): an algorithm for computing the sum of values in a rectangular subset of a grid in constant time Binary space partitioning Cryptography Asymmetric (public key) encryption: ElGamal Elliptic curve cryptography MAE1 NTRUEncrypt RSA Digital signatures (asymmetric authentication): DSA, and its variants: ECDSA and Deterministic ECDSA EdDSA (Ed25519) RSA Cryptographic hash functions (see also the section on message authentication codes): BLAKE MD5 – Note that there is now a method of generating collisions for MD5 RIPEMD-160 SHA-1 – Note that there is now a method of generating collisions for SHA-1 SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512) SHA-3 (SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128, SHAKE256) Tiger (TTH), usually used in Tiger tree hashes WHIRLPOOL Cryptographically secure pseudo-random number generators Blum Blum Shub – based on the hardness of factorization Fortuna, intended as an improvement on Yarrow algorithm Linear-feedback shift register (note: many LFSR-based algorithms are weak or have been broken) Yarrow algorithm Key exchange Diffie–Hellman key exchange Elliptic-curve Diffie–Hellman (ECDH) Key derivation functions, often used for password hashing and key stretching bcrypt PBKDF2 scrypt Argon2 Message authentication codes (symmetric authentication algorithms, which take a key as a parameter): HMAC: keyed-hash message authentication Poly1305 SipHash Secret sharing, secret splitting, key splitting, M of N algorithms Blakey's scheme Shamir's secret sharing Symmetric (secret key) encryption: Advanced Encryption Standard (AES), winner of NIST competition, also known as Rijndael Blowfish Twofish Threefish Data Encryption Standard (DES), sometimes DE Algorithm, winner of NBS selection competition, replaced by AES for most purposes IDEA RC4 
(cipher) Tiny Encryption Algorithm (TEA) Salsa20, and its updated variant ChaCha20 Post-quantum cryptography Proof-of-work algorithms Digital logic Boolean minimization Quine–McCluskey algorithm: also called as Q-M algorithm, programmable method for simplifying the Boolean equations Petrick's method: another algorithm for Boolean simplification Espresso heuristic logic minimizer: a fast algorithm for Boolean function minimization Machine learning and statistical classification Almeida–Pineda recurrent backpropagation: Adjust a matrix of synaptic weights to generate desired outputs given its inputs ALOPEX: a correlation-based machine-learning algorithm Association rule learning: discover interesting relations between variables, used in data mining Apriori algorithm Eclat algorithm FP-growth algorithm One-attribute rule Zero-attribute rule Boosting (meta-algorithm): Use many weak learners to boost effectiveness AdaBoost: adaptive boosting BrownBoost: a boosting algorithm that may be robust to noisy datasets LogitBoost: logistic regression boosting LPBoost: linear programming boosting Bootstrap aggregating (bagging): technique to improve stability and classification accuracy Computer Vision Grabcut based on Graph cuts Decision Trees C4.5 algorithm: an extension to ID3 ID3 algorithm (Iterative Dichotomiser 3): use heuristic to generate small decision trees Clustering: a class of unsupervised learning algorithms for grouping and bucketing related input vector k-nearest neighbors (k-NN): a non-parametric method for classifying objects based on closest training examples in the feature space Linde–Buzo–Gray algorithm: a vector quantization algorithm used to derive a good codebook Locality-sensitive hashing (LSH): a method of performing probabilistic dimension reduction of high-dimensional data Neural Network Backpropagation: a supervised learning method which requires a teacher that knows, or can calculate, the desired output for any given input Hopfield net: a Recurrent neural network in which all connections are symmetric Perceptron: the simplest kind of feedforward neural network: a linear classifier. Pulse-coupled neural networks (PCNN): Neural models proposed by modeling a cat's visual cortex and developed for high-performance biomimetic image processing. Radial basis function network: an artificial neural network that uses radial basis functions as activation functions Self-organizing map: an unsupervised network that produces a low-dimensional representation of the input space of the training samples Random forest: classify using many decision trees Reinforcement learning: Q-learning: learns an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter State–Action–Reward–State–Action (SARSA): learn a Markov decision process policy Temporal difference learning Relevance-Vector Machine (RVM): similar to SVM, but provides probabilistic classification Supervised learning: Learning by examples (labelled data-set split into training-set and test-set) Support Vector Machine (SVM): a set of methods which divide multidimensional data by finding a dividing hyperplane with the maximum margin between the two sets Structured SVM: allows training of a classifier for general structured output labels. 
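As a concrete illustration of one of the simpler classifiers above, here is a minimal sketch of the classic perceptron update rule; the toy dataset, learning rate, and epoch count are illustrative assumptions rather than part of any standard formulation:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Train a linear threshold unit; y must contain labels -1 or +1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on misclassified points (classic perceptron rule).
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data: label +1 if the second coordinate exceeds the first.
X = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 2.0], [0.5, 3.0]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # should reproduce y
```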
Winnow algorithm: related to the perceptron, but uses a multiplicative weight-update scheme Programming language theory C3 linearization: an algorithm used primarily to obtain a consistent linearization of a multiple inheritance hierarchy in object-oriented programming Chaitin's algorithm: a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric Hindley–Milner type inference algorithm Rete algorithm: an efficient pattern matching algorithm for implementing production rule systems Sethi-Ullman algorithm: generates optimal code for arithmetic expressions Parsing CYK algorithm: an O(n3) algorithm for parsing context-free grammars in Chomsky normal form Earley parser: another O(n3) algorithm for parsing any context-free grammar GLR parser: an algorithm for parsing any context-free grammar by Masaru Tomita. It is tuned for deterministic grammars, on which it performs almost linear time and O(n3) in worst case. Inside-outside algorithm: an O(n3) algorithm for re-estimating production probabilities in probabilistic context-free grammars LL parser: a relatively simple linear time parsing algorithm for a limited class of context-free grammars LR parser: A more complex linear time parsing algorithm for a larger class of context-free grammars. Variants: Canonical LR parser LALR (look-ahead LR) parser Operator-precedence parser SLR (Simple LR) parser Simple precedence parser Packrat parser: a linear time parsing algorithm supporting some context-free grammars and parsing expression grammars Recursive descent parser: a top-down parser suitable for LL(k) grammars Shunting-yard algorithm: converts an infix-notation math expression to postfix Pratt parser Lexical analysis Quantum algorithms Deutsch–Jozsa algorithm: criterion of balance for Boolean function Grover's algorithm: provides quadratic speedup for many search problems Shor's algorithm: provides exponential speedup (relative to currently known non-quantum algorithms) for factoring a number Simon's algorithm: provides a provably exponential speedup (relative to any non-quantum algorithm) for a black-box problem Theory of computation and automata Hopcroft's algorithm, Moore's algorithm, and Brzozowski's algorithm: algorithms for minimizing the number of states in a deterministic finite automaton Powerset construction: algorithm to convert nondeterministic automaton to deterministic automaton. 
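To make the last entry concrete, here is a minimal sketch of the subset (powerset) construction for an NFA without ε-transitions; the example automaton and its dictionary-based encoding are illustrative assumptions:

```python
def nfa_to_dfa(states, alphabet, delta, start, accepting):
    """Subset construction: delta maps (state, symbol) -> set of states.
    Epsilon transitions are omitted for brevity."""
    start_set = frozenset([start])
    dfa_states = {start_set}
    dfa_delta = {}
    worklist = [start_set]
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            target = frozenset(
                s2 for s in current for s2 in delta.get((s, symbol), set())
            )
            dfa_delta[(current, symbol)] = target
            if target not in dfa_states:
                dfa_states.add(target)
                worklist.append(target)
    dfa_accepting = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, start_set, dfa_accepting

# NFA accepting strings over {a, b} whose last symbol is 'b'.
delta = {
    ("q0", "a"): {"q0"},
    ("q0", "b"): {"q0", "q1"},
}
states, trans, start, accept = nfa_to_dfa(
    {"q0", "q1"}, ["a", "b"], delta, "q0", {"q1"}
)
print(len(states), "reachable DFA states")  # 2
```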
Tarski–Kuratowski algorithm: a non-deterministic algorithm which provides an upper bound for the complexity of formulas in the arithmetical hierarchy and analytical hierarchy Information theory and signal processing Coding theory Error detection and correction BCH Codes Berlekamp–Massey algorithm Peterson–Gorenstein–Zierler algorithm Reed–Solomon error correction BCJR algorithm: decoding of error correcting codes defined on trellises (principally convolutional codes) Forward error correction Gray code Hamming codes Hamming(7,4): a Hamming code that encodes 4 bits of data into 7 bits by adding 3 parity bits Hamming distance: sum number of positions which are different Hamming weight (population count): find the number of 1 bits in a binary word Redundancy checks Adler-32 Cyclic redundancy check Damm algorithm Fletcher's checksum Longitudinal redundancy check (LRC) Luhn algorithm: a method of validating identification numbers Luhn mod N algorithm: extension of Luhn to non-numeric characters Parity: simple/fast error detection technique Verhoeff algorithm Lossless compression algorithms Burrows–Wheeler transform: preprocessing useful for improving lossless compression Context tree weighting Delta encoding: aid to compression of data in which sequential data occurs frequently Dynamic Markov compression: Compression using predictive arithmetic coding Dictionary coders Byte pair encoding (BPE) Deflate Lempel–Ziv LZ77 and LZ78 Lempel–Ziv Jeff Bonwick (LZJB) Lempel–Ziv–Markov chain algorithm (LZMA) Lempel–Ziv–Oberhumer (LZO): speed oriented Lempel–Ziv–Stac (LZS) Lempel–Ziv–Storer–Szymanski (LZSS) Lempel–Ziv–Welch (LZW) LZWL: syllable-based variant LZX Lempel–Ziv Ross Williams (LZRW) Entropy encoding: coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols Arithmetic coding: advanced entropy coding Range encoding: same as arithmetic coding, but looked at in a slightly different way Huffman coding: simple lossless compression taking advantage of relative character frequencies Adaptive Huffman coding: adaptive coding technique based on Huffman coding Package-merge algorithm: Optimizes Huffman coding subject to a length restriction on code strings Shannon–Fano coding Shannon–Fano–Elias coding: precursor to arithmetic encoding Entropy coding with known entropy characteristics Golomb coding: form of entropy coding that is optimal for alphabets following geometric distributions Rice coding: form of entropy coding that is optimal for alphabets following geometric distributions Truncated binary encoding Unary coding: code that represents a number n with n ones followed by a zero Universal codes: encodes positive integers into binary code words Elias delta, gamma, and omega coding Exponential-Golomb coding Fibonacci coding Levenshtein coding Fast Efficient & Lossless Image Compression System (FELICS): a lossless image compression algorithm Incremental encoding: delta encoding applied to sequences of strings Prediction by partial matching (PPM): an adaptive statistical data compression technique based on context modeling and prediction Run-length encoding: lossless data compression taking advantage of strings of repeated characters SEQUITUR algorithm: lossless compression by incremental grammar inference on a string Lossy compression algorithms 3Dc: a lossy data compression algorithm for normal maps Audio and Speech compression A-law algorithm: standard companding algorithm Code-excited linear prediction (CELP): low bit-rate speech compression Linear 
predictive coding (LPC): lossy compression by representing the spectral envelope of a digital signal of speech in compressed form Mu-law algorithm: standard analog signal compression or companding algorithm Warped Linear Predictive Coding (WLPC) Image compression Block Truncation Coding (BTC): a type of lossy image compression technique for greyscale images Embedded Zerotree Wavelet (EZW) Fast Cosine Transform algorithms (FCT algorithms): computes Discrete Cosine Transform (DCT) efficiently Fractal compression: method used to compress images using fractals Set Partitioning in Hierarchical Trees (SPIHT) Wavelet compression: form of data compression well suited for image compression (sometimes also video compression and audio compression) Transform coding: type of data compression for "natural" data like audio signals or photographic images Video compression Vector quantization: technique often used in lossy data compression Digital signal processing Adaptive-additive algorithm (AA algorithm): find the spatial frequency phase of an observed wave source Discrete Fourier transform: determines the frequencies contained in a (segment of a) signal Bluestein's FFT algorithm Bruun's FFT algorithm Cooley–Tukey FFT algorithm Fast Fourier transform Prime-factor FFT algorithm Rader's FFT algorithm Fast folding algorithm: an efficient algorithm for the detection of approximately periodic events within time series data Gerchberg–Saxton algorithm: Phase retrieval algorithm for optical planes Goertzel algorithm: identify a particular frequency component in a signal. Can be used for DTMF digit decoding. Karplus-Strong string synthesis: physical modelling synthesis to simulate the sound of a hammered or plucked string or some types of percussion Image processing Contrast Enhancement Histogram equalization: use histogram to improve image contrast Adaptive histogram equalization: histogram equalization which adapts to local changes in contrast Connected-component labeling: find and label disjoint regions Dithering and half-toning Error diffusion Floyd–Steinberg dithering Ordered dithering Riemersma dithering Elser difference-map algorithm: a search algorithm for general constraint satisfaction problems. Originally used for X-Ray diffraction microscopy Feature detection Canny edge detector: detect a wide range of edges in images Generalised Hough transform Hough transform Marr–Hildreth algorithm: an early edge detection algorithm SIFT (Scale-invariant feature transform): is an algorithm to detect and describe local features in images. : is a robust local feature detector, first presented by Herbert Bay et al. in 2006, that can be used in computer vision tasks like object recognition or 3D reconstruction. It is partly inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT and claimed by its authors to be more robust against different image transformations than SIFT. Richardson–Lucy deconvolution: image de-blurring algorithm Blind deconvolution: image de-blurring algorithm when point spread function is unknown. 
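Many of the signal-processing entries above are short enough to sketch directly; for example, the Goertzel algorithm listed earlier can be written as follows (the block length and test tone are illustrative assumptions):

```python
import math

def goertzel_power(samples, k):
    """Power of DFT bin k for a real-valued block of len(samples) samples."""
    n = len(samples)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 expressed through the final two state values.
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# A pure tone landing exactly on bin 5 of a 64-sample block.
n, k = 64, 5
tone = [math.sin(2.0 * math.pi * k * i / n) for i in range(n)]
print(goertzel_power(tone, k) > goertzel_power(tone, k + 1))  # True
```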
Median filtering Seam carving: content-aware image resizing algorithm Segmentation: partition a digital image into two or more regions GrowCut algorithm: an interactive segmentation algorithm Random walker algorithm Region growing Watershed transformation: a class of algorithms based on the watershed analogy Software engineering Cache algorithms CHS conversion: converting between disk addressing systems Double dabble: convert binary numbers to BCD Hash function: convert a large, possibly variable-sized amount of data into a small datum, usually a single integer that may serve as an index into an array Fowler–Noll–Vo hash function: fast with low collision rate Pearson hashing: computes 8-bit value only, optimized for 8-bit computers Zobrist hashing: used in the implementation of transposition tables Unicode collation algorithm Xor swap algorithm: swaps the values of two variables without using a buffer Database algorithms Algorithms for Recovery and Isolation Exploiting Semantics (ARIES): transaction recovery Join algorithms Block nested loop Hash join Nested loop join Sort-Merge Join The Chase Distributed systems algorithms Clock synchronization Berkeley algorithm Cristian's algorithm Intersection algorithm Marzullo's algorithm Consensus (computer science): agreeing on a single value or history among unreliable processors Chandra–Toueg consensus algorithm Paxos algorithm Raft (computer science) Detection of Process Termination Dijkstra-Scholten algorithm Huang's algorithm Lamport ordering: a partial ordering of events based on the happened-before relation Leader election: a method for dynamically selecting a coordinator Bully algorithm Mutual exclusion Lamport's Distributed Mutual Exclusion Algorithm Naimi-Trehel's log(n) Algorithm Maekawa's Algorithm Raymond's Algorithm Ricart–Agrawala Algorithm Snapshot algorithm: record a consistent global state for an asynchronous system Chandy–Lamport algorithm Vector clocks: generate a partial ordering of events in a distributed system and detect causality violations Memory allocation and deallocation algorithms Buddy memory allocation: an algorithm to allocate memory such with less fragmentation Garbage collectors Cheney's algorithm: an improvement on the Semi-space collector Generational garbage collector: Fast garbage collectors that segregate memory by age Mark-compact algorithm: a combination of the mark-sweep algorithm and Cheney's copying algorithm Mark and sweep Semi-space collector: an early copying collector Reference counting Networking Karn's algorithm: addresses the problem of getting accurate estimates of the round-trip time for messages when using TCP Luleå algorithm: a technique for storing and searching internet routing tables efficiently Network congestion Exponential backoff Nagle's algorithm: improve the efficiency of TCP/IP networks by coalescing packets Truncated binary exponential backoff Operating systems algorithms Banker's algorithm: algorithm used for deadlock avoidance Page replacement algorithms: for selecting the victim page under low memory conditions Adaptive replacement cache: better performance than LRU Clock with Adaptive Replacement (CAR): a page replacement algorithm with performance comparable to adaptive replacement cache Process synchronization Dekker's algorithm Lamport's Bakery algorithm Peterson's algorithm Scheduling Earliest deadline first scheduling Fair-share scheduling Least slack time scheduling List scheduling Multi level feedback queue Rate-monotonic scheduling Round-robin scheduling Shortest job 
next Shortest remaining time Top-nodes algorithm: resource calendar management I/O scheduling Disk scheduling Elevator algorithm: Disk scheduling algorithm that works like an elevator. Shortest seek first: Disk scheduling algorithm to reduce seek time. See also List of data structures List of machine learning algorithms List of pathfinding algorithms List of algorithm general topics List of terms relating to algorithms and data structures Heuristic References Algorithms
List of algorithms
[ "Mathematics" ]
11,291
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
18,610
https://en.wikipedia.org/wiki/Laplace%20transform
In mathematics, the Laplace transform, named after Pierre-Simon Laplace, is an integral transform that converts a function of a real variable (usually $t$, in the time domain) to a function of a complex variable $s$ (in the complex-valued frequency domain, also known as s-domain, or s-plane). The transform is useful for converting differentiation and integration in the time domain into much easier multiplication and division in the Laplace domain (analogous to how logarithms are useful for simplifying multiplication and division into addition and subtraction). This gives the transform many applications in science and engineering, mostly as a tool for solving linear differential equations and dynamical systems by simplifying ordinary differential equations and integral equations into algebraic polynomial equations, and by simplifying convolution into multiplication. Once solved, the inverse Laplace transform reverts to the original domain. The Laplace transform is defined (for suitable functions $f$) by the integral $\mathcal{L}\{f\}(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt$, where $s$ is a complex number. It is related to many other transforms, most notably the Fourier transform and the Mellin transform. Formally, the Laplace transform is converted into a Fourier transform by the substitution $s = i\omega$, where $\omega$ is real. However, unlike the Fourier transform, which gives the decomposition of a function into its components in each frequency, the Laplace transform of a function with suitable decay is an analytic function, and so has a convergent power series, the coefficients of which give the decomposition of a function into its moments. Also unlike the Fourier transform, when regarded in this way as an analytic function, the techniques of complex analysis, and especially contour integrals, can be used for calculations. History The Laplace transform is named after mathematician and astronomer Pierre-Simon, Marquis de Laplace, who used a similar transform in his work on probability theory. Laplace wrote extensively about the use of generating functions (1814), and the integral form of the Laplace transform evolved naturally as a result. Laplace's use of generating functions was similar to what is now known as the z-transform, and he gave little attention to the continuous variable case which was discussed by Niels Henrik Abel. From 1744, Leonhard Euler investigated integrals of the form as solutions of differential equations, introducing in particular the gamma function. Joseph-Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form which resembles a Laplace transform. These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations. However, in 1785, Laplace took the critical step forward when, rather than simply looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power. Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, because those solutions were periodic. 
In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space. In 1821, Cauchy developed an operational calculus for the Laplace transform that could be used to study linear differential equations in much the same way the transform is now used in basic engineering. This method was popularized, and perhaps rediscovered, by Oliver Heaviside around the turn of the century. Bernhard Riemann used the Laplace transform in his 1859 paper On the Number of Primes Less Than a Given Magnitude, in which he also developed the inversion theorem. Riemann used the Laplace transform to develop the functional equation of the Riemann zeta function, and this method is still used to related the modular transformation law of the Jacobi theta function, which is simple to prove via Poisson summation, to the functional equation. Hjalmar Mellin was among the first to study the Laplace transform, rigorously in the Karl Weierstrass school of analysis, and apply it to the study of differential equations and special functions, at the turn of the 20th century. At around the same time, Heaviside was busy with his operational calculus. Thomas Joannes Stieltjes considered a generalization of the Laplace transform connected to his work on moments. Other contributors in this time period included Mathias Lerch, Oliver Heaviside, and Thomas Bromwich. In 1934, Raymond Paley and Norbert Wiener published the important work Fourier transforms in the complex domain, about what is now called the Laplace transform (see below). Also during the 30s, the Laplace transform was instrumental in G H Hardy and John Edensor Littlewood's study of tauberian theorems, and this application was later expounded on by Widder (1941), who developed other aspects of the theory such as a new method for inversion. Edward Charles Titchmarsh wrote the influential Introduction to the theory of the Fourier integral (1937). The current widespread use of the transform (mainly in engineering) came about during and soon after World War II, replacing the earlier Heaviside operational calculus. The advantages of the Laplace transform had been emphasized by Gustav Doetsch, to whom the name Laplace transform is apparently due. Formal definition The Laplace transform of a function , defined for all real numbers , is the function , which is a unilateral transform defined by where s is a complex frequency-domain parameter with real numbers and . An alternate notation for the Laplace transform is instead of , often written as in an abuse of notation. The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that must be locally integrable on . For locally integrable functions that decay at infinity or are of exponential type (), the integral can be understood to be a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at . Still more generally, the integral can be understood in a weak sense, and this is dealt with below. One can define the Laplace transform of a finite Borel measure by the Lebesgue integral An important special case is where is a probability measure, for example, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a probability density function . 
In that case, to avoid potential confusion, one often writes where the lower limit of is shorthand notation for This limit emphasizes that any point mass located at is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform. Bilateral Laplace transform When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is usually intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function. The bilateral Laplace transform is defined as follows: An alternate notation for the bilateral Laplace transform is , instead of . Inverse Laplace transform Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another in many other function spaces as well, although there is usually no easy characterization of the range. Typical function spaces in which this is true include the spaces of bounded continuous functions, the space , or more generally tempered distributions on . The Laplace transform is also defined and injective for suitable spaces of tempered distributions. In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence. The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula): where is a real number so that the contour path of integration is in the region of convergence of . In most applications, the contour can be closed, allowing the use of the residue theorem. An alternative formula for the inverse Laplace transform is given by Post's inversion formula. The limit here is interpreted in the weak-* topology. In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table and construct the inverse by inspection. Probability theory In pure and applied probability, the Laplace transform is defined as an expected value. If is a random variable with probability density function , then the Laplace transform of is given by the expectation where is the expectation of random variable . By convention, this is referred to as the Laplace transform of the random variable itself. Here, replacing by gives the moment generating function of . The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory. Of particular use is the ability to recover the cumulative distribution function of a continuous random variable by means of the Laplace transform as follows: Algebraic construction The Laplace transform can be alternatively defined in a purely algebraic manner by applying a field of fractions construction to the convolution ring of functions on the positive half-line. 
The resulting space of abstract operators is exactly equivalent to Laplace space, but in this construction the forward and reverse transforms never need to be explicitly defined (avoiding the related difficulties with proving convergence). Region of convergence If is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform of converges provided that the limit exists. The Laplace transform converges absolutely if the integral exists as a proper Lebesgue integral. The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not in the latter sense. The set of values for which converges absolutely is either of the form or , where is an extended real constant with (a consequence of the dominated convergence theorem). The constant is known as the abscissa of absolute convergence, and depends on the growth behavior of . Analogously, the two-sided transform converges absolutely in a strip of the form , and possibly including the lines or . The subset of values of for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini's theorem and Morera's theorem. Similarly, the set of values for which converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at , then it automatically converges for all with . Therefore, the region of convergence is a half-plane of the form , possibly including some points of the boundary line . In the region of convergence , the Laplace transform of can be expressed by integrating by parts as the integral That is, can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. In particular, it is analytic. There are several Paley–Wiener theorems concerning the relationship between the decay properties of , and the properties of the Laplace transform within the region of convergence. In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region . As a result, LTI systems are stable, provided that the poles of the Laplace transform of the impulse response function have negative real part. This ROC is used in knowing about the causality and stability of a system. Properties and theorems The Laplace transform's key property is that it converts differentiation and integration in the time domain into multiplication and division by in the Laplace domain. Thus, the Laplace variable is also known as an operator variable in the Laplace domain: either the derivative operator or (for the integration operator. Given the functions and , and their respective Laplace transforms and , the following table is a list of properties of unilateral Laplace transform: Initial value theorem Final value theorem , if all poles of are in the left half-plane. The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions (or other difficult algebra). 
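For reference, the two value theorems just named read, in the notation of this section (standard statements, with a worked example chosen purely for illustration):

```latex
% Initial value theorem
f(0^{+}) = \lim_{s \to \infty} s\,F(s)

% Final value theorem
% (valid when every pole of sF(s) lies in the open left half-plane)
\lim_{t \to \infty} f(t) = \lim_{s \to 0} s\,F(s)

% Worked example: F(s) = \frac{1}{s(s+a)},\; a > 0, gives
% \lim_{t \to \infty} f(t) = \lim_{s \to 0} \frac{1}{s+a} = \frac{1}{a},
% consistent with the known inverse f(t) = \frac{1 - e^{-at}}{a}.
```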
If has a pole in the right-hand plane or poles on the imaginary axis (e.g., if or ), then the behaviour of this formula is undefined. Relation to power series The Laplace transform can be viewed as a continuous analogue of a power series. If is a discrete function of a positive integer , then the power series associated to is the series where is a real variable (see Z-transform). Replacing summation over with integration over , a continuous version of the power series becomes where the discrete function is replaced by the continuous one . Changing the base of the power from to gives For this to converge for, say, all bounded functions , it is necessary to require that . Making the substitution gives just the Laplace transform: In other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameter is replaced by the continuous parameter , and is replaced by . Relation to moments The quantities are the moments of the function . If the first moments of converge absolutely, then by repeated differentiation under the integral, This is of special significance in probability theory, where the moments of a random variable are given by the expectation values . Then, the relation holds Transform of a function's derivative It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows: yielding and in the bilateral case, The general result where denotes the th derivative of , can then be established with an inductive argument. Evaluating integrals over the positive real axis A useful property of the Laplace transform is the following: under suitable assumptions on the behaviour of in a right neighbourhood of and on the decay rate of in a left neighbourhood of . The above formula is a variation of integration by parts, with the operators and being replaced by and . Let us prove the equivalent formulation: By plugging in the left-hand side turns into: but assuming Fubini's theorem holds, by reversing the order of integration we get the wanted right-hand side. This method can be used to compute integrals that would otherwise be difficult to compute using elementary methods of real calculus. For example, Relationship to other transforms Laplace–Stieltjes transform The (unilateral) Laplace–Stieltjes transform of a function is defined by the Lebesgue–Stieltjes integral The function is assumed to be of bounded variation. If is the antiderivative of : then the Laplace–Stieltjes transform of and the Laplace transform of coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to . So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function. Fourier transform Let be a complex-valued Lebesgue integrable function supported on , and let be its Laplace transform. Then, within the region of convergence, we have which is the Fourier transform of the function . Indeed, the Fourier transform is a special case (under certain conditions) of the bilateral Laplace transform. The main difference is that the Fourier transform of a function is a complex function of a real variable (frequency), the Laplace transform of a function is a complex function of a complex variable. 
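Written out for a function supported on the non-negative reals, and with one common sign convention for the Fourier transform (other conventions shift factors of $2\pi$), the special case reads:

```latex
% Fourier convention assumed here:
% \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
\hat{f}(\omega) \;=\; \mathcal{L}\{f\}(s)\big|_{s = i\omega}
\;=\; \int_{0}^{\infty} f(t)\, e^{-i\omega t}\, dt
% valid when the region of convergence of \mathcal{L}\{f\} contains the
% imaginary axis \operatorname{Re}(s) = 0.
```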
The Laplace transform is usually restricted to transformation of functions of with . A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable . Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a linear superposition of moments of the function. This perspective has applications in probability theory. Formally, the Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument when the condition explained below is fulfilled, This convention of the Fourier transform ( in ) requires a factor of on the inverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system. The above relation is valid as stated if and only if the region of convergence (ROC) of contains the imaginary axis, . For example, the function has a Laplace transform whose ROC is . As is a pole of , substituting in does not yield the Fourier transform of , which contains terms proportional to the Dirac delta functions . However, a relation of the form holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems. Mellin transform The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform we set we get a two-sided Laplace transform. Z-transform The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of where is the sampling interval (in units of time e.g., seconds) and is the sampling rate (in samples per second or hertz). Let be a sampling impulse train (also called a Dirac comb) and be the sampled representation of the continuous-time The Laplace transform of the sampled signal is This is the precise definition of the unilateral Z-transform of the discrete function with the substitution of . Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal, The similarity between the Z- and Laplace transforms is expanded upon in the theory of time scale calculus. Borel transform The integral form of the Borel transform is a special case of the Laplace transform for an entire function of exponential type, meaning that for some constants and . The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined. Fundamental relationships Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theory of the Laplace-, Fourier-, Mellin-, and Z-transforms are at bottom the same subject. 
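These connections can be summarized by the substitutions involved; the forms below follow one standard set of conventions and are meant only as a compact reminder:

```latex
% Fourier: evaluate the bilateral Laplace transform \mathcal{B} on the imaginary axis
\hat{f}(\omega) = \mathcal{B}\{f\}(i\omega)

% Mellin: the change of variables x = e^{-t} turns
% \mathcal{M}\{\varphi\}(s) = \int_{0}^{\infty} x^{s-1}\varphi(x)\,dx
% into a two-sided Laplace transform of \varphi(e^{-t}).

% Z-transform: sample with period T and substitute z = e^{sT}, so that
% X(z)\big|_{z = e^{sT}} = \mathcal{L}\{x_{\text{sampled}}\}(s).
```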
However, a different point of view and different characteristic problems are associated with each of these four major integral transforms. Table of selected Laplace transforms The following table provides Laplace transforms for many common functions of a single variable. For definitions and explanations, see the Explanatory Notes at the end of the table. Because the Laplace transform is a linear operator, The Laplace transform of a sum is the sum of Laplace transforms of each term. The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function. Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly. The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, . The entries of the table that involve a time delay are required to be causal (meaning that ). A causal system is a system where the impulse response is zero for all time prior to . In general, the region of convergence for causal systems is not the same as that of anticausal systems. s-domain equivalent circuits and impedances The Laplace transform is often used in circuit analysis, and simple conversions to the -domain of circuit elements can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances. Here is a summary of equivalents: Note that the resistor is exactly the same in the time domain and the -domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the -domain account for that. The equivalents for current and voltage sources are simply derived from the transformations in the table above. Examples and applications The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. For more information, see control theory. The Laplace transform is invertible on a large class of functions. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications. The Laplace transform can also be used to solve differential equations and is used extensively in mechanical engineering and electrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; and the resulting operational calculus is credited as the Heaviside calculus. Evaluating improper integrals Let . 
Then (see the table above) From which one gets: In the limit , one gets provided that the interchange of limits can be justified. This is often possible as a consequence of the final value theorem. Even when the interchange cannot be justified the calculation can be suggestive. For example, with , proceeding formally one has The validity of this identity can be proved by other means. It is an example of a Frullani integral. Another example is Dirichlet integral. Complex impedance of a capacitor In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and rate of change in the electrical potential (with equations as for the SI unit system). Symbolically, this is expressed by the differential equation where is the capacitance of the capacitor, is the electric current through the capacitor as a function of time, and is the voltage across the terminals of the capacitor, also as a function of time. Taking the Laplace transform of this equation, we obtain where and Solving for we have The definition of the complex impedance (in ohms) is the ratio of the complex voltage divided by the complex current while holding the initial state at zero: Using this definition and the previous equation, we find: which is the correct expression for the complex impedance of a capacitor. In addition, the Laplace transform has large applications in control theory. Impulse response Consider a linear time-invariant system with transfer function The impulse response is simply the inverse Laplace transform of this transfer function: Partial fraction expansion To evaluate this inverse transform, we begin by expanding using the method of partial fraction expansion, The unknown constants and are the residues located at the corresponding poles of the transfer function. Each residue represents the relative contribution of that singularity to the transfer function's overall shape. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue , we multiply both sides of the equation by to get Then by letting , the contribution from vanishes and all that is left is Similarly, the residue is given by Note that and so the substitution of and into the expanded expression for gives Finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of to obtain which is the impulse response of the system. Convolution The same result can be achieved using the convolution property as if the system is a series of filters with transfer functions and . That is, the inverse of is Phase delay Starting with the Laplace transform, we find the inverse by first rearranging terms in the fraction: We are now able to take the inverse Laplace transform of our terms: This is just the sine of the sum of the arguments, yielding: We can apply similar logic to find that Statistical mechanics In statistical mechanics, the Laplace transform of the density of states defines the partition function. 
That is, the canonical partition function is given by and the inverse is given by Spatial (not time) structure from astronomical spectrum The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy which provides some information on the spatial distribution of matter of an astronomical source of radiofrequency thermal radiation too distant to resolve as more than a point, given its flux density spectrum, rather than relating the time domain with the spectrum (frequency domain). Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possible model of the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum. When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement. Birth and death processes Consider a random walk, with steps occurring with probabilities . Suppose also that the time step is an Poisson process, with parameter . Then the probability of the walk being at the lattice point at time is This leads to a system of integral equations (or equivalently a system of differential equations). However, because it is a system of convolution equations, the Laplace transform converts it into a system of linear equations for namely: which may now be solved by standard methods. Tauberian theory The Laplace transform of the measure on is given by It is intuitively clear that, for small , the exponentially decaying integrand will become more sensitive to the concentration of the measure on larger subsets of the domain. To make this more precise, introduce the distribution function: Formally, we expect a limit of the following kind: Tauberian theorems are theorems relating the asymptotics of the Laplace transform, as , to those of the distribution of as . They are thus of importance in asymptotic formulae of probability and statistics, where often the spectral side has asymptotics that are simpler to infer. Two tauberian theorems of note are the Hardy–Littlewood tauberian theorem and the Wiener tauberian theorem. The Wiener theorem generalizes the Ikehara tauberian theorem, which is the following statement: Let A(x) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that converges for ℜ(s) > 1 to the function ƒ(s) and that, for some non-negative number c, has an extension as a continuous function for ℜ(s) ≥ 1. Then the limit as x goes to infinity of e−x A(x) is equal to c. This statement can be applied in particular to the logarithmic derivative of Riemann zeta function, and thus provides an extremely short way to prove the prime number theorem. See also Analog signal processing Bernstein's theorem on monotone functions Continuous-repayment mortgage Hamburger moment problem Hardy–Littlewood Tauberian theorem Laplace–Carson transform Moment-generating function Nonlocal operator Post's inversion formula Signal-flow graph Transfer function Notes References Modern Historical , Chapters 3–5 Further reading . Mathews, Jon; Walker, Robert L. (1970), Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin, - See Chapter VI. The Laplace transform. J.A.C.Weidman and Bengt Fornberg: "Fully numerical Laplace transform methods", Numerical Algorithms, vol.92 (2023), pp. 985–1006. 
https://doi.org/10.1007/s11075-022-01368-x . External links Online Computation of the transform or inverse transform, wims.unice.fr Tables of Integral Transforms at EqWorld: The World of Mathematical Equations. Good explanations of the initial and final value theorems Laplace Transforms at MathPages Computational Knowledge Engine allows to easily calculate Laplace Transforms and its inverse Transform. Laplace Calculator to calculate Laplace Transforms online easily. Code to visualize Laplace Transforms and many example videos. Differential equations Fourier analysis Mathematical physics Integral transforms
Laplace transform
[ "Physics", "Mathematics" ]
6,322
[ "Applied mathematics", "Theoretical physics", "Mathematical objects", "Differential equations", "Equations", "Mathematical physics" ]
18,631
https://en.wikipedia.org/wiki/Lorentz%20force
In physics, specifically in electromagnetism, the Lorentz force law is the combination of electric and magnetic force on a point charge due to electromagnetic fields. The Lorentz force, on the other hand, is a physical effect that occurs in the vicinity of electrically neutral, current-carrying conductors, causing moving electrical charges to experience a magnetic force. The Lorentz force law states that a particle of charge q moving with a velocity v in an electric field E and a magnetic field B experiences a force (in SI units) of F = q(E + v × B). It says that the electromagnetic force on a charge is a combination of (1) a force in the direction of the electric field (proportional to the magnitude of the field and the quantity of charge), and (2) a force at right angles to both the magnetic field and the velocity of the charge (proportional to the magnitude of the field, the charge, and the velocity). Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a moving charged particle. Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force. Lorentz force law as the definition of E and B In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement: The electromagnetic force on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form: F = q(E + v × B). This is valid, even for particles approaching the speed of light (that is, as the magnitude of v approaches c). So the two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time with respect to what force a test charge would receive regardless of whether a charge is present to experience the force. Physical interpretation of the Lorentz force Coulomb's law is only valid for point charges at rest. In fact, the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity. For small relative velocities and very small accelerations, instead of the Coulomb force, the Weber force can be applied. The sum of the Weber forces of all charge carriers in a closed DC loop on a single test charge produces – regardless of the shape of the current loop – the Lorentz force. The interpretation of magnetism by means of a modified Coulomb law was first proposed by Carl Friedrich Gauss. In 1835, Gauss assumed that each segment of a DC loop contains an equal number of negative and positive point charges that move at different speeds. If Coulomb's law were completely correct, no force should act between any two short segments of such current loops. However, around 1825, André-Marie Ampère demonstrated experimentally that this is not the case. Ampère also formulated a force law. 
Based on this law, Gauss concluded that the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity. The Weber force is a central force and complies with Newton's third law. This demonstrates not only the conservation of momentum but also that the conservation of energy and the conservation of angular momentum apply. Weber electrodynamics is only a quasistatic approximation, i.e. it should not be used for higher velocities and accelerations. However, the Weber force illustrates that the Lorentz force can be traced back to central forces between numerous point-like charge carriers. Equation Charged particle The force acting on a particle of electric charge with instantaneous velocity , due to an external electric field and magnetic field , is given by (SI definition of quantities): where is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have: In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as: in which is the position vector of the charged particle, is time, and the overdot is a time derivative. A positively charged particle will be accelerated in the same linear orientation as the field, but will curve perpendicularly to both the instantaneous velocity vector and the field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of and are then curled to point in the direction of , then the extended thumb will point in the direction of ). The term is called the electric force, while the term is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: In what follows, the term Lorentz force will refer to the expression for the total force. The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force. The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle. Continuous charge distribution For a continuous charge distribution in motion, the Lorentz force equation becomes: where is the force on a small piece of the charge distribution with charge . If both sides of this equation are divided by the volume of this small piece of the charge distribution , the result is: where is the force density (force per unit volume) and is the charge density (charge per unit volume). 
Next, the current density corresponding to the motion of the charge continuum is so the continuous analogue to the equation is The total force is the volume integral over the charge distribution: By eliminating and , using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor , in turn this can be combined with the Poynting vector to obtain the electromagnetic stress–energy tensor T used in general relativity. In terms of and , another way to write the Lorentz force (per unit volume) is where is the speed of light and ∇· denotes the divergence of a tensor field. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time per unit distance) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details. The density of power associated with the Lorentz force in a material medium is If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is where: is the density of free charge; is the polarization density; is the density of free current; and is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is Formulation in the Gaussian system The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI, which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead where c is the speed of light. Although this equation looks slightly different, it is equivalent, since one has the following relations: where is the vacuum permittivity and the vacuum permeability. In practice, the subscripts "G" and "SI" are omitted, and the used convention (and unit) must be determined from context. History Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles, by Johann Tobias Mayer and others in 1760, and electrically charged objects, by Henry Cavendish in 1762, obeyed an inverse-square law. However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 when Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields. The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday, particularly his idea of lines of force, later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell. 
From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays, Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current, included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) had fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. Finally, in 1895, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name. Trajectories of particles due to the Lorentz force In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation. Significance of the Lorentz force While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another. In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. 
Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory). Force on a current-carrying wire When a wire carrying an electric current is placed in a magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field: where is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current . If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire , then adding up all these forces by integration. This results in the same formal expression, but should now be understood as the vector connecting the end points of the curved wire with direction from starting to end point of conventional current. Usually, there will also be a net torque. If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current is given by integration along the wire, One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's magnetic field. EMF The magnetic force () component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire. In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force () term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF, as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations). Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa. 
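As a minimal numerical sketch of two formulas from the preceding sections, the force on a point charge and the force on a straight current-carrying wire in a homogeneous field, the lines below evaluate both with explicit cross products. All charge, field, and geometry values are assumptions chosen only for illustration.

import numpy as np

# Point charge: F = q (E + v x B), SI units (assumed values)
q = 1.6e-19                       # charge in coulombs
v = np.array([1.0e5, 0.0, 0.0])   # velocity in m/s
E = np.array([0.0, 1.0e3, 0.0])   # electric field in V/m
B = np.array([0.0, 0.0, 0.5])     # magnetic field in tesla
F_point = q * (E + np.cross(v, B))
print(F_point)

# Straight wire segment in a uniform field: F = I L x B (the Laplace force)
I = 2.0                           # conventional current in amperes
L = np.array([0.0, 0.1, 0.0])     # 10 cm length vector along the wire, in metres
F_wire = I * np.cross(L, B)
print(F_wire)

In both cases the magnetic contribution comes out perpendicular to the velocity or to the wire, as required by the cross product.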
Lorentz force and Faraday's law of induction Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is: where is the magnetic flux through the loop, is the magnetic field, is a surface bounded by the closed contour , at time , is an infinitesimal vector area element of (magnitude is the area of an infinitesimal patch of surface, direction is orthogonal to that surface patch). The sign of the EMF is determined by Lenz's law. Note that this is valid not only for a stationary wire but also for a moving wire. From Faraday's law of induction (that is valid for a moving wire, for instance in a motor) and the Maxwell Equations, the Lorentz Force can be deduced. The reverse is also true: the Lorentz force and the Maxwell Equations can be used to derive the Faraday Law. Let be the moving wire, moving together without rotation and with constant velocity and be the internal surface of the wire. The EMF around the closed path is given by: where is the electric field and is an infinitesimal vector element of the contour . NB: Both and have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. The above result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called here the Maxwell–Faraday equation: The Maxwell–Faraday equation also can be written in an integral form using the Kelvin–Stokes theorem. So we have the Maxwell–Faraday equation: and the Faraday Law, The two are equivalent if the wire is not moving. Using the Leibniz integral rule and that , results in, and using the Maxwell–Faraday equation, since this is valid for any wire position it implies that, Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in the process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law. If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux linking the loop can change in several ways. For example, if the B-field varies with position, and the loop moves to a location with different B-field, will change. Alternatively, if the loop changes orientation with respect to the B-field, the differential element will change because of the different angle between and , also changing . As a third example, if a portion of the circuit is swept through a uniform, time-independent B-field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in . Note that the Maxwell–Faraday equation implies that the electric field is non-conservative when the magnetic field varies in time: it is not expressible as the gradient of a scalar field and is not subject to the gradient theorem, since its curl is not zero. Lorentz force in terms of potentials The and fields can be replaced by the magnetic vector potential and (scalar) electrostatic potential by where is the gradient, is the divergence, and is the curl. 
The force becomes Using an identity for the triple product this can be rewritten as, (Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on not on thus, there is no need to use Feynman's subscript notation in the equation above). Using the chain rule, the total derivative of is: so that the above expression becomes: With , we can put the equation into the convenient Euler–Lagrange form where and Lorentz force and analytical mechanics The Lagrangian for a charged particle of mass and charge in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy, rather than the force exerted on it. The classical expression is given by: where and are the potential fields as above. The quantity can be thought of as a velocity-dependent potential function. Using Lagrange's equations, the equation for the Lorentz force given above can be obtained again. The potential energy depends on the velocity of the particle, so the force is velocity dependent and hence not conservative. The relativistic Lagrangian is The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential. Relativistic form of the Lorentz force Covariant form of the Lorentz force Field tensor Using the metric signature , the Lorentz force for a charge can be written in covariant form: where is the four-momentum, defined as the proper time of the particle, the contravariant electromagnetic tensor and is the covariant 4-velocity of the particle, defined as: in which is the Lorentz factor. The fields are transformed to a frame moving with constant relative velocity by: where is the Lorentz transformation tensor. Translation to vector notation The component (x-component) of the force is Substituting the components of the covariant electromagnetic tensor F yields Using the components of covariant four-velocity yields The calculation for (force components in the and directions) yields similar results, so collecting the 3 equations into one: and since differentials in coordinate time and proper time are related by the Lorentz factor, we arrive at This is precisely the Lorentz force law; however, it is important to note that is the relativistic expression. Lorentz force in spacetime algebra (STA) The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic and magnetic fields , and an arbitrary time-direction, . This can be settled through spacetime algebra (or the geometric algebra of spacetime), a type of Clifford algebra defined on a pseudo-Euclidean space, as and is a spacetime bivector (an oriented plane segment, just like a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in spacetime planes) and rotations (rotations in space-space planes). The dot product with the vector pulls a vector (in the space algebra) from the translational part, while the wedge-product creates a trivector (in the space algebra) which is dual to a vector, namely the usual magnetic field vector. 
The relativistic velocity is given by the (time-like) changes in a time-position vector where (which shows our choice for the metric) and the velocity is The proper (invariant is an inadequate term because no transformation has been defined) form of the Lorentz force law is simply Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split like one can obtain the velocity, and fields as above yielding the usual expression. Lorentz force in general relativity In the general theory of relativity the equation of motion for a particle with mass and charge , moving in a space with metric tensor and electromagnetic field , is given as where ( is taken along the trajectory), and The equation can also be written as where is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as where is the covariant differential in general relativity (metric, torsion-free). Applications The Lorentz force occurs in many devices, including: Cyclotrons and other circular path particle accelerators Mass spectrometers Velocity Filters Magnetrons Lorentz force velocimetry In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices including: Electric motors Railguns Linear motors Loudspeakers Magnetoplasmadynamic thrusters Electrical generators Homopolar generators Linear alternators See also Hall effect Electromagnetism Gravitomagnetism Ampère's force law Hendrik Lorentz Maxwell's equations Formulation of Maxwell's equations in special relativity Moving magnet and conductor problem Abraham–Lorentz force Larmor formula Cyclotron radiation Magnetoresistance Scalar potential Helmholtz decomposition Guiding center Field line Coulomb's law Electromagnetic buoyancy Footnotes References The numbered references refer in part to the list immediately below. : volume 2. External links Lorentz force (demonstration) Interactive Java applet on the magnetic deflection of a particle beam in a homogeneous magnetic field by Wolfgang Bauer Physical phenomena Electromagnetism Maxwell's equations Hendrik Lorentz
Lorentz force
[ "Physics" ]
5,261
[ "Electromagnetism", "Physical phenomena", "Equations of physics", "Fundamental interactions", "Maxwell's equations" ]
18,644
https://en.wikipedia.org/wiki/Median%20lethal%20dose
In toxicology, the median lethal dose, LD50 (abbreviation for "lethal dose, 50%"), LC50 (lethal concentration, 50%) or LCt50 is a toxic unit that measures the lethal dose of a given substance. The value of LD50 for a substance is the dose required to kill half the members of a tested population after a specified test duration. LD50 figures are frequently used as a general indicator of a substance's acute toxicity. A lower LD50 is indicative of higher toxicity. The term LD50 is generally attributed to John William Trevan, who created the test in 1927. The term semilethal dose is occasionally used in the same sense, in particular with translations of foreign language text, but can also refer to a sublethal dose. LD50 is usually determined by tests on animals such as laboratory mice. In 2011, the U.S. Food and Drug Administration approved alternative methods to LD50 for testing the cosmetic drug Botox without animal tests. Conventions The LD50 is usually expressed as the mass of substance administered per unit mass of test subject, typically as milligrams of substance per kilogram of body mass, sometimes also stated as nanograms (suitable for botulinum), micrograms, or grams (suitable for paracetamol) per kilogram. Stating it this way allows the relative toxicity of different substances to be compared and normalizes for the variation in the size of the animals exposed (although toxicity does not always scale simply with body mass). For substances in the environment, such as poisonous vapors or substances in water that are toxic to fish, the concentration in the environment (per cubic metre or per litre) is used, giving a value of LC50. But in this case, the exposure time is important (see below). The choice of 50% lethality as a benchmark avoids the potential for ambiguity of making measurements in the extremes and reduces the amount of testing required. However, this also means that LD50 is not the lethal dose for all subjects; some may be killed by much less, while others survive doses far higher than the LD50. Measures such as "LD1" and "LD99" (dosage required to kill 1% or 99%, respectively, of the test population) are occasionally used for specific purposes. Lethal dosage often varies depending on the method of administration; for instance, many substances are less toxic when administered orally than when intravenously administered. For this reason, LD50 figures are often qualified with the mode of administration, e.g., "LD50 i.v." The related quantities LD50/30 or LD50/60 are used to refer to a dose that without treatment will be lethal to 50% of the population within (respectively) 30 or 60 days. These measures are used more commonly within radiation health physics, for ionizing radiation, as survival beyond 60 days usually results in recovery. A comparable measurement is LCt50, which relates to lethal dosage from exposure, where C is concentration and t is time. It is often expressed in terms of mg-min/m3. ICt50 is the dose that will cause incapacitation rather than death. These measures are commonly used to indicate the comparative efficacy of chemical warfare agents, and dosages are typically qualified by rates of breathing (e.g., resting = 10 L/min) for inhalation, or degree of clothing for skin penetration. The concept of Ct was first proposed by Fritz Haber and is sometimes referred to as Haber's law, which assumes that exposure to 1 minute of 100 mg/m3 is equivalent to 10 minutes of 10 mg/m3 (1 × 100 = 100, as does 10 × 10 = 100). 
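To make the Ct bookkeeping concrete, the short sketch below computes the toxic-load product assumed by Haber's law; the helper function is a hypothetical illustration using the same numbers as the example just given, not a standard toxicology routine.

def ct_product(concentration_mg_per_m3, minutes):
    # Toxic load under Haber's law: exposure concentration times exposure time
    return concentration_mg_per_m3 * minutes

print(ct_product(100, 1))    # 100 mg-min/m3
print(ct_product(10, 10))    # 100 mg-min/m3, treated as an equivalent exposure under Haber's law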
Some chemicals, such as hydrogen cyanide, are rapidly detoxified by the human body, and do not follow Haber's law. In these cases, the lethal concentration may be given simply as LC50 and qualified by a duration of exposure (e.g., 10 minutes). The material safety data sheets for toxic substances frequently use this form of the term even if the substance does follow Haber's law. For disease-causing organisms, there is also a measure known as the median infective dose and dosage. The median infective dose (ID50) is the number of organisms received by a person or test animal qualified by the route of administration (e.g., 1,200 org/man per oral). Because of the difficulties in counting actual organisms in a dose, infective doses may be expressed in terms of biological assay, such as the number of LD50s to some test animal. In biological warfare infective dosage is the number of infective doses per cubic metre of air times the number of minutes of exposure (e.g., ICt50 is 100 median doses - min/m3). Limitation As a measure of toxicity, LD50 is somewhat unreliable and results may vary greatly between testing facilities due to factors such as the genetic characteristics of the sample population, animal species tested, environmental factors and mode of administration. There can be wide variability between species as well; what is relatively safe for rats may very well be extremely toxic for humans (cf. paracetamol toxicity), and vice versa. For example, chocolate, comparatively harmless to humans, is known to be toxic to many animals. When used to test venom from venomous creatures, such as snakes, LD50 results may be misleading due to the physiological differences between mice, rats, and humans. Many venomous snakes are specialized predators on mice, and their venom may be adapted specifically to incapacitate mice; and mongooses may be exceptionally resistant. While most mammals have a very similar physiology, LD50 results may or may not have equal bearing upon every mammal species, such as humans, etc. Examples Note: Comparing substances (especially drugs) to each other by LD50 can be misleading in many cases due (in part) to differences in effective dose (ED50). Therefore, it is more useful to compare such substances by therapeutic index, which is simply the ratio of LD50 to ED50. The following examples are listed in reference to LD50 values, in descending order, and accompanied by LC50 values, {bracketed}, when appropriate. Poison scale The LD50 values have a very wide range. The botulinum toxin, as the most toxic substance known, has an LD50 value of 1 ng/kg, while the most non-toxic substance, water, has an LD50 value of more than 90 g/kg; a difference of about 1 in 100 billion, or 11 orders of magnitude. As with all measured values that differ by many orders of magnitude, a logarithmic view is advisable. Well-known examples are the indication of earthquake strength using the Richter scale, the pH value as a measure of the acidic or basic character of an aqueous solution, or loudness in decibels. In this case, the negative decimal logarithm of the LD50 value, standardized in kg per kg of body weight, is considered. The dimensionless value found can be entered in a toxin scale. Water as the baseline substance is nearly 1 on the negative logarithmic toxin scale. Procedures A number of procedures have been defined to derive the LD50. The earliest was the 1927 "conventional" procedure by Trevan, which requires 40 or more animals. 
The fixed-dose procedure, proposed in 1984, estimates a level of toxicity by feeding at defined doses and looking for signs of toxicity (without requiring death). The up-and-down procedure, proposed in 1985, yields an LD50 value while dosing only one animal at a time. See also Animal testing Reed-Muench method The dose makes the poison – the toxicology adage that high quantities of any substance are lethal Other measures of toxicity IDLH Certain safety factor Therapeutic index Protective index Median toxic dose (TD50) Lowest published lethal dose (LDLo) EC50 (half maximal effective concentration) IC50 (half maximal inhibitory concentration) Draize test Indicative limit value No-observed-adverse-effect level (NOAEL) Lowest-observed-adverse-effect level (LOAEL) Related measures TCID50 Tissue Culture Infective Dosage Plaque forming units (pfu) References Further reading External links Canadian Centre for Occupational Health and Safety Causes of death Animal testing Concentration indicators Mathematics in medicine Toxicology
Median lethal dose
[ "Chemistry", "Mathematics", "Environmental_science" ]
1,745
[ "Animal testing", "Mathematics in medicine", "Toxicology", "Applied mathematics" ]
18,669
https://en.wikipedia.org/wiki/Life%20expectancy
Human life expectancy is a statistical measure of the estimate of the average remaining years of life at a given age. The most commonly used measure is life expectancy at birth (LEB, or in demographic notation e0, where ex denotes the average life remaining at age x). This can be defined in two ways. Cohort LEB is the mean length of life of a birth cohort (in this case, all individuals born in a given year) and can be computed only for cohorts born so long ago that all their members have died. Period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth through death, to the mortality rates observed at a given year. National LEB figures reported by national agencies and international organizations for human populations are estimates of period LEB. Human remains from the early Bronze Age indicate an LEB of 24. In 2019, world LEB was 73.3. A combination of high infant mortality and deaths in young adulthood from accidents, epidemics, plagues, wars, and childbirth, before modern medicine was widely available, significantly lowers LEB. For example, a society with a LEB of 40 would have relatively few people dying at exactly 40: most will die before 30 or after 55. In populations with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity, LEB can be grossly misinterpreted, leading to the belief that a population with a low LEB would have a small proportion of older people. A different measure, such as life expectancy at age 5 (e5), can be used to exclude the effect of infant mortality to provide a simple measure of overall mortality rates other than in early childhood. For instance, in a society with a life expectancy of 30, it may nevertheless be common to have a 40-year remaining timespan at age 5 (but not a 60-year one). Aggregate population measures—such as the proportion of the population in various age groups—are also used alongside individual-based measures—such as formal life expectancy—when analyzing population structure and dynamics. Pre-modern societies had universally higher mortality rates and lower life expectancies at every age for both males and females. Life expectancy, longevity, and maximum lifespan are not synonymous. Longevity refers to the relatively long lifespan of some members of a population. Maximum lifespan is the age at death for the longest-lived individual of a species. Mathematically, life expectancy is denoted and is the mean number of years of life remaining at a given age , with a particular mortality. Because life expectancy is an average, a particular person may die many years before or after the expected survival. Life expectancy is also used in plant or animal ecology, and in life tables (also known as actuarial tables). The concept of life expectancy may also be used in the context of manufactured objects, though the related term shelf life is commonly used for consumer products, and the terms "mean time to breakdown" and "mean time between failures" are used in engineering. History The earliest documented work on life expectancy was done in the 1660s by John Graunt, Christiaan Huygens, and Lodewijck Huygens. Human patterns Maximum The longest verified lifespan for any human is that of Frenchwoman Jeanne Calment, who is verified as having lived to age 122 years, 164 days, between 21 February 1875 and 4 August 1997. 
This is referred to as the "maximum life span", which is the upper boundary of life, the maximum number of years any human is known to have lived. According to a study by biologists Bryan G. Hughes and Siegfried Hekimi, there is no evidence for a limit on human lifespan. However, this view has been questioned on the basis of error patterns. A theoretical study shows that the maximum life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years. Variation over time The following information is derived from the 1961 Encyclopædia Britannica and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender. Life expectancy at birth takes account of infant mortality and child mortality but not prenatal mortality. English life expectancy at birth averaged about 36 years in the 17th and 18th centuries, one of the highest levels in the world, although infant and child mortality remained higher than in later periods. Life expectancy was under 25 years in the early Colony of Virginia, and in seventeenth-century New England, about 40% died before reaching adulthood. During the Industrial Revolution, the life expectancy of children increased dramatically. Recorded deaths among children under the age of 5 years fell in London from 74.5% of the recorded births in 1730–49 to 31.8% in 1810–29, though this overstates mortality and its fall because of net immigration (hence more dying in the metropolis than were born there) and incomplete registration (particularly of births, and especially in the earlier period). English life expectancy at birth reached 41 years in the 1840s, 43 in the 1870s and 46 in the 1890s, though infant mortality remained at around 150 per thousand throughout this period. Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic, the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health. Regional variations There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health, medical care, and diet. Human beings are expected to live on average 60 years in Eswatini and 82.6 years in Japan. An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities, excellent public health, and a healthy diet. The World Health Organization announced that the COVID-19 pandemic reversed the trend of steady gain in life expectancy at birth. The pandemic wiped out nearly a decade of progress in improving life expectancy. Africa During the last 200 years, African countries have generally not had the same improvements in mortality rates that have been enjoyed by countries in Asia, Latin America, and Europe. This is most apparent in the impact of AIDS on many African countries. According to projections made by the United Nations in 2002, the life expectancy at birth for 2010–2015 (if HIV/AIDS did not exist) would have been: 70.7 years instead of 31.6 years, Botswana 69.9 years instead of 41.5 years, South Africa 70.5 years instead of 31.8 years, Zimbabwe Eastern Europe On average, eastern Europeans tend to live shorter lives than their western counterparts. 
For example, Spaniards from Madrid can expect to live to 85, but Bulgarians from the region of Severozapaden are predicted to live just past their 73rd birthday. This is in large part due to poor health habits, such as heavy smoking and high alcoholism in the region, and environmental factors, such as high air pollution. United States In 2022, the life expectancy was 77.5 in the United States, a decline from 2014, but an increase from 2021. In what has been described as a "life expectancy crisis", there were a total of 13 million "missing Americans" from 1980 to 2021, deaths that would have been averted if the country had had the standard mortality rate of "wealthy nations". The annual number of "missing Americans" has been increasing, with 622,534 in 2019 alone. Most excess deaths in the United States can largely be attributed to increasing obesity, alcoholism, drug overdoses, car accidents, suicides, and murders, with poor sleep, unhealthy diets, and loneliness being linked to most of them. Black Americans have generally shorter life expectancies than their White American counterparts. For example, white Americans in 2010 are expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, is the lowest it has been since 1975 at the latest, the greatest difference being 7.1 years in 1993. In contrast, Asian American women live the longest of all ethnic and gender groups in the United States, with a life expectancy of 85.8 years. The life expectancy of Hispanic Americans is 81.2 years. Japan In 2023, the life expectancy was 84.5 in Japan, 4.2 years above the OECD average, and one of the highest in the world. Japan's high life expectancy can largely be explained by its healthy diet, which is low in salt, fat, and red meat. For these reasons, Japan has a low obesity rate, and ultimately low mortality from heart disease and cancers. In cities Cities also experience a wide range of life expectancy based on neighborhood breakdowns. This is largely due to economic clustering and poverty conditions that tend to associate based on geographic location. Multi-generational poverty found in struggling neighborhoods also contributes. In American cities such as Cincinnati, the life expectancy gap between low-income and high-income neighborhoods reaches 20 years. Economic circumstances Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years higher than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas. In Glasgow, the disparity is amongst the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie, which is only away. A 2013 study found a pronounced relationship between economic inequality and life expectancy. However, in contrast, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression, and during recessions and depressions in general. The authors suggest that when people are working at a more extreme degree during prosperous economic times, they undergo more stress, exposure to pollution, and the likelihood of injury among other longevity-limiting factors. 
Life expectancy is also likely to be affected by exposure to high levels of highway air pollution or industrial air pollution. This is one way that occupation can have a major effect on life expectancy. Coal miners (and in prior generations, asbestos cutters) often have lower life expectancies than average. Other factors affecting an individual's life expectancy are genetic disorders, drug use, tobacco smoking, excessive alcohol consumption, obesity, access to health care, diet, and exercise. Sex differences In the present, female human life expectancy is greater than that of males, despite females having higher morbidity rates (see health survival paradox). There are many potential reasons for this. Traditional arguments tend to favor sociology-environmental factors: historically, men have generally consumed more tobacco, alcohol, and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer, tuberculosis, and cirrhosis of the liver. Men are also more likely to die from injuries, whether unintentional (such as occupational, war, or car wrecks) or intentional (suicide). Men are also more likely to die from most of the leading causes of death (some already stated above) than women. Some of these in the United States include cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease. These far outweigh the female mortality rate from breast cancer and cervical cancer. In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age. A paper from 2015 found that female foetuses have a higher mortality rate than male foetuses. This finding contradicts papers dating from 2002 and earlier that attribute the male sex to higher in-utero mortality rates. Among the smallest premature babies (those under ), females have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005. Data from the United Kingdom shows the gap in life expectancy between men and women decreasing in later life. This may be attributable to the effects of infant mortality and young adult death rates. Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger-sized individuals within a species tend, on average, to have shorter lives. This biological difference occurs because women have more resistance to infections and degenerative diseases. In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men was observed at least as far back as 1750 and that, with relatively equal treatment, today males in all parts of the world experience greater mortality than females. However, Kalben's study was restricted to data in Western Europe alone, where the demographic transition occurred relatively early. United Nations statistics from mid-twentieth century onward, show that in all parts of the world, females have a higher life expectancy at age 60 than males. Of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998 in the United States. Except for birds, for almost all of the animal species studied, males have higher mortality than females. 
Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors. One recent suggestion is that mitochondrial mutations which shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival; therefore, such mitochondria are less likely to be passed on to the next generation. This thus suggests that females tend to live longer than males. The authors claim that this is a partial explanation. Another explanation is the unguarded X hypothesis. According to this hypothesis, one reason for why the average lifespan of males is not as long as that of females––by 18% on average, according to the study––is that they have a Y chromosome which cannot protect an individual from harmful genes expressed on the X chromosome, while a duplicate X chromosome, as present in female organisms, can ensure harmful genes are not expressed. In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females. Before 1880, death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline. Most of the difference in life expectancy between the sexes is accounted for by differences in the rate of death by cardiovascular diseases among persons aged 50–70. Genetics The heritability of lifespan is estimated to be less than 10%, meaning the majority of variation in lifespan is attributable due to differences in environment rather than genetic variation. However, researchers have identified regions of the genome which can influence the length of life and the number of years lived in good health. For example, a genome-wide association study of 1 million lifespans found 12 genetic loci which influenced lifespan by modifying susceptibility to cardiovascular and smoking-related disease. The locus with the largest effect is APOE. Carriers of the APOE ε4 allele live approximately one year less than average (per copy of the ε4 allele), mainly due to increased risk of Alzheimer's disease. In July 2020, scientists identified 10 genomic loci with consistent effects across multiple lifespan-related traits, including healthspan, lifespan, and longevity. The genes affected by variation in these loci highlighted haem metabolism as a promising candidate for further research within the field. This study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase healthy years of life in humans. A follow-up study which investigated the genetics of frailty and self-rated health in addition to healthspan, lifespan, and longevity also highlighted haem metabolism as an important pathway, and found genetic variants which lower blood protein levels of LPA and VCAM1 were associated with increased healthy lifespan. Centenarians In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means doubling the centenarian population every 13 years, pushing it from some 455,000 in 2009 to 4.1 million in 2050. 
Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010). Shimane Prefecture had an estimated 743 centenarians per million inhabitants. In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants). Mental illness Mental illness is reported to occur in approximately 18% of the average American population. The mentally ill have been shown to have a 10- to 25-year reduction in life expectancy. Generally, the reduction of lifespan in the mentally ill population compared to the mentally stable population has been studied and documented. The greater mortality of people with mental disorders may be due to death from injury, from co-morbid conditions, or medication side effects. For instance, psychiatric medications can increase the risk of developing diabetes. It has been shown that the psychiatric medication olanzapine can increase risk of developing agranulocytosis, among other comorbidities. Psychiatric medicines also affect the gastrointestinal tract; the mentally ill have a four times risk of gastrointestinal disease. As of 2020 and the COVID-19 pandemic, researchers have found an increased risk of death in the mentally ill. Other illnesses The life expectancy of people with diabetes, which is 9.3% of the U.S. population, is reduced by roughly 10–20 years. People over 60 years old with Alzheimer's disease have about a 50% life expectancy of 3–10 years. Other demographics that tend to have a lower life expectancy than average include transplant recipients and the obese. Education Education on all levels has been shown to be strongly associated with increased life expectancy. This association may be due partly to higher income, which can lead to increased life expectancy. Despite the association, among identical twin pairs with different education levels, there is only weak evidence of a relationship between educational attainment and adult mortality. According to a paper from 2015, the mortality rate for the Caucasian population in the United States from 1993 to 2001 is four times higher for those who did not complete high school compared to those who have at least 16 years of education. In fact, within the U.S. adult population, people with less than a high school education have the shortest life expectancies. Preschool education also plays a large role in life expectancy. It was found that high-quality early-stage childhood education had positive effects on health. Researchers discovered this by analyzing the results of the Carolina Abecedarian Project, finding that the disadvantaged children who were randomly assigned to treatment had lower instances of risk factors for cardiovascular and metabolic diseases in their mid-30s. Evolution and aging rate Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms which—by virtue of their defenses or lifestyle—live for long periods and avoid accidents, disease, predation, etc. are likely to have genes that code for slow aging, which often translates to good cellular repair. One theory is that if predation or accidental deaths prevent most individuals from living to an old age, there will be less natural selection to increase the intrinsic life span. That finding was supported in a classic study of opossums by Austad; however, the opposite relationship was found in an equally prominent study of guppies by Reznick. 
One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy called caloric restriction. Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited calorific intake. Support for the theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy. That is the key to why animals like giant tortoises can live so long. Studies of humans with life spans of at least 100 years have shown a link to decreased thyroid activity, resulting in their lowered metabolic rate. The ability of skin fibroblasts to perform DNA repair after UV irradiation was measured in shrew, mouse, rat, hamster, cow, elephant and human. It was found that DNA repair capability increased systematically with species life span. Since this original study in 1974, at least 14 additional studies were performed on mammals to test this correlation. In all but two of these studies, lifespan correlated with DNA repair levels, suggesting that DNA repair capability contributes to life expectancy. See DNA damage theory of aging. In a broad survey of zoo animals, no relationship was found between investment of the animal in reproduction and its life span. Calculation In actuarial notation, the probability of surviving from age to age is denoted and the probability of dying during age (i.e. between ages and ) is denoted . For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, the age-specific death probability at 90 would be 10%. This probability describes the likelihood of dying at that age, and is not the rate at which people of that age die. It can be shown that The curtate future lifetime, denoted , is a discrete random variable representing the remaining lifetime at age , rounded down to whole years. Life expectancy, more technically called the curtate expected lifetime and denoted , is the mean of —that is to say, the expected number of whole years of life remaining, assuming survival to age . So, Substituting () into the sum and simplifying gives the final result If the assumption is made that, on average, people live a half year on the year of their death, the complete life expectancy at age would be , which is denoted by e̊x, and is the intuitive definition of life expectancy. By definition, life expectancy is an arithmetic mean. It can also be calculated by integrating the survival curve from 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called 'omega'). For an extinct or completed cohort (all people born in the year 1850, for example), it can of course simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years. The estimates are called period cohort life expectancies. The starting point for calculating life expectancy is the age-specific death rates of the population members. If a large amount of data is available, a statistical population can be created that allows the age-specific death rates to be taken simply as the mortality rates actually experienced at each age (the number of deaths divided by the number of years "exposed to risk" in each data cell). However, it is customary to apply smoothing to remove (as much as possible) the random statistical fluctuations from one year of age to the next. 
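As a minimal sketch of the curtate expectation described above, the routine below sums the k-year survival probabilities built from a toy table of age-specific death probabilities. The q_x values are illustrative assumptions rather than real mortality data, and practical work uses full life tables with the smoothing just mentioned.

def curtate_life_expectancy(qx):
    # e_x at the first age of the table: sum over k >= 1 of the k-year survival probability k_p_x
    ex = 0.0
    survival = 1.0
    for q in qx:
        survival *= (1.0 - q)    # probability of surviving one more whole year
        ex += survival           # accumulates k_p_x for k = 1, 2, ...
    return ex

# Toy death probabilities rising with age; the final 1.0 closes the table
qx = [0.01, 0.02, 0.05, 0.10, 0.20, 0.40, 0.70, 1.00]
ex = curtate_life_expectancy(qx)
print(ex, ex + 0.5)              # curtate e_x and the complete expectancy with the half-year adjustment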
In the past, a very simple model used for this purpose was the Gompertz function, but more sophisticated methods are now used. The most common modern methods include: fitting a mathematical formula (such as the Gompertz function, or an extension of it) to the data. looking at an established mortality table derived from a larger population and making a simple adjustment to it (such as multiplying by a constant factor) to fit the data. (In cases of relatively small amounts of data.) looking at the mortality rates actually experienced at each age and applying a piecewise model (such as by cubic splines) to fit the data. (In cases of relatively large amounts of data.) The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (such as males and females, or smokers and non-smokers) and are then used to calculate a life table from which one can calculate the probability of surviving to each age. While the data required are easily identified in the case of humans, the computation of life expectancy of industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking, and recapturing them. The life of a product, more often termed shelf life, is also computed using similar methods. In the case of long-lived components, such as those used in critical applications (e.g. aircraft), methods like accelerated aging are used to model the life expectancy of a component. The life expectancy statistic is usually based on past mortality experience and assumes that the same age-specific mortality rates will continue. Thus, such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population. However, for some purposes, such as pensions calculations, it is usual to adjust the life table used by assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. That is often done by simply extrapolating past trends, but some models exist to account for the evolution of mortality, like the Lee–Carter model. As discussed above, on an individual basis, some factors correlate with longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use (including smoking and alcohol consumption), disposition, education, environment, sleep, climate, and health care. Healthy life expectancy To assess the quality of these additional years of life, 'healthy life expectancy' has been calculated for the last 30 years. Since 2001, the World Health Organization has published statistics called Healthy life expectancy (HALE), defined as the average number of years that a person can expect to live in "full health" excluding the years lived in less than full health due to disease and/or injury. Since 2004, Eurostat publishes annual statistics called Healthy Life Years (HLY) based on reported activity limitations. The United States uses similar indicators in the framework of the national health promotion and disease prevention plan "Healthy People 2010". More and more countries are using health expectancy indicators to monitor the health of their population. 
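Before moving on, the Gompertz-based smoothing mentioned in the calculation discussion above can be made concrete. The Gompertz law models the force of mortality as an exponential function of age, μ(x) = A·exp(B·x), so taking logarithms turns the fit into a simple linear regression. The sketch below assumes NumPy is available; the crude death rates are made up for illustration only.

```python
import numpy as np

# Made-up crude death rates at ages 60-64 (deaths per person-year), for illustration only.
ages = np.array([60, 61, 62, 63, 64], dtype=float)
crude_rates = np.array([0.010, 0.011, 0.0125, 0.013, 0.015])

# Gompertz law: mu(x) = A * exp(B * x), i.e. log mu(x) = log A + B * x,
# so a least-squares line through (age, log rate) estimates B and log A.
B, logA = np.polyfit(ages, np.log(crude_rates), 1)
A = np.exp(logA)

smoothed = A * np.exp(B * ages)   # fitted (smoothed) rates at the observed ages
print(np.round(smoothed, 5))
```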
The long-standing quest for longer life led in the 2010s to a more promising focus on increasing HALE, also known as a person's "healthspan". Besides the benefits of keeping people healthier longer, a goal is to reduce health-care expenses on the many diseases associated with cellular senescence. Approaches being explored include fasting, exercise, and senolytic drugs. Forecasting Forecasting life expectancy and mortality form an important subdivision of demography. Future trends in life expectancy have huge implications for old-age support programs (like U.S. Social Security and pension) since the cash flow in these systems depends on the number of recipients who are still living (along with the rate of return on the investments or the tax rate in pay-as-you-go systems). With longer life expectancies, the systems see increased cash outflow; if the systems underestimate increases in life-expectancies, they will be unprepared for the large payments that will occur, as humans live longer and longer. Life expectancy forecasting is usually based on one of two different approaches: Forecasting the life expectancy directly, generally using ARIMA or other time-series extrapolation procedures. This has the advantage of simplicity, but it cannot account for changes in mortality at specific ages, and the forecast number cannot be used to derive other life table results. Analyses and forecasts using this approach can be done with any common statistical/mathematical software package, like EViews, R, SAS, Stata, Matlab, or SPSS. Forecasting age-specific death rates and computing the life expectancy from the results with life table methods. This is usually more complex than simply forecasting life expectancy because the analyst must deal with correlated age-specific mortality rates, but it seems to be more robust than simple one-dimensional time series approaches. It also yields a set of age-specific rates that may be used to derive other measures, such as survival curves or life expectancies at different ages. The most important approach in this group is the Lee-Carter model, which uses the singular value decomposition on a set of transformed age-specific mortality rates to reduce their dimensionality to a single time series, forecasts that time series, and then recovers a full set of age-specific mortality rates from that forecasted value. The software includes Professor Rob J. Hyndman's R package called 'demography' and UC Berkeley's LCFIT system. Policy uses Life expectancy is one of the factors in measuring the Human Development Index (HDI) of each nation along with adult literacy, education, and standard of living. Life expectancy is used in describing the physical quality of life of an area. It is also used for an individual when the value of a life settlement is determined a life insurance policy is sold for a cash asset. Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality. For the top 21 industrialized countries, if each person is counted equally, life expectancy is lower in more unequal countries (r = −0.907). There is a similar relationship among states in the U.S. (r = −0.620). Life expectancy vs. other measures of longevity Life expectancy may be confused with the average age an adult could expect to live, creating the misunderstanding that an adult's lifespan would be unlikely to exceed their life expectancy at birth. 
This is not the case, as life expectancy is an average of the lifespans of all individuals, including those who die before adulthood. One may compare the life expectancy of the period after childhood to estimate also the life expectancy of an adult. As a measure of the years of life remaining, life expectancy decreases with age after initially rising in early childhood, but the average age to which a person is likely to live increases as they survive to successive higher ages. In the table above, the estimated modern hunter-gatherer average expectation of life at birth of 33 years (often considered an upper-bound for Paleolithic populations) equates to a life expectancy at 15 of 39 years, so that those surviving to age 15 will on average die at 54. In England in the 13th–19th centuries with life expectancy at birth rising from perhaps 25 years to over 40, expectation of life at age 30 has been estimated at 20–30 years, giving an average age at death of about 50–60 for those (a minority at the start of the period but two-thirds at its end) surviving beyond their twenties. The table above gives the life expectancy at birth among 13th-century English nobles as 30–33, but having surviving to the age of 21, a male member of the English aristocracy could expect to live: 1200–1300: to age 64 1300–1400: to age 45 (because of the bubonic plague) 1400–1500: to age 69 1500–1550: to age 71 A further concept is that of modal age at death, the single age when deaths among a population are more numerous than at any other age. In all pre-modern societies the most common age at death is the first year of life: it is only as infant mortality falls below around 33–34 per thousand (roughly a tenth of estimated ancient and medieval levels) that deaths in a later year of life (usually around age 80) become more numerous. While the most common age of death in adulthood among modern hunter-gatherers (often taken as a guide to the likely most favourable Paleolithic demographic experience) is estimated to average 72 years, the number dying at that age is dwarfed by those (over a fifth of all infants) dying in the first year of life, and only around a quarter usually survive to the higher age. Maximum life span is an individual-specific concept, and therefore is an upper bound rather than an average. Science author Christopher Wanjek writes, "[H]as the human race increased its life span? Not at all. This is one of the biggest misconceptions about old age: we are not living any longer." The maximum life span, or oldest age a human can live, may be constant. Further, there are many examples of people living significantly longer than the average life expectancy of their time period, such as Socrates (71), Saint Anthony the Great (105), Michelangelo (88), and John Adams (90). However, anthropologist John D. Hawks criticizes the popular conflation of life span (life expectancy) and maximum life span when popular science writers falsely imply that the average adult human does not live longer than their ancestors. He writes, "[a]ge-specific mortality rates have declined across the adult lifespan. A smaller fraction of adults die at 20, at 30, at 40, at 50, and so on across the lifespan. As a result, we live longer on average... In every way we can measure, human lifespans are longer today than in the immediate past, and longer today than they were 2000 years ago... age-specific mortality rates in adults really have reduced substantially." 
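Returning to the forecasting methods described above, the Lee–Carter model reduces a matrix of log death rates to an average age profile, an age-sensitivity vector and a single mortality index, using a singular value decomposition. The following is a minimal sketch of that decomposition, assuming NumPy and a pre-built ages-by-years matrix of death rates; it illustrates the idea rather than providing a full implementation, which would also handle fitting constraints and forecast uncertainty.

```python
import numpy as np

def lee_carter_decomposition(m):
    """Fit log m(x, t) ~ a_x + b_x * k_t to an ages-by-years matrix of death rates."""
    log_m = np.log(m)
    a_x = log_m.mean(axis=1)                      # average age profile of log mortality
    U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
    b_x = U[:, 0]
    k_t = s[0] * Vt[0]
    scale = b_x.sum()                             # common identification constraint: sum(b_x) = 1
    b_x, k_t = b_x / scale, k_t * scale
    drift = (k_t[-1] - k_t[0]) / (len(k_t) - 1)   # random-walk-with-drift forecast of k_t
    return a_x, b_x, k_t, drift
```

Projected death rates h years ahead are then exp(a_x + b_x·(k_t[-1] + h·drift)), which can be fed into the life-table calculation sketched earlier.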
See also Increasing life expectancy Notes References Further reading External links Charts for all countries Our World In Data – Life Expectancy—Visualizations of how life expectancy around the world has changed historically (by Max Roser). Includes life expectancy for different age groups. Charts for all countries, world maps, and links to more data sources. Global Agewatch has the latest internationally comparable statistics on life expectancy from 195 countries. Rank Order—Life expectancy at birth from the CIA's World Factbook. Annual Life Tables since 1966; Decennial Life Tables since 1890 from the US Centers for Disease Controls and Prevention, National Center for Health Statistics. Life expectancy in Roman times from the University of Texas. Animal lifespans: Animal Lifespans from Tesarta Online (Internet Archive); The Life Span of Animals from Dr. Bob's All Creatures Site. Actuarial science Demographic economics Senescence Demography Population Duration
Life expectancy
[ "Physics", "Chemistry", "Mathematics", "Biology", "Environmental_science" ]
7,298
[ "Duration", "Life expectancy", "Physical quantities", "Time", "Applied mathematics", "Senescence", "Actuarial science", "Cellular processes", "Demography", "Spacetime", "Environmental social science", "Metabolism" ]
18,728
https://en.wikipedia.org/wiki/Law%20of%20dilution
Wilhelm Ostwald’s dilution law is a relationship proposed in 1888 between the dissociation constant and the degree of dissociation of a weak electrolyte. The law takes the form Kd = [A+][B−] / [AB] = (α² / (1 − α)) · c, where the square brackets denote concentration, α is the degree of dissociation and c is the total concentration of electrolyte. Using α = Λc / Λ0, where Λc is the molar conductivity at concentration c and Λ0 is the limiting value of molar conductivity extrapolated to zero concentration or infinite dilution, this results in the following relation: Kd = (Λc)² c / (Λ0 (Λ0 − Λc)). Derivation Consider a binary electrolyte AB which dissociates reversibly into A+ and B− ions. Ostwald noted that the law of mass action can be applied to such systems as dissociating electrolytes. The equilibrium state is represented by the equation AB ⇌ A+ + B−. If α is the fraction of dissociated electrolyte, then αc is the concentration of each ionic species. 1 − α must, therefore, be the fraction of undissociated electrolyte, and (1 − α)c the concentration of same. The dissociation constant may therefore be given as Kd = (αc)(αc) / ((1 − α)c) = (α² / (1 − α)) · c. For very weak electrolytes α ≪ 1 (for many weak electrolytes, however, α is not small enough for this to be a good approximation), implying that 1 − α ≈ 1. This gives the following results: Kd ≈ α²c, so α ≈ √(Kd / c) and the concentration of each ionic species αc ≈ √(Kd c). Thus, the degree of dissociation of a weak electrolyte is proportional to the inverse square root of the concentration, or the square root of the dilution. The concentration of any one ionic species is given by the root of the product of the dissociation constant and the concentration of the electrolyte. Limitations The Ostwald law of dilution provides a satisfactory description of the concentration dependence of the conductivity of weak electrolytes like CH3COOH and NH4OH. The variation of molar conductivity is essentially due to the incomplete dissociation of weak electrolytes into ions. For strong electrolytes, however, Lewis and Randall recognized that the law fails badly since the supposed equilibrium constant is actually far from constant. This is because the dissociation of strong electrolytes into ions is essentially complete below a concentration threshold value. The decrease in molar conductivity as a function of concentration is actually due to attraction between ions of opposite charge as expressed in the Debye-Hückel-Onsager equation and later revisions. Even for weak electrolytes the equation is not exact. Chemical thermodynamics shows that the true equilibrium constant is a ratio of thermodynamic activities, and that each concentration must be multiplied by an activity coefficient. This correction is important for ionic solutions due to the strong forces between ionic charges. An estimate of their values is given by the Debye–Hückel theory at low concentrations. See also Autosolvolysis Osmotic coefficient Activity coefficient Ion transport number Ion association Molar conductivity Physical chemistry Enzyme kinetics References
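As a worked illustration of the relations above, the short Python sketch below estimates the degree of dissociation of a dilute acetic acid solution. The dissociation constant used (Ka ≈ 1.8 × 10⁻⁵) is the commonly quoted textbook value, and the concentration is chosen arbitrarily for illustration.

```python
from math import sqrt

Ka = 1.8e-5   # dissociation constant of acetic acid (textbook value), mol/L
c  = 0.10     # total electrolyte concentration, mol/L (chosen for illustration)

# Weak-electrolyte approximation: Ka ~ alpha**2 * c, valid when alpha << 1.
alpha_approx = sqrt(Ka / c)

# Exact solution of Ka = alpha**2 * c / (1 - alpha), a quadratic in alpha.
alpha_exact = (-Ka + sqrt(Ka**2 + 4 * Ka * c)) / (2 * c)

print(f"approximate alpha = {alpha_approx:.4f}")   # about 0.0134
print(f"exact alpha       = {alpha_exact:.4f}")    # about 0.0133, so the approximation holds here
```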
Law of dilution
[ "Physics", "Chemistry" ]
586
[ "Applied and interdisciplinary physics", "Enzyme kinetics", "nan", "Chemical kinetics", "Physical chemistry" ]
18,735
https://en.wikipedia.org/wiki/Lycopene
Lycopene is an organic compound classified as a tetraterpene and a carotene. Lycopene (from the Neo-Latin Lycopersicon, the name of a former tomato genus) is a bright red carotenoid hydrocarbon found in tomatoes and other red fruits and vegetables. Occurrence Aside from tomatoes or tomato products like ketchup, it is found in watermelons, grapefruits, red guavas, and baked beans. It has no vitamin A activity. In plants, algae, and other photosynthetic organisms, lycopene is an intermediate in the biosynthesis of many carotenoids, including beta-carotene, which is responsible for yellow, orange, or red pigmentation, photosynthesis, and photoprotection. Like all carotenoids, lycopene is a tetraterpene. It is soluble in fat, but insoluble in water. Eleven conjugated double bonds give lycopene its deep red color. Owing to the strong color, lycopene is used as a food coloring (registered as E160d) and is approved for use in the US, Australia and New Zealand (registered as 160d), and the European Union (E160d). Structure and physical properties Lycopene is a symmetrical tetraterpene because it consists entirely of carbon and hydrogen and is derived from eight isoprene subunits. Isolation procedures for lycopene were first reported in 1910, and the structure of the molecule was determined by 1931. In its natural, all-trans form, the molecule is long and somewhat flat, constrained by its system of 11 conjugated double bonds. The extended conjugation is responsible for its deep red color. Plants and photosynthetic bacteria produce all-trans lycopene. When exposed to light or heat, lycopene can undergo isomerization to any of a number of cis-isomers, which have a less linear shape. Isomers distinct stabilities, with highest stability: 5-cis ≥ all-trans ≥ 9-cis ≥ 13-cis > 15-cis > 7-cis > 11-cis: lowest. In human blood, various cis-isomers constitute more than 60% of the total lycopene concentration, but the biological effects of individual isomers have not been investigated. Carotenoids like lycopene are found in photosynthetic pigment-protein complexes in plants, photosynthetic bacteria, fungi, and algae. They are responsible for the bright orange–red colors of fruits and vegetables, perform various functions in photosynthesis, and protect photosynthetic organisms from excessive light damage. Lycopene is a key intermediate in the biosynthesis of carotenoids, such as beta-carotene, and xanthophylls. Dispersed lycopene molecules can be encapsulated into carbon nanotubes enhancing their optical properties. Efficient energy transfer occurs between the encapsulated dye and nanotube—light is absorbed by the dye and without significant loss is transferred to the nanotube. Encapsulation increases chemical and thermal stability of lycopene molecules; it also allows their isolation and individual characterization. Biosynthesis The unconditioned biosynthesis of lycopene in eukaryotic plants and in prokaryotic cyanobacteria is similar, as are the enzymes involved. Synthesis begins with mevalonic acid, which is converted into dimethylallyl pyrophosphate. This is then condensed with three molecules of isopentenyl pyrophosphate (an isomer of dimethylallyl pyrophosphate), to give the 20-carbon geranylgeranyl pyrophosphate. Two molecules of this product are then condensed in a tail-to-tail configuration to give the 40-carbon phytoene, the first committed step in carotenoid biosynthesis. Through several desaturation steps, phytoene is converted into lycopene. 
The two terminal isoprene groups of lycopene can be cyclized to produce beta-carotene, which can then be transformed into a wide variety of xanthophylls. Staining and removal Lycopene is the pigment in tomato sauces that turns plastic cookware orange. It is insoluble in plain water, but it can be dissolved in organic solvents and oils. Because of its non-polarity, lycopene in food preparations will stain any sufficiently porous material, including most plastics. To remove this staining, the plastics may be soaked in a solution containing a small amount of chlorine bleach. The bleach oxidizes the lycopene, thus rendering it colourless. Diet Consumption by humans Absorption of lycopene requires that it be combined with bile salts and fat to form micelles. Intestinal absorption of lycopene is enhanced by the presence of fat and by cooking. Lycopene dietary supplements (in oil) may be more efficiently absorbed than lycopene from food. Lycopene is not an essential nutrient for humans, but is commonly found in the diet mainly from dishes prepared from tomatoes. The median and 99th percentile of dietary lycopene intake have been estimated to be 5.2 and 123 mg/d, respectively. Sources Fruits and vegetables that are high in lycopene include autumn olive, gac, tomatoes, watermelon, pink grapefruit, pink guava, papaya, seabuckthorn, wolfberry (goji, a berry relative of tomato), and rosehip. Ketchup is a common dietary source of lycopene. Although gac (Momordica cochinchinensis Spreng) has the highest content of lycopene of any known fruit or vegetable (multiple times more than tomatoes), tomatoes and tomato-based sauces, juices, and ketchup account for more than 85% of the dietary intake of lycopene for most people. The lycopene content of tomatoes depends on variety and increases as the fruit ripens. Unlike other fruits and vegetables, where nutritional content such as vitamin C is diminished upon cooking, processing of tomatoes increases the concentration of bioavailable lycopene. Lycopene in tomato paste is up to four times more bioavailable than in fresh tomatoes. Processed tomato products such as pasteurized tomato juice, soup, sauce, and ketchup contain a higher concentration of bioavailable lycopene compared to raw tomatoes. Cooking and crushing tomatoes (as in the canning process) and serving in oil-rich dishes (such as spaghetti sauce or pizza) greatly increases assimilation from the digestive tract into the bloodstream. Lycopene is fat-soluble, so the oil is said to help absorption. Gac has high lycopene content derived mainly from its seed coats. Cara cara navel, and other citrus fruit, such as pink grapefruit, also contain lycopene. Some foods that do not appear red also contain lycopene, e.g., baked beans. When lycopene is used as a food additive (E160d), it is usually obtained from tomatoes. Adverse effects Lycopene is non-toxic and commonly found in the diet, mainly from tomato products. There are cases of intolerance or allergic reaction to dietary lycopene, which may cause diarrhea, nausea, stomach pain or cramps, gas, and loss of appetite. Lycopene may increase the risk of bleeding when taken with anticoagulant drugs. Because lycopene may cause low blood pressure, interactions with drugs that affect blood pressure may occur. Lycopene may affect the immune system, the nervous system, sensitivity to sunlight, or drugs used for stomach ailments. Lycopenemia is an orange discoloration of the skin that is observed with high intakes of lycopene. 
The discoloration is expected to fade after discontinuing excessive lycopene intake. Research and potential health effects A 2020 review of randomized controlled trials found conflicting evidence for lycopene having an effect on cardiovascular risk factors, whereas a 2017 review concluded that tomato products and lycopene supplementation reduced blood lipids and blood pressure. A 2015 review found that dietary lycopene was associated with reduced risk of prostate cancer, whereas a 2021 meta-analysis found that dietary lycopene did not affect prostate cancer risk. Other reviews concluded that research has been insufficient to establish whether lycopene consumption affects human health. Regulatory status in Europe and the United States In a review of literature on lycopene and its potential benefit in the diet, the European Food Safety Authority concluded there was insufficient evidence for lycopene having antioxidant effects in humans, particularly in skin, heart function, or vision protection from ultraviolet light. Although lycopene from tomatoes has been tested in humans for cardiovascular diseases and prostate cancer, no effect on any disease was found. The US Food and Drug Administration, in rejecting manufacturers' requests in 2005 to allow "qualified labeling" for lycopene and the reduction of various cancer risks, provided a conclusion that remains in effect : In a review of research through 2024, the US National Cancer Institute concluded that the FDA has not approved the use of lycopene as effective for treating any medical condition, including various types of cancer. See also Lycopene (data page) Lycopane Nutrition Tocopherol Tocotrienol Tomatine References Carotenoids Hydrocarbons Food antioxidants Food colorings Dietary supplements E-number additives
Lycopene
[ "Chemistry", "Biology" ]
2,058
[ "Organic compounds", "Hydrocarbons", "Biomarkers", "Carotenoids" ]
18,881
https://en.wikipedia.org/wiki/Mathematical%20induction
Mathematical induction is a method for proving that a statement is true for every natural number , that is, that the infinitely many cases   all hold. This is done by first proving a simple case, then also showing that if we assume the claim is true for a given case, then the next case is also true. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder: A proof by induction consists of two cases. The first, the base case, proves the statement for without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case , then it must also hold for the next case . These two steps establish that the statement holds for every natural number . The base case does not necessarily begin with , but often with , and possibly with any fixed natural number , establishing the truth of the statement for all natural numbers . The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs. Despite its name, mathematical induction differs fundamentally from inductive reasoning as used in philosophy, in which the examination of many cases results in a probable conclusion. The mathematical method examines infinitely many cases to prove a general statement, but it does so by a finite chain of deductive reasoning involving the variable , which can take infinitely many values. The result is a rigorous proof of the statement, not an assertion of its probability. History In 370 BC, Plato's Parmenides may have contained traces of an early example of an implicit inductive proof, however, the earliest implicit proof by mathematical induction was written by al-Karaji around 1000 AD, who applied it to arithmetic sequences to prove the binomial theorem and properties of Pascal's triangle. Whilst the original work was lost, it was later referenced by Al-Samawal al-Maghribi in his treatise al-Bahir fi'l-jabr (The Brilliant in Algebra) in around 1150 AD. Katz says in his history of mathematics In India, early implicit proofs by mathematical induction appear in Bhaskara's "cyclic method". None of these ancient mathematicians, however, explicitly stated the induction hypothesis. Another similar case (contrary to what Vacca has written, as Freudenthal carefully showed) was that of Francesco Maurolico in his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first odd integers is . The earliest rigorous use of induction was by Gersonides (1288–1344). The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle: indirect proof by infinite descent. The induction hypothesis was also employed by the Swiss Jakob Bernoulli, and from then on it became well known. The modern formal treatment of the principle came only in the 19th century, with George Boole, Augustus De Morgan, Charles Sanders Peirce, Giuseppe Peano, and Richard Dedekind. 
Description The simplest and most common form of mathematical induction infers that a statement involving a natural number n (that is, an integer n ≥ 0 or 1) holds for all values of n. The proof consists of two steps: The base case (or initial case): prove that the statement holds for n = 0, or n = 1. The induction step (or inductive step, or step case): prove that for every k, if the statement holds for n = k, then it holds for n = k + 1. In other words, assume that the statement holds for some arbitrary natural number n = k, and prove that the statement holds for n = k + 1. The hypothesis in the induction step, that the statement holds for a particular n = k, is called the induction hypothesis or inductive hypothesis. To prove the induction step, one assumes the induction hypothesis for n = k and then uses this assumption to prove that the statement holds for n = k + 1. Authors who prefer to define natural numbers to begin at 0 use that value in the base case; those who define natural numbers to begin at 1 use that value. Examples Sum of consecutive natural numbers Mathematical induction can be used to prove the following statement P(n) for all natural numbers n: 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. This states a general formula for the sum of the natural numbers less than or equal to a given number; in fact an infinite sequence of statements: 0 = 0(0 + 1)/2, 0 + 1 = 1(1 + 1)/2, 0 + 1 + 2 = 2(2 + 1)/2, etc. Proposition. For every n, 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. Proof. Let P(n) be the statement 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. We give a proof by induction on n. Base case: Show that the statement holds for the smallest natural number n = 0. P(0) is clearly true: 0 = 0(0 + 1)/2. Induction step: Show that for every k ≥ 0, if P(k) holds, then P(k + 1) also holds. Assume the induction hypothesis that for a particular k, the single case n = k holds, meaning P(k) is true: 0 + 1 + 2 + ⋯ + k = k(k + 1)/2. It follows that: (0 + 1 + 2 + ⋯ + k) + (k + 1) = k(k + 1)/2 + (k + 1). Algebraically, the right hand side simplifies as: k(k + 1)/2 + (k + 1) = (k(k + 1) + 2(k + 1))/2 = (k + 1)(k + 2)/2 = (k + 1)((k + 1) + 1)/2. Equating the extreme left hand and right hand sides, we deduce that: 0 + 1 + 2 + ⋯ + k + (k + 1) = (k + 1)((k + 1) + 1)/2. That is, the statement P(k + 1) also holds true, establishing the induction step. Conclusion: Since both the base case and the induction step have been proved as true, by mathematical induction the statement P(n) holds for every natural number n. Q.E.D. A trigonometric inequality Induction is often used to prove inequalities. As an example, we prove that |sin nx| ≤ n|sin x| for any real number x and natural number n. At first glance, it may appear that a more general version, |sin nx| ≤ n|sin x| for any real numbers n and x, could be proven without induction; but the case n = 1/2, x = π shows it may be false for non-integer values of n. This suggests we examine the statement specifically for natural values of n, and induction is the readiest tool. Proposition. For any x ∈ ℝ and n ∈ ℕ, |sin nx| ≤ n|sin x|. Proof. Fix an arbitrary real number x, and let P(n) be the statement |sin nx| ≤ n|sin x|. We induce on n. Base case: The calculation |sin 0x| = 0 ≤ 0 = 0|sin x| verifies P(0). Induction step: We show the implication P(k) ⟹ P(k + 1) for any natural number k. Assume the induction hypothesis: for a given value n = k ≥ 0, the single case P(k) is true, i.e. |sin kx| ≤ k|sin x|. Using the angle addition formula and the triangle inequality, we deduce: |sin(k + 1)x| = |sin kx cos x + sin x cos kx| ≤ |sin kx||cos x| + |sin x||cos kx| ≤ |sin kx| + |sin x| ≤ k|sin x| + |sin x| = (k + 1)|sin x|. The inequality between the extreme left-hand and right-hand quantities shows that P(k + 1) is true, which completes the induction step. Conclusion: The proposition P(n) holds for all natural numbers n. Q.E.D. Variants In practice, proofs by induction are often structured differently, depending on the exact nature of the property to be proven. All variants of induction are special cases of transfinite induction; see below. Base case other than 0 or 1 If one wishes to prove a statement, not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of the following: Showing that the statement holds when n = b. Showing that if the statement holds for an arbitrary number n ≥ b, then the same statement also holds for n + 1.
This can be used, for example, to show that for . In this way, one can prove that some statement holds for all , or even for all . This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is then proving it with these two rules is equivalent with proving for all natural numbers with an induction base case . Example: forming dollar amounts by coins Assume an infinite supply of 4- and 5-dollar coins. Induction can be used to prove that any whole amount of dollars greater than or equal to can be formed by a combination of such coins. Let denote the statement " dollars can be formed by a combination of 4- and 5-dollar coins". The proof that is true for all can then be achieved by induction on as follows: Base case: Showing that holds for is simple: take three 4-dollar coins. Induction step: Given that holds for some value of (induction hypothesis), prove that holds, too. Assume is true for some arbitrary . If there is a solution for dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make dollars. Otherwise, if only 5-dollar coins are used, must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make dollars. In each case, is true. Therefore, by the principle of induction, holds for all , and the proof is complete. In this example, although also holds for , the above proof cannot be modified to replace the minimum amount of dollar to any lower value . For , the base case is actually false; for , the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower . Induction on more than one counter It is sometimes desirable to prove a statement involving two natural numbers, and , by iterating the induction process. That is, one proves a base case and an induction step for , and in each of those proves a base case and an induction step for . See, for example, the proof of commutativity accompanying addition of natural numbers. More complicated arguments involving three or more counters are also possible. Infinite descent The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement is false for all natural numbers . Its traditional form consists of showing that if is true for some natural number , it also holds for some strictly smaller natural number . Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that cannot be true for any . The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement defined as " is false for all natural numbers less than or equal to ", it follows that holds for all , which means that is false for every natural number . Limited mathematical induction If one wishes to prove that a property holds for all natural numbers less than or equal to , proving satisfies the following conditions suffices: holds for 0, For any natural number less than , if holds for , then holds for Prefix induction The most common form of proof by mathematical induction requires proving in the induction step that whereupon the induction principle "automates" applications of this step in getting from to . 
This could be called "predecessor induction" because each step proves something about a number from something about that number's predecessor. A variant of interest in computational complexity is "prefix induction", in which one proves the following statement in the induction step: or equivalently The induction principle then "automates" log2 n applications of this inference in getting from to . In fact, it is called "prefix induction" because each step proves something about a number from something about the "prefix" of that number — as formed by truncating the low bit of its binary representation. It can also be viewed as an application of traditional induction on the length of that binary representation. If traditional predecessor induction is interpreted computationally as an -step loop, then prefix induction would correspond to a log--step loop. Because of that, proofs using prefix induction are "more feasibly constructive" than proofs using predecessor induction. Predecessor induction can trivially simulate prefix induction on the same statement. Prefix induction can simulate predecessor induction, but only at the cost of making the statement more syntactically complex (adding a bounded universal quantifier), so the interesting results relating prefix induction to polynomial-time computation depend on excluding unbounded quantifiers entirely, and limiting the alternation of bounded universal and existential quantifiers allowed in the statement. One can take the idea a step further: one must prove whereupon the induction principle "automates" applications of this inference in getting from to . This form of induction has been used, analogously, to study log-time parallel computation. Complete (strong) induction Another variant, called complete induction, course of values induction or strong induction (in contrast to which the basic form of induction is sometimes known as weak induction), makes the induction step easier to prove by using a stronger hypothesis: one proves the statement under the assumption that holds for all natural numbers less than ; by contrast, the basic form only assumes . The name "strong induction" does not mean that this method can prove more than "weak induction", but merely refers to the stronger hypothesis used in the induction step. In fact, it can be shown that the two methods are actually equivalent, as explained below. In this form of complete induction, one still has to prove the base case, , and it may even be necessary to prove extra-base cases such as before the general argument applies, as in the example below of the Fibonacci number . Although the form just described requires one to prove the base case, this is unnecessary if one can prove (assuming for all lower ) for all . This is a special case of transfinite induction as described below, although it is no longer equivalent to ordinary induction. In this form the base case is subsumed by the case , where is proved with no other assumed; this case may need to be handled separately, but sometimes the same argument applies for and , making the proof simpler and more elegant. In this method, however, it is vital to ensure that the proof of does not implicitly assume that , e.g. by saying "choose an arbitrary ", or by assuming that a set of elements has an element. Equivalence with ordinary induction Complete induction is equivalent to ordinary mathematical induction as described above, in the sense that a proof by one method can be transformed into a proof by the other. 
Suppose there is a proof of by complete induction. Then, this proof can be transformed into an ordinary induction proof by assuming a stronger inductive hypothesis. Let be the statement " holds for all such that "—this becomes the inductive hypothesis for ordinary induction. We can then show and for assuming only and show that implies . If, on the other hand, had been proven by ordinary induction, the proof would already effectively be one by complete induction: is proved in the base case, using no assumptions, and is proved in the induction step, in which one may assume all earlier cases but need only use the case . Example: Fibonacci numbers Complete induction is most useful when several instances of the inductive hypothesis are required for each induction step. For example, complete induction can be used to show that where is the -th Fibonacci number, and (the golden ratio) and are the roots of the polynomial . By using the fact that for each , the identity above can be verified by direct calculation for if one assumes that it already holds for both and . To complete the proof, the identity must be verified in the two base cases: and . Example: prime factorization Another proof by complete induction uses the hypothesis that the statement holds for all smaller more thoroughly. Consider the statement that "every natural number greater than 1 is a product of (one or more) prime numbers", which is the "existence" part of the fundamental theorem of arithmetic. For proving the induction step, the induction hypothesis is that for a given the statement holds for all smaller . If is prime then it is certainly a product of primes, and if not, then by definition it is a product: , where neither of the factors is equal to 1; hence neither is equal to , and so both are greater than 1 and smaller than . The induction hypothesis now applies to and , so each one is a product of primes. Thus is a product of products of primes, and hence by extension a product of primes itself. Example: dollar amounts revisited We shall look to prove the same example as above, this time with strong induction. The statement remains the same: However, there will be slight differences in the structure and the assumptions of the proof, starting with the extended base case. Proof. Base case: Show that holds for . The base case holds. Induction step: Given some , assume holds for all with . Prove that holds. Choosing , and observing that shows that holds, by the inductive hypothesis. That is, the sum can be formed by some combination of and dollar coins. Then, simply adding a dollar coin to that combination yields the sum . That is, holds Q.E.D. Forward-backward induction Sometimes, it is more convenient to deduce backwards, proving the statement for , given its validity for . However, proving the validity of the statement for no single number suffices to establish the base case; instead, one needs to prove the statement for an infinite subset of the natural numbers. For example, Augustin Louis Cauchy first used forward (regular) induction to prove the inequality of arithmetic and geometric means for all powers of 2, and then used backwards induction to show it for all natural numbers. Example of error in the induction step The induction step must be proved for all values of . To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color: Base case: in a set of only one horse, there is only one color. 
Induction step: assume as induction hypothesis that within any set of horses, there is only one color. Now look at any set of horses. Number them: . Consider the sets and . Each is a set of only horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all horses. The base case is trivial, and the induction step is correct in all cases . However, the argument used in the induction step is incorrect for , because the statement that "the two sets overlap" is false for and . Formalization In second-order logic, one can write down the "axiom of induction" as follows: where is a variable for predicates involving one natural number and and are variables for natural numbers. In words, the base case and the induction step (namely, that the induction hypothesis implies ) together imply that for any natural number . The axiom of induction asserts the validity of inferring that holds for any natural number from the base case and the induction step. The first quantifier in the axiom ranges over predicates rather than over individual numbers. This is a second-order quantifier, which means that this axiom is stated in second-order logic. Axiomatizing arithmetic induction in first-order logic requires an axiom schema containing a separate axiom for each possible predicate. The article Peano axioms contains further discussion of this issue. The axiom of structural induction for the natural numbers was first formulated by Peano, who used it to specify the natural numbers together with the following four other axioms: 0 is a natural number. The successor function of every natural number yields a natural number . The successor function is injective. 0 is not in the range of . In first-order ZFC set theory, quantification over predicates is not allowed, but one can still express induction by quantification over sets: may be read as a set representing a proposition, and containing natural numbers, for which the proposition holds. This is not an axiom, but a theorem, given that natural numbers are defined in the language of ZFC set theory by axioms, analogous to Peano's. See construction of the natural numbers using the axiom of infinity and axiom schema of specification. Transfinite induction One variation of the principle of complete induction can be generalized for statements about elements of any well-founded set, that is, a set with an irreflexive relation < that contains no infinite descending chains. Every set representing an ordinal number is well-founded, the set of natural numbers is one of them. Applied to a well-founded set, transfinite induction can be formulated as a single step. To prove that a statement holds for each ordinal number: Show, for each ordinal number , that if holds for all , then also holds. This form of induction, when applied to a set of ordinal numbers (which form a well-ordered and hence well-founded class), is called transfinite induction. It is an important proof technique in set theory, topology and other fields. Proofs by transfinite induction typically distinguish three cases: when is a minimal element, i.e. there is no element smaller than ; when has a direct predecessor, i.e. the set of elements which are smaller than has a largest element; when has no direct predecessor, i.e. is a so-called limit ordinal. Strictly speaking, it is not necessary in transfinite induction to prove a base case, because it is a vacuous special case of the proposition that if is true of all , then is true of . 
It is vacuously true precisely because there are no values of that could serve as counterexamples. So the special cases are special cases of the general case. Relationship to the well-ordering principle The principle of mathematical induction is usually stated as an axiom of the natural numbers; see Peano axioms. It is strictly stronger than the well-ordering principle in the context of the other Peano axioms. Suppose the following: The trichotomy axiom: For any natural numbers and , is less than or equal to if and only if is not less than . For any natural number , is greater . For any natural number , no natural number is and . No natural number is less than zero. It can then be proved that induction, given the above-listed axioms, implies the well-ordering principle. The following proof uses complete induction and the first and fourth axioms. Proof. Suppose there exists a non-empty set, , of natural numbers that has no least element. Let be the assertion that is not in . Then is true, for if it were false then 0 is the least element of . Furthermore, let be a natural number, and suppose is true for all natural numbers less than . Then if is false is in , thus being a minimal element in , a contradiction. Thus is true. Therefore, by the complete induction principle, holds for all natural numbers ; so is empty, a contradiction. Q.E.D. On the other hand, the set , shown in the picture, is well-ordered by the lexicographic order. Moreover, except for the induction axiom, it satisfies all Peano axioms, where Peano's constant 0 is interpreted as the pair (0, 0), and Peano's successor function is defined on pairs by for all and . As an example for the violation of the induction axiom, define the predicate as or for some and . Then the base case is trivially true, and so is the induction step: if , then . However, is not true for all pairs in the set, since is false. Peano's axioms with the induction principle uniquely model the natural numbers. Replacing the induction principle with the well-ordering principle allows for more exotic models that fulfill all the axioms. It is mistakenly printed in several books and sources that the well-ordering principle is equivalent to the induction axiom. In the context of the other Peano axioms, this is not the case, but in the context of other axioms, they are equivalent; specifically, the well-ordering principle implies the induction axiom in the context of the first two above listed axioms and Every natural number is either 0 or for some natural number . A common mistake in many erroneous proofs is to assume that is a unique and well-defined natural number, a property which is not implied by the other Peano axioms. See also Induction puzzles Proof by exhaustion Notes References Introduction (Ch. 8.) (Section 1.2.1: Mathematical Induction, pp. 11–21.) (Section 3.8: Transfinite induction, pp. 28–29.) History Reprinted (CP 3.252–288), (W 4:299–309) Articles containing proofs Mathematical logic Methods of proof
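As a programmatic companion to the coin example worked above (once with ordinary induction and once with strong induction), the recursive Python sketch below mirrors the inductive argument directly: the base case supplies the combination for 12 dollars, and the recursive call plays the role of the induction hypothesis. The function and its name are illustrative only and are not part of the article.

```python
def coins_for(n):
    """Return (fours, fives) with 4*fours + 5*fives == n, for any n >= 12.

    Mirrors the induction in the article: the base case handles n == 12,
    and the step turns a combination for n - 1 into one for n, either by
    swapping a 4-dollar coin for a 5-dollar coin, or (if only 5-dollar
    coins were used) swapping three 5s for four 4s.
    """
    if n < 12:
        raise ValueError("the argument only covers amounts of at least 12 dollars")
    if n == 12:
        return 3, 0                      # base case: 4 + 4 + 4
    fours, fives = coins_for(n - 1)      # induction hypothesis
    if fours > 0:
        return fours - 1, fives + 1      # one 4 -> one 5 adds a dollar
    return fours + 4, fives - 3          # three 5s -> four 4s adds a dollar

for amount in range(12, 30):
    fours, fives = coins_for(amount)
    assert 4 * fours + 5 * fives == amount
```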
Mathematical induction
[ "Mathematics" ]
4,949
[ "Proof theory", "Mathematical logic", "Methods of proof", "Articles containing proofs", "Mathematical induction" ]
18,890
https://en.wikipedia.org/wiki/Microsoft%20Windows
Windows is a product line of proprietary graphical operating systems developed and marketed by Microsoft. It is grouped into families and subfamilies that cater to particular sectors of the computing industry – Windows (unqualified) for a consumer or corporate workstation, Windows Server for a server and Windows IoT for an embedded system. Windows is sold as either a consumer retail product or licensed to third-party hardware manufacturers who sell products bundled with Windows. The first version of Windows, Windows 1.0, was released on November 20, 1985, as a graphical operating system shell for MS-DOS in response to the growing interest in graphical user interfaces (GUIs). The name "Windows" is a reference to the windowing system in GUIs. The 1990 release of Windows 3.0 catapulted its market success and led to various other product families, including the now-defunct Windows 9x, Windows Mobile, Windows Phone, and Windows CE/Embedded Compact. Windows is the most popular desktop operating system in the world, with a 70% market share , according to StatCounter; however when including mobile OSes, it is not the most used, in favor of Android. The most recent version of Windows is Windows 11 for consumer PCs and tablets, Windows 11 Enterprise for corporations, and Windows Server 2025 for servers. Still supported are some editions of Windows 10, Windows Server 2016 or later (and exceptionally with paid support down to Windows Server 2008). Product line the only active top-level family is Windows NT. The first version, Windows NT 3.1, was intended for server computing and corporate workstations. It grew into a product line of its own and now consists of four sub-families that tend to be released almost simultaneously and share the same kernel. Windows (unqualified): For a consumer or corporate workstation or tablet. The latest version is Windows 11. Its main competitors are macOS by Apple and Linux for personal computers and iPadOS and Android for tablets (c.f. ). Of note: "Windows" refers to both the overall product line and this sub-family of it. Windows Server: For a server computer. The latest version is Windows Server 2025. Unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux. (c.f. ) Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers (especially on many computers at once), recovery, or troubleshooting purposes. The latest version is Windows PE 10. Windows IoT (previously Windows Embedded): For IoT and embedded computers. The latest version is Windows 11 IoT Enterprise. Like Windows Server, the main competitor of this family is Linux. (c.f. ) These top-level Windows families are no longer actively developed: Windows 9x: Intended exclusively for the consumer market. The first version was Windows 95, which was followed by Windows 98. The last version was Windows Me (which was infamously known as the worst operating systems of all time, with PC World labeling it as "Mistake Edition" and placing it 4th in their list of Worst Tech Products in 2006). All versions of the Windows 9x family have a monolithic kernel that uses MS-DOS as a foundation alongside the kernel first used with Windows 95. This line has since been defunct, with Microsoft now catering to the consumer market with Windows NT starting with Windows XP. Windows Mobile: The predecessor to Windows Phone, a mobile phone and PDA operating system. 
The first version was called Pocket PC 2000. The third version, Windows Mobile 2003, was the first version to adopt the Windows Mobile trademark. The last version was Windows Mobile 6.5. Windows Phone: Sold only to smartphone manufacturers. The first version was Windows Phone 7, followed by Windows Phone 8 and Windows Phone 8.1. It was succeeded by Windows 10 Mobile, which is also defunct. Windows Embedded Compact: Most commonly known by its former name, Windows CE, it is a hybrid kernel operating system optimized for low power and memory systems, with OEMs able to modify the UI to suit their needs. The final version was Windows Embedded Compact 2013, and it is succeeded by Windows IoT. Version history The term Windows collectively describes any or all of several generations of Microsoft operating system products. These products are generally categorized as follows: Early versions The history of Windows dates back to 1981 when Microsoft started work on a program called "Interface Manager". The name "Windows" comes from the fact that the system was one of the first to use graphical boxes to represent programs; in the industry, at the time, these were called "windows" and the underlying software was called "windowing software." It was announced in November 1983 (after the Apple Lisa, but before the Macintosh) under the name "Windows", but Windows 1.0 was not released until November 1985. Windows 1.0 was to compete with Apple's operating system, but achieved little popularity. Windows 1.0 is not a complete operating system; rather, it extends MS-DOS. The shell of Windows 1.0 is a program known as the MS-DOS Executive. Components included Calculator, Calendar, Cardfile, Clipboard Viewer, Clock, Control Panel, Notepad, Paint, Reversi, Terminal and Write. Windows 1.0 does not allow overlapping windows. Instead, all windows are tiled. Only modal dialog boxes may appear over other windows. Microsoft sold as included Windows Development libraries with the C development environment, which included numerous windows samples. Windows 2.0 was released in December 1987, and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows. The result of this change led to Apple Computer filing a suit against Microsoft alleging infringement on Apple's copyrights (eventually settled in court in Microsoft's favor in 1993). Windows 2.0 also introduced more sophisticated keyboard shortcuts and could make use of expanded memory. Windows 2.1 was released in two different versions: Windows/286 and Windows/386. Windows/386 uses the virtual 8086 mode of the Intel 80386 to multitask several DOS programs and the paged memory model to emulate expanded memory using available extended memory. Windows/286, in spite of its name, runs on both Intel 8086 and Intel 80286 processors. It runs in real mode but can make use of the high memory area. In addition to full Windows packages, there were runtime-only versions that shipped with early Windows software from third parties and made it possible to run their Windows software on MS-DOS and without the full Windows feature set. The early versions of Windows are often thought of as graphical shells, mostly because they ran on top of MS-DOS and used it for file system services. 
However, even the earliest Windows versions already assumed many typical operating system functions; notably, having their own executable file format and providing their own device drivers (timer, graphics, printer, mouse, keyboard and sound). Unlike MS-DOS, Windows allowed users to execute multiple graphical applications at the same time, through cooperative multitasking. Windows implemented an elaborate, segment-based, software virtual memory scheme, which allows it to run applications larger than available memory: code segments and resources are swapped in and thrown away when memory became scarce; data segments moved in memory when a given application had relinquished processor control. Windows 3.x Windows 3.0, released in 1990, improved the design, mostly because of virtual memory and loadable virtual device drivers (VxDs) that allow Windows to share arbitrary devices between multi-tasked DOS applications. Windows 3.0 applications can run in protected mode, which gives them access to several megabytes of memory without the obligation to participate in the software virtual memory scheme. They run inside the same address space, where the segmented memory provides a degree of protection. Windows 3.0 also featured improvements to the user interface. Microsoft rewrote critical operations from C into assembly. Windows 3.0 was the first version of Windows to achieve broad commercial success, selling 2 million copies in the first six months. Windows 3.1, made generally available on March 1, 1992, featured a facelift. In October 1992, Windows for Workgroups, a special version with integrated peer-to-peer networking features, was released. It was sold along with Windows 3.1. Support for Windows 3.1 ended on December 31, 2001. Windows 3.2, released in 1994, is an updated version of the Chinese version of Windows 3.1. The update was limited to this language version, as it fixed only issues related to the complex writing system of the Chinese language. Windows 3.2 was generally sold by computer manufacturers with a ten-disk version of MS-DOS that also had Simplified Chinese characters in basic output and some translated utilities. Windows 9x The next major consumer-oriented release of Windows, Windows 95, was released on August 24, 1995. While still remaining MS-DOS-based, Windows 95 introduced support for native 32-bit applications, plug and play hardware, preemptive multitasking, long file names of up to 255 characters, and provided increased stability over its predecessors. Windows 95 also introduced a redesigned, object oriented user interface, replacing the previous Program Manager with the Start menu, taskbar, and Windows Explorer shell. Windows 95 was a major commercial success for Microsoft; Ina Fried of CNET remarked that "by the time Windows 95 was finally ushered off the market in 2001, it had become a fixture on computer desktops around the world." Microsoft published four OEM Service Releases (OSR) of Windows 95, each of which was roughly equivalent to a service pack. The first OSR of Windows 95 was also the first version of Windows to be bundled with Microsoft's web browser, Internet Explorer. Mainstream support for Windows 95 ended on December 31, 2000, and extended support for Windows 95 ended on December 31, 2001. Windows 95 was followed up with the release of Windows 98 on June 25, 1998, which introduced the Windows Driver Model, support for USB composite devices, support for ACPI, hibernation, and support for multi-monitor configurations. 
Windows 98 also included integration with Internet Explorer 4 through Active Desktop and other aspects of the Windows Desktop Update (a series of enhancements to the Explorer shell which was also made available for Windows 95). In May 1999, Microsoft released Windows 98 Second Edition, an updated version of Windows 98. Windows 98 SE added Internet Explorer 5.0 and Windows Media Player 6.2 amongst other upgrades. Mainstream support for Windows 98 ended on June 30, 2002, and extended support for Windows 98 ended on July 11, 2006. On September 14, 2000, Microsoft released Windows Me (Millennium Edition), the last DOS-based version of Windows. Windows Me incorporated visual interface enhancements from its Windows NT-based counterpart Windows 2000, had faster boot times than previous versions (which however, required the removal of the ability to access a real mode DOS environment, removing compatibility with some older programs), expanded multimedia functionality (including Windows Media Player 7, Windows Movie Maker, and the Windows Image Acquisition framework for retrieving images from scanners and digital cameras), additional system utilities such as System File Protection and System Restore, and updated home networking tools. However, Windows Me was faced with criticism for its speed and instability, along with hardware compatibility issues and its removal of real mode DOS support. PC World considered Windows Me to be one of the worst operating systems Microsoft had ever released, and the fourth worst tech product of all time. Windows NT Version history Early versions (Windows NT 3.1/3.5/3.51/4.0/2000) In November 1988, a new development team within Microsoft (which included former Digital Equipment Corporation developers Dave Cutler and Mark Lucovsky) began work on a revamped version of IBM and Microsoft's OS/2 operating system known as "NT OS/2". NT OS/2 was intended to be a secure, multi-user operating system with POSIX compatibility and a modular, portable kernel with preemptive multitasking and support for multiple processor architectures. However, following the successful release of Windows 3.0, the NT development team decided to rework the project to use an extended 32-bit port of the Windows API known as Win32 instead of those of OS/2. Win32 maintained a similar structure to the Windows APIs (allowing existing Windows applications to easily be ported to the platform), but also supported the capabilities of the existing NT kernel. Following its approval by Microsoft's staff, development continued on what was now Windows NT, the first 32-bit version of Windows. However, IBM objected to the changes, and ultimately continued OS/2 development on its own. Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. The first release of the resulting operating system, Windows NT 3.1 (named to associate it with Windows 3.1) was released in July 1993, with versions for desktop workstations and servers. Windows NT 3.5 was released in September 1994, focusing on performance improvements and support for Novell's NetWare, and was followed up by Windows NT 3.51 in May 1995, which included additional improvements and support for the PowerPC architecture. Windows NT 4.0 was released in June 1996, introducing the redesigned interface of Windows 95 to the NT series. 
On February 17, 2000, Microsoft released Windows 2000, a successor to NT 4.0. The Windows NT name was dropped at this point in order to put a greater focus on the Windows brand. Windows XP The next major version of Windows NT, Windows XP, was released to manufacturing (RTM) on August 24, 2001, and to the general public on October 25, 2001. The introduction of Windows XP aimed to unify the consumer-oriented Windows 9x series with the architecture introduced by Windows NT, a change which Microsoft promised would provide better performance over its DOS-based predecessors. Windows XP would also introduce a redesigned user interface (including an updated Start menu and a "task-oriented" Windows Explorer), streamlined multimedia and networking features, Internet Explorer 6, integration with Microsoft's .NET Passport services, a "compatibility mode" to help provide backwards compatibility with software designed for previous versions of Windows, and Remote Assistance functionality. At retail, Windows XP was marketed in two main editions: the "Home" edition was targeted towards consumers, while the "Professional" edition was targeted towards business environments and power users, and included additional security and networking features. Home and Professional were later accompanied by the "Media Center" edition (designed for home theater PCs, with an emphasis on support for DVD playback, TV tuner cards, DVR functionality, and remote controls), and the "Tablet PC" edition (designed for mobile devices meeting its specifications for a tablet computer, with support for stylus pen input and additional pen-enabled applications). Mainstream support for Windows XP ended on April 14, 2009. Extended support ended on April 8, 2014. After Windows 2000, Microsoft also changed its release schedules for server operating systems; the server counterpart of Windows XP, Windows Server 2003, was released in April 2003. It was followed in December 2005, by Windows Server 2003 R2. Windows Vista After a lengthy development process, Windows Vista was released on November 30, 2006, for volume licensing and January 30, 2007, for consumers. It contained a number of new features, from a redesigned shell and user interface to significant technical changes, with a particular focus on security features. It was available in a number of different editions, and has been subject to some criticism, such as drop of performance, longer boot time, criticism of new UAC, and stricter license agreement. Vista's server counterpart, Windows Server 2008 was released in early 2008. Windows 7 On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released to manufacturing (RTM) and released to the public three months later on October 22, 2009. Unlike its predecessor, Windows Vista, which introduced a large number of new features, Windows 7 was intended to be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista was already compatible. Windows 7 has multi-touch support, a redesigned Windows shell with an updated taskbar with revealable jump lists that contain shortcuts to files frequently used with specific applications and shortcuts to tasks within the application, a home networking system called HomeGroup, and performance improvements. Windows 8 and 8.1 Windows 8, the successor to Windows 7, was released generally on October 26, 2012. 
A number of significant changes were made in Windows 8, including the introduction of a user interface based around Microsoft's Metro design language, with optimizations for touch-based devices such as tablets and all-in-one PCs. These changes include the Start screen, which uses large tiles that are more convenient for touch interactions and allow for the display of continually updated information, and a new class of apps which are designed primarily for use on touch-based devices. The new Windows version required a minimum resolution of 1024×768 pixels, effectively making it unfit for netbooks with 800×600-pixel screens. Other changes include increased integration with cloud services and other online platforms (such as social networks and Microsoft's own OneDrive (formerly SkyDrive) and Xbox Live services), the Windows Store service for software distribution, a new variant known as Windows RT for use on devices that utilize the ARM architecture, and a new keyboard shortcut for screenshots. An update to Windows 8, called Windows 8.1, was released on October 17, 2013, and includes features such as new live tile sizes, deeper OneDrive integration, and many other revisions. Windows 8 and Windows 8.1 have been subject to some criticism, such as the removal of the Start menu. Windows 10 On September 30, 2014, Microsoft announced Windows 10 as the successor to Windows 8.1. It was released on July 29, 2015, and addresses shortcomings in the user interface first introduced with Windows 8. Changes on PC include the return of the Start Menu, a virtual desktop system, and the ability to run Windows Store apps within windows on the desktop rather than in full-screen mode. Windows 10 was made available as an upgrade for qualified devices running Windows 7 with SP1, Windows 8.1, and Windows Phone 8.1, through the Get Windows 10 application (for Windows 7 and Windows 8.1) or Windows Update (for Windows 7). In February 2017, Microsoft announced the migration of its Windows source code repository from Perforce to Git. This migration involved 3.5 million separate files in a 300-gigabyte repository. By May 2017, 90 percent of its engineering team was using Git, producing about 8500 commits and 1760 Windows builds per day. In June 2021, shortly before Microsoft's announcement of Windows 11, Microsoft updated its lifecycle policy pages for Windows 10, revealing that support for the last release of Windows 10 will end on October 14, 2025. On April 27, 2023, Microsoft announced that version 22H2 would be the last version of Windows 10. Windows 11 On June 24, 2021, Windows 11 was announced as the successor to Windows 10 during a livestream. The new operating system was designed to be more user-friendly and understandable. It was released on October 5, 2021. Windows 11 is a free upgrade for Windows 10 users who meet the system requirements. Windows 365 In July 2021, Microsoft announced that it would start selling subscriptions to virtualized Windows desktops as part of a new Windows 365 service the following month. The new service allows for cross-platform usage, aiming to make the operating system available to both Apple and Android users. It is a separate service and offers several variations including Windows 365 Frontline, Windows 365 Boot, and the Windows 365 app. The subscription service is accessible through any operating system with a web browser. 
The new service is an attempt at capitalizing on the growing trend, fostered during the COVID-19 pandemic, for businesses to adopt a hybrid remote work environment, in which "employees split their time between the office and home". As the service is accessible through web browsers, Microsoft is able to bypass the need to publish the service through Google Play or the Apple App Store. Microsoft announced Windows 365 availability to business and enterprise customers on August 2, 2021. Multilingual support Multilingual support has been built into Windows since Windows 3.0. The language for both the keyboard and the interface can be changed through the Region and Language Control Panel. Components for all supported input languages, such as Input Method Editors, are automatically installed during Windows installation (in Windows XP and earlier, files for East Asian languages, such as Chinese, and files for right-to-left scripts, such as Arabic, may need to be installed separately, also from the said Control Panel). Third-party IMEs may also be installed if a user feels that the provided one is insufficient for their needs. Since Windows 2000, English editions of Windows NT have East Asian IMEs (such as Microsoft Pinyin IME and Microsoft Japanese IME) bundled, but files for East Asian languages may be manually installed from the Control Panel. Interface languages for the operating system are free for download, but some languages are limited to certain editions of Windows. Language Interface Packs (LIPs) are redistributable and may be downloaded from Microsoft's Download Center and installed for any edition of Windows (XP or later); they translate most, but not all, of the Windows interface, and require a certain base language (the language which Windows originally shipped with). This is used for most languages in emerging markets. Full Language Packs, which translate the complete operating system, are only available for specific editions of Windows (Ultimate and Enterprise editions of Windows Vista and 7, and all editions of Windows 8, 8.1 and RT except Single Language). They do not require a specific base language and are commonly used for more popular languages such as French or Chinese. These languages cannot be downloaded through the Download Center, but are available as optional updates through the Windows Update service (except Windows 8). The interface language of installed applications is not affected by changes in the Windows interface language. The availability of languages depends on the application developers themselves. Windows 8 and Windows Server 2012 introduce a new Language Control Panel where both the interface and input languages can be simultaneously changed, and language packs, regardless of type, can be downloaded from a central location. The PC Settings app in Windows 8.1 and Windows Server 2012 R2 also includes a counterpart settings page for this. Changing the interface language also changes the language of preinstalled Windows Store apps (such as Mail, Maps and News) and certain other Microsoft-developed apps (such as Remote Desktop). The above limitations for language packs are, however, still in effect, except that full language packs can be installed for any edition except Single Language, which caters to emerging markets. Platform support Windows NT included support for several platforms before the x86-based personal computer became dominant in the professional world. 
Windows NT 4.0 and its predecessors supported PowerPC, DEC Alpha and MIPS R4000 (although some of the platforms implement 64-bit computing, the OS treated them as 32-bit). Windows 2000 dropped support for all platforms except the third generation x86 (known as IA-32) or newer in 32-bit mode. The client line of the Windows NT family still ran on IA-32 up to Windows 10 (the server line of the Windows NT family still ran on IA-32 up to Windows Server 2008). With the introduction of the Intel Itanium architecture (IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 counterparts. Windows XP 64-Bit Edition (Version 2003), released in 2003, is the last Windows client operating system to support Itanium. The Windows Server line continued to support this platform until Windows Server 2012; Windows Server 2008 R2 is the last Windows operating system to support the Itanium architecture. On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 editions to support x86-64 (or simply x64), the 64-bit version of the x86 architecture. Windows Vista was the first client version of Windows NT to be released simultaneously in IA-32 and x64 editions. As of 2024, x64 is still supported. An edition of Windows 8 known as Windows RT was specifically created for computers with the ARM architecture, and while ARM is still used for Windows smartphones with Windows 10, tablets with Windows RT will not be updated. Starting with the Windows 10 Fall Creators Update (version 1709), Windows includes support for ARM-based PCs. Windows CE Windows CE (officially known as Windows Embedded Compact) is an edition of Windows that runs on minimalistic computers, like satellite navigation systems and some mobile phones. Windows Embedded Compact is based on its own dedicated kernel, dubbed the Windows CE kernel. Microsoft licenses Windows CE to OEMs and device makers. The OEMs and device makers can modify and create their own user interfaces and experiences, while Windows CE provides the technical foundation to do so. Windows CE was used in the Dreamcast along with Sega's own proprietary OS for the console. Windows CE was the core from which Windows Mobile was derived. Its successor, Windows Phone 7, was based on components from both Windows CE 6.0 R3 and Windows CE 7.0. Windows Phone 8, however, is based on the same NT kernel as Windows 8. Windows Embedded Compact is not to be confused with Windows XP Embedded or Windows NT 4.0 Embedded, modular editions of Windows based on the Windows NT kernel. Xbox OS Xbox OS is an unofficial name given to the version of Windows that runs on Xbox consoles. From Xbox One onwards it is an implementation with an emphasis on virtualization (using Hyper-V), as it is three operating systems running at once, consisting of the core operating system, a second implemented for games and a more Windows-like environment for applications. Microsoft updates Xbox One's OS every month, and these updates can be downloaded from the Xbox Live service to the Xbox and subsequently installed, or by using offline recovery images downloaded via a PC. It was originally based on the NT 6.2 (Windows 8) kernel, and the latest version runs on an NT 10.0 base. This system is sometimes referred to as "Windows 10 on Xbox One". 
Xbox One and Xbox Series operating systems also allow limited (due to licensing restrictions and testing resources) backward compatibility with previous generation hardware, and the Xbox 360's system is backwards compatible with the original Xbox. Version control system Up to and including every version before Windows 2000, Microsoft used an in-house version control system named Source Library Manager (SLM). Shortly after Windows 2000 was released, Microsoft switched to a fork of Perforce named Source Depot. This system was used until 2017, when it could no longer keep up with the size of Windows. Microsoft had begun to integrate Git into Team Foundation Server in 2013, but Windows (and Office) continued to rely on Source Depot. The Windows code was divided among 65 different repositories with a kind of virtualization layer to produce a unified view of all of the code. In 2017 Microsoft announced that it would start using Git, an open source version control system created by Linus Torvalds, and in May 2017 they reported that the migration into a new Git repository was complete. VFSForGit Because of its large, decades-long history, however, the Windows codebase is not especially well suited to the decentralized nature of Linux development that Git was originally created to manage. Each Git repository contains a complete history of all the files, which proved unworkable for Windows developers because cloning the whole repository takes several hours. Microsoft has been working on a new project called the Virtual File System for Git (VFSForGit) to address these challenges. In 2021 the VFS for Git was superseded by Scalar. Timeline of releases Usage share and device sales Use of Windows 10 has exceeded Windows 7 globally since early 2018. For desktop and laptop computers, according to Net Applications and StatCounter (which track the use of operating systems in devices that are active on the Web), Windows was the most used operating-system family in August 2021, with around 91% usage share according to Net Applications and around 76% usage share according to StatCounter. Including personal computers of all kinds (e.g., desktops, laptops, mobile devices, and game consoles), Windows OSes accounted for 32.67% of usage share in August 2021, compared to Android (highest, at 46.03%), iOS's 13.76%, iPadOS's 2.81%, and macOS's 2.51%, according to Net Applications, and 30.73% of usage share in August 2021, compared to Android (highest, at 42.56%), iOS/iPadOS's 16.53%, and macOS's 6.51%, according to StatCounter. Those statistics do not include servers (including cloud computing, where Linux has significantly more market share than Windows), as Net Applications and StatCounter use web browsing as a proxy for all use. Security Early versions of Windows were designed at a time when malware and networking were less common, and had few built-in security features; they did not provide access privileges to allow a user to prevent other users from accessing their files, and they did not provide memory protection to prevent one process from reading or writing another process's address space or to prevent a process from accessing code or data used by privileged-mode code. While the Windows 9x series offered the option of having separate profiles and home folders for multiple users, it had no concept of access privileges, allowing any user to edit others' files. 
In addition, while it ran separate 32-bit applications in separate address spaces, protecting an application's code and data from being read or written by another application, it did not protect the first megabyte of memory from userland applications for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and by writing into this area of memory an application can crash or freeze the operating system. This was a source of instability, as faulty applications could accidentally write into this region, potentially corrupting important operating system memory, which usually resulted in some form of system error and halt. Windows NT was far more secure, implementing access privileges and full memory protection and meeting the DoD's C2 security rating, yet these advantages were nullified by the fact that, prior to Windows Vista, the default user account created during the setup process was an administrator account; the user, and any program the user launched, had full access to the machine. Though Windows XP did offer an option of turning administrator accounts into limited accounts, the majority of home users did not do so, partially due to the number of programs which required administrator rights to function properly. As a result, most home users still ran as administrator all the time. These architectural flaws, combined with Windows's very high popularity, made Windows a frequent target of computer worm and virus writers. Furthermore, although Windows NT and its successors are designed for security (including on a network) and multi-user PCs, they were not initially designed with Internet security in mind as much, since, when Windows NT was first developed in the early 1990s, Internet use was less prevalent. In a 2002 strategy memo entitled "Trustworthy computing" sent to every Microsoft employee, Bill Gates declared that security should become Microsoft's highest priority. Windows Vista introduced a privilege elevation system called User Account Control. When logging in as a standard user, a logon session is created and a token containing only the most basic privileges is assigned. In this way, the new logon session is incapable of making changes that would affect the entire system. When logging in as a user in the Administrators group, two separate tokens are assigned. The first token contains all privileges typically awarded to an administrator, and the second is a restricted token similar to what a standard user would receive. User applications, including the Windows shell, are then started with the restricted token, resulting in a reduced-privilege environment even under an Administrator account. When an application requests higher privileges or "Run as administrator" is clicked, UAC will prompt for confirmation and, if consent is given (including administrator credentials if the account requesting the elevation is not a member of the administrators group), start the process using the unrestricted token. 
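This split-token behaviour is visible to ordinary applications. The sketch below is a minimal, illustrative Python example (not part of Windows or of any Microsoft sample) that uses two documented Win32 entry points, shell32.IsUserAnAdmin and shell32.ShellExecuteW with the "runas" verb, to detect whether the current process received the restricted token and, if not elevated, to ask UAC for elevation; the function names and the re-launch strategy are assumptions made purely for illustration.

```python
# Illustrative sketch: detecting and requesting UAC elevation from Python on Windows.
# Only the two documented shell32 calls are real Windows APIs; the surrounding
# structure is a hypothetical example, not an official Microsoft pattern.
import ctypes
import sys


def running_elevated() -> bool:
    """Return True if this process holds the unrestricted (administrator) token."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        return False  # not running on Windows, or the call is unavailable


def relaunch_elevated() -> None:
    """Re-launch this script via ShellExecuteW's "runas" verb, which shows the UAC prompt."""
    params = " ".join(f'"{arg}"' for arg in sys.argv)
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, params, None, 1)


if __name__ == "__main__":
    if running_elevated():
        print("Running with the full administrator token.")
    else:
        print("Running with the restricted token; asking UAC for elevation...")
        relaunch_elevated()
```

If the user declines the consent prompt, the ShellExecuteW call simply fails and the original process continues to run with the restricted token.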
Leaked documents from 2013 to 2016 codenamed Vault 7 detail the capabilities of the CIA to perform electronic surveillance and cyber warfare, such as the ability to compromise operating systems such as Windows. In August 2019, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Windows versions via the Remote Desktop Protocol and allows for the possibility of remote code execution, may include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability, based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe), that affects all Windows versions from Windows XP to the then most recent Windows 10 versions; a patch to correct the flaw is available. Microsoft releases security patches through its Windows Update service approximately once a month (usually the second Tuesday of the month), although critical updates are made available at shorter intervals when necessary. Versions subsequent to Windows 2000 SP3 and Windows XP implemented automatic download and installation of updates, substantially increasing the number of users installing security updates. Windows integrates the Windows Defender antivirus, which is seen as one of the best available. Windows also implements Secure Boot, Control Flow Guard, ransomware protection, BitLocker disk encryption, a firewall, and Windows SmartScreen. In July 2024, Microsoft signalled an intention to limit kernel access and improve overall security, following a highly publicised CrowdStrike update that caused 8.5 million Windows PCs to crash. Part of that initiative is to rewrite parts of Windows in Rust, a memory-safe language. File permissions All Windows versions from Windows NT 3 have been based on a file system permission system referred to as AGDLP (Accounts, Global, Domain Local, Permissions), in which file permissions are applied to the file or folder in the form of a "local group" which then has other "global groups" as members. These global groups then hold other groups or users depending on the Windows version used. This system varies from other vendor products such as Linux and NetWare due to the "static" allocation of permissions being applied directly to the file or folder. However, using this process of AGLP/AGDLP/AGUDLP allows a small number of static permissions to be applied and allows for easy changes to the account groups without reapplying the file permissions on the files and folders. Vulnerabilities Sticky keys and filter keys Sticky Keys and Filter Keys are accessibility features whose launchers have historically been a significant Windows vulnerability for attackers with physical access: because they can be invoked from the lock screen, renaming a copy of cmd.exe to replace one of these two programs lets an attacker open a command prompt without logging in and run arbitrary commands, including commands that grant themselves administrator rights. WinRE Main article: Windows Preinstallation Environment. Windows RE, the Windows Recovery Environment (which is built on Windows PE), can be a similar weakness: booting into it gives a user with physical access a command-line environment from which many commands can be executed and files of virtually any program on the installed system can be modified. Alternative implementations Owing to the operating system's popularity, a number of applications have been released that aim to provide compatibility with Windows applications, either as a compatibility layer for another operating system, or as a standalone system that can run software written for Windows out of the box. These include: Wine – a free and open-source implementation of the Windows API, allowing one to run many Windows applications on x86-based platforms, including UNIX, Linux and macOS. Wine developers refer to it as a "compatibility layer" and use Windows-style APIs to emulate a Windows environment. CrossOver – a Wine package with licensed fonts. 
Its developers are regular contributors to Wine. Proton – A fork of Wine by Valve to run Windows games on Linux and other Unix-like OS. ReactOS – an open-source OS intended to run the same software as Windows, originally designed to simulate Windows NT 4.0, later aiming at Windows 7 compatibility. It has been in the development stage since 1996. Freedows OS – an open-source attempt at creating a Windows clone for x86 platforms, intended to be released under the GNU General Public License. Started in 1996 by Reece K. Sellin, the project was never completed, getting only to the stage of design discussions which featured a number of novel concepts until it was suspended in 2002. See also Wintel References External links Official Windows Blog Microsoft Developer Network Windows Developer Center Microsoft Windows History Timeline Pearson Education, InformIT  – History of Microsoft Windows Microsoft Business Software Solutions Windows 10 release Information 1985 software Computing platforms Microsoft franchises Personal computers Windows Operating system families Products introduced in 1985 1985 establishments in the United States
Microsoft Windows
[ "Technology" ]
7,731
[ "Computing platforms", "Microsoft Windows" ]
18,899
https://en.wikipedia.org/wiki/Mendelevium
Mendelevium is a synthetic chemical element; it has symbol Md (formerly Mv) and atomic number 101. A metallic radioactive transuranium element in the actinide series, it is the first element by atomic number that currently cannot be produced in macroscopic quantities by neutron bombardment of lighter elements. It is the third-to-last actinide and the ninth transuranic element and the first transfermium. It can only be produced in particle accelerators by bombarding lighter elements with charged particles. Seventeen isotopes are known; the most stable is 258Md with half-life 51.59 days; however, the shorter-lived 256Md (half-life 77.7 minutes) is most commonly used in chemistry because it can be produced on a larger scale. Mendelevium was discovered by bombarding einsteinium with alpha particles in 1955, the method still used to produce it today. It is named after Dmitri Mendeleev, the father of the periodic table. Using available microgram quantities of einsteinium-253, over a million mendelevium atoms may be made each hour. The chemistry of mendelevium is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. All known isotopes of mendelevium have short half-lives; there are currently no uses for it outside basic scientific research, and only small amounts are produced. Discovery Mendelevium was the ninth transuranic element to be synthesized. It was first synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory Robert Choppin, Bernard G. Harvey, and team leader Stanley G. Thompson in early 1955 at the University of California, Berkeley. The team produced 256Md (half-life of 77.7 minutes) when they bombarded an 253Es target consisting of only a billion (109) einsteinium atoms with alpha particles (helium nuclei) in the Berkeley Radiation Laboratory's 60-inch cyclotron, thus increasing the target's atomic number by two. 256Md thus became the first isotope of any element to be synthesized one atom at a time. In total, seventeen mendelevium atoms were produced. This discovery was part of a program, begun in 1952, that irradiated plutonium with neutrons to transmute it into heavier actinides. This method was necessary as the previous method used to synthesize transuranic elements, neutron capture, could not work because of a lack of known beta decaying isotopes of fermium that would produce isotopes of the next element, mendelevium, and also due to the very short half-life to spontaneous fission of 258Fm that thus constituted a hard limit to the success of the neutron capture process. To predict if the production of mendelevium would be possible, the team made use of a rough calculation. The number of atoms that would be produced would be approximately equal to the product of the number of atoms of target material, the target's cross section, the ion beam intensity, and the time of bombardment; this last factor was related to the half-life of the product when bombarding for a time on the order of its half-life. This gave one atom per experiment. Thus under optimum conditions, the preparation of only one atom of element 101 per experiment could be expected. This calculation demonstrated that it was feasible to go ahead with the experiment. 
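For a sense of scale, that yield estimate can be reproduced as a rough order-of-magnitude calculation. In the sketch below the reaction cross-section is an assumed round value of about one millibarn (it is not a figure taken from the discovery papers); the target size, beam intensity, irradiated area, and bombardment time are the values reported later in this article for the actual experiment.

```python
# Order-of-magnitude yield estimate in the spirit of the calculation described above:
# expected atoms ≈ (target atoms) × (cross-section) × (beam flux) × (bombardment time).
n_target = 1e9          # einsteinium-253 atoms on the recoil target
sigma = 1e-27           # assumed reaction cross-section in cm² (about 1 millibarn)
beam_rate = 6e13        # alpha particles per second
beam_area = 0.05        # irradiated area in cm²
time_s = 3 * 3600       # one three-hour bombardment, in seconds

flux = beam_rate / beam_area                     # particles per cm² per second
expected_atoms = n_target * sigma * flux * time_s
print(f"Expected mendelevium atoms per bombardment: about {expected_atoms:.0f}")
# With these assumed inputs the estimate comes out at roughly ten atoms per bombardment;
# a cross-section a few times smaller gives the "one atom per experiment" scale that
# the discovery team anticipated.
```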
The target material, einsteinium-253, could be produced readily from irradiating plutonium: one year of irradiation would give a billion atoms, and its three-week half-life meant that the element 101 experiments could be conducted in one week after the produced einsteinium was separated and purified to make the target. However, it was necessary to upgrade the cyclotron to obtain the needed intensity of 10¹⁴ alpha particles per second; Seaborg applied for the necessary funds. While Seaborg applied for funding, Harvey worked on the einsteinium target, while Thompson and Choppin focused on methods for chemical isolation. Choppin suggested using α-hydroxyisobutyric acid to separate the mendelevium atoms from those of the lighter actinides. The actual synthesis was done by a recoil technique, introduced by Albert Ghiorso. In this technique, the einsteinium was placed on the opposite side of the target from the beam, so that the recoiling mendelevium atoms would get enough momentum to leave the target and be caught on a catcher foil made of gold. This recoil target was made by an electroplating technique, developed by Alfred Chetham-Strode. This technique gave a very high yield, which was absolutely necessary when working with such a rare and valuable product as the einsteinium target material. The recoil target consisted of 10⁹ atoms of 253Es which were deposited electrolytically on a thin gold foil. It was bombarded by 41 MeV alpha particles in the Berkeley cyclotron with a very high beam density of 6×10¹³ particles per second over an area of 0.05 cm². The target was cooled by water or liquid helium, and the foil could be replaced. Initial experiments were carried out in September 1954. No alpha decay was seen from mendelevium atoms; thus, Ghiorso suggested that the mendelevium had all decayed by electron capture to fermium and that the experiment should be repeated to search instead for spontaneous fission events. The repetition of the experiment happened in February 1955. On the day of discovery, 19 February, alpha irradiation of the einsteinium target occurred in three three-hour sessions. The cyclotron was in the University of California campus, while the Radiation Laboratory was on the next hill. To deal with this situation, a complex procedure was used: Ghiorso took the catcher foils (there were three targets and three foils) from the cyclotron to Harvey, who would use aqua regia to dissolve them and pass them through an anion-exchange resin column to separate out the transuranium elements from the gold and other products. The resultant drops entered a test tube, which Choppin and Ghiorso took in a car to get to the Radiation Laboratory as soon as possible. There Thompson and Choppin used a cation-exchange resin column and the α-hydroxyisobutyric acid. The solution drops were collected on platinum disks and dried under heat lamps. The three disks were expected to contain respectively the fermium, no new elements, and the mendelevium. Finally, they were placed in their own counters, which were connected to recorders such that spontaneous fission events would be recorded as huge deflections in a graph showing the number and time of the decays. There was thus no direct detection of mendelevium, but rather observation of spontaneous fission events arising from its electron-capture daughter 256Fm. The first one was identified with a "hooray" followed by a "double hooray" and a "triple hooray". The fourth one eventually officially proved the chemical identification of the 101st element, mendelevium. 
In total, five decays were reported up until 4 a.m. Seaborg was notified and the team left to sleep. Additional analysis and further experimentation showed the produced mendelevium isotope to have mass 256 and to decay by electron capture to fermium-256 with a half-life of 157.6 minutes. Being the first of the second hundred of the chemical elements, it was decided that the element would be named "mendelevium" after the Russian chemist Dmitri Mendeleev, father of the periodic table. Because this discovery came during the Cold War, Seaborg had to request permission of the government of the United States to propose that the element be named for a Russian, but it was granted. The name "mendelevium" was accepted by the International Union of Pure and Applied Chemistry (IUPAC) in 1955 with symbol "Mv", which was changed to "Md" in the next IUPAC General Assembly (Paris, 1957). Characteristics Physical In the periodic table, mendelevium is located to the right of the actinide fermium, to the left of the actinide nobelium, and below the lanthanide thulium. Mendelevium metal has not yet been prepared in bulk quantities, and bulk preparation is currently impossible. Nevertheless, a number of predictions and some preliminary experimental results have been obtained regarding its properties. The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fⁿs² configurations, whereas the latter have fⁿ⁻¹d¹s² configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals. The conclusion was that the increased binding energy of the [Rn]5f¹²6d¹7s² configuration over the [Rn]5f¹³7s² configuration for mendelevium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number. Thermochromatographic studies with trace quantities of mendelevium by Zvara and Hübener from 1976 to 1982 confirmed this prediction. In 1990, Haire and Gibson estimated mendelevium metal to have an enthalpy of sublimation between 134 and 142 kJ/mol. Divalent mendelevium metal should have a metallic radius of around . Like the other divalent late actinides (except the once again trivalent lawrencium), metallic mendelevium should assume a face-centered cubic crystal structure. Mendelevium's melting point has been estimated at 800 °C, the same value as that predicted for the neighboring element nobelium. Its density is predicted to be around . Chemical The chemistry of mendelevium is mostly known only in solution, in which it can take on the +3 or +2 oxidation states. The +1 state has also been reported, but has not yet been confirmed. Before mendelevium's discovery, Seaborg and Katz predicted that it should be predominantly trivalent in aqueous solution and hence should behave similarly to other tripositive lanthanides and actinides. 
After the synthesis of mendelevium in 1955, these predictions were confirmed, first in the observation at its discovery that it eluted just after fermium in the trivalent actinide elution sequence from a cation-exchange column of resin, and later in the 1967 observation that mendelevium could form insoluble hydroxides and fluorides that coprecipitated with trivalent lanthanide salts. Cation-exchange and solvent extraction studies led to the conclusion that mendelevium was a trivalent actinide with an ionic radius somewhat smaller than that of the previous actinide, fermium. Mendelevium can form coordination complexes with 1,2-cyclohexanedinitrilotetraacetic acid (DCTA). In reducing conditions, mendelevium(III) can be easily reduced to mendelevium(II), which is stable in aqueous solution. The standard reduction potential of the E°(Md³⁺→Md²⁺) couple was variously estimated in 1967 as −0.10 V or −0.20 V: later 2013 experiments established the value as . In comparison, E°(Md³⁺→Md⁰) should be around −1.74 V, and E°(Md²⁺→Md⁰) should be around −2.5 V. Mendelevium(II)'s elution behavior has been compared with that of strontium(II) and europium(II). In 1973, mendelevium(I) was reported to have been produced by Russian scientists, who obtained it by reducing higher oxidation states of mendelevium with samarium(II). It was found to be stable in neutral water–ethanol solution and to be homologous to caesium(I). However, later experiments found no evidence for mendelevium(I) and found that mendelevium behaved like divalent elements when reduced, not like the monovalent alkali metals. Nevertheless, the Russian team conducted further studies on the thermodynamics of cocrystallizing mendelevium with alkali metal chlorides, and concluded that mendelevium(I) had formed and could form mixed crystals with divalent elements, thus cocrystallizing with them. The status of the +1 oxidation state is still tentative. The electrode potential E°(Md⁴⁺→Md³⁺) was predicted in 1975 to be +5.4 V; 1967 experiments with the strong oxidizing agent sodium bismuthate were unable to oxidize mendelevium(III) to mendelevium(IV). Atomic A mendelevium atom has 101 electrons. They are expected to be arranged in the configuration [Rn]5f¹³7s² (ground state term symbol ²F7/2), although experimental verification of this electron configuration had not yet been made as of 2006. The fifteen electrons in the 5f and 7s subshells are valence electrons. In forming compounds, three valence electrons may be lost, leaving behind a [Rn]5f¹² core: this conforms to the trend set by the other actinides with their [Rn]5fⁿ electron configurations in the tripositive state. The first ionization potential of mendelevium was measured to be at most (6.58 ± 0.07) eV in 1974, based on the assumption that the 7s electrons would ionize before the 5f ones; this value has not since been refined further, owing to mendelevium's scarcity and high radioactivity. The ionic radius of hexacoordinate Md³⁺ had been preliminarily estimated in 1978 to be around 91.2 pm; 1988 calculations based on the logarithmic trend between distribution coefficients and ionic radius produced a value of 89.6 pm, as well as an enthalpy of hydration of . Md²⁺ should have an ionic radius of 115 pm and hydration enthalpy −1413 kJ/mol; Md⁺ should have an ionic radius of 117 pm. Isotopes Seventeen isotopes of mendelevium are known, with mass numbers from 244 to 260; all are radioactive. Additionally, 14 nuclear isomers are known. 
Of these, the longest-lived isotope is 258Md with a half-life of 51.59 days, and the longest-lived isomer is 258mMd with a half-life of 57.0 minutes. Nevertheless, the shorter-lived 256Md (half-life 1.295 hours) is more often used in chemical experimentation because it can be produced in larger quantities from alpha particle irradiation of einsteinium. After 258Md, the next most stable mendelevium isotopes are 260Md with a half-life of 27.8 days, 257Md with a half-life of 5.52 hours, 259Md with a half-life of 1.60 hours, and 256Md with a half-life of 1.295 hours. All of the remaining mendelevium isotopes have half-lives that are less than an hour, and the majority of these have half-lives that are less than 5 minutes. The half-lives of mendelevium isotopes mostly increase smoothly from 244Md onwards, reaching a maximum at 258Md. Experiments and predictions suggest that the half-lives will then decrease, apart from 260Md with a half-life of 27.8 days, as spontaneous fission becomes the dominant decay mode due to the mutual repulsion of the protons posing a limit to the island of relative stability of long-lived nuclei in the actinide series. In addition, mendelevium is the element with the highest atomic number that has a known isotope with a half-life longer than one day. Mendelevium-256, the chemically most important isotope of mendelevium, decays through electron capture 90% of the time and alpha decay 10% of the time. It is most easily detected through the spontaneous fission of its electron capture daughter fermium-256, but in the presence of other nuclides that undergo spontaneous fission, alpha decays at the characteristic energies for mendelevium-256 (7.205 and 7.139 MeV) can provide more useful identification. Production and isolation The lightest isotopes (244Md to 247Md) are mostly produced through bombardment of bismuth targets with argon ions, while slightly heavier ones (248Md to 253Md) are produced by bombarding plutonium and americium targets with ions of carbon and nitrogen. The most important and most stable isotopes are in the range from 254Md to 258Md and are produced through bombardment of einsteinium with alpha particles: einsteinium-253, −254, and −255 can all be used. 259Md is produced as a daughter of 259No, and 260Md can be produced in a transfer reaction between einsteinium-254 and oxygen-18. Typically, the most commonly used isotope 256Md is produced by bombarding either einsteinium-253 or −254 with alpha particles: einsteinium-254 is preferred when available because it has a longer half-life and therefore can be used as a target for longer. Using available microgram quantities of einsteinium, femtogram quantities of mendelevium-256 may be produced. The recoil momentum of the produced mendelevium-256 atoms is used to bring them physically far away from the einsteinium target from which they are produced, bringing them onto a thin foil of metal (usually beryllium, aluminium, platinum, or gold) just behind the target in a vacuum. This eliminates the need for immediate chemical separation, which is both costly and prevents reusing of the expensive einsteinium target. The mendelevium atoms are then trapped in a gas atmosphere (frequently helium), and a gas jet from a small opening in the reaction chamber carries the mendelevium along. Using a long capillary tube, and including potassium chloride aerosols in the helium gas, the mendelevium atoms can be transported over tens of meters to be chemically analyzed and have their quantity determined. 
The mendelevium can then be separated from the foil material and other fission products by applying acid to the foil and then coprecipitating the mendelevium with lanthanum fluoride, then using a cation-exchange resin column with a 10% ethanol solution saturated with hydrochloric acid, acting as an eluant. However, if the foil is made of gold and thin enough, it is enough to simply dissolve the gold in aqua regia before separating the trivalent actinides from the gold using anion-exchange chromatography, the eluant being 6 M hydrochloric acid. Mendelevium can finally be separated from the other trivalent actinides using selective elution from a cation-exchange resin column, the eluant being ammonia α-HIB. Using the gas-jet method often renders the first two steps unnecessary. The above procedure is the most commonly used one for the separation of transeinsteinium elements. Another possible way to separate the trivalent actinides is via solvent extraction chromatography using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column, so that the heavier actinides elute later. The mendelevium separated by this method has the advantage of being free of organic complexing agent compared to the resin column; the disadvantage is that mendelevium then elutes very late in the elution sequence, after fermium. Another method to isolate mendelevium exploits the distinct elution properties of Md2+ from those of Es3+ and Fm3+. The initial steps are the same as above, and employs HDEHP for extraction chromatography, but coprecipitates the mendelevium with terbium fluoride instead of lanthanum fluoride. Then, 50 mg of chromium is added to the mendelevium to reduce it to the +2 state in 0.1 M hydrochloric acid with zinc or mercury. The solvent extraction then proceeds, and while the trivalent and tetravalent lanthanides and actinides remain on the column, mendelevium(II) does not and stays in the hydrochloric acid. It is then reoxidized to the +3 state using hydrogen peroxide and then isolated by selective elution with 2 M hydrochloric acid (to remove impurities, including chromium) and finally 6 M hydrochloric acid (to remove the mendelevium). It is also possible to use a column of cationite and zinc amalgam, using 1 M hydrochloric acid as an eluant, reducing Md(III) to Md(II) where it behaves like the alkaline earth metals. Thermochromatographic chemical isolation could be achieved using the volatile mendelevium hexafluoroacetylacetonate: the analogous fermium compound is also known and is also volatile. Toxicity Though few people come in contact with mendelevium, the International Commission on Radiological Protection has set annual exposure limits for the most stable isotope. For mendelevium-258, the ingestion limit was set at 9×105 becquerels (1 Bq = 1 decay per second). Given the half-life of this isotope, this is only 2.48 ng (nanograms). The inhalation limit is at 6000 Bq or 16.5 pg (picogram). Notes References Bibliography Further reading Hoffman, D.C., Ghiorso, A., Seaborg, G. T. The transuranium people: the inside story, (2000), 201–229 Morss, L. R., Edelstein, N. 
M., Fuger, J., The chemistry of the actinide and transactinide element, 3, (2006), 1630–1636 A Guide to the Elements – Revised Edition, Albert Stwertka, (Oxford University Press; 1998) External links Los Alamos National Laboratory – Mendelevium It's Elemental – Mendelevium Mendelevium at The Periodic Table of Videos (University of Nottingham) Environmental Chemistry – Md info Chemical elements Chemical elements with face-centered cubic structure Actinides Synthetic elements
Mendelevium
[ "Physics", "Chemistry" ]
4,876
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
18,906
https://en.wikipedia.org/wiki/Microfluidics
Microfluidics refers to a system that manipulates a small amount of fluids (10−9 to 10−18 liters) using small channels with sizes of ten to hundreds of micrometres. It is a multidisciplinary field that involves molecular analysis, molecular biology, and microelectronics. It has practical applications in the design of systems that process low volumes of fluids to achieve multiplexing, automation, and high-throughput screening. Microfluidics emerged in the beginning of the 1980s and is used in the development of inkjet printheads, DNA chips, lab-on-a-chip technology, micro-propulsion, and micro-thermal technologies. Typically, micro means one of the following features: Small volumes (μL, nL, pL, fL) Small size Low energy consumption Microdomain effects Typically microfluidic systems transport, mix, separate, or otherwise process fluids. Various applications rely on passive fluid control using capillary forces, in the form of capillary flow modifying elements, akin to flow resistors and flow accelerators. In some applications, external actuation means are additionally used for a directed transport of the media. Examples are rotary drives applying centrifugal forces for the fluid transport on the passive chips. Active microfluidics refers to the defined manipulation of the working fluid by active (micro) components such as micropumps or microvalves. Micropumps supply fluids in a continuous manner or are used for dosing. Microvalves determine the flow direction or the mode of movement of pumped liquids. Often, processes normally carried out in a lab are miniaturised on a single chip, which enhances efficiency and mobility, and reduces sample and reagent volumes. Microscale behaviour of fluids The behaviour of fluids at the microscale can differ from "macrofluidic" behaviour in that factors such as surface tension, energy dissipation, and fluidic resistance start to dominate the system. Microfluidics studies how these behaviours change, and how they can be worked around, or exploited for new uses. At small scales (channel size of around 100 nanometers to 500 micrometers) some interesting and sometimes unintuitive properties appear. In particular, the Reynolds number (which compares the effect of the momentum of a fluid to the effect of viscosity) can become very low. A key consequence is co-flowing fluids do not necessarily mix in the traditional sense, as flow becomes laminar rather than turbulent; molecular transport between them must often be through diffusion. High specificity of chemical and physical properties (concentration, pH, temperature, shear force, etc.) can also be ensured resulting in more uniform reaction conditions and higher grade products in single and multi-step reactions. Various kinds of microfluidic flows Microfluidic flows need only be constrained by geometrical length scale – the modalities and methods used to achieve such a geometrical constraint are highly dependent on the targeted application. Traditionally, microfluidic flows have been generated inside closed channels with the channel cross section being in the order of 10 μm x 10 μm. Each of these methods has its own associated techniques to maintain robust fluid flow which have matured over several years. Open microfluidics The behavior of fluids and their control in open microchannels was pioneered around 2005 and applied in air-to-liquid sample collection and chromatography. In open microfluidics, at least one boundary of the system is removed, exposing the fluid to air or another interface (i.e. liquid). 
Advantages of open microfluidics include accessibility to the flowing liquid for intervention, larger liquid-gas surface area, and minimized bubble formation. Another advantage of open microfluidics is the ability to integrate open systems with surface-tension driven fluid flow, which eliminates the need for external pumping methods such as peristaltic or syringe pumps. Open microfluidic devices are also easy and inexpensive to fabricate by milling, thermoforming, and hot embossing. In addition, open microfluidics eliminates the need to glue or bond a cover for devices, which could be detrimental to capillary flows. Examples of open microfluidics include open-channel microfluidics, rail-based microfluidics, paper-based, and thread-based microfluidics. Disadvantages to open systems include susceptibility to evaporation, contamination, and limited flow rate. Continuous-flow microfluidics Continuous flow microfluidics rely on the control of a steady state liquid flow through narrow channels or porous media predominantly by accelerating or hindering fluid flow in capillary elements. In paper based microfluidics, capillary elements can be achieved through the simple variation of section geometry. In general, the actuation of liquid flow is implemented either by external pressure sources, external mechanical pumps, integrated mechanical micropumps, or by combinations of capillary forces and electrokinetic mechanisms. Continuous-flow microfluidic operation is the mainstream approach because it is easy to implement and less sensitive to protein fouling problems. Continuous-flow devices are adequate for many well-defined and simple biochemical applications, and for certain tasks such as chemical separation, but they are less suitable for tasks requiring a high degree of flexibility or fluid manipulations. These closed-channel systems are inherently difficult to integrate and scale because the parameters that govern flow field vary along the flow path making the fluid flow at any one location dependent on the properties of the entire system. Permanently etched microstructures also lead to limited reconfigurability and poor fault tolerance capability. Computer-aided design automation approaches for continuous-flow microfluidics have been proposed in recent years to alleviate the design effort and to solve the scalability problems. Process monitoring capabilities in continuous-flow systems can be achieved with highly sensitive microfluidic flow sensors based on MEMS technology, which offers resolutions down to the nanoliter range. Droplet-based microfluidics Droplet-based microfluidics is a subcategory of microfluidics in contrast with continuous microfluidics; droplet-based microfluidics manipulates discrete volumes of fluids in immiscible phases with low Reynolds number and laminar flow regimes. Interest in droplet-based microfluidics systems has been growing substantially in past decades. Microdroplets allow for handling miniature volumes (μL to fL) of fluids conveniently, provide better mixing, encapsulation, sorting, and sensing, and suit high throughput experiments. Exploiting the benefits of droplet-based microfluidics efficiently requires a deep understanding of droplet generation to perform various logical operations such as droplet manipulation, droplet sorting, droplet merging, and droplet breakup. 
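To put numbers on the low-Reynolds-number, laminar regime invoked above, the short sketch below evaluates the Reynolds number (Re = ρvL/μ, inertia versus viscosity) and the capillary number (Ca = μv/γ, viscous stress versus interfacial tension) for a water-like fluid in a 100 μm channel; the flow speed and fluid properties are assumed, illustrative values rather than figures from any particular device.

```python
# Illustrative dimensionless-number estimate for a typical microchannel.
# All values are assumed round numbers for a water-like carrier fluid.
rho = 1000.0     # density, kg/m^3
mu = 1.0e-3      # dynamic viscosity, Pa·s
gamma = 0.05     # interfacial tension against an oil phase, N/m (assumed)
v = 1.0e-3       # mean flow speed, m/s (1 mm/s)
L = 100e-6       # channel width, m (100 micrometres)

reynolds = rho * v * L / mu     # ~0.1: far below the turbulent threshold, so flow is laminar
capillary = mu * v / gamma      # ~2e-5: interfacial tension dominates viscous stress

print(f"Re ≈ {reynolds:.1e}   Ca ≈ {capillary:.1e}")
# Low Re means co-flowing streams mix mainly by diffusion; low Ca favours the
# squeezing/dripping regimes that produce uniformly sized droplets.
```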
Digital microfluidics Alternatives to the above closed-channel continuous-flow systems include novel open structures, where discrete, independently controllable droplets are manipulated on a substrate using electrowetting. Following the analogy of digital microelectronics, this approach is referred to as digital microfluidics. Le Pesant et al. pioneered the use of electrocapillary forces to move droplets on a digital track. The "fluid transistor" pioneered by Cytonix also played a role. The technology was subsequently commercialised by Duke University. By using discrete unit-volume droplets, a microfluidic function can be reduced to a set of repeated basic operations, i.e., moving one unit of fluid over one unit of distance. This "digitisation" method facilitates the use of a hierarchical and cell-based approach for microfluidic biochip design. Therefore, digital microfluidics offers a flexible and scalable system architecture as well as high fault-tolerance capability. Moreover, because each droplet can be controlled independently, these systems also have dynamic reconfigurability, whereby groups of unit cells in a microfluidic array can be reconfigured to change their functionality during the concurrent execution of a set of bioassays. Although droplets are manipulated in confined microfluidic channels, since the control on droplets is not independent, it should not be confused as "digital microfluidics". One common actuation method for digital microfluidics is electrowetting-on-dielectric (EWOD). Many lab-on-a-chip applications have been demonstrated within the digital microfluidics paradigm using electrowetting. However, recently other techniques for droplet manipulation have also been demonstrated using magnetic force, surface acoustic waves, optoelectrowetting, mechanical actuation, etc. Paper-based microfluidics Paper-based microfluidic devices fill a growing niche for portable, cheap, and user-friendly medical diagnostic systems. Paper based microfluidics rely on the phenomenon of capillary penetration in porous media. To tune fluid penetration in porous substrates such as paper in two and three dimensions, the pore structure, wettability and geometry of the microfluidic devices can be controlled while the viscosity and evaporation rate of the liquid play a further significant role. Many such devices feature hydrophobic barriers on hydrophilic paper that passively transport aqueous solutions to outlets where biological reactions take place. Paper-based microfluidics are considered as portable point-of-care biosensors used in a remote setting where advanced medical diagnostic tools are not accessible. Current applications include portable glucose detection and environmental testing, with hopes of reaching areas that lack advanced medical diagnostic tools. Particle detection microfluidics One application area that has seen significant academic effort and some commercial effort is in the area of particle detection in fluids. Particle detection of small fluid-borne particles down to about 1 μm in diameter is typically done using a Coulter counter, in which electrical signals are generated when a weakly-conducting fluid such as in saline water is passed through a small (~100 μm diameter) pore, so that an electrical signal is generated that is directly proportional to the ratio of the particle volume to the pore volume. The physics behind this is relatively simple, described in a classic paper by DeBlois and Bean, and the implementation first described in Coulter's original patent. 
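As a rough illustration of the volume-ratio scaling of the resistive pulse (Coulter) signal just described, the sketch below compares a conventional large pore with a lithographically defined sub-micrometre pore; the geometries are assumed examples, and the relation ΔR/R ≈ particle volume / pore volume is used only at the order-of-magnitude level.

```python
# Order-of-magnitude resistive-pulse signal: the relative resistance change is taken as
# roughly (particle volume) / (pore volume).  Pore and particle sizes are assumed examples.
from math import pi


def sphere_volume(d):
    """Volume of a sphere of diameter d."""
    return pi * d ** 3 / 6.0


def cylinder_volume(diameter, length):
    """Volume of a cylindrical pore."""
    return pi * (diameter / 2.0) ** 2 * length


def relative_signal(particle_d, pore_d, pore_len):
    """Approximate ΔR/R for a spherical particle transiting a cylindrical pore."""
    return sphere_volume(particle_d) / cylinder_volume(pore_d, pore_len)


# Conventional Coulter-style pore (100 μm wide, 100 μm long) with a 5 μm particle:
print(f"dR/R ~ {relative_signal(5e-6, 100e-6, 100e-6):.1e}")
# Microfluidic pore (500 nm wide, 1 μm long) with a 100 nm particle:
print(f"dR/R ~ {relative_signal(100e-9, 500e-9, 1e-6):.1e}")
# Shrinking the pore keeps the relative signal measurable even for particles far
# below 1 μm, at sizes where a conventional pore would give no usable pulse.
```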
This is the method used to e.g. size and count erythrocytes (red blood cells) as well as leukocytes (white blood cells) for standard blood analysis. The generic term for this method is resistive pulse sensing (RPS); Coulter counting is a trademark term. However, the RPS method does not work well for particles below 1 μm diameter, as the signal-to-noise ratio falls below the reliably detectable limit, set mostly by the size of the pore in which the analyte passes and the input noise of the first-stage amplifier. The limit on the pore size in traditional RPS Coulter counters is set by the method used to make the pores, which while a trade secret, most likely uses traditional mechanical methods. This is where microfluidics can have an impact: The lithography-based production of microfluidic devices, or more likely the production of reusable molds for making microfluidic devices using a molding process, is limited to sizes much smaller than traditional machining. Critical dimensions down to 1 μm are easily fabricated, and with a bit more effort and expense, feature sizes below 100 nm can be patterned reliably as well. This enables the inexpensive production of pores integrated in a microfluidic circuit where the pore diameters can reach sizes of order 100 nm, with a concomitant reduction in the minimum particle diameters by several orders of magnitude. As a result, there has been some university-based development of microfluidic particle counting and sizing with the accompanying commercialization of this technology. This method has been termed microfluidic resistive pulse sensing (MRPS). Microfluidic-assisted magnetophoresis One major area of application for microfluidic devices is the separation and sorting of different fluids or cell types. Recent developments in the microfluidics field have seen the integration of microfluidic devices with magnetophoresis: the migration of particles by a magnetic field. This can be accomplished by sending a fluid containing at least one magnetic component through a microfluidic channel that has a magnet positioned along the length of the channel. This creates a magnetic field inside the microfluidic channel which draws magnetically active substances towards it, effectively separating the magnetic and non-magnetic components of the fluid. This technique can be readily utilized in industrial settings where the fluid at hand already contains magnetically active material. For example, a handful of metallic impurities can find their way into certain consumable liquids, namely milk and other dairy products. Conveniently, in the case of milk, many of these metal contaminants exhibit paramagnetism. Therefore, before packaging, milk can be flowed through channels with magnetic gradients as a means of purifying out the metal contaminants. Other, more research-oriented applications of microfluidic-assisted magnetophoresis are numerous and are generally targeted towards cell separation. The general way this is accomplished involves several steps. First, a paramagnetic substance (usually micro/nanoparticles or a paramagnetic fluid) needs to be functionalized to target the cell type of interest. This can be accomplished by identifying a transmembranal protein unique to the cell type of interest and subsequently functionalizing magnetic particles with the complementary antigen or antibody. Once the magnetic particles are functionalized, they are dispersed in a cell mixture where they bind to only the cells of interest. 
The resulting cell/particle mixture can then be flowed through a microfluidic device with a magnetic field to separate the targeted cells from the rest. Conversely, microfluidic-assisted magnetophoresis may be used to facilitate efficient mixing within microdroplets or plugs. To accomplish this, microdroplets are injected with paramagnetic nanoparticles and are flowed through a straight channel which passes through rapidly alternating magnetic fields. This causes the magnetic particles to be quickly pushed from side to side within the droplet and results in the mixing of the microdroplet contents. This eliminates the need for tedious engineering considerations that are necessary for traditional, channel-based droplet mixing. Other research has also shown that the label-free separation of cells may be possible by suspending cells in a paramagnetic fluid and taking advantage of the magneto-Archimedes effect. While this does eliminate the complexity of particle functionalization, more research is needed to fully understand the magneto-Archimedes phenomenon and how it can be used to this end. This is not an exhaustive list of the various applications of microfluidic-assisted magnetophoresis; the above examples merely highlight the versatility of this separation technique in both current and future applications. Key application areas Microfluidic structures include micropneumatic systems, i.e. microsystems for the handling of off-chip fluids (liquid pumps, gas valves, etc.), and microfluidic structures for the on-chip handling of nanoliter (nl) and picoliter (pl) volumes. To date, the most successful commercial application of microfluidics is the inkjet printhead. Additionally, microfluidic manufacturing advances mean that makers can produce the devices in low-cost plastics and automatically verify part quality. Advances in microfluidics technology are revolutionizing molecular biology procedures for enzymatic analysis (e.g., glucose and lactate assays), DNA analysis (e.g., polymerase chain reaction and high-throughput sequencing), proteomics, and in chemical synthesis. The basic idea of microfluidic biochips is to integrate assay operations such as detection, as well as sample pre-treatment and sample preparation on one chip. An emerging application area for biochips is clinical pathology, especially the immediate point-of-care diagnosis of diseases. In addition, microfluidics-based devices, capable of continuous sampling and real-time testing of air/water samples for biochemical toxins and other dangerous pathogens, can serve as an always-on "bio-smoke alarm" for early warning. Microfluidic technology has led to the creation of powerful tools for biologists to control the complete cellular environment, leading to new questions and discoveries. 
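To give a feel for the nanoliter and picoliter volumes mentioned above, the sketch below computes the hold-up volume of a single straight rectangular channel and the number of droplet-sized aliquots it represents; the dimensions and the droplet volume are assumed purely for illustration.

```python
# Hold-up volume of a straight rectangular microchannel and the number of
# picoliter droplets it could supply (all dimensions are assumed).

width = 100e-6    # channel width, m (assumed)
height = 50e-6    # channel height, m (assumed)
length = 20e-3    # channel length, m (2 cm, assumed)

volume_m3 = width * height * length          # channel volume in m^3
volume_nl = volume_m3 * 1e12                 # 1 m^3 = 1e12 nL
droplet_pl = 25.0                            # assumed droplet volume, pL
droplets = volume_nl * 1000.0 / droplet_pl   # 1 nL = 1000 pL

print(f"Channel hold-up volume: {volume_nl:.0f} nL")
print(f"Equivalent number of {droplet_pl:.0f} pL droplets: {droplets:.0f}")
```

Even a two-centimetre channel of these dimensions holds only about a hundred nanoliters, which is the scale of liquid handling the section above refers to.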
Many diverse advantages of this technology for microbiology are listed below: General single cell studies including growth Cellular aging: microfluidic devices such as the "mother machine" allow tracking of thousands of individual cells for many generations until they die Microenvironmental control: ranging from mechanical environment to chemical environment Precise spatiotemporal concentration gradients by incorporating multiple chemical inputs to a single device Force measurements of adherent cells or confined chromosomes: objects trapped in a microfluidic device can be directly manipulated using optical tweezers or other force-generating methods Confining cells and exerting controlled forces by coupling with external force-generation methods such as Stokes flow, optical tweezer, or controlled deformation of the PDMS (Polydimethylsiloxane) device Electric field integration Plant on a chip and plant tissue culture Antibiotic resistance: microfluidic devices can be used as heterogeneous environments for microorganisms. In a heterogeneous environment, it is easier for a microorganism to evolve. This can be useful for testing the acceleration of evolution of a microorganism / for testing the development of antibiotic resistance. Viral fusion: these devices also allow the study of the several steps and conditions required for viruses to bind and enter host cells. Information regarding efficiency, kinetics and specific steps of the binding and fusion processes can be obtained using microfluidic flow cells. Some of these areas are further elaborated in the sections below: DNA chips (microarrays) Early biochips were based on the idea of a DNA microarray, e.g., the GeneChip DNAarray from Affymetrix, which is a piece of glass, plastic or silicon substrate, on which pieces of DNA (probes) are affixed in a microscopic array. Similar to a DNA microarray, a protein array is a miniature array where a multitude of different capture agents, most frequently monoclonal antibodies, are deposited on a chip surface; they are used to determine the presence and/or amount of proteins in biological samples, e.g., blood. A drawback of DNA and protein arrays is that they are neither reconfigurable nor scalable after manufacture. Digital microfluidics has been described as a means for carrying out Digital PCR. Molecular biology In addition to microarrays, biochips have been designed for two-dimensional electrophoresis, transcriptome analysis, and PCR amplification. Other applications include various electrophoresis and liquid chromatography applications for proteins and DNA, cell separation, in particular, blood cell separation, protein analysis, cell manipulation and analysis including cell viability analysis and microorganism capturing. Evolutionary biology By combining microfluidics with landscape ecology and nanofluidics, a nano/micro fabricated fluidic landscape can be constructed by building local patches of bacterial habitat and connecting them by dispersal corridors. The resulting landscapes can be used as physical implementations of an adaptive landscape, by generating a spatial mosaic of patches of opportunity distributed in space and time. The patchy nature of these fluidic landscapes allows for the study of adapting bacterial cells in a metapopulation system. The evolutionary ecology of these bacterial systems in these synthetic ecosystems allows for using biophysics to address questions in evolutionary biology. 
Cell behavior The ability to create precise and carefully controlled chemoattractant gradients makes microfluidics the ideal tool to study motility, chemotaxis and the ability to evolve / develop resistance to antibiotics in small populations of microorganisms and in a short period of time. These microorganisms include bacteria and the broad range of organisms that form the marine microbial loop, which is responsible for regulating much of the oceans' biogeochemistry. Microfluidics has also greatly aided the study of durotaxis by facilitating the creation of durotactic (stiffness) gradients. Cellular biophysics By rectifying the motion of individual swimming bacteria, microfluidic structures can be used to extract mechanical motion from a population of motile bacterial cells. This way, bacteria-powered rotors can be built. Optics The merger of microfluidics and optics is typically known as optofluidics. Examples of optofluidic devices are tunable microlens arrays and optofluidic microscopes. Microfluidic flow enables fast sample throughput, automated imaging of large sample populations, as well as 3D capabilities, or superresolution. Photonics Lab on a Chip (PhLOC) Due to the increase in safety concerns and operating costs of common analytic methods (ICP-MS, ICP-AAS, and ICP-OES), the Photonics Lab on a Chip (PhLOC) is becoming an increasingly popular tool for the analysis of actinides and nitrates in spent nuclear waste. The PhLOC is based on the simultaneous application of Raman and UV-Vis-NIR spectroscopy, which allows for the analysis of more complex mixtures that contain several actinides at different oxidation states. Measurements made with these methods have been validated at the bulk level for industrial tests, and are observed to have a much lower variance at the micro-scale. This approach has been found to have molar extinction coefficients (UV-Vis) in line with known literature values over a comparatively large concentration span for 150 μL via elongation of the measurement channel, and obeys Beer's Law at the micro-scale for U(IV). Through the development of a spectrophotometric approach to analyzing spent fuel, an on-line method for measurement of reactant quantities is created, increasing the rate at which samples can be analyzed and thus decreasing the size of deviations detectable within reprocessing. Through the application of the PhLOC, flexibility and safety of operational methods are increased. Since the analysis of spent nuclear fuel involves extremely harsh conditions, the application of disposable and rapidly produced devices (based on castable and/or engravable materials such as PDMS, PMMA, and glass) is advantageous, although material integrity must be considered under specific harsh conditions. Through the usage of fiber optic coupling, the device can be isolated from instrumentation, preventing irradiative damage and minimizing the exposure of lab personnel to potentially harmful radiation, something not possible on the lab scale nor with the previous standard of analysis. The shrinkage of the device also allows for lower amounts of analyte to be used, decreasing the amount of waste generated and exposure to hazardous materials. Expansion of the PhLOC to miniaturize research of the full nuclear fuel cycle is currently being evaluated, with steps of the PUREX process successfully being demonstrated at the micro-scale. 
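The Beer–Lambert behaviour and channel elongation mentioned above can be illustrated numerically. In the sketch below the molar extinction coefficient, concentration, and path lengths are placeholder values, not measured data for any actinide species.

```python
# Beer-Lambert estimate (A = epsilon * c * l) of how elongating a measurement
# channel raises the absorbance of a dilute absorber. All numbers are
# placeholder values chosen only to show the trend.

epsilon = 50.0   # molar extinction coefficient, L mol^-1 cm^-1 (assumed)
conc = 1.0e-3    # concentration, mol/L (assumed)

for path_cm in (0.01, 0.1, 1.0, 5.0):    # optical path lengths in cm
    absorbance = epsilon * conc * path_cm
    transmitted = 10 ** (-absorbance)    # fraction of light transmitted
    print(f"path {path_cm:>5.2f} cm -> A = {absorbance:.4f}, T = {transmitted:.3f}")
```

Lengthening the optical path raises the absorbance proportionally, which is the rationale for elongating the measurement channel on chip.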
Likewise, the microfluidic technology developed for the analysis of spent nuclear fuel is predicted to expand horizontally to analysis of other actinide, lanthanides, and transition metals with little to no modification. High Performance Liquid Chromatography (HPLC) HPLC in the field of microfluidics comes in two different forms. Early designs included running liquid through the HPLC column then transferring the eluted liquid to microfluidic chips and attaching HPLC columns to the microfluidic chip directly. The early methods had the advantage of easier detection from certain machines like those that measure fluorescence. More recent designs have fully integrated HPLC columns into microfluidic chips. The main advantage of integrating HPLC columns into microfluidic devices is the smaller form factor that can be achieved, which allows for additional features to be combined within one microfluidic chip. Integrated chips can also be fabricated from multiple different materials, including glass and polyimide which are quite different from the standard material of PDMS used in many different droplet-based microfluidic devices. This is an important feature because different applications of HPLC microfluidic chips may call for different pressures. PDMS fails in comparison for high-pressure uses compared to glass and polyimide. High versatility of HPLC integration ensures robustness by avoiding connections and fittings between the column and chip. The ability to build off said designs in the future allows the field of microfluidics to continue expanding its potential applications. The potential applications surrounding integrated HPLC columns within microfluidic devices have proven expansive over the last 10–15 years. The integration of such columns allows for experiments to be run where materials were in low availability or very expensive, like in biological analysis of proteins. This reduction in reagent volumes allows for new experiments like single-cell protein analysis, which due to size limitations of prior devices, previously came with great difficulty. The coupling of HPLC-chip devices with other spectrometry methods like mass-spectrometry allow for enhanced confidence in identification of desired species, like proteins. Microfluidic chips have also been created with internal delay-lines that allow for gradient generation to further improve HPLC, which can reduce the need for further separations. Some other practical applications of integrated HPLC chips include the determination of drug presence in a person through their hair and the labeling of peptides through reverse phase liquid chromatography. Acoustic droplet ejection (ADE) Acoustic droplet ejection uses a pulse of ultrasound to move low volumes of fluids (typically nanoliters or picoliters) without any physical contact. This technology focuses acoustic energy into a fluid sample to eject droplets as small as a millionth of a millionth of a litre (picoliter = 10−12 litre). ADE technology is a very gentle process, and it can be used to transfer proteins, high molecular weight DNA and live cells without damage or loss of viability. This feature makes the technology suitable for a wide variety of applications including proteomics and cell-based assays. Fuel cells Microfluidic fuel cells can use laminar flow to separate the fuel and its oxidant to control the interaction of the two fluids without the physical barrier that conventional fuel cells require. 
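The membraneless fuel-cell idea above relies on the fuel and oxidant streams mixing only by diffusion across the laminar interface. A rough estimate of that interdiffusion width is sketched below; the diffusion coefficient, velocity, and channel dimensions are assumed values, not parameters of any reported device.

```python
# Estimate how far two co-flowing laminar streams interdiffuse over the
# length of a microchannel: broadening ~ sqrt(2 * D * residence_time).
# All numbers are assumed for illustration.

import math

D = 1.0e-9              # reactant diffusion coefficient, m^2/s (small molecule, assumed)
velocity = 0.01         # mean flow velocity, m/s (assumed)
channel_len = 0.02      # channel length, m (assumed)
channel_width = 500e-6  # channel width, m (assumed)

residence_time = channel_len / velocity
mixing_width = math.sqrt(2 * D * residence_time)

print(f"Residence time: {residence_time:.2f} s")
print(f"Interdiffusion width: {mixing_width * 1e6:.0f} um "
      f"({100 * mixing_width / channel_width:.0f}% of channel width)")
```

Under these assumptions the mixed zone occupies only a small fraction of the channel width over the full residence time, which is why a physical barrier between the two fluids can be dispensed with.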
Astrobiology To understand the prospects for life to exist elsewhere in the universe, astrobiologists are interested in measuring the chemical composition of extraplanetary bodies. Because of their small size and wide-ranging functionality, microfluidic devices are uniquely suited for these remote sample analyses. From an extraterrestrial sample, the organic content can be assessed using microchip capillary electrophoresis and selective fluorescent dyes. These devices are capable of detecting amino acids, peptides, fatty acids, and simple aldehydes, ketones, and thiols. These analyses coupled together could allow powerful detection of the key components of life, and hopefully inform our search for functioning extraterrestrial life. Food science Microfluidic techniques such as droplet microfluidics, paper microfluidics, and lab-on-a-chip are used in the realm of food science in a variety of categories. Research in nutrition, food processing, and food safety benefit from microfluidic technique because experiments can be done with less reagents. Food processing requires the ability to enable shelf stability in foods, such as emulsions or additions of preservatives. Techniques such as droplet microfluidics are used to create emulsions that are more controlled and complex than those created by traditional homogenization due to the precision of droplets that is achievable. Using microfluidics for emulsions is also more energy efficient compared to homogenization in which “only 5% of the supplied energy is used to generate the emulsion, with the rest dissipated as heat” . Although these methods have benefits, they currently lack the ability to be produced at large scale that is needed for commercialization. Microfluidics are also used in research as they allow for innovation in food chemistry and food processing. An example in food engineering research is a novel micro-3D-printed device fabricated to research production of droplets for potential food processing industry use, particularly in work with enhancing emulsions. Paper and droplet microfluidics allow for devices that can detect small amounts of unwanted bacteria or chemicals, making them useful in food safety and analysis. Paper-based microfluidic devices are often referred to as microfluidic paper-based analytical devices (μPADs) and can detect such things as nitrate, preservatives, or antibiotics in meat by a colorimetric reaction that can be detected with a smartphone. These methods are being researched because they use less reactants, space, and time compared to traditional techniques such as liquid chromatography. μPADs also make home detection tests possible, which is of interest to those with allergies and intolerances. In addition to paper-based methods, research demonstrates droplet-based microfluidics shows promise in drastically shortening the time necessary to confirm viable bacterial contamination in agricultural waters in the domestic and international food industry. Future directions Microfluidics for personalized cancer treatment Personalized cancer treatment is a tuned method based on the patient's diagnosis and background. Microfluidic technology offers sensitive detection with higher throughput, as well as reduced time and costs. For personalized cancer treatment, tumor composition and drug sensitivities are very important. A patient's drug response can be predicted based on the status of biomarkers, or the severity and progression of the disease can be predicted based on the atypical presence of specific cells. 
Drop-qPCR is a droplet microfluidic technology in which droplets are transported in a reusable capillary and alternately flow through two areas maintained at different constant temperatures and fluorescence detection. It can be efficient with a low contamination risk to detect Her2. A digital droplet‐based PCR method can be used to detect the KRAS mutations with TaqMan probes, to enhance detection of the mutative gene ratio. In addition, accurate prediction of postoperative disease progression in breast or prostate cancer patients is essential for determining post-surgery treatment. A simple microfluidic chamber, coated with a carefully formulated extracellular matrix mixture is used for cells obtained from tumor biopsy after 72 hours of growth and a thorough evaluation of cells by imaging. Microfluidics is also suitable for circulating tumor cells (CTCs) and non-CTCs liquid biopsy analysis. Beads conjugate to anti‐epithelial cell adhesion molecule (EpCAM) antibodies for positive selection in the CTCs isolation chip (iCHIP). CTCs can also be detected by using the acidification of the tumor microenvironment and the difference in membrane capacitance. CTCs are isolated from blood by a microfluidic device, and are cultured on-chip, which can be a method to capture more biological information in a single analysis. For example, it can be used to test the cell survival rate of 40 different drugs or drug combinations. Tumor‐derived extracellular vesicles can be isolated from urine and detected by an integrated double‐filtration microfluidic device; they also can be isolated from blood and detected by electrochemical sensing method with a two‐level amplification enzymatic assay. Tumor materials can directly be used for detection through microfluidic devices. To screen primary cells for drugs, it is often necessary to distinguish cancerous cells from non-cancerous cells. A microfluidic chip based on the capacity of cells to pass small constrictions can sort the cell types, metastases. Droplet‐based microfluidic devices have the potential to screen different drugs or combinations of drugs, directly on the primary tumor sample with high accuracy. To improve this strategy, the microfluidic program with a sequential manner of drug cocktails, coupled with fluorescent barcodes, is more efficient. Another advanced strategy is detecting growth rates of single-cell by using suspended microchannel resonators, which can predict drug sensitivities of rare CTCs. Microfluidics devices also can simulate the tumor microenvironment, to help to test anticancer drugs. Microfluidic devices with 2D or 3D cell cultures can be used to analyze spheroids for different cancer systems (such as lung cancer and ovarian cancer), and are essential for multiple anti-cancer drugs and toxicity tests. This strategy can be improved by increasing the throughput and production of spheroids. For example, one droplet-based microfluidic device for 3D cell culture produces 500 spheroids per chip. These spheroids can be cultured longer in different surroundings to analyze and monitor. The other advanced technology is organs‐on‐a‐chip, and it can be used to simulate several organs to determine the drug metabolism and activity based on vessels mimicking, as well as mimic pH, oxygen... to analyze the relationship between drugs and human organ surroundings. 
A recent strategy is single-cell chromatin immunoprecipitation (ChiP)‐Sequencing in droplets, which operates by combining droplet‐based single cell RNA sequencing with DNA‐barcoded antibodies, possibly to explore the tumor heterogeneity by the genotype and phenotype to select the personalized anti-cancer drugs and prevent the cancer relapse. Advancements in Capillary Electrophoresis (CE) Systems One significant advancement in the field is the development of integrated capillary electrophoresis (CE) systems on microchips, as demonstrated by Z. Hugh Fan and D. Jed. Harrison. They created a planar glass chip incorporating a sample injector and separation channels using micromachining techniques. This setup allowed for the rapid separation of amino acids in just a few seconds, achieving high separation efficiencies with up to 6800 theoretical plates. The use of high electric fields, possible due to the thermal mass and conductivity of glass, minimized Joule heating effects, making the system highly efficient and fast. Such innovations highlight the potential of microfluidic devices in analytical chemistry, particularly in applications requiring quick and precise analyses. See also Advanced Simulation Library Droplet-based microfluidics Fluidics Induced-charge electrokinetics Integrated fluidic circuit Lab-on-a-chip Microfluidic cell culture Microfluidic modulation spectroscopy Microphysiometry Micropumps Microvalves uFluids@Home Paper-based microfluidics References Further reading Review papers Books Folch, Albert. Hidden in Plain Sight: The History, Science, and Engineering of Microfluidic Technology (MIT Press, 2022) online review Education Microfluidics Fluid dynamics Nanotechnology Biotechnology Gas technologies
Microfluidics
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
7,281
[ "Microfluidics", "Microtechnology", "Chemical engineering", "Materials science", "Biotechnology", "nan", "Piping", "Nanotechnology", "Fluid dynamics" ]
18,909
https://en.wikipedia.org/wiki/Magnesium
Magnesium is a chemical element; it has symbol Mg and atomic number 12. It is a shiny gray metal having a low density, low melting point and high chemical reactivity. Like the other alkaline earth metals (group 2 of the periodic table) it occurs naturally only in combination with other elements and almost always has an oxidation state of +2. It reacts readily with air to form a thin passivation coating of magnesium oxide that inhibits further corrosion of the metal. The free metal burns with a brilliant-white light. The metal is obtained mainly by electrolysis of magnesium salts obtained from brine. It is less dense than aluminium and is used primarily as a component in strong and lightweight alloys that contain aluminium. In the cosmos, magnesium is produced in large, aging stars by the sequential addition of three helium nuclei to a carbon nucleus. When such stars explode as supernovas, much of the magnesium is expelled into the interstellar medium where it may recycle into new star systems. Magnesium is the eighth most abundant element in the Earth's crust and the fourth most common element in the Earth (after iron, oxygen and silicon), making up 13% of the planet's mass and a large fraction of the planet's mantle. It is the third most abundant element dissolved in seawater, after sodium and chlorine. This element is the eleventh most abundant element by mass in the human body and is essential to all cells and some 300 enzymes. Magnesium ions interact with polyphosphate compounds such as ATP, DNA, and RNA. Hundreds of enzymes require magnesium ions to function. Magnesium compounds are used medicinally as common laxatives and antacids (such as milk of magnesia), and to stabilize abnormal nerve excitation or blood vessel spasm in such conditions as eclampsia. Characteristics Physical properties Elemental magnesium is a gray-white lightweight metal, two-thirds the density of aluminium. Magnesium has the lowest melting () and the lowest boiling point () of all the alkaline earth metals. Pure polycrystalline magnesium is brittle and easily fractures along shear bands. It becomes much more malleable when alloyed with small amounts of other metals, such as 1% aluminium. The malleability of polycrystalline magnesium can also be significantly improved by reducing its grain size to about 1 μm or less. When finely powdered, magnesium reacts with water to produce hydrogen gas: Mg(s) + 2 H2O(g) → Mg(OH)2(aq) + H2(g) + 1203.6 kJ/mol However, this reaction is much less dramatic than the reactions of the alkali metals with water, because the magnesium hydroxide builds up on the surface of the magnesium metal and inhibits further reaction. Chemical properties Oxidation The principal property of magnesium metal is its reducing power. One hint is that it tarnishes slightly when exposed to air, although, unlike the heavier alkaline earth metals, an oxygen-free environment is unnecessary for storage because magnesium is protected by a thin layer of oxide that is fairly impermeable and difficult to remove. Direct reaction of magnesium with air or oxygen at ambient pressure forms only the "normal" oxide MgO. However, this oxide may be combined with hydrogen peroxide to form magnesium peroxide, MgO2, and at low temperature the peroxide may be further reacted with ozone to form magnesium superoxide Mg(O2)2. Magnesium reacts with nitrogen in the solid state if it is powdered and heated to just below the melting point, forming Magnesium nitride Mg3N2. 
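As a back-of-the-envelope illustration of the hydrogen-evolving reaction written above, the sketch below converts a mass of magnesium into the volume of hydrogen gas released, assuming complete reaction and ideal-gas behaviour at room conditions; the one-gram sample size is an arbitrary choice.

```python
# Stoichiometry for Mg + 2 H2O -> Mg(OH)2 + H2: litres of hydrogen released
# per gram of magnesium, assuming complete reaction and ideal-gas behaviour.

M_MG = 24.305   # molar mass of magnesium, g/mol
R = 8.314       # gas constant, J/(mol K)
T = 298.15      # temperature, K (25 C, assumed)
P = 101325.0    # pressure, Pa (1 atm, assumed)

mass_mg = 1.0                # grams of magnesium reacted (assumed)
mol_mg = mass_mg / M_MG      # moles of Mg
mol_h2 = mol_mg              # 1 mol H2 per mol Mg from the equation
vol_h2 = mol_h2 * R * T / P  # ideal-gas volume in m^3

print(f"{mass_mg:.1f} g Mg -> {mol_h2:.4f} mol H2 -> {vol_h2 * 1000:.2f} L at 25 C, 1 atm")
```

One gram of magnesium yields roughly a litre of hydrogen gas under these assumptions, which is why the hydrogen evolved by burning or wetted magnesium is a safety consideration discussed later in the article.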
Magnesium reacts with water at room temperature, though it reacts much more slowly than calcium, a similar group 2 metal. When submerged in water, hydrogen bubbles form slowly on the surface of the metal; this reaction happens much more rapidly with powdered magnesium. The reaction also occurs faster with higher temperatures (see ). Magnesium's reversible reaction with water can be harnessed to store energy and run a magnesium-based engine. Magnesium also reacts exothermically with most acids such as hydrochloric acid (HCl), producing magnesium chloride and hydrogen gas, similar to the HCl reaction with aluminium, zinc, and many other metals. Although it is difficult to ignite in mass or bulk, magnesium metal will ignite. Magnesium may also be used as an igniter for thermite, a mixture of aluminium and iron oxide powder that ignites only at a very high temperature. Organic chemistry Organomagnesium compounds are widespread in organic chemistry. They are commonly found as Grignard reagents, formed by reaction of magnesium with haloalkanes. Examples of Grignard reagents are phenylmagnesium bromide and ethylmagnesium bromide. The Grignard reagents function as a common nucleophile, attacking the electrophilic group such as the carbon atom that is present within the polar bond of a carbonyl group. A prominent organomagnesium reagent beyond Grignard reagents is magnesium anthracene, which is used as a source of highly active magnesium. The related butadiene-magnesium adduct serves as a source for the butadiene dianion. Complexes of dimagnesium(I) have been observed. Detection in solution The presence of magnesium ions can be detected by the addition of ammonium chloride, ammonium hydroxide and monosodium phosphate to an aqueous or dilute HCl solution of the salt. The formation of a white precipitate indicates the presence of magnesium ions. Azo violet dye can also be used, turning deep blue in the presence of an alkaline solution of magnesium salt. The color is due to the adsorption of azo violet by Mg(OH)2. Forms Alloys As of 2013, magnesium alloys consumption was less than one million tonnes per year, compared with 50 million tonnes of aluminium alloys. Their use has been historically limited by the tendency of Mg alloys to corrode, creep at high temperatures, and combust. Corrosion In magnesium alloys, the presence of iron, nickel, copper, or cobalt strongly activates corrosion. In more than trace amounts, these metals precipitate as intermetallic compounds, and the precipitate locales function as active cathodic sites that reduce water, causing the loss of magnesium. Controlling the quantity of these metals improves corrosion resistance. Sufficient manganese overcomes the corrosive effects of iron. This requires precise control over composition, increasing costs. Adding a cathodic poison captures atomic hydrogen within the structure of a metal. This prevents the formation of free hydrogen gas, an essential factor of corrosive chemical processes. The addition of about one in three hundred parts arsenic reduces the corrosion rate of magnesium in a salt solution by a factor of nearly ten. High-temperature creep and flammability Magnesium's tendency to creep (gradually deform) at high temperatures is greatly reduced by alloying with zinc and rare-earth elements. Flammability is significantly reduced by a small amount of calcium in the alloy. 
Alloying with rare-earth elements may make it possible to manufacture magnesium alloys that do not catch fire until temperatures well above the alloy's liquidus, in some cases potentially approaching magnesium's boiling point. Compounds Magnesium forms a variety of compounds important to industry and biology, including magnesium carbonate, magnesium chloride, magnesium citrate, magnesium hydroxide (milk of magnesia), magnesium oxide, magnesium sulfate, and magnesium sulfate heptahydrate (Epsom salts). As recently as 2020, magnesium hydride was under investigation as a way to store hydrogen. Isotopes Magnesium has three stable isotopes: 24Mg, 25Mg and 26Mg. All are present in significant amounts in nature (see table of isotopes above). About 79% of Mg is 24Mg. The isotope 28Mg is radioactive and in the 1950s to 1970s was produced by several nuclear power plants for use in scientific experiments. This isotope has a relatively short half-life (21 hours) and its use was limited by shipping times. The nuclide 26Mg has found application in isotopic geology, similar to that of aluminium. 26Mg is a radiogenic daughter product of 26Al, which has a half-life of 717,000 years. Excessive quantities of stable 26Mg have been observed in the Ca-Al-rich inclusions of some carbonaceous chondrite meteorites. This anomalous abundance is attributed to the decay of its parent 26Al in the inclusions, and researchers conclude that such meteorites were formed in the solar nebula before the 26Al had decayed. These are among the oldest objects in the Solar System and contain preserved information about its early history. It is conventional to plot 26Mg/24Mg against an Al/Mg ratio. In an isochron dating plot, the Al/Mg ratio plotted is 27Al/24Mg. The slope of the isochron has no age significance, but indicates the initial 26Al/27Al ratio in the sample at the time when the systems were separated from a common reservoir. Production Occurrence Magnesium is the eighth-most-abundant element in the Earth's crust by mass and tied in seventh place with iron in molarity. It is found in large deposits of magnesite, dolomite, and other minerals, and in mineral waters, where magnesium ion is soluble. Although magnesium is found in more than 60 minerals, only dolomite, magnesite, brucite, carnallite, talc, and olivine are of commercial importance. The Mg2+ cation is the second-most-abundant cation in seawater (roughly one-eighth the mass of sodium ions in a given sample), which makes seawater and sea salt attractive commercial sources for Mg. To extract the magnesium, calcium hydroxide is added to the seawater to precipitate magnesium hydroxide. Mg2+(aq) + Ca(OH)2(s) → Mg(OH)2(s) + Ca2+(aq) Magnesium hydroxide (brucite) is poorly soluble in water and can be collected by filtration. It reacts with hydrochloric acid to give magnesium chloride. Mg(OH)2(s) + 2 HCl(aq) → MgCl2(aq) + 2 H2O(l) From magnesium chloride, electrolysis produces magnesium. Production quantities World production was approximately 1,100 kt in 2017, with the bulk being produced in China (930 kt) and Russia (60 kt). The United States was in the 20th century the major world supplier of this metal, supplying 45% of world production even as recently as 1995. Since the Chinese mastery of the Pidgeon process, the US market share is at 7%, with a single US producer left as of 2013: US Magnesium, a Renco Group company located on the shores of the Great Salt Lake. In September 2021, China took steps to reduce production of magnesium as a result of a government initiative to reduce energy availability for manufacturing industries, leading to a significant price increase. 
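The use of 26Al–26Mg systematics for dating rests on simple exponential decay. The sketch below evaluates the fraction of parent 26Al surviving after various times, using the 717,000-year half-life quoted above; the time points are chosen only for illustration.

```python
# Fraction of the parent nuclide 26Al surviving after a given time, using the
# 717,000-year half-life quoted above (simple exponential decay).

HALF_LIFE_YR = 717_000.0

def fraction_remaining(years):
    """Surviving fraction of 26Al after the given number of years."""
    return 0.5 ** (years / HALF_LIFE_YR)

for t in (717_000, 2_000_000, 5_000_000, 10_000_000):
    print(f"after {t:>10,} years: {fraction_remaining(t) * 100:8.4f}% of the 26Al remains")
```

After a few million years essentially no 26Al is left, which is why excess 26Mg in an inclusion implies formation very early in the history of the solar nebula.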
Pidgeon and Bolzano processes The Pidgeon process and the Bolzano process are similar. In both, magnesium oxide is the precursor to magnesium metal. The magnesium oxide is produced as a solid solution with calcium oxide by calcining the mineral dolomite, which is a solid solution of calcium and magnesium carbonates: CaMg(CO3)2(s) → CaO·MgO(s) + 2 CO2(g) Reduction occurs at high temperatures with silicon. A ferrosilicon alloy is used rather than pure silicon as it is more economical. The iron component has no bearing on the reaction, having the simplified equation: 2 MgO(s) + Si(s) → 2 Mg(g) + SiO2(s) The calcium oxide combines with silicon as the oxygen scavenger, yielding the very stable calcium silicate. The Mg/Ca ratio of the precursors can be adjusted by the addition of MgO or CaO. The Pidgeon and the Bolzano process differ in the details of the heating and the configuration of the reactor. Both generate gaseous Mg that is condensed and collected. The Pidgeon process dominates the worldwide production. The Pidgeon method is less technologically complex and, because of distillation/vapour deposition conditions, a high purity product is easily achievable. China is almost completely reliant on the silicothermic Pidgeon process. Dow process Besides the Pidgeon process, the second most used process for magnesium production is electrolysis. This is a two-step process. The first step is to prepare feedstock containing magnesium chloride and the second step is to dissociate the compound in electrolytic cells as magnesium metal and chlorine gas. The basic reaction is as follows: MgCl2(l) → Mg(l) + Cl2(g) The reaction is operated at temperatures between 680 and 750 °C. The magnesium chloride can be obtained using the Dow process, a process that mixes sea water and dolomite in a flocculator, or by dehydration of magnesium chloride brines. The electrolytic cells are partially submerged in a molten salt electrolyte to which the produced magnesium chloride is added in concentrations of 6–18%. This process does have its share of disadvantages, including production of harmful chlorine gas and the overall reaction being very energy intensive, creating environmental risks. The Pidgeon process is more advantageous regarding its simplicity, shorter construction period, low power consumption and overall good magnesium quality compared to the electrolysis method. In the United States, magnesium was once obtained principally with the Dow process in Corpus Christi, Texas, by electrolysis of fused magnesium chloride from brine and sea water. A saline solution containing Mg2+ ions is first treated with lime (calcium oxide) and the precipitated magnesium hydroxide is collected: Mg2+(aq) + CaO(s) + H2O(l) → Ca2+(aq) + Mg(OH)2(s) The hydroxide is then converted to magnesium chloride by treatment with hydrochloric acid and heating of the product to eliminate water: Mg(OH)2(s) + 2 HCl(aq) → MgCl2(aq) + 2 H2O(l) The salt is then electrolyzed in the molten state. At the cathode, the Mg2+ ion is reduced by two electrons to magnesium metal: Mg2+ + 2 e− → Mg At the anode, each pair of Cl− ions is oxidized to chlorine gas, releasing two electrons to complete the circuit: 2 Cl− → Cl2(g) + 2 e− Carbothermic process The carbothermic route to magnesium has been recognized as a low energy, yet high productivity path to magnesium extraction. The chemistry is as follows: MgO(s) + C(s) → Mg(g) + CO(g) A disadvantage of this method is that slow cooling of the vapour can cause the reaction to quickly revert. To prevent this from happening, the magnesium can be dissolved directly in a suitable metal solvent before reversion starts happening. Rapid quenching of the vapour can also be performed to prevent reversion. 
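For the electrolytic routes above, Faraday's law fixes the minimum charge needed per unit mass of metal, since each Mg2+ ion takes up two electrons at the cathode. The sketch below estimates that charge for one kilogram of magnesium and the corresponding electrical energy at an assumed cell voltage; the 5 V figure is an assumption for illustration, not a quoted operating parameter.

```python
# Faraday's-law estimate of the charge (and electrical energy at an assumed
# cell voltage) needed to deposit 1 kg of magnesium by Mg2+ + 2 e- -> Mg.

F = 96485.0            # Faraday constant, C/mol
M_MG = 24.305          # molar mass of magnesium, g/mol
ELECTRONS_PER_ATOM = 2

mass_g = 1000.0                                # 1 kg of Mg
moles = mass_g / M_MG
charge_c = moles * ELECTRONS_PER_ATOM * F      # coulombs
cell_voltage = 5.0                             # volts (assumed)
energy_kwh = charge_c * cell_voltage / 3.6e6   # 1 kWh = 3.6e6 J

print(f"Charge required: {charge_c / 1000:.0f} kC ({charge_c / 3600:.0f} A*h)")
print(f"Electrical energy at {cell_voltage:.0f} V: about {energy_kwh:.1f} kWh per kg Mg")
```

The result, on the order of ten kilowatt-hours per kilogram before any process losses, is consistent with the description of electrolysis above as very energy intensive.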
YSZ process A newer process, solid oxide membrane technology, involves the electrolytic reduction of MgO. At the cathode, the Mg2+ ion is reduced by two electrons to magnesium metal. The electrolyte is yttria-stabilized zirconia (YSZ). The anode is a liquid metal. At the YSZ/liquid metal anode, O2− is oxidized. A layer of graphite borders the liquid metal anode, and at this interface carbon and oxygen react to form carbon monoxide. When silver is used as the liquid metal anode, there is no reductant carbon or hydrogen needed, and only oxygen gas is evolved at the anode. It was reported in 2011 that this method provides a 40% reduction in cost per pound over the electrolytic reduction method. Rieke process Rieke et al. developed a "general approach for preparing highly reactive metal powders by reducing metal salts in ethereal or hydrocarbon solvents using alkali metals as reducing agents", now known as the Rieke process. Rieke finalized the identification of Rieke metals in 1989, one of which was Rieke-magnesium, first produced in 1974. History The name magnesium originates from the Greek word for locations related to the tribe of the Magnetes, either a district in Thessaly called Magnesia or Magnesia ad Sipylum, now in Turkey. It is related to magnetite and manganese, which also originated from this area, and required differentiation as separate substances. See manganese for this history. In 1618, a farmer at Epsom in England attempted to give his cows water from a local well. The cows refused to drink because of the water's bitter taste, but the farmer noticed that the water seemed to heal scratches and rashes. The substance obtained by evaporating the water became known as Epsom salts and its fame spread. It was eventually recognized as hydrated magnesium sulfate, MgSO4·7H2O. The metal itself was first isolated by Sir Humphry Davy in England in 1808. He used electrolysis on a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was 'magnium', but the name magnesium is now used in most European languages. Uses Magnesium metal Magnesium is the third-most-commonly-used structural metal, following iron and aluminium. The main applications of magnesium are, in order: aluminium alloys, die-casting (alloyed with zinc), removing sulfur in the production of iron and steel, and the production of titanium in the Kroll process. Magnesium is used in lightweight materials and alloys. For example, when infused with silicon carbide nanoparticles, it has extremely high specific strength. Historically, magnesium was one of the main aerospace construction metals and was used for German military aircraft as early as World War I and extensively for German aircraft in World War II. The Germans coined the name "Elektron" for magnesium alloy, a term which is still used today. In the commercial aerospace industry, magnesium was generally restricted to engine-related components, due to fire and corrosion hazards. Magnesium alloy use in aerospace is increasing in the 21st century, driven by the importance of fuel economy. Magnesium alloys can act as replacements for aluminium and steel alloys in structural applications. Aircraft Wright Aeronautical used a magnesium crankcase in the WWII-era Wright R-3350 Duplex Cyclone aviation engine. This presented a serious problem for the earliest models of the Boeing B-29 Superfortress heavy bomber when an in-flight engine fire ignited the engine crankcase. 
The resulting combustion was as hot as 5,600 °F (3,100 °C) and could sever the wing spar from the fuselage. Automotive Mercedes-Benz used the alloy Elektron in the bodywork of an early model Mercedes-Benz 300 SLR; these cars competed in the 1955 World Sportscar Championship, including a win at the Mille Miglia, and at Le Mans, where one was involved in the 1955 Le Mans disaster when spectators were showered with burning fragments of Elektron. Porsche used magnesium alloy frames in the 917/053 that won Le Mans in 1971, and continues to use magnesium alloys for its engine blocks due to the weight advantage. Volkswagen Group has used magnesium in its engine components for many years. Mitsubishi Motors uses magnesium for its paddle shifters. BMW used magnesium alloy blocks in their N52 engine, including an aluminium alloy insert for the cylinder walls and cooling jackets surrounded by a high-temperature magnesium alloy AJ62A. The engine was used worldwide between 2005 and 2011 in various 1, 3, 5, 6, and 7 series models, as well as the Z4, X1, X3, and X5. Chevrolet used the magnesium alloy AE44 in the 2006 Corvette Z06. Both AJ62A and AE44 are recent developments in high-temperature low-creep magnesium alloys. The general strategy for such alloys is to form intermetallic precipitates at the grain boundaries, for example by adding mischmetal or calcium. Electronics Because of low density and good mechanical and electrical properties, magnesium is used in the manufacture of mobile phones, laptop and tablet computers, cameras, and other electronic components. It was used as a premium feature because of its light weight in some 2020 laptops. Source of light When burning in air, magnesium produces a brilliant white light that includes strong ultraviolet wavelengths. Magnesium powder (flash powder) was used for subject illumination in the early days of photography. Later, magnesium filament was used in electrically ignited single-use photography flashbulbs. Magnesium powder is used in fireworks and marine flares where a brilliant white light is required. It was also used for various theatrical effects, such as lightning, pistol flashes, and supernatural appearances. Magnesium is flammable, burning at a temperature of approximately 3,100 °C (5,610 °F), and the autoignition temperature of magnesium ribbon is approximately 473 °C (883 °F). Magnesium's high combustion temperature makes it a useful tool for starting emergency fires. Other uses include flash photography, flares, pyrotechnics, fireworks sparklers, and trick birthday candles. Magnesium is also often used to ignite thermite or other materials that require a high ignition temperature. Magnesium continues to be used as an incendiary element in warfare. Flame temperatures of magnesium and magnesium alloys can reach 3,100 °C (5,610 °F), although flame height above the burning metal is usually less than 300 mm (12 in). Once ignited, such fires are difficult to extinguish because they resist several substances commonly used to put out fires; combustion continues in nitrogen (forming magnesium nitride), in carbon dioxide (forming magnesium oxide and carbon), and in water (forming magnesium oxide and hydrogen, which also combusts due to heat in the presence of additional oxygen). This property was used in incendiary weapons during the firebombing of cities in World War II, where the only practical civil defense was to smother a burning flare under dry sand to exclude atmosphere from the combustion. Chemical reagent In the form of turnings or ribbons, magnesium is used to prepare Grignard reagents, which are useful in organic synthesis. 
Other As an additive agent in conventional propellants and the production of nodular graphite in cast iron. As a reducing agent to separate uranium and other metals from their salts. As a sacrificial (galvanic) anode to protect boats, underground tanks, pipelines, buried structures, and water heaters. Alloyed with zinc to produce the zinc sheet used in photoengraving plates in the printing industry, dry-cell battery walls, and roofing. Alloyed with aluminium with aluminium-magnesium alloys being used mainly for beverage cans, sports equipment such as golf clubs, fishing reels, and archery bows and arrows. Many car and aircraft manufacturers have made engine and body parts from magnesium. Magnesium batteries have been commercialized as primary batteries, and are an active topic of research for rechargeable batteries. Compounds Magnesium compounds, primarily magnesium oxide (MgO), are used as a refractory material in furnace linings for producing iron, steel, nonferrous metals, glass, and cement. Magnesium oxide and other magnesium compounds are also used in the agricultural, chemical, and construction industries. Magnesium oxide from calcination is used as an electrical insulator in fire-resistant cables. Magnesium reacts with haloalkanes to give Grignard reagents, which are used for a wide variety of organic reactions forming carbon–carbon bonds. Magnesium salts are included in various foods, fertilizers (magnesium is a component of chlorophyll), and microbe culture media. Magnesium sulfite is used in the manufacture of paper (sulfite process). Magnesium phosphate is used to fireproof wood used in construction. Magnesium hexafluorosilicate is used for moth-proofing textiles. Biological roles Mechanism of action The important interaction between phosphate and magnesium ions makes magnesium essential to the basic nucleic acid chemistry of all cells of all known living organisms. More than 300 enzymes require magnesium ions for their catalytic action, including all enzymes using or synthesizing ATP and those that use other nucleotides to synthesize DNA and RNA. The ATP molecule is normally found in a chelate with a magnesium ion. Nutrition Diet Spices, nuts, cereals, cocoa and vegetables are good sources of magnesium. Green leafy vegetables such as spinach are also rich in magnesium. Dietary recommendations In the UK, the recommended daily values for magnesium are 300 mg for men and 270 mg for women. In the U.S. the Recommended Dietary Allowances (RDAs) are 400 mg for men ages 19–30 and 420 mg for older; for women 310 mg for ages 19–30 and 320 mg for older. Supplementation Numerous pharmaceutical preparations of magnesium and dietary supplements are available. In two human trials magnesium oxide, one of the most common forms in magnesium dietary supplements because of its high magnesium content per weight, was less bioavailable than magnesium citrate, chloride, lactate or aspartate. Metabolism An adult body has 22–26 grams of magnesium, with 60% in the skeleton, 39% intracellular (20% in skeletal muscle), and 1% extracellular. Serum levels are typically 0.7–1.0 mmol/L or 1.8–2.4 mEq/L. Serum magnesium levels may be normal even when intracellular magnesium is deficient. The mechanisms for maintaining the magnesium level in the serum are varying gastrointestinal absorption and renal excretion. Intracellular magnesium is correlated with intracellular potassium. Increased magnesium lowers calcium and can either prevent hypercalcemia or cause hypocalcemia depending on the initial level. 
Both low and high protein intake conditions inhibit magnesium absorption, as does the amount of phosphate, phytate, and fat in the gut. Unabsorbed dietary magnesium is excreted in feces; absorbed magnesium is excreted in urine and sweat. Detection in serum and plasma Magnesium status may be assessed by measuring serum and erythrocyte magnesium concentrations coupled with urinary and fecal magnesium content, but intravenous magnesium loading tests are more accurate and practical. A retention of 20% or more of the injected amount indicates deficiency. As of 2004, no biomarker has been established for magnesium. Magnesium concentrations in plasma or serum may be monitored for efficacy and safety in those receiving the drug therapeutically, to confirm the diagnosis in potential poisoning victims, or to assist in the forensic investigation in a case of fatal overdose. The newborn children of mothers who received parenteral magnesium sulfate during labor may exhibit toxicity with normal serum magnesium levels. Deficiency Low plasma magnesium (hypomagnesemia) is common: it is found in 2.5–15% of the general population. From 2005 to 2006, 48 percent of the United States population consumed less magnesium than recommended in the Dietary Reference Intake. Other causes are increased renal or gastrointestinal loss, an increased intracellular shift, and proton-pump inhibitor antacid therapy. Most are asymptomatic, but symptoms referable to neuromuscular, cardiovascular, and metabolic dysfunction may occur. Alcoholism is often associated with magnesium deficiency. Chronically low serum magnesium levels are associated with metabolic syndrome, diabetes mellitus type 2, fasciculation, and hypertension. Therapy Intravenous magnesium is recommended by the ACC/AHA/ESC 2006 Guidelines for Management of Patients With Ventricular Arrhythmias and the Prevention of Sudden Cardiac Death for patients with ventricular arrhythmia associated with torsades de pointes who present with long QT syndrome; and for the treatment of patients with digoxin induced arrhythmias. Intravenous magnesium sulfate is used for the management of pre-eclampsia and eclampsia. Hypomagnesemia, including that caused by alcoholism, is reversible by oral or parenteral magnesium administration depending on the degree of deficiency. There is limited evidence that magnesium supplementation may play a role in the prevention and treatment of migraine. Sorted by type of magnesium salt, other therapeutic applications include: Magnesium sulfate, as the heptahydrate called Epsom salts, is used as bath salts, a laxative, and a highly soluble fertilizer. Magnesium hydroxide, suspended in water, is used in milk of magnesia antacids and laxatives. Magnesium chloride, oxide, gluconate, malate, orotate, glycinate, ascorbate and citrate are all used as oral magnesium supplements. Magnesium borate, magnesium salicylate, and magnesium sulfate are used as antiseptics. Magnesium bromide is used as a mild sedative (this action is due to the bromide, not the magnesium). Magnesium stearate is a slightly flammable white powder with lubricating properties. In pharmaceutical technology, it is used in pharmacological manufacture to prevent tablets from sticking to the equipment while compressing the ingredients into tablet form. Magnesium carbonate powder is used by athletes such as gymnasts, weightlifters, and climbers to eliminate palm sweat, prevent sticking, and improve the grip on gymnastic apparatus, lifting bars, and climbing rocks. 
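As a unit-conversion aid for the serum concentrations quoted under Metabolism above, the sketch below converts millimoles per litre of magnesium into milliequivalents per litre and milligrams per decilitre, using the divalence of Mg2+ and its molar mass; it is a conversion helper only, not clinical guidance.

```python
# Convert serum magnesium concentrations between common clinical units.
# Mg2+ is divalent, so 1 mmol/L = 2 mEq/L; mg/dL uses the 24.305 g/mol
# molar mass of magnesium.

M_MG = 24.305  # g/mol

def mmol_to_meq(mmol_per_l):
    return 2.0 * mmol_per_l           # two charge equivalents per Mg2+ ion

def mmol_to_mg_dl(mmol_per_l):
    return mmol_per_l * M_MG / 10.0   # mmol/L -> mg/L -> mg/dL

for mmol in (0.7, 1.0):
    print(f"{mmol:.1f} mmol/L = {mmol_to_meq(mmol):.1f} mEq/L "
          f"= {mmol_to_mg_dl(mmol):.2f} mg/dL")
```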
Overdose Overdose from dietary sources alone is unlikely because excess magnesium in the blood is promptly filtered by the kidneys, and overdose is more likely in the presence of impaired renal function. Overdose is possible, however, with excessive intake of supplements. Indeed, megadose therapy has caused death in a young child, and severe hypermagnesemia in a woman and a young girl who had healthy kidneys. The most common symptoms of overdose are nausea, vomiting, and diarrhea; other symptoms include hypotension, confusion, slowed heart and respiratory rates, deficiencies of other minerals, coma, cardiac arrhythmia, and death from cardiac arrest. Function in plants Plants require magnesium to synthesize chlorophyll, essential for photosynthesis. Magnesium in the center of the porphyrin ring in chlorophyll functions in a manner similar to the iron in the center of the porphyrin ring in heme. Magnesium deficiency in plants causes late-season yellowing between leaf veins, especially in older leaves, and can be corrected by applying either Epsom salts (which are rapidly leached) or crushed dolomitic limestone to the soil. Safety precautions Magnesium metal and its alloys can be explosive hazards; they are highly flammable in their pure form when molten or in powder or ribbon form. Burning or molten magnesium reacts violently with water. When working with powdered magnesium, safety glasses with UV filters (such as welders use) are employed because burning magnesium produces ultraviolet light that can permanently damage the retina of a human eye. Magnesium is capable of reducing water and releasing highly flammable hydrogen gas: Mg(s) + 2 H2O(l) → Mg(OH)2(s) + H2(g) Therefore, water cannot extinguish magnesium fires. The hydrogen gas produced intensifies the fire. Dry sand is an effective smothering agent, but only on relatively level and flat surfaces. Magnesium reacts with carbon dioxide exothermically to form magnesium oxide and carbon: 2 Mg(s) + CO2(g) → 2 MgO(s) + C(s) Hence, carbon dioxide fuels rather than extinguishes magnesium fires. Burning magnesium can be quenched by using a Class D dry chemical fire extinguisher, or by covering the fire with sand or magnesium foundry flux to remove its air source. See also List of countries by magnesium production Magnesium oil Notes References Cited sources External links Magnesium at The Periodic Table of Videos (University of Nottingham) Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Magnesium Chemical elements Alkaline earth metals Dietary minerals Food additives Pyrotechnic fuels Reducing agents Chemical elements with hexagonal close-packed structure
Magnesium
[ "Physics", "Chemistry" ]
6,447
[ "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
19,022
https://en.wikipedia.org/wiki/Measurement
Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events. In other words, measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind. The scope and application of measurement are dependent on the context and discipline. In natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International vocabulary of metrology published by the International Bureau of Weights and Measures. However, in other fields such as statistics as well as the social and behavioural sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales. Measurement is a cornerstone of trade, science, technology and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology. Measurement is defined as the process of comparison of an unknown quantity with a known or standard quantity. History Methodology The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. They enable unambiguous comparisons between measurements. The level of measurement is a taxonomy for the methodological character of a comparison. For example, two states of a property may be compared by ratio, difference, or ordinal preference. The type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the numerical value of the characterization, usually obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artifact used as standard or a natural physical quantity. An uncertainty represents the random and systemic errors of the measurement procedure; it indicates a confidence level in the measurement. Errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument. Standardization of measurement units Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. All of these units are defined without reference to a particular physical object which serves as a standard. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable phenomena in nature, in contrast to standard artifacts which are subject to deterioration or destruction. Instead, the measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to. The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce (1839–1914), who proposed to define the metre in terms of the wavelength of a spectral line. 
This directly influenced the Michelson–Morley experiment; Michelson and Morley cite Peirce, and improve on his method. Standards With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Units of measurement are generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, overseeing the International System of Units (SI). For example, the metre was redefined in 1983 by the CGPM in terms of the speed of light; the kilogram was redefined in 2019 in terms of the Planck constant; and the international yard was defined in 1960 by the governments of the United States, United Kingdom, Australia and South Africa as being exactly 0.9144 metres. In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory (NPL), in Australia by the National Measurement Institute, in South Africa by the Council for Scientific and Industrial Research and in India by the National Physical Laboratory of India. Units and systems A unit is a known or standard quantity in terms of which other physical quantities are measured. Imperial and US customary systems Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems after the Imperial units for length, weight and time even though the tons, hundredweights, gallons, and nautical miles, for example, are different for the U.S. units. Many Imperial units remain in use in Britain, which has officially switched to the SI system—with a few exceptions such as road signs, which are still in miles. Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight in stone and pounds, to give just a few examples. Imperial units are used in many other places, for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated. Metric system The metric system is a decimal system of measurement based on its units for length, the metre, and for mass, the kilogram. It exists in several variations, with different choices of base units, though these do not affect its day-to-day use. 
Since the 1960s, the International System of Units (SI) is the internationally recognised metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes. International System of Units The International System of Units (abbreviated as SI from the French language name Système International d'Unités) is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre–kilogram–second (MKS) system, rather than the centimetre–gram–second (CGS) system, which, in turn, had many variants. The SI units for the seven base physical quantities are: In the SI, base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units, for example, the watt, i.e. the unit for power, is defined from the base units as m2·kg·s−3. Other physical properties may be measured in compound units, such as material density, measured in kg/m3. Converting prefixes The SI allows easy multiplication when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Inversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01 or divides the number of centimetres by 100. Length A ruler or rule is a tool used in, for example, geometry, technical drawing, engineering, and carpentry, to measure lengths or distances or to draw straight lines. Strictly speaking, the ruler is the instrument used to rule straight lines and the calibrated instrument used for determining length is called a measure, however common usage calls both instruments rulers and the special name straightedge is used for an unmarked rule. The use of the word measure, in the sense of a measuring instrument, only survives in the phrase tape measure, an instrument that can be used to measure but cannot be used to draw straight lines. As can be seen in the photographs on this page, a two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre-long tape measure easily retracts to fit within a small housing. Time Time is an abstract measurement of elemental changes over a non-spatial continuum. It is denoted by numbers and/or named periods such as hours, days, weeks, months and years. It is an apparently irreversible series of occurrences within this non spatial continuum. It is also used to denote an interval between two relative points on this continuum. Mass Mass refers to the intrinsic property of all material objects to resist changes in their momentum. Weight, on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall, (no net gravitational forces) objects lack weight but retain their mass. The Imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass. One device for measuring weight or mass is called a weighing scale or, often, simply a scale. A spring scale measures force but not mass, a balance compares weight, both require a gravitational field to operate. 
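A minimal sketch of the decimal prefix conversion described under Converting prefixes above, written in Python; the prefix table is illustrative and deliberately abbreviated, not a complete list of SI prefixes.

```python
# Converting between prefixed forms of the same base unit amounts to
# multiplying by the ratio of the two prefix factors.
PREFIX_FACTORS = {
    "k": 1e3,    # kilo
    "": 1.0,     # no prefix (base unit)
    "c": 1e-2,   # centi
    "m": 1e-3,   # milli
}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert a value between two prefixed forms of the same base unit."""
    return value * PREFIX_FACTORS[from_prefix] / PREFIX_FACTORS[to_prefix]

print(convert(2.5, "", "c"))    # 2.5 metres -> 250.0 centimetres
print(convert(250.0, "c", ""))  # 250 centimetres -> 2.5 metres
```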
Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out, but require a gravitational field to function and would not work in free fall. Economics The measures used in economics are physical measures, nominal price value measures and real price measures. These measures differ from one another by the variables they measure and by the variables excluded from measurements. Survey research In the field of survey research, measures are taken of individual attitudes, values, and behavior using questionnaires as a measurement instrument. As with all other measurements, measurement in survey research is vulnerable to measurement error, i.e. the departure of the value provided by the measurement instrument from the true value. In substantive survey research, measurement error can lead to biased conclusions and wrongly estimated effects. In order to get accurate results, when measurement errors appear, the results need to be corrected for them. Exactness designation The following rules generally apply for displaying the exactness of measurements: All non-0 digits and any 0s appearing between them are significant for the exactness of any number. For example, the number 12000 has two significant digits, and has implied limits of 11500 and 12500. Additional 0s may be added after a decimal separator to denote a greater exactness, increasing the number of decimals. For example, 1 has implied limits of 0.5 and 1.5 whereas 1.0 has implied limits 0.95 and 1.05. Difficulties Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, a great deal of effort must be taken to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (about 39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 second to fall one metre. However, the following are just some of the sources of error that arise: This computation used 9.8 m/s2 for the acceleration of gravity. But this value is not exact; it is only precise to two significant digits. The Earth's gravitational field varies slightly depending on height above sea level and other factors. The computation of 0.45 seconds involved extracting a square root, a mathematical operation that required rounding off to some number of significant digits, in this case two significant digits. Additionally, other sources of experimental error include: carelessness; uncertainty in determining the exact time at which the object is released and the exact time it hits the ground; error in the measurement of the height and in the measurement of the time; air resistance; and the posture of human participants. Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic. Definitions and theories Classical definition In the classical definition, which is standard throughout the physical sciences, measurement is the determination or estimation of ratios of quantities. Quantity and measurement are mutually defined: quantitative attributes are those possible to measure, at least in principle. The classical concept of quantity can be traced back to John Wallis and Isaac Newton, and was foreshadowed in Euclid's Elements. 
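To make the falling-object example in the Difficulties section concrete, the following Python sketch computes the fall time from the kinematic relation t = √(2h/g) and shows how the two-significant-digit value of g limits the precision of the result; the numbers are purely illustrative.

```python
import math

def fall_time(height_m: float, g: float) -> float:
    """Time for an object to fall a given height from rest, ignoring air resistance."""
    return math.sqrt(2.0 * height_m / g)

# g is quoted here to only two significant digits, so the result should not
# be reported to more than two significant digits either.
for g in (9.8, 9.75, 9.85):  # nominal value and its implied limits
    print(f"g = {g:.2f} m/s2 -> t = {fall_time(1.0, g):.4f} s")
# All three results round to 0.45 s at two significant digits.
```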
Representational theory In the representational theory, measurement is defined as "the correlation of numbers with entities that are not numbers". The most technically elaborated form of representational theory is also known as additive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work of Stanley Smith Stevens, numbers need only be assigned according to a rule. The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria. Three types of representational theory Empirical relation In science, an empirical relationship is a relationship or correlation based solely on observation rather than theory. An empirical relationship requires only confirmatory data irrespective of theoretical basis. The rule of mapping The real world is the domain of the mapping, and the mathematical world is the range. When we map an attribute to a mathematical system, we have many choices for the mapping and the range. The representation condition of measurement Information theory All data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity." This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements. In practical terms, one begins with an initial guess as to the expected value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. In this view, unlike the positivist representational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction between estimation and measurement. Quantum mechanics In quantum mechanics, a measurement is an action that determines a particular property (such as position, momentum, or energy) of a quantum system. Quantum measurements are always statistical samples from a probability distribution; the distribution for many quantum phenomena is discrete, not continuous. Quantum measurements alter quantum states and yet repeated measurements on a quantum state are reproducible. The measurement appears to act as a filter, changing the quantum state into one with the single measured quantum value. The unambiguous meaning of quantum measurement is an unresolved fundamental problem in quantum mechanics; the most common interpretation is that when a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value. Biology In biology, there is generally no well-established theory of measurement. 
However, the importance of the theoretical context is emphasized. Moreover, the theoretical context stemming from the theory of evolution leads to articulating a theory of measurement in which historicity is a fundamental notion. Among the most developed fields of measurement in biology are the measurement of genetic diversity and species diversity. See also Conversion of units Electrical measurements History of measurement ISO 10012, Measurement management systems Levels of measurement List of humorous units of measurement List of unusual units of measurement Measurement in quantum mechanics Measurement uncertainty NCSL International Observable quantity Orders of magnitude Quantification (science) Standard (metrology) Timeline of temperature and pressure measurement technology Timeline of time measurement technology Weights and measures References External links Schlaudt, Oliver 2020: "measurement". In: Kirchhoff, Thomas (ed.): Online Encyclopedia Philosophy of Nature. Heidelberg: Universitätsbibliothek Heidelberg, measurement. Tal, Eran 2020: "Measurement in Science". In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy (Fall 2020 ed.), Measurement in Science. A Dictionary of Units of Measurement 'Metrology – in short' 3rd ed., July 2008 Accuracy and precision Metrology
Measurement
[ "Physics", "Mathematics" ]
3,524
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
19,042
https://en.wikipedia.org/wiki/Metal
A metal () is a material that, when polished or fractured, shows a lustrous appearance, and conducts electricity and heat relatively well. These properties are all associated with having electrons available at the Fermi level, as against nonmetallic materials which do not. Metals are typically ductile (can be drawn into wires) and malleable (they can be hammered into thin sheets). A metal may be a chemical element such as iron; an alloy such as stainless steel; or a molecular compound such as polymeric sulfur nitride. The general science of metals is called metallurgy, a subtopic of materials science; aspects of the electronic and thermal properties are also within the scope of condensed matter physics and solid-state chemistry, it is a multidisciplinary topic. In colloquial use materials such as steel alloys are referred to as metals, while others such as polymers, wood or ceramics are nonmetallic materials. A metal conducts electricity at a temperature of absolute zero, which is a consequence of delocalized states at the Fermi energy. Many elements and compounds become metallic under high pressures, for example, iodine gradually becomes a metal at a pressure of between 40 and 170 thousand times atmospheric pressure. Sodium becomes a nonmetal at pressure of just under two million times atmospheric pressure, and at even higher pressures it is expected to become a metal again. When discussing the periodic table and some chemical properties the term metal is often used to denote those elements which in pure form and at standard conditions are metals in the sense of electrical conduction mentioned above. The related term metallic may also be used for types of dopant atoms or alloying elements. In astronomy metal refers to all chemical elements in a star that are heavier than helium. In this sense the first four "metals" collecting in stellar cores through nucleosynthesis are carbon, nitrogen, oxygen, and neon. A star fuses lighter atoms, mostly hydrogen and helium, into heavier atoms over its lifetime. The metallicity of an astronomical object is the proportion of its matter made up of the heavier chemical elements. The strength and resilience of some metals has led to their frequent use in, for example, high-rise building and bridge construction, as well as most vehicles, many home appliances, tools, pipes, and railroad tracks. Precious metals were historically used as coinage, but in the modern era, coinage metals have extended to at least 23 of the chemical elements. There is also extensive use of multi-element metals such as titanium nitride or degenerate semiconductors in the semiconductor industry. The history of refined metals is thought to begin with the use of copper about 11,000 years ago. Gold, silver, iron (as meteoric iron), lead, and brass were likewise in use before the first known appearance of bronze in the fifth millennium BCE. Subsequent developments include the production of early forms of steel; the discovery of sodium—the first light metal—in 1809; the rise of modern alloy steels; and, since the end of World War II, the development of more sophisticated alloys. Properties Form and structure Most metals are shiny and lustrous, at least when polished, or fractured. Sheets of metal thicker than a few micrometres appear opaque, but gold leaf transmits green light. This is due to the freely moving electrons which reflect light. 
Although most elemental metals have higher densities than nonmetals, there is a wide variation in their densities, lithium being the least dense (0.534 g/cm3) and osmium (22.59 g/cm3) the most dense. Some of the 6d transition metals are expected to be denser than osmium, but their known isotopes are too unstable for bulk production to be possible. Magnesium, aluminium and titanium are light metals of significant commercial importance. Their respective densities of 1.7, 2.7, and 4.5 g/cm3 can be compared to those of the older structural metals, like iron at 7.9 and copper at 8.9 g/cm3. The most common lightweight metals are aluminium and magnesium alloys. Metals are typically malleable and ductile, deforming under stress without cleaving. The nondirectional nature of metallic bonding contributes to the ductility of most metallic solids, where the Peierls stress is relatively low allowing for dislocation motion, and there are also many combinations of planes and directions for plastic deformation. Due to their close-packed arrangements of atoms, the Burgers vectors of the dislocations are fairly small, which also means that the energy needed to produce one is small. In contrast, in an ionic compound like table salt the Burgers vectors are much larger and the energy to move a dislocation is far higher. Reversible elastic deformation in metals can be described well by Hooke's Law for the restoring forces, where the stress is linearly proportional to the strain. A temperature change may lead to the movement of structural defects in the metal such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline metals. Internal slip, creep, and metal fatigue may also ensue. The atoms of simple metallic substances are often in one of three common crystal structures, namely body-centered cubic (bcc), face-centered cubic (fcc), and hexagonal close-packed (hcp). In bcc, each atom is positioned at the center of a cube of eight others. In fcc and hcp, each atom is surrounded by twelve others, but the stacking of the layers differs. Some metals adopt different structures depending on the temperature. Many other metals composed of different elements have more complicated structures, such as the rock-salt structure in titanium nitride or the perovskite structure in some nickelates. Electrical and thermal The electronic structure of metals means they are relatively good conductors of electricity. The electrons all have different momenta, which average to zero when there is no external voltage. When a voltage is applied, some move a little faster in a given direction and some a little slower, so there is a net drift velocity which leads to an electric current. This involves small changes in which wavefunctions the electrons are in, changing to those with the higher momenta. Quantum mechanics dictates that only one electron can occupy a given state (the Pauli exclusion principle). Therefore there have to be empty delocalized electron states (with the higher momenta) available at the highest occupied energies as sketched in the Figure. In a semiconductor like silicon or a nonmetal like strontium titanate there is an energy gap between the highest filled states of the electrons and the lowest unfilled, so no accessible states with slightly higher momenta. 
Consequently, semiconductors and nonmetals are poor conductors, although they can carry some current when doped with elements that introduce additional partially occupied energy states, or at higher temperatures. The elemental metals have electrical conductivity values ranging from 6.9 × 10³ S/cm for manganese to 6.3 × 10⁵ S/cm for silver. In contrast, a semiconducting metalloid such as boron has an electrical conductivity of 1.5 × 10⁻⁶ S/cm. With one exception, metallic elements reduce their electrical conductivity when heated. Plutonium increases its electrical conductivity when heated in the temperature range of around −175 to +125 °C, with an anomalously large thermal expansion coefficient and a phase change from monoclinic to face-centered cubic near 100 °C. There is evidence that this and comparable behavior in transuranic elements is due to more complex relativistic and spin interactions which are not captured in simple models. All of the metallic alloys as well as conducting ceramics and polymers are metals by the same definition; for instance titanium nitride has delocalized states at the Fermi level. They have electrical conductivities similar to those of elemental metals. Liquid forms are also metallic conductors of electricity, for instance mercury. In normal conditions no gases are metallic conductors. However, a plasma is a metallic conductor and the charged particles in a plasma have many properties in common with those of electrons in elemental metals, particularly for white dwarf stars. Metals are relatively good conductors of heat, which in metals is transported mainly by the conduction electrons. At higher temperatures the electrons can occupy slightly higher energy levels given by Fermi–Dirac statistics. These have slightly higher momenta (kinetic energy) and can pass on thermal energy. The empirical Wiedemann–Franz law states that in many metals the ratio between thermal and electrical conductivities is proportional to temperature, with a proportionality constant that is roughly the same for all metals. The contribution of a metal's electrons to its heat capacity and thermal conductivity, and the electrical conductivity of the metal itself can be approximately calculated from the free electron model. However, this does not take into account the detailed structure of the metal's ion lattice. Taking into account the positive potential caused by the arrangement of the ion cores enables consideration of the electronic band structure and binding energy of a metal. Various models are applicable, the simplest being the nearly free electron model. Chemical The elements which form metals usually form cations through electron loss. Most will react with oxygen in the air to form oxides over various timescales (potassium burns in seconds while iron rusts over years) which depend upon whether the native oxide forms a passivation layer that acts as a diffusion barrier. Some others, like palladium, platinum, and gold, do not react with the atmosphere at all; gold can form compounds where it gains an electron (aurides, e.g. caesium auride). The oxides of elemental metals are often basic. However, oxides with very high oxidation states such as CrO3, Mn2O7, and OsO4 often have strictly acidic reactions; and oxides of the less electropositive metals such as BeO, Al2O3, and PbO can display both basic and acidic properties. The latter are termed amphoteric oxides. 
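The Wiedemann–Franz law mentioned above can be written κ/σ = L·T, where the free-electron (Sommerfeld) value of the Lorenz number is L = (π²/3)(kB/e)² ≈ 2.44 × 10⁻⁸ W·Ω·K⁻². The Python sketch below computes L from fundamental constants and uses it to estimate the thermal conductivity of copper from a typical room-temperature electrical conductivity; the copper conductivity is a commonly quoted figure used here only for illustration.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
E = 1.602176634e-19  # elementary charge, C (exact in the 2019 SI)

# Free-electron (Sommerfeld) value of the Lorenz number.
lorenz = (math.pi ** 2 / 3.0) * (K_B / E) ** 2
print(f"Lorenz number L = {lorenz:.3e} W*Ohm/K^2")   # about 2.44e-08

sigma_copper = 5.96e7  # S/m, typical electrical conductivity of copper near 293 K
T = 293.0              # K
kappa_estimate = lorenz * sigma_copper * T
print(f"estimated thermal conductivity of copper ~ {kappa_estimate:.0f} W/(m*K)")
# About 430 W/(m*K), reasonably close to the measured value of roughly 400 W/(m*K).
```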
Periodic table distribution of elemental metals The elements that form exclusively metallic structures under ordinary conditions are shown in yellow on the periodic table below. The remaining elements either form covalent network structures (light blue), molecular covalent structures (dark blue), or remain as single atoms (violet). Astatine (At), francium (Fr), and the elements from fermium (Fm) onwards are shown in gray because they are extremely radioactive and have never been produced in bulk. Theoretical and experimental evidence suggests that these uninvestigated elements should be metals, except for oganesson (Og) which DFT calculations indicate would be a semiconductor. The situation changes with pressure: at extremely high pressures, all elements (and indeed all substances) are expected to metallize. Arsenic (As) has both a stable metallic allotrope and a metastable semiconducting allotrope at standard conditions. A similar situation affects carbon (C): graphite is metallic, but diamond is not. Alloys In the context of metals, an alloy is a substance having metallic properties which is composed of two or more elements. Often at least one of these is a metallic element; the term "alloy" is sometimes used more generally as in silicon–germanium alloys. An alloy may have a variable or fixed composition. For example, gold and silver form an alloy in which the proportions of gold or silver can be varied; titanium and silicon form an alloy TiSi2 in which the ratio of the two components is fixed (also known as an intermetallic compound). Most pure metals are either too soft, brittle, or chemically reactive for practical use. Combining different ratios of metals and other elements in alloys modifies the properties to produce desirable characteristics, for instance more ductile, harder, resistant to corrosion, or have a more desirable color and luster. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steel) make up the largest proportion both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low-, mid-, and high-carbon steels, with increasing carbon levels reducing ductility and toughness. The addition of silicon will produce cast irons, while the addition of chromium, nickel, and molybdenum to carbon steels (more than 10%) results in stainless steels with enhanced corrosion resistance. Other significant metallic alloys are those of aluminum, titanium, copper, and magnesium. Copper alloys have been known since prehistory—bronze gave the Bronze Age its name—and have many applications today, most importantly in electrical wiring. The alloys of the other three metals have been developed relatively recently; due to their chemical reactivity they need electrolytic extraction processes. The alloys of aluminum, titanium, and magnesium are valued for their high strength-to-weight ratios; magnesium can also provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratio is more important than material cost, such as in aerospace and some automotive applications. Alloys specially designed for highly demanding applications, such as jet engines, may contain more than ten elements. Categories Metals can be categorised by their composition, physical or chemical properties. 
Categories described in the subsections below include ferrous and non-ferrous metals; brittle metals and refractory metals; white metals; heavy and light metals; base, noble, and precious metals; as well as metallic ceramics and metallic polymers. Ferrous and non-ferrous metals The term "ferrous" is derived from the Latin word meaning "containing iron". This can include pure iron, such as wrought iron, or an alloy such as steel. Ferrous metals are often magnetic, but not exclusively. Non-ferrous metals and alloys lack appreciable amounts of iron. Brittle elemental metal While nearly all elemental metals are malleable or ductile, a few—beryllium, chromium, manganese, gallium, and bismuth—are brittle. Arsenic and antimony, if admitted as metals, are brittle. Low values of the ratio of bulk elastic modulus to shear modulus (Pugh's criterion) are indicative of intrinsic brittleness. A material is brittle if it is hard for dislocations to move, which is often associated with large Burgers vectors and only a limited number of slip planes. Refractory metal A refractory metal is a metal that is very resistant to heat and wear. Which metals belong to this category varies; the most common definition includes niobium, molybdenum, tantalum, tungsten, and rhenium as well as their alloys. They all have melting points above 2000 °C, and a high hardness at room temperature. Several compounds such as titanium nitride are also described as refractory metals. White metal A white metal is any of a range of white-colored alloys with relatively low melting points used mainly for decorative purposes. In Britain, the fine art trade uses the term "white metal" in auction catalogues to describe foreign silver items which do not carry British Assay Office marks, but which are nonetheless understood to be silver and are priced accordingly. Heavy and light metals A heavy metal is any relatively dense metal, either single element or multielement. Magnesium, aluminium and titanium alloys are light metals of significant commercial importance. Their densities of 1.7, 2.7 and 4.5 g/cm3 range from 19 to 56% of the densities of other structural metals, such as iron (7.9) and copper (8.9) and their alloys. Base, noble, and precious metals The term base metal refers to a metal that is easily oxidized or corroded, such as reacting easily with dilute hydrochloric acid (HCl) to form a metal chloride and hydrogen. The term is normally used for the elements, and examples include iron, nickel, lead, and zinc. Copper is considered a base metal as it is oxidized relatively easily, although it does not react with HCl. The term noble metal (also for elements) is commonly used in opposition to base metal. Noble metals are less reactive and resistant to corrosion or oxidation, unlike most base metals. They tend to be precious metals, often due to perceived rarity. Examples include gold, platinum, silver, rhodium, iridium, and palladium. In alchemy and numismatics, the term base metal is contrasted with precious metal, that is, those of high economic value. Most coins today are made of base metals with low intrinsic value; in the past, coins frequently derived their value primarily from their precious metal content; gold, silver, platinum, and palladium each have an ISO 4217 currency code. Currently these metals have industrial uses (for example, platinum and palladium in catalytic converters), are used in jewellery, and also serve as investments and a store of value. 
Palladium and platinum, as of summer 2024, were valued at slightly less than half the price of gold, while silver is substantially less expensive. Valve metals In electrochemistry, a valve metal is a metal which passes current in only one direction due to the formation of an insulating oxide layer. Metallic ceramics There are many ceramic compounds which have metallic electrical conduction, but are not simple combinations of metallic elements. (They are not the same as cermets, which are composites of a non-conducting ceramic and a conducting metal.) One set, the transition metal nitrides, has significant ionic character to the bonding, so can be classified as both ceramics and metals. They have partially filled states at the Fermi level so are good thermal and electrical conductors, and there is often significant charge transfer from the transition metal atoms to the nitrogen. However, unlike most elemental metals, ceramic metals are often not particularly ductile. Their uses are widespread, for instance titanium nitride finds use in orthopedic devices and as a wear-resistant coating. In many cases their utility depends upon there being effective deposition methods so they can be used as thin film coatings. Metallic polymers There are many polymers which have metallic electrical conduction, typically associated with extended aromatic components such as in the polymers indicated in the Figure. The conduction of the aromatic regions is similar to that of graphite, so is highly directional. Half metal A half-metal is any substance that acts as a conductor to electrons of one spin orientation, but as an insulator or semiconductor to those of the opposite spin. They were first described in 1983, as an explanation for the electrical properties of manganese-based Heusler alloys. Although all half-metals are ferromagnetic (or ferrimagnetic), most ferromagnets are not half-metals. Many of the known examples of half-metals are oxides, sulfides, or Heusler alloys. Semimetal A semimetal is a material with a small energy overlap between the bottom of the conduction band and the top of the valence band, but they do not overlap in momentum space. Unlike a regular metal, semimetals have charge carriers of both types (holes and electrons), although the charge carriers typically occur in much smaller numbers than in a real metal. In this respect they resemble degenerate semiconductors. This explains why the electrical properties of semimetals are partway between those of metals and semiconductors. There are additional types, in particular Weyl and Dirac semimetals. The classic elemental semimetals are arsenic, antimony, bismuth, α-tin (gray tin) and graphite. There are also chemical compounds, such as mercury telluride (HgTe), and some conductive polymers. Lifecycle Formation Metallic elements up to the vicinity of iron (in the periodic table) are largely made via stellar nucleosynthesis. In this process, lighter elements from hydrogen to silicon undergo successive fusion reactions inside stars, releasing light and heat and forming heavier elements with higher atomic numbers. Heavier elements are not usually formed this way since fusion reactions involving such nuclei would consume rather than release energy. Rather, they are largely synthesised (from elements with a lower atomic number) by neutron capture, with the two main modes of this repetitive capture being the s-process and the r-process. 
In the s-process ("s" stands for "slow"), singular captures are separated by years or decades, allowing the less stable nuclei to beta decay, while in the r-process ("rapid"), captures happen faster than nuclei can decay. Therefore, the s-process takes a more-or-less clear path: for example, stable cadmium-110 nuclei are successively bombarded by free neutrons inside a star until they form cadmium-115 nuclei which are unstable and decay to form indium-115 (which is nearly stable, with a half-life vastly longer than the age of the universe). These nuclei capture neutrons and form indium-116, which is unstable and decays to form tin-116, and so on. In contrast, there is no such path in the r-process. The s-process stops at bismuth due to the short half-lives of the next two elements, polonium and astatine, which decay to bismuth or lead. The r-process is so fast it can skip this zone of instability and go on to create heavier elements such as thorium and uranium. Metals condense in planets as a result of stellar evolution and destruction processes. Stars lose much of their mass when it is ejected late in their lifetimes, and sometimes thereafter as a result of a neutron star merger, thereby increasing the abundance of elements heavier than helium in the interstellar medium. When gravitational attraction causes this matter to coalesce and collapse, new stars and planets are formed. Abundance and occurrence The Earth's crust is made of approximately 25% metallic elements by weight, of which 80% are light metals such as sodium, magnesium, and aluminium. Despite the overall scarcity of some heavier metals such as copper, they can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes. Metallic elements are primarily found as lithophiles (rock-loving) or chalcophiles (ore-loving). Lithophile elements are mainly the s-block elements, the more reactive of the d-block elements, and the f-block elements. They have a strong affinity for oxygen and mostly exist as relatively low-density silicate minerals. Chalcophile elements are mainly the less reactive d-block elements, and the period 4–6 p-block metals. They are usually found in (insoluble) sulfide minerals. Being denser than the lithophiles, hence sinking lower into the crust at the time of its solidification, the chalcophiles tend to be less abundant than the lithophiles. On the other hand, gold is a siderophile, or iron-loving element. It does not readily form compounds with either oxygen or sulfur. At the time of the Earth's formation, and as the most noble (inert) of metallic elements, gold sank into the core due to its tendency to form high-density metallic alloys. Consequently, it is relatively rare. Some other (less) noble ones—molybdenum, rhenium, the platinum group metals (ruthenium, rhodium, palladium, osmium, iridium, and platinum), germanium, and tin—can be counted as siderophiles but only in terms of their primary occurrence in the Earth (core, mantle, and crust), rather than in the crust alone. These otherwise occur in the crust, in small quantities, chiefly as chalcophiles (less so in their native form). The rotating fluid outer core of the Earth's interior, which is composed mostly of iron, is thought to be the source of Earth's protective magnetic field. The core lies above Earth's solid inner core and below its mantle. If it could be rearranged into a column having a footprint it would have a height of nearly 700 light years. 
The magnetic field shields the Earth from the charged particles of the solar wind, and cosmic rays that would otherwise strip away the upper atmosphere (including the ozone layer that limits the transmission of ultraviolet radiation). Extraction Metallic elements are often extracted from the Earth by mining ores that are rich sources of the requisite elements, such as bauxite. Ores are located by prospecting techniques, followed by the exploration and examination of deposits. Mineral sources are generally divided into surface mines, which are mined by excavation using heavy equipment, and subsurface mines. In some cases, the sale price of the metal(s) involved make it economically feasible to mine lower concentration sources. Once the ore is mined, the elements must be extracted, usually by chemical or electrolytic reduction. Pyrometallurgy uses high temperatures to convert ore into raw metals, while hydrometallurgy employs aqueous chemistry for the same purpose. When a metallic ore is an ionic compound, the ore must usually be smelted—heated with a reducing agent—to extract the pure metal. Many common metals, such as iron, are smelted using carbon as a reducing agent. Some metals, such as aluminum and sodium, have no commercially practical reducing agent, and are extracted using electrolysis instead. Sulfide ores are not reduced directly to the metal but are roasted in air to convert them to oxides. Recycling Demand for metals is closely linked to economic growth given their use in infrastructure, construction, manufacturing, and consumer goods. During the 20th century, the variety of metals used in society grew rapidly. Today, the development of major nations, such as China and India, and technological advances, are fueling ever more demand. The result is that mining activities are expanding, and more and more of the world's metal stocks are above ground in use, rather than below ground as unused reserves. An example is the in-use stock of copper. Between 1932 and 1999, copper in use in the U.S. rose from 73 g to 238 g per person. Metals are inherently recyclable, so in principle, can be used over and over again, minimizing these negative environmental impacts and saving energy. For example, 95% of the energy used to make aluminum from bauxite ore is saved by using recycled material. Globally, metal recycling is generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme published reports on metal stocks that exist within society and their recycling rates. The authors of the report observed that the metal stocks in society can serve as huge mines above ground. They warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars and fuel cells are so low that unless future end-of-life recycling rates are dramatically stepped up these critical metals will become unavailable for use in modern technology. History Prehistory Copper, which occurs in native form, may have been the first metal discovered given its distinctive appearance, heaviness, and malleability. Gold, silver, iron (as meteoric iron), and lead were likewise discovered in prehistory. Forms of brass, an alloy of copper and zinc made by concurrently smelting the ores of these metals, originate from this period (although pure zinc was not isolated until the 13th century). The malleability of the solid metals led to the first attempts to craft metal ornaments, tools, and weapons. 
Meteoric iron containing nickel was discovered from time to time and, in some respects this was superior to any industrial steel manufactured up to the 1880s when alloy steels become prominent. Antiquity The discovery of bronze (an alloy of copper with arsenic or tin) enabled people to create metal objects which were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made of copper and arsenic (forming arsenic bronze) by smelting naturally or artificially mixed ores of copper and arsenic. The earliest artifacts so far known come from the Iranian plateau in the fifth millennium BCE. It was only later that tin was used, becoming the major non-copper ingredient of bronze in the late third millennium BCE. Pure tin itself was first isolated in 1800 BCE by Chinese and Japanese metalworkers. Mercury was known to ancient Chinese and Indians before 2000 BCE, and found in Egyptian tombs dating from 1500 BCE. The earliest known production of steel, an iron-carbon alloy, is seen in pieces of ironware excavated from an archaeological site in Anatolia (Kaman-Kalehöyük) which are nearly 4,000 years old, dating from 1800 BCE. From about 500 BCE sword-makers of Toledo, Spain, were making early forms of alloy steel by adding a mineral called wolframite, which contained tungsten and manganese, to iron ore (and carbon). The resulting Toledo steel came to the attention of Rome when used by Hannibal in the Punic Wars. It soon became the basis for the weaponry of Roman legions; such swords were, "stronger in composition than any existing sword and, because… [they] would not break, provided a psychological advantage to the Roman soldier." In pre-Columbian America, objects made of tumbaga, an alloy of copper and gold, started being produced in Panama and Costa Rica between 300 and 500 CE. Small metal sculptures were common and an extensive range of tumbaga (and gold) ornaments comprised the usual regalia of persons of high status. At around the same time indigenous Ecuadorians were combining gold with a naturally-occurring platinum alloy containing small amounts of palladium, rhodium, and iridium, to produce miniatures and masks of a white gold-platinum alloy. The metal workers involved heated gold with grains of the platinum alloy until the gold melted. After cooling, the resulting conglomeration was hammered and reheated repeatedly until it became homogenous, equivalent to melting all the metals (attaining the melting points of the platinum group metals concerned was beyond the technology of the day). Middle Ages Arabic and medieval alchemists believed that all metals and matter were composed of the principle of sulfur, the father of all metals and carrying the combustible property, and the principle of mercury, the mother of all metals and carrier of the liquidity, fusibility, and volatility properties. These principles were not necessarily the common substances sulfur and mercury found in most laboratories. This theory reinforced the belief that all metals were destined to become gold in the bowels of the earth through the proper combinations of heat, digestion, time, and elimination of contaminants, all of which could be developed and hastened through the knowledge and methods of alchemy. 
Arsenic, zinc, antimony, and bismuth became known, although these were at first called semimetals or bastard metals on account of their immalleability. Albertus Magnus is believed to have been the first to isolate arsenic from a compound in 1250, by heating soap together with arsenic trisulfide. Metallic zinc, which is brittle if impure, was isolated in India by 1300 AD. The first description of a procedure for isolating antimony is in the 1540 book De la pirotechnia by Vannoccio Biringuccio. Bismuth was described by Agricola in De Natura Fossilium (c. 1546); it had been confused in early times with tin and lead because of its resemblance to those elements. The Renaissance The first systematic text on the arts of mining and metallurgy was De la Pirotechnia (1540) by Vannoccio Biringuccio, which treats the examination, fusion, and working of metals. Sixteen years later, Georgius Agricola published De Re Metallica in 1556, an account of the profession of mining, metallurgy, and the accessory arts and sciences, an extensive treatise on the chemical industry through the sixteenth century. He gave the following description of a metal in his De Natura Fossilium (1546): Metal is a mineral body, by nature either liquid or somewhat hard. The latter may be melted by the heat of the fire, but when it has cooled down again and lost all heat, it becomes hard again and resumes its proper form. In this respect it differs from the stone which melts in the fire, for although the latter regain its hardness, yet it loses its pristine form and properties. Traditionally there are six different kinds of metals, namely gold, silver, copper, iron, tin, and lead. There are really others, for quicksilver is a metal, although the Alchemists disagree with us on this subject, and bismuth is also. The ancient Greek writers seem to have been ignorant of bismuth, wherefore Ammonius rightly states that there are many species of metals, animals, and plants which are unknown to us. Stibium when smelted in the crucible and refined has as much right to be regarded as a proper metal as is accorded to lead by writers. If when smelted, a certain portion be added to tin, a bookseller's alloy is produced from which the type is made that is used by those who print books on paper. Each metal has its own form which it preserves when separated from those metals which were mixed with it. Therefore neither electrum nor Stannum [not meaning our tin] is of itself a real metal, but rather an alloy of two metals. Electrum is an alloy of gold and silver, Stannum of lead and silver. And yet if silver be parted from the electrum, then gold remains and not electrum; if silver be taken away from Stannum, then lead remains and not Stannum. Whether brass, however, is found as a native metal or not, cannot be ascertained with any surety. We only know of the artificial brass, which consists of copper tinted with the colour of the mineral calamine. And yet if any should be dug up, it would be a proper metal. Black and white copper seem to be different from the red kind. Metal, therefore, is by nature either solid, as I have stated, or fluid, as in the unique case of quicksilver. But enough now concerning the simple kinds. Platinum, the third precious metal after gold and silver, was discovered in Ecuador during the period 1736 to 1744 by the Spanish astronomer Antonio de Ulloa and his colleague the mathematician Jorge Juan y Santacilia. Ulloa was the first person to write a scientific description of the metal, in 1748. 
In 1789, the German chemist Martin Heinrich Klaproth isolated an oxide of uranium, which he thought was the metal itself. Klaproth was subsequently credited as the discoverer of uranium. It was not until 1841, that the French chemist Eugène-Melchior Péligot, prepared the first sample of uranium metal. Henri Becquerel subsequently discovered radioactivity in 1896 using uranium. In the 1790s, Joseph Priestley and the Dutch chemist Martinus van Marum observed the effect of metal surfaces on the dehydrogenation of alcohol, a development which subsequently led, in 1831, to the industrial scale synthesis of sulphuric acid using a platinum catalyst. In 1803, cerium was the first of the lanthanide metals to be discovered, in Bastnäs, Sweden by Jöns Jakob Berzelius and Wilhelm Hisinger, and independently by Martin Heinrich Klaproth in Germany. The lanthanide metals were regarded as oddities until the 1960s when methods were developed to more efficiently separate them from one another. They have subsequently found uses in cell phones, magnets, lasers, lighting, batteries, catalytic converters, and in other applications enabling modern technologies. Other metals discovered and prepared during this time were cobalt, nickel, manganese, molybdenum, tungsten, and chromium; and some of the platinum group metals, palladium, osmium, iridium, and rhodium. Light metallic elements All elemental metals discovered before 1809 had relatively high densities; their heaviness was regarded as a distinguishing criterion. From 1809 onward, light metals such as sodium, potassium, and strontium were isolated. Their low densities challenged conventional wisdom as to the nature of metals. They behaved chemically as metals however, and were subsequently recognized as such. Aluminium was discovered in 1824 but it was not until 1886 that an industrial large-scale production method was developed. Prices of aluminium dropped and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal many uses at the time. During World War I, major governments demanded large shipments of aluminium for light and strong airframes. While pure metallic titanium (99.9%) was first prepared in 1910 it was not used outside the laboratory until 1932. In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications as part of programs related to the Cold War. Starting in the early 1950s, titanium came into use in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and Lockheed A-12 and SR-71. Metallic scandium was produced for the first time in 1937. The first pound of 99% pure scandium metal was produced in 1960. Production of aluminium-scandium alloys began in 1971 following a U.S. patent. Aluminium-scandium alloys were also developed in the USSR. The age of steel The modern era in steelmaking began with the introduction of Henry Bessemer's Bessemer process in 1855, the raw material for which was pig iron. His method let him produce steel in large quantities cheaply, thus mild steel came to be used for most purposes for which wrought iron was formerly used. The Gilchrist-Thomas process (or basic Bessemer process) was an improvement to the Bessemer process, made by lining the converter with a basic material to remove phosphorus. 
Due to its high tensile strength and low cost, steel came to be a major component used in buildings, infrastructure, tools, ships, automobiles, machines, appliances, and weapons. In 1872, the Englishmen Clark and Woods patented an alloy that would today be considered a stainless steel. The corrosion resistance of iron-chromium alloys had been recognized in 1821 by French metallurgist Pierre Berthier. He noted their resistance against attack by some acids and suggested their use in cutlery. Metallurgists of the 19th century were unable to produce the combination of low carbon and high chromium found in most modern stainless steels, and the high-chromium alloys they could produce were too brittle to be practical. It was not until 1912 that the industrialization of stainless steel alloys occurred in England, Germany, and the United States. The last stable metallic elements By 1900 three metals with atomic numbers less than lead (#82), the heaviest stable metal, remained to be discovered: elements 71, 72, 75. Von Welsbach, in 1906, proved that the old ytterbium also contained a new element (#71), which he named cassiopeium. Urbain proved this simultaneously, but his samples were very impure and only contained trace quantities of the new element. Despite this, his chosen name lutetium was adopted. In 1908, Ogawa found element 75 in thorianite but assigned it as element 43 instead of 75 and named it nipponium. In 1925 Walter Noddack, Ida Eva Tacke, and Otto Berg announced its separation from gadolinite and gave it the present name, rhenium. Georges Urbain claimed to have found element 72 in rare-earth residues, while Vladimir Vernadsky independently found it in orthite. Neither claim was confirmed due to World War I, and neither could be confirmed later, as the chemistry they reported does not match that now known for hafnium. After the war, in 1922, Coster and Hevesy found it by X-ray spectroscopic analysis in Norwegian zircon. Hafnium was thus the last stable element to be discovered, though rhenium was the last to be correctly recognized. By the end of World War II scientists had synthesized four post-uranium elements, all of which are radioactive (unstable) metals: neptunium (in 1940), plutonium (1940–41), and curium and americium (1944), representing elements 93 to 96. The first two of these were eventually found in nature as well. Curium and americium were by-products of the Manhattan project, which produced the world's first atomic bomb in 1945. The bomb was based on the nuclear fission of uranium, a metal first thought to have been discovered nearly 150 years earlier. Post-World War II developments Superalloys Superalloys composed of combinations of Fe, Ni, Co, and Cr, and lesser amounts of W, Mo, Ta, Nb, Ti, and Al were developed shortly after World War II for use in high performance engines, operating at elevated temperatures (above 650 °C (1,200 °F)). They retain most of their strength under these conditions, for prolonged periods, and combine good low-temperature ductility with resistance to corrosion or oxidation. Superalloys can now be found in a wide range of applications including land, maritime, and aerospace turbines, and chemical and petroleum plants. Transcurium metals The successful development of the atomic bomb at the end of World War II sparked further efforts to synthesize new elements, nearly all of which are, or are expected to be, metals, and all of which are radioactive. 
It was not until 1949 that element 97 (Berkelium), next after element 96 (Curium), was synthesized by firing alpha particles at an americium target. In 1952, element 100 (Fermium) was found in the debris of the first hydrogen bomb explosion; hydrogen, a nonmetal, had been identified as an element nearly 200 years earlier. Since 1952, elements 101 (Mendelevium) to 118 (Oganesson) have been synthesized. Bulk metallic glasses A metallic glass (also known as an amorphous or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. Most pure and alloyed metals, in their solid state, have atoms arranged in a highly ordered crystalline structure. In contrast these have a non-crystalline glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity. Amorphous metals are produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. The first reported metallic glass was an alloy (Au75Si25) produced at Caltech in 1960. More recently, batches of amorphous steel with three times the strength of conventional steel alloys have been produced. Currently, the most important applications rely on the special magnetic properties of some ferromagnetic metallic glasses. The low magnetization loss is used in high-efficiency transformers. Theft control ID tags and other article surveillance schemes often use metallic glasses because of these magnetic properties. Shape-memory alloys A shape-memory alloy (SMA) is an alloy that "remembers" its original shape and when deformed returns to its pre-deformed shape when heated. While the shape memory effect had been first observed in 1932, in an Au-Cd alloy, it was not until 1962, with the accidental discovery of the effect in a Ni-Ti alloy that research began in earnest, and another ten years before commercial applications emerged. SMA's have applications in robotics and automotive, aerospace, and biomedical industries. There is another type of SMA, called a ferromagnetic shape-memory alloy (FSMA), that changes shape under strong magnetic fields. These materials are of interest as the magnetic response tends to be faster and more efficient than temperature-induced responses. Quasicrystalline alloys In 1984, Israeli metallurgist Dan Shechtman found an aluminum-manganese alloy having five-fold symmetry, in breach of crystallographic convention at the time which said that crystalline structures could only have two-, three-, four-, or six-fold symmetry. Due to reservation about the scientific community's reaction, it took him two years to publish the results  for which he was awarded the Nobel Prize in Chemistry in 2011. Since this time, hundreds of quasicrystals have been reported and confirmed. They exist in many metallic alloys (and some polymers). Quasicrystals are found most often in aluminum alloys (Al-Li-Cu, Al-Mn-Si, Al-Ni-Co, Al-Pd-Mn, Al-Cu-Fe, Al-Cu-V, etc.), but numerous other compositions are also known (Cd-Yb, Ti-Zr-Ni, Zn-Mg-Ho, Zn-Mg-Sc, In-Ag-Yb, Pd-U-Si, etc.). Quasicrystals effectively have infinitely large unit cells. Icosahedrite Al63Cu24Fe13, the first quasicrystal found in nature, was discovered in 2009. 
Most quasicrystals have ceramic-like properties including low electrical conductivity (approaching values seen in insulators), low thermal conductivity, high hardness, brittleness, resistance to corrosion, and non-stick properties. Quasicrystals have been used to develop heat insulation, LEDs, diesel engines, and new materials that convert heat to electricity. New applications may take advantage of the low coefficient of friction and the hardness of some quasicrystalline materials, for example embedding particles in plastic to make strong, hard-wearing, low-friction plastic gears. Other potential applications include selective solar absorbers for power conversion, broad-wavelength reflectors, and bone repair and prostheses applications where biocompatibility, low friction, and corrosion resistance are required. Complex metallic alloys Complex metallic alloys (CMAs) are intermetallic compounds characterized by large unit cells comprising some tens up to thousands of atoms; the presence of well-defined clusters of atoms (frequently with icosahedral symmetry); and partial disorder within their crystalline lattices. They are composed of two or more metallic elements, sometimes with metalloids or chalcogenides added. They include, for example, NaCd2, with 348 sodium atoms and 768 cadmium atoms in the unit cell. Linus Pauling attempted to describe the structure of NaCd2 in 1923, but did not succeed until 1955. At first called "giant unit cell crystals", CMAs (as they came to be called) attracted little interest until 2002, with the publication of a paper called "Structurally Complex Alloy Phases", given at the 8th International Conference on Quasicrystals. Potential applications of CMAs include heat insulation; solar heating; magnetic refrigerators; using waste heat to generate electricity; and coatings for turbine blades in military engines. High-entropy alloys High entropy alloys (HEAs) such as AlLiMgScTi are composed of equal or nearly equal quantities of five or more metals. Compared to conventional alloys with only one or two base metals, HEAs have considerably better strength-to-weight ratios, higher tensile strength, and greater resistance to fracturing, corrosion, and oxidation. Although HEAs were described as early as 1981, significant interest did not develop until the 2010s; they continue to be a focus of research in materials science and engineering because of their desirable properties. MAX phase In a MAX phase, M is an early transition metal, A is an A group element (mostly group IIIA and IVA, or groups 13 and 14), and X is either carbon or nitrogen. Examples are Hf2SnC and Ti4AlN3. Such alloys have high electrical and thermal conductivity, thermal shock resistance, damage tolerance, machinability, high elastic stiffness, and low thermal expansion coefficients. They can be polished to a metallic luster because of their excellent electrical conductivities. During mechanical testing, it has been found that polycrystalline Ti3SiC2 cylinders can be repeatedly compressed at room temperature, up to stresses of 1 GPa, and fully recover upon the removal of the load. Some MAX phases are also highly resistant to chemical attack (e.g. Ti3SiC2) and high-temperature oxidation in air (Ti2AlC, Cr2AlC, and Ti3AlC2). Potential applications for MAX phase alloys include: as tough, machinable, thermal shock-resistant refractories; high-temperature heating elements; coatings for electrical contacts; and neutron irradiation resistant parts for nuclear applications. 
See also Bimetal Colored gold Ductility Ferrous metallurgy Metal theft Metal toxicity Metallurgy Metals of antiquity Metalworking Mineral (nutrient) Polymetallic ore Properties of metals, metalloids, and nonmetals Structural steel Transition metal Note References Further reading Choptuik M. W., Lehner L. & Pretorias F. 2015, "Probing strong-field gravity through numerical simulation", in A. Ashtekar, B. K. Berger, J. Isenberg & M. MacCallum (eds), General Relativity and Gravitation: A Centennial Perspective, Cambridge University Press, Cambridge, . Crow J. M. 2016, "Impossible alloys: How to make never-before-seen metals", New Scientist, 12 October Hadhazy A. 2016, "Galactic 'Gold Mine' Explains the Origin of Nature's Heaviest Elements", Science Spotlights, 10 May 2016, accessed 11 July 2016. Hofmann S. 2002, On Beyond Uranium: Journey to the End of the Periodic Table, Taylor & Francis, London, . Padmanabhan T. 2001, Theoretical Astrophysics, vol. 2, Stars and Stellar Systems, Cambridge University Press, Cambridge, . Parish R. V. 1977, The metallic elements, Longman, London, Podosek F. A. 2011, "Noble gases", in H. D. Holland & K. K. Turekian (eds), Isotope Geochemistry: From the Treatise on Geochemistry, Elsevier, Amsterdam, pp. 467–492, . Raymond R. 1984, Out of the fiery furnace: The impact of metals on the history of mankind, Macmillan Australia, Melbourne, Rehder D. 2010, Chemistry in Space: From Interstellar Matter to the Origin of Life, Wiley-VCH, Weinheim, . Russell A. M. & Lee K. L. 2005, Structure–property relations in nonferrous metals, John Wiley & Sons, Hoboken, New Jersey, Street A. & Alexander W. 1998, Metals in the service of man, 11th ed., Penguin Books, London, Wilson A. J. 1994, The living rock: The story of metals since earliest times and their impact on developing civilization, Woodhead Publishing, Cambridge, External links of ASM International (formerly the American Society for Metals) of The Minerals, Metals & Materials Society Chemical physics Condensed matter physics Materials science Metallurgy Solid-state chemistry
Metal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
10,712
[ "Metals", "Applied and interdisciplinary physics", "Metallurgy", "Phases of matter", "Materials science", "Chemical physics", "Condensed matter physics", "nan", "Matter", "Solid-state chemistry" ]
9,741,020
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Tisza
László Tisza (July 7, 1907 – April 15, 2009) was a Hungarian-born American physicist who was Professor of Physics Emeritus at MIT. He was a colleague of famed physicists Edward Teller, Lev Landau and Fritz London, and initiated the two-fluid theory of liquid helium. United States In 1941, Tisza immigrated to the United States and joined the faculty at the Massachusetts Institute of Technology. His research areas included theoretical physics and the history and philosophy of science, specifically on the foundation of thermodynamics and quantum mechanics. He taught at MIT until 1973. Publications Tisza was the author of the 1966 book, Generalized Thermodynamics. The 1982 publication, Physics as Natural Philosophy: Essays in Honor of László Tisza, was written by Tisza's colleagues and former students in honor of his 75th birthday. Affiliations He was a Fellow of The American Physical Society and American Academy of Arts and Sciences, a John Simon Guggenheim Fellow and had been a visiting professor at the University of Paris in Sorbonne. See also Vera and Laszlo Tisza House References External links MIT site – notice of Tisza's death John Simon Guggenheim Fellowship site 1907 births 2009 deaths Scientists from Budapest Hungarian emigrants to the United States 20th-century American educators American science writers 20th-century Hungarian physicists Hungarian men centenarians Massachusetts Institute of Technology School of Science faculty Writers from Cambridge, Massachusetts Thermodynamics Academic staff of the University of Paris 20th-century American non-fiction writers 20th-century American physicists Fellows of the American Physical Society Fellows of the American Academy of Arts and Sciences 20th-century American male writers American male non-fiction writers American men centenarians Leipzig University alumni
László Tisza
[ "Physics", "Chemistry", "Mathematics" ]
342
[ "Thermodynamics", "Dynamical systems" ]
9,742,373
https://en.wikipedia.org/wiki/Consolidated%20rental%20car%20facility
A consolidated rental car facility (CRCF) or consolidated rental car center (CONRAC) is a complex that hosts numerous car rental agencies, typically found at airports in the United States. The most important incentives for building consolidated facilities are greatly reduced traffic congestion in airport pick up and drop off areas and increased convenience for travelers. A single unified fleet of shuttle buses can serve all car rental agencies, instead of each company operating their own individual shuttle buses which may come less frequently. Congestion can be further reduced by connecting the consolidated facility to the airport terminal with a people mover. Consolidated facilities are typically built around two areas: a customer service building where each company operates retail counters to serve renters, and a "ready/return" lot or garage where cars are temporarily parked while ready and awaiting a renter, or when recently returned and in need of servicing before the next rental. Facilities usually also feature a Quick Turn Around (QTA) area either on-site or at a nearby location, where light maintenance of vehicles can be conducted including cleaning, fueling, and inspection of engine fluids. There can be several QTA areas operated by the different companies, or the services can be shared. The first known consolidated facility was built at Sacramento International Airport in 1994. However, as early as 1974, four companies were already sharing facilities and shuttle buses at Dallas/Fort Worth Airport, and in 1988 companies at Minneapolis–Saint Paul airport introduced common shuttle buses. These differed from modern CONRACs in that the majority of rental car companies at Dallas/Fort Worth continued to operate their own off-site facilities and shuttle buses, while at Minneapolis, only the shuttle buses and not the facilities themselves were shared (in other words, a single shuttle bus line served multiple off-site rental car companies). Furthermore, the rental car industry has seen major mergers, creating three major holding companies that now represent ten brands commonly seen at airports: the Avis Budget Group (which operates Avis Car Rental, Budget Rent a Car, Payless Car Rental and Zipcar), Enterprise Holdings (which operates Enterprise Rent-A-Car, Alamo Rent a Car and National Car Rental) and The Hertz Corporation (which operates Hertz Rent A Car, Dollar Rent A Car and Thrifty Car Rental). Because of these mergers, even in cities without a consolidated facility, many of these companies have consolidated all their brands into one location. Locations Facilities under construction The Reno–Tahoe International Airport is currently building a Rental Car and Ground Transportation Center, scheduled to open in 2028. The Gerald R. Ford International Airport in Grand Rapids, MI, is currently constructing a 4-story ConRAC, scheduled to open in 2026. References Airport infrastructure Car rental
Consolidated rental car facility
[ "Engineering" ]
551
[ "Airport infrastructure", "Aerospace engineering" ]
9,746,001
https://en.wikipedia.org/wiki/ISconf
In computing, ISconf is a software tool to manage a network of servers. ISconf operates on a pull model, meaning even servers that are not up when a change is made will receive the change once they come back up. As of version 4, ISconf requires no central server, though it does expect all servers to start identically, which is easiest to accomplish using some form of automated install which may require a central server. Theory ISconf comes from the "InfraStructure administration" movement, which created and defined much of the OS-side theoretical background of what now makes up the DevOps sphere. It is based on the idea that the best way to keep servers from diverging is to apply the same set of operations in the same order. This is in contrast to the "convergence" theory of system automation, which attempts to "converge" servers to known states from arbitrary states using sets of rules such as "if a package outside of this set is installed, uninstall it", "if package X is not installed, install it", or "if daemon X is not running, start it". According to Steve Traugott, there is no way to guarantee that a given set of rules will actually be able to converge from any given state. ISconf enforces the order of operations by assuming only commands issued through it change the state of the system. As a result, if a package or file is installed on a system manually, it will stay there, which may eventually cause problems such as version conflicts. ISconf is targeted at environments where configurations must remain identical. In such environments, it is typical to give only a few systems administrators root access to hosts. This minimizes the risk of manual changes because it is easy to train a small group of people to only make changes through ISconf. ISconf was inspired by, and originally implemented as, Makefiles. However, Makefiles specify dependencies and not a total ordering of operations. ISconf version 1 dealt with this by making each operation dependent on the previous one, but this was tedious and poorly suited to Make. More recent versions of ISconf use a simple append-only journal. Major versions The major versions in common use were apparently ISconf2 and ISconf3, while ISconf4 stayed in a very long beta period. It had in fact been finished and put to use in larger environments, but due to the delay it saw limited community adoption. ISconf 1 (Makefiles) ISconf 2 (early 200x?) written by Steve Traugott ISconf 3 (2002) was a rewrite of version 2 by Luke Kanies. ISconf 4 was mostly written by the original author, Steve Traugott. Trivia Luke Kanies later switched to CFengine2, before finally authoring and releasing Puppet. As a result, one could consider ISconf an ancestor of Puppet, though both CFengine and Puppet implement the "convergence" model of configuration management, essentially the opposite of the "order of operations" model implemented by at least ISconf versions 1, 2, and 4. See also Comparison of open source configuration management software External links ISconf's web site Bootstrapping an Infrastructure, Steve Traugott and Joel Huddleston's LISA '98 paper about the ideas that led to ISconf (pre-dates ISconf itself) Luke's description of ISconf 3's theoretical background and goals Theory section and mailing list archives for system management automation Github Repository for ISconf4 Configuration management
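The following is a minimal sketch of the "order of operations" model described above, not ISconf's actual code: changes are appended to a journal, and each host replays the journal from its last applied position, so all hosts execute the same steps in the same order even if they were down when a change was made. The journal path, on-disk format, and commands are illustrative assumptions rather than ISconf's real file layout.

```python
#!/usr/bin/env python3
# Minimal illustration of an append-only journal with ordered replay.
# NOT ISconf's implementation; paths and format below are hypothetical.
import json
import subprocess
from pathlib import Path

JOURNAL = Path("/var/lib/isconf-demo/journal")   # append-only list of steps (hypothetical path)
STATE = Path("/var/lib/isconf-demo/position")    # index of the last step already applied here

def record(command: str) -> None:
    """Append a new step to the journal (done once, on the host issuing the change)."""
    with JOURNAL.open("a") as f:
        f.write(json.dumps({"cmd": command}) + "\n")

def replay() -> None:
    """Apply any steps this host has not yet seen, strictly in journal order."""
    done = int(STATE.read_text()) if STATE.exists() else 0
    steps = JOURNAL.read_text().splitlines() if JOURNAL.exists() else []
    for i, line in enumerate(steps[done:], start=done):
        step = json.loads(line)
        # A host that was down simply picks up here when it comes back up (pull model).
        subprocess.run(step["cmd"], shell=True, check=True)
        STATE.write_text(str(i + 1))   # commit progress only after the step succeeds

if __name__ == "__main__":
    replay()
```

Because every host replays the same journal from its own position, a host that misses a change applies it later in exactly the original order, which is the property the "order of operations" model relies on to keep configurations from diverging.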
ISconf
[ "Engineering" ]
747
[ "Systems engineering", "Configuration management" ]
9,748,518
https://en.wikipedia.org/wiki/Toda%20oscillator
In physics, the Toda oscillator is a special kind of nonlinear oscillator. It represents a chain of particles with exponential potential interaction between neighbors. These concepts are named after Morikazu Toda. The Toda oscillator is used as a simple model to understand the phenomenon of self-pulsation, which is a quasi-periodic pulsation of the output intensity of a solid-state laser in the transient regime. Definition The Toda oscillator is a dynamical system of any origin, which can be described with dependent coordinate and independent coordinate , characterized in that the evolution along independent coordinate can be approximated with equation where , and prime denotes the derivative. Physical meaning The independent coordinate has sense of time. Indeed, it may be proportional to time with some relation like , where is constant. The derivative may have sense of velocity of particle with coordinate ; then can be interpreted as acceleration; and the mass of such a particle is equal to unity. The dissipative function may have sense of coefficient of the speed-proportional friction. Usually, both parameters and are supposed to be positive; then this speed-proportional friction coefficient grows exponentially at large positive values of coordinate . The potential is a fixed function, which also shows exponential growth at large positive values of coordinate . In the application in laser physics, may have a sense of logarithm of number of photons in the laser cavity, related to its steady-state value. Then, the output power of such a laser is proportional to and may show pulsation at oscillation of . Both analogies, with a unity mass particle and logarithm of number of photons, are useful in the analysis of behavior of the Toda oscillator. Energy Rigorously, the oscillation is periodic only at . Indeed, in the realization of the Toda oscillator as a self-pulsing laser, these parameters may have values of order of ; during several pulses, the amplitude of pulsation does not change much. In this case, we can speak about the period of pulsation, since the function is almost periodic. In the case , the energy of the oscillator does not depend on , and can be treated as a constant of motion. Then, during one period of pulsation, the relation between and can be expressed analytically: where and are minimal and maximal values of ; this solution is written for the case when . however, other solutions may be obtained using the principle of translational invariance. The ratio is a convenient parameter to characterize the amplitude of pulsation. Using this, we can express the median value as ; and the energy is also an elementary function of . In application, the quantity need not be the physical energy of the system; in these cases, this dimensionless quantity may be called quasienergy. Period of pulsation The period of pulsation is an increasing function of the amplitude . When , the period When , the period In the whole range , the period and frequency can be approximated by to at least 8 significant figures. The relative error of this approximation does not exceed . Decay of pulsation At small (but still positive) values of and , the pulsation decays slowly, and this decay can be described analytically. In the first approximation, the parameters and give additive contributions to the decay; the decay rate, as well as the amplitude and phase of the nonlinear oscillation, can be approximated with elementary functions in a manner similar to the period above. 
In describing the behavior of the idealized Toda oscillator, the error of such approximations is smaller than the differences between the ideal and its experimental realization as a self-pulsing laser at the optical bench. However, a self-pulsing laser shows qualitatively very similar behavior. Continuous limit The Toda chain equations of motion, in the continuous limit in which the distance between neighbors goes to zero, become the Korteweg–de Vries (KdV) equation. Here the index labeling the particle in the chain becomes the new spatial coordinate. In contrast, the Toda field theory is achieved by introducing a new spatial coordinate which is independent of the chain index label. This is done in a relativistically invariant way, so that time and space are treated on an equal footing. This means that the Toda field theory is not a continuous limit of the Toda chain. References Mathematical physics Atomic, molecular, and optical physics
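A minimal numerical sketch of the oscillator discussed above. Since the displayed formulas are missing from the text, the sketch assumes the commonly quoted form of the equation, x'' + (u e^x + v) x' + (e^x - 1) = 0, and the quasienergy E = (x')^2/2 + e^x - x, conserved when u = v = 0 (one common convention); this form, the sign conventions, and the parameter values below are assumptions for illustration, not a restatement of the article's definitions.

```python
import math

def toda_rhs(x, p, u=0.0, v=0.0):
    """First-order system for the assumed form x'' + (u*exp(x) + v)*x' + (exp(x) - 1) = 0."""
    dx = p
    dp = -(u * math.exp(x) + v) * p - (math.exp(x) - 1.0)
    return dx, dp

def rk4_step(x, p, h, u, v):
    """One classical Runge-Kutta step of size h."""
    k1x, k1p = toda_rhs(x, p, u, v)
    k2x, k2p = toda_rhs(x + 0.5*h*k1x, p + 0.5*h*k1p, u, v)
    k3x, k3p = toda_rhs(x + 0.5*h*k2x, p + 0.5*h*k2p, u, v)
    k4x, k4p = toda_rhs(x + h*k3x, p + h*k3p, u, v)
    x += h*(k1x + 2*k2x + 2*k3x + k4x)/6.0
    p += h*(k1p + 2*k2p + 2*k3p + k4p)/6.0
    return x, p

def quasienergy(x, p):
    """E = p**2/2 + exp(x) - x; nearly constant over one pulse when u and v are small."""
    return 0.5*p*p + math.exp(x) - x

# Illustrative run: small damping terms, large-amplitude pulsation.
x, p = 3.0, 0.0          # initial displacement (log of relative photon number) and velocity
u, v, h = 1e-4, 1e-4, 1e-3
for n in range(200_000):
    x, p = rk4_step(x, p, h, u, v)
    if n % 50_000 == 0:
        print(f"t={n*h:8.1f}  x={x:+.4f}  E={quasienergy(x, p):.6f}")
```

With u and v this small, the printed quasienergy drifts only slowly, which is the regime in which the pulsation is almost periodic, as described above.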
Toda oscillator
[ "Physics", "Chemistry", "Mathematics" ]
924
[ "Applied mathematics", "Theoretical physics", " molecular", "Atomic", "Mathematical physics", " and optical physics" ]
9,750,042
https://en.wikipedia.org/wiki/Chlorine%20monoxide
Chlorine monoxide is a chemical radical with the chemical formula ClO•. It plays an important role in the process of ozone depletion. In the stratosphere, chlorine atoms react with ozone molecules to form chlorine monoxide and oxygen. Cl• + O3 → ClO• + O2 This reaction causes the depletion of the ozone layer. The resulting ClO• radicals can further react: ClO• + O• → Cl• + O2 regenerating the chlorine radical. In this way, the overall reaction for the decomposition of ozone is catalyzed by chlorine, as ultimately chlorine remains unchanged. The overall reaction is: O• + O3 → 2 O2 The use of CFCs has had a significant impact on the upper stratosphere, although many countries have agreed to ban their use. The nonreactive nature of CFCs allows them to pass into the stratosphere, where they undergo photo-dissociation to form Cl radicals. These then readily form chlorine monoxide, and this cycle can continue until two radicals react to form dichlorine monoxide, terminating the radical reaction. Because the concentration of CFCs in the atmosphere is very low, the probability of a terminating reaction is exceedingly low, meaning each radical can decompose many thousands of molecules of ozone. Even though the use of CFCs has been banned in many countries, CFCs can stay in the atmosphere for 50 to 500 years. This causes many chlorine radicals to be produced, and hence a significant number of ozone molecules are decomposed before the chlorine radicals are able to react with chlorine monoxide to form dichlorine monoxide. References Chlorine oxides Free radicals Diatomic molecules
Chlorine monoxide
[ "Physics", "Chemistry", "Biology" ]
374
[ "Molecules", "Free radicals", "Senescence", "Biomolecules", "Diatomic molecules", "Matter" ]
9,752,560
https://en.wikipedia.org/wiki/Numerical%20continuation
Numerical continuation is a method of computing approximate solutions of a system of parameterized nonlinear equations, The parameter is usually a real scalar and the solution is an n-vector. For a fixed parameter value , maps Euclidean n-space into itself. Often the original mapping is from a Banach space into itself, and the Euclidean n-space is a finite-dimensional Banach space. A steady state, or fixed point, of a parameterized family of flows or maps are of this form, and by discretizing trajectories of a flow or iterating a map, periodic orbits and heteroclinic orbits can also be posed as a solution of . Other forms In some nonlinear systems, parameters are explicit. In others they are implicit, and the system of nonlinear equations is written where is an n-vector, and its image is an n-1 vector. This formulation, without an explicit parameter space is not usually suitable for the formulations in the following sections, because they refer to parameterized autonomous nonlinear dynamical systems of the form: However, in an algebraic system there is no distinction between unknowns and the parameters. Periodic motions A periodic motion is a closed curve in phase space. That is, for some period , The textbook example of a periodic motion is the undamped pendulum. If the phase space is periodic in one or more coordinates, say , with a vector , then there is a second kind of periodic motions defined by for every integer . The first step in writing an implicit system for a periodic motion is to move the period from the boundary conditions to the ODE: The second step is to add an additional equation, a phase constraint, that can be thought of as determining the period. This is necessary because any solution of the above boundary value problem can be shifted in time by an arbitrary amount (time does not appear in the defining equations—the dynamical system is called autonomous). There are several choices for the phase constraint. If is a known periodic orbit at a parameter value near , then, Poincaré used which states that lies in a plane which is orthogonal to the tangent vector of the closed curve. This plane is called a Poincaré section. For a general problem a better phase constraint is an integral constraint introduced by Eusebius Doedel, which chooses the phase so that the distance between the known and unknown orbits is minimized: Homoclinic and heteroclinic motions Definitions Solution component A solution component of the nonlinear system is a set of points which satisfy and are connected to the initial solution by a path of solutions for which and . Numerical continuation A numerical continuation is an algorithm which takes as input a system of parametrized nonlinear equations and an initial solution , , and produces a set of points on the solution component . Regular point A regular point of is a point at which the Jacobian of is full rank . Near a regular point the solution component is an isolated curve passing through the regular point (the implicit function theorem). In the figure above the point is a regular point. Singular point A singular point of is a point at which the Jacobian of F is not full rank. Near a singular point the solution component may not be an isolated curve passing through the regular point. The local structure is determined by higher derivatives of . In the figure above the point where the two blue curves cross is a singular point. In general solution components are branched curves. The branch points are singular points. 
Finding the solution curves leaving a singular point is called branch switching, and uses techniques from bifurcation theory (singularity theory, catastrophe theory). For finite-dimensional systems (as defined above) the Lyapunov-Schmidt decomposition may be used to produce two systems to which the Implicit Function Theorem applies. The Lyapunov-Schmidt decomposition uses the restriction of the system to the complement of the null space of the Jacobian and the range of the Jacobian. If the columns of the matrix are an orthonormal basis for the null space of and the columns of the matrix are an orthonormal basis for the left null space of , then the system can be rewritten as where is in the complement of the null space of . In the first equation, which is parametrized by the null space of the Jacobian (), the Jacobian with respect to is non-singular. So the implicit function theorem states that there is a mapping such that and . The second equation (with substituted) is called the bifurcation equation (though it may be a system of equations). The bifurcation equation has a Taylor expansion which lacks the constant and linear terms. By scaling the equations and the null space of the Jacobian of the original system a system can be found with non-singular Jacobian. The constant term in the Taylor series of the scaled bifurcation equation is called the algebraic bifurcation equation, and the implicit function theorem applied the bifurcation equations states that for each isolated solution of the algebraic bifurcation equation there is a branch of solutions of the original problem which passes through the singular point. Another type of singular point is a turning point bifurcation, or saddle-node bifurcation, where the direction of the parameter reverses as the curve is followed. The red curve in the figure above illustrates a turning point. Particular algorithms Natural parameter continuation Most methods of solution of nonlinear systems of equations are iterative methods. For a particular parameter value a mapping is repeatedly applied to an initial guess . If the method converges, and is consistent, then in the limit the iteration approaches a solution of . Natural parameter continuation is a very simple adaptation of the iterative solver to a parametrized problem. The solution at one value of is used as the initial guess for the solution at . With sufficiently small the iteration applied to the initial guess should converge. One advantage of natural parameter continuation is that it uses the solution method for the problem as a black box. All that is required is that an initial solution be given (some solvers used to always start at a fixed initial guess). There has been a lot of work in the area of large scale continuation on applying more sophisticated algorithms to black box solvers (see e.g. LOCA). However, natural parameter continuation fails at turning points, where the branch of solutions turns round. So for problems with turning points, a more sophisticated method such as pseudo-arclength continuation must be used (see below). Simplicial or piecewise linear continuation Simplicial Continuation, or Piecewise Linear Continuation (Allgower and Georg) is based on three basic results. The first is {| class="wikitable" |- |If F(x) maps IR^n into IR^(n-1), there is a unique linear interpolant on an (n-1)-dimensional simplex which agrees with the function values at the vertices of the simplex. 
|} The second result is: {| class="wikitable" |- |An (n-1)-dimensional simplex can be tested to determine if the unique linear interpolant takes on the value 0 inside the simplex. |} Please see the article on piecewise linear continuation for details. With these two operations this continuation algorithm is easy to state (although of course an efficient implementation requires a more sophisticated approach. See [B1]). An initial simplex is assumed to be given, from a reference simplicial decomposition of . The initial simplex must have at least one face which contains a zero of the unique linear interpolant on that face. The other faces of the simplex are then tested, and typically there will be one additional face with an interior zero. The initial simplex is then replaced by the simplex which lies across either face containing zero, and the process is repeated. References: Allgower and Georg [B1] provides a crisp, clear description of the algorithm. Pseudo-arclength continuation This method is based on the observation that the "ideal" parameterization of a curve is arclength. Pseudo-arclength is an approximation of the arclength in the tangent space of the curve. The resulting modified natural continuation method makes a step in pseudo-arclength (rather than ). The iterative solver is required to find a point at the given pseudo-arclength, which requires appending an additional constraint (the pseudo-arclength constraint) to the n by n+1 Jacobian. It produces a square Jacobian, and if the stepsize is sufficiently small the modified Jacobian is full rank. Pseudo-arclength continuation was independently developed by Edward Riks and Gerald Wempner for finite element applications in the late 1960s, and published in journals in the early 1970s by H.B. Keller. A detailed account of these early developments is provided in the textbook by M. A. Crisfield: Nonlinear Finite Element Analysis of Solids and Structures, Vol 1: Basic Concepts, Wiley, 1991. Crisfield was one of the most active developers of this class of methods, which are by now standard procedures of commercial nonlinear finite element programs. The algorithm is a predictor-corrector method. The prediction step finds the point (in IR^(n+1) ) which is a step along the tangent vector at the current point. The corrector is usually Newton's method, or some variant, to solve the nonlinear system where is the tangent vector at . The Jacobian of this system is the bordered matrix At regular points, where the unmodified Jacobian is full rank, the tangent vector spans the null space of the top row of this new Jacobian. Appending the tangent vector as the last row can be seen as determining the coefficient of the null vector in the general solution of the Newton system (particular solution plus an arbitrary multiple of the null vector). Gauss–Newton continuation This method is a variant of pseudo-arclength continuation. Instead of using the tangent at the initial point in the arclength constraint, the tangent at the current solution is used. This is equivalent to using the pseudo-inverse of the Jacobian in Newton's method, and allows longer steps to be made. [B17] Continuation in more than one parameter The parameter in the algorithms described above is a real scalar. Most physical and design problems generally have many more than one parameter. Higher-dimensional continuation refers to the case when is a k-vector. The same terminology applies. A regular solution is a solution at which the Jacobian is full rank. 
A singular solution is a solution at which the Jacobian is less than full rank. A regular solution lies on a k-dimensional surface, which can be parameterized by a point in the tangent space (the null space of the Jacobian). This is again a straightforward application of the Implicit Function Theorem. Applications of numerical continuation techniques Numerical continuation techniques have found a great degree of acceptance in the study of chaotic dynamical systems and various other systems which belong to the realm of catastrophe theory. The reason for such usage stems from the fact that various non-linear dynamical systems behave in a deterministic and predictable manner within a range of parameters which are included in the equations of the system. However, for a certain parameter value the system starts behaving chaotically and hence it became necessary to follow the parameter in order to be able to decipher the occurrences of when the system starts being non-predictable, and what exactly (theoretically) makes the system become unstable. Analysis of parameter continuation can lead to more insights about stable/critical point bifurcations. Study of saddle-node, transcritical, pitch-fork, period doubling, Hopf, secondary Hopf (Neimark) bifurcations of stable solutions allows for a theoretical discussion of the circumstances and occurrences which arise at the critical points. Parameter continuation also gives a more dependable system to analyze a dynamical system as it is more stable than more interactive, time-stepped numerical solutions. Especially in cases where the dynamical system is prone to blow-up at certain parameter values (or combination of values for multiple parameters). It is extremely insightful as to the presence of stable solutions (attracting or repelling) in the study of nonlinear differential equations where time stepping in the form of the Crank Nicolson algorithm is extremely time consuming as well as unstable in cases of nonlinear growth of the dependent variables in the system. The study of turbulence is another field where the Numerical Continuation techniques have been used to study the advent of turbulence in a system starting at low Reynolds numbers. Also, research using these techniques has provided the possibility of finding stable manifolds and bifurcations to invariant-tori in the case of the restricted three-body problem in Newtonian gravity and have also given interesting and deep insights into the behaviour of systems such as the Lorenz equations. Software (Under Construction) See also The SIAM Activity Group on Dynamical Systems' list http://www.dynamicalsystems.org/sw/sw/ AUTO: Computation of the solutions of Two Point Boundary Value Problems (TPBVPs) with integral constraints. https://sourceforge.net/projects/auto-07p/ Available on SourceForge. HOMCONT: Computation of homoclinic and heteroclinic orbits. Included in AUTO MATCONT: Matlab toolbox for numerical continuation and bifurcation Available on SourceForge. DDEBIFTOOL: Computation of solutions of Delay Differential Equations. A MATLAB package. Available from K. U. Leuven PyCont: A Python toolbox for numerical continuation and bifurcation. Native Python algorithms for fixed point continuation, sophisticated interface to AUTO for other types of problem. 
Included as part of PyDSTool CANDYS/QA: Available from the Universität Potsdam [A16] MANPAK: Available from Netlib [A15] PDDE-CONT: http://seis.bris.ac.uk/~rs1909/pdde/ multifario: http://multifario.sourceforge.net/ LOCA: https://trilinos.org/packages/nox-and-loca/ DSTool GAIO OSCILL8: Oscill8 is a dynamical systems tool that allows a user to explore high-dimensional parameter space of nonlinear ODEs using bifurcation analytic techniques. Available from SourceForge. MANLAB : Computation of equilibrium, periodic and quasi-periodic solution of differential equations using Fourier series (harmonic balance method) developments of the solution and Taylor series developments (asymptotic numerical method) of the solution branch. Available from LMA Marseille. BifurcationKit.jl : This Julia package aims at performing automatic bifurcation analysis of large dimensional equations where by taking advantage of iterative methods, sparse formulation and specific hardwares (e.g. GPU). Examples This problem, of finding the points which F maps into the origin appears in computer graphics as the problems of drawing contour maps (n=2), or isosurface(n=3). The contour with value h is the set of all solution components of F-h=0 See also Homotopy continuation References Books [B1] "Introduction to Numerical Continuation Methods", Eugene L. Allgower and Kurt Georg, SIAM Classics in Applied Mathematics 45. 2003. [B2] "Numerical Methods for Bifurcations of Dynamical Equilibria", Willy J. F. Govaerts, SIAM 2000. [B3] "Lyapunov-Schmidt Methods in Nonlinear Analysis and Applications", Nikolay Sidorov, Boris Loginov, Aleksandr Sinitsyn, and Michail Falaleev, Kluwer Academic Publishers, 2002. [B4] "Methods of Bifurcation Theory", Shui-Nee Chow and Jack K. Hale, Springer-Verlag 1982. [B5] "Elements of Applied Bifurcation Theory", Yuri A. Kunetsov, Springer-Verlag Applied Mathematical Sciences 112, 1995. [B6] "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields", John Guckenheimer and Philip Holmes, Springer-Verlag Applied Mathematical Sciences 42, 1983. [B7] "Elementary Stability and Bifurcation Theory", Gerard Iooss and Daniel D. Joseph, Springer-Verlag Undergraduate Texts in Mathematics, 1980. [B8] "Singularity Theory and an Introduction to Catastrophe Theory", Yung-Chen Lu, Springer-Verlag, 1976. [B9] "Global Bifurcations and Chaos, Analytic Methods", S. Wiggins, Springer-Verlag Applied Mathematical Sciences 73, 1988. [B10] "Singularities and Groups in Bifurcation Theory, volume I", Martin Golubitsky and David G. Schaeffer, Springer-Verlag Applied Mathematical Sciences 51, 1985. [B11] "Singularities and Groups in Bifurcation Theory, volume II", Martin Golubitsky, Ian Stewart and David G. Schaeffer, Springer-Verlag Applied Mathematical Sciences 69, 1988. [B12] "Solving Polynomial Systems Using Continuation for Engineering and Scientific Problems", Alexander Morgan, Prentice-Hall, Englewood Cliffs, N.J. 1987. [B13] "Pathways to Solutions, Fixed Points and Equilibria", C. B. Garcia and W. I. Zangwill, Prentice-Hall, 1981. [B14] "The Implicit Function Theorem: History, Theory and Applications", Steven G. Krantz and Harold R. Parks, Birkhauser, 2002. [B15] "Nonlinear Functional Analysis", J. T. Schwartz, Gordon and Breach Science Publishers, Notes on Mathematics and its Applications, 1969. [B16] "Topics in Nonlinear Functional Analysis", Louis Nirenberg (notes by Ralph A. Artino), AMS Courant Lecture Notes in Mathematics 6, 1974. 
[B17] "Newton Methods for Nonlinear Problems -- Affine Invariance and Adaptive Algorithms", P. Deuflhard, Series Computational Mathematics 35, Springer, 2006. Journal articles [A1] "An Algorithm for Piecewise Linear Approximation of Implicitly Defined Two-Dimensional Surfaces", Eugene L. Allgower and Stefan Gnutzmann, SIAM Journal on Numerical Analysis, Volume 24, Number 2, 452—469, 1987. [A2] "Simplicial and Continuation Methods for Approximations, Fixed Points and Solutions to Systems of Equations", E. L. Allgower and K. Georg, SIAM Review, Volume 22, 28—85, 1980. [A3] "An Algorithm for Piecewise-Linear Approximation of an Implicitly Defined Manifold", Eugene L. Allgower and Phillip H. Schmidt, SIAM Journal on Numerical Analysis, Volume 22, Number 2, 322—346, April 1985. [A4] "Contour Tracing by Piecewise Linear Approximations", David P. Dobkin, Silvio V. F. Levy, William P. Thurston and Allan R. Wilks, ACM Transactions on Graphics, 9(4) 389-423, 1990. [A5] "Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems", H. B. Keller, in "Applications of Bifurcation Theory", P. Rabinowitz ed., Academic Press, 1977. [A6] "A Locally Parameterized Continuation Process", W.C. Rheinboldt and J.V. Burkardt, ACM Transactions on Mathematical Software, Volume 9, 236—246, 1983. [A7] "Nonlinear Numerics" E. Doedel, International Journal of Bifurcation and Chaos, 7(9):2127-2143, 1997. [A8] "Nonlinear Computation", R. Seydel, International Journal of Bifurcation and Chaos, 7(9):2105-2126, 1997. [A9] "On a Moving Frame Algorithm and the Triangulation of Equilibrium Manifolds", W.C. Rheinboldt, In T. Kuper, R. Seydel, and H. Troger eds. "ISNM79: Bifurcation: Analysis, Algorithms, Applications", pages 256-267. Birkhauser, 1987. [A10] "On the Computation of Multi-Dimensional Solution Manifolds of Parameterized Equations", W.C. Rheinboldt, Numerishe Mathematik, 53, 1988, pages 165-181. [A11] "On the Simplicial Approximation of Implicitly Defined Two-Dimensional Manifolds", M. L. Brodzik and W.C. Rheinboldt, Computers and Mathematics with Applications, 28(9): 9-21, 1994. [A12] "The Computation of Simplicial Approximations of Implicitly Defined p-Manifolds", M. L. Brodzik, Computers and Mathematics with Applications, 36(6):93-113, 1998. [A13] "New Algorithm for Two-Dimensional Numerical Continuation", R. Melville and D. S. Mackey, Computers and Mathematics with Applications, 30(1):31-46, 1995. [A14] "Multiple Parameter Continuation: Computing Implicitly Defined k-manifolds", M. E. Henderson, IJBC 12[3]:451-76, 2003. [A15] "MANPACK: a set of algorithms for computations on implicitly defined manifolds", W. C. Rheinboldt, Comput. Math. Applic. 27 pages 15–9, 1996. [A16] "CANDYS/QA - A Software System For Qualitative Analysis Of Nonlinear Dynamical Systems", Feudel, U. and W. Jansen, Int. J. Bifurcation and Chaos, vol. 2 no. 4, pp. 773–794, World Scientific, 1992. Numerical analysis Dynamical systems
Numerical continuation
[ "Physics", "Mathematics" ]
4,595
[ "Computational mathematics", "Mathematical relations", "Mechanics", "Numerical analysis", "Approximations", "Dynamical systems" ]
9,753,151
https://en.wikipedia.org/wiki/Polished%20plaster
Polished plaster is a term for the finish of some plasters and for the description of new and updated forms of traditional Italian plaster finishes. The term covers a whole range of decorative plaster finishes, from the very highly polished Venetian plaster and Marmorino to the rugged look of textured polished plasters. Polished plaster itself tends to consist of slaked lime, marble dust, and/or marble chips, which give each plaster its distinctive look. A lime-based polished plaster may contain over 40% of marble powder. Polished plaster is mainly used internally, on walls and ceilings, to give a finish that looks like polished marble, travertine, or limestone. Such plasters are usually applied over a primer and basecoat base, from one to four layers. They are finished (burnished) with a specialised steel trowel to a smooth glass-like sheen. Polished plaster is usually sealed with a protective layer of wax. History The history of polished plaster can be traced back to ancient times, with evidence of its use in ancient Egyptian, Roman, and Greek architecture. The technique was highly valued for its durability and aesthetic appeal, and it has continued to be used and refined throughout history. Throughout ancient times, lime was a widely employed material for constructing plaster on both interior and exterior walls. The Greeks, in particular, made a remarkable discovery regarding the production of a special adhesive by subjecting limestone rocks to intense heat within expansive ovens. Nevertheless, this transformative process, which involved converting limestone into calcium oxide, carbon dioxide, and steam, posed significant challenges due to the requirement of extremely high temperatures, reaching approximately 2200 °F. The resulting substance, known as quicklime or lump-lime, was subsequently pulverized into a fine powder and combined with water in a process called "slaking." Through this procedure, a fundamental binding agent called "lime putty" was created and utilized for plastering purposes. The slaked lime, a dense and moist substance, would then be stored in a designated pit for several months, or even years, to ensure complete hydration. Historical accounts suggest that the Romans enforced a regulation stipulating that slaked lime could only be employed if it had aged for a minimum of three years. Venetian plaster, a distinctive type of wall covering, boasts a rich historical legacy that traces back to ancient times, with its origins linked to Pompeii and the subsequent Roman Empire. Vitruvius, who lived around 80-70 B.C., documented the process of manufacturing lime plaster in his renowned work "De architecture" or "Ten Books of Architecture." These methods were further elaborated upon by Pliny the Elder in his book "Natural History," dating back approximately 2,000 years. The Romans referred to the finished product as "Marmoratum Opus," meaning "smooth marble." The rediscovery of Venetian plaster can be attributed to the Renaissance period, characterized by a renewed interest in the ancient techniques of Rome. Palladio, a renowned Renaissance architect, referred to the process as "Pietra d'Istria" since the plaster bore a striking resemblance to natural rocks such as marble, granite, and travertine commonly found near Venice. Palladio's architectural creations, although seemingly constructed from stone, were in fact composed of brick and stucco. 
The plastering process involved the initial application of a coarse layer of plaster known as "arricio," followed by subsequent layers of lime putty blended with powdered marble to achieve a smooth and polished surface. On occasion, pigments were added to the wet plaster to introduce vibrant hues. During the Baroque period, Venetian plaster experienced a decline in popularity, echoing the diminished prominence witnessed after the fall of the Roman Empire. However, in the 1950s, a notable Venetian builder named Carlo Scarpa played a pivotal role in revitalizing the use of Marmorino in contemporary construction.[9] Scarpa not only adhered to the methods outlined by Vitruvius and Palladio but also introduced innovative techniques involving the utilization of animal hides and acrylic resins. Venetian plaster Venetian plaster is a wall and ceiling finish consisting of plaster mixed with marble dust, applied with a spatula or trowel in thin, multiple layers, which are then burnished to create a smooth surface with the illusion of depth and texture. Venetian plaster techniques include marmorino, scagliola, and sgraffito. When left un-burnished, Venetian plaster has a matte finish that is rough and stone-like to the touch. Un-burnished Venetian plaster is also very brittle and damages rather easily. When applied correctly, Venetian plaster can be used to create a highly polished, rock-hard, marble-like finish. Venetian plaster is especially useful on surfaces where marble panels could not be installed easily, and on surfaces that would be too expensive to have carved from real marble such as columns, corbels, and curved walls. Venetian plaster can be tinted, or colored using natural or synthetic colorants. The ability to tint Venetian plaster is especially helpful when a specific color of "marble" is desired, or when a color that does not exist naturally is wanted. Through the application of a top layer of wax sealant, Venetian plaster can also be rendered waterproof. See also Tadelakt Stucco Earthen plaster References Building materials
Polished plaster
[ "Physics", "Engineering" ]
1,097
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
9,754,675
https://en.wikipedia.org/wiki/Microsoft%20PowerToys
Microsoft PowerToys is a set of freeware (later open source) system utilities designed for power users, developed by Microsoft for use on the Windows operating system. These programs add or change features to maximize productivity or add more customization. PowerToys are available for Windows 95, Windows XP, Windows 10, and Windows 11 (and explicitly not compatible with Windows Vista, 7, 8, or 8.1). The PowerToys for Windows 10 and Windows 11 are free and open-source software licensed under the MIT License and hosted on GitHub. PowerToys for Windows 95 PowerToys for Windows 95 was the first version of Microsoft PowerToys and included 15 tools for power users. It included Tweak UI, a system utility for tweaking the more obscure settings in Windows. In most cases, Tweak UI exposed settings that were otherwise only accessible by directly modifying the Windows Registry. Included components The following PowerToys for Windows 95 were available: CabView opened cabinet files like ordinary folders; CDAutoPlay made AutoPlay work on any non-audio CD; Command Prompt Here allowed the user to start a command prompt from any folder in Windows Explorer by right-clicking (native in Windows Vista onwards); Contents Menu allowed users to access folders and files from a context menu without having to open their folders; Desktop Menu allowed users to open items on the desktop from a menu on the Taskbar; Explore From Here enabled users to open a Windows Explorer view from any folder so that the folder acted as the root-level folder; FindX added drag-and-drop capabilities to the Find (later called Search) menu; FlexiCD allowed users to play an audio CD from the Taskbar; Quick Res allowed users to quickly change the screen resolution; Round Clock added an analog round clock without a square window; Send To X consisted of Shell extensions which added several commonly accessed locations such as clipboard, desktop, command-line or any folder to the Send To context menu in Explorer; Shortcut Target Menu allowed users to access the target file a shortcut is pointing to from the context menu or directly cut, copy, delete the target, create shortcut to the target or view its properties; Telephony Location Selector allowed mobile computer users to change their dialling location from the Taskbar; TweakUI allowed the user to customize the more obscure settings of the operating system's UI; Xmouse 1.2 made the window focus follow the mouse without requiring the user to click the window to make it active. PowerToys for Windows 95 were developed by the Windows Shell Development Team. Some of the tools work on later versions of Windows up to Windows XP, but others may interfere with newer built-in features on Windows 98, ME, and XP. Windows 95 Kernel Toys After the success of the Windows 95 PowerToys, the Windows Kernel Development Team released another set of tools for power users called Windows 95 Kernel Toys. 
Six tools were included in this package: MS-DOS Mode Configuration Wizard Customization Tool allowed users to configure Windows startup files without having to manually edit CONFIG.SYS or AUTOEXEC.BAT; Keyboard Remap reassigned functions to keys on the keyboard; Logo Key Control configured MS-DOS games so that Windows would ignore the Windows logo key while games were running; Conventional Memory Tracker to track and break down the amount of memory being allocated by virtual device drivers; Windows Process Watcher (WinTop) monitored how much of CPU resources were taken by individual programs; Time Zone Editor enabled the user to create and edit time zone entries for the Date/Time Control Panel applet. According to Raymond Chen, he wrote all of the Kernel Toys except for the Time Zone Editor, which came from the Windows NT Resource Kit. PowerToys for Windows XP PowerToys for Windows XP was the second version of the PowerToys set and brought major changes from the Windows 95 version. The tools in this set were available as separate downloads rather than in a single package. Included components , the following PowerToys for Windows XP were available: Alt-Tab Replacement Task Switcher replaced the simpler Alt-Tab switcher with a more visual one which shows live window previews. CD Slide Show Generator generated a slideshow from photos burned to a CD. ClearType Tuner allowed customizing ClearType settings to make it easier to read text on the screen. Color Control Panel Applet allowed managing color profiles, changing color profile associations for devices, viewing detailed properties for color profiles (including a 3D rendering of the color space gamut). HTML Slide Show Wizard generated an HTML slideshow presentation. Image Resizer allowed right-clicking on multiple image files inside Windows Explorer to batch resize them. Open Command Window Here allowed starting a command prompt from any folder in Windows Explorer by right-clicking. Power Calculator was a more advanced graphical calculator application than the built-in Windows Calculator; it could evaluate more complex expressions, draw a Cartesian or polar graph of a function or convert units of measurements. Power Calculator could store and reuse pre-defined functions, of any arity. For example, a function could be set by cube(x) = x * x * x, and later it could be used in an expression like 5 + cube(4). It did not evaluate every time an operator was entered. Rather, the entire expression must be entered for calculation. In the Numeric mode, it presented a visual keypad, in all other modes the expression had to be typed in. A scrolling text area maintained a history of all calculations. The advanced view allowed declaring and graphing functions, along with a list of all the saved functions. A flyout window provided the option of choosing either a Cartesian co-ordinate system or polar co-ordinates. It could also save a list of variables for use in expression. Unit conversions of the following types were supported: length, mass, time, velocity, and temperature. PowerToy Calc had support for typing calculations using Reverse Polish Notation (RPN). It could calculate up to 500 precision levels beyond the decimal point and supported complex numbers. RAW Image Thumbnailer and Viewer provided thumbnails, previews, printing, and metadata display for RAW images from within Windows Explorer. SyncToy allowed synchronizing files and folders. Taskbar Magnifier magnified part of the screen from the taskbar. 
Tweak UI customized Windows XP's user interface and advanced settings. Virtual Desktop Manager allowed switching between four virtual desktops from the taskbar. Webcam Timershot took pictures at specified time intervals from a webcam. Discontinued components The following PowerToys for Windows XP were discontinued: Background Switcher added a slideshow tab to Display properties and allowed automatically changing the desktop wallpaper periodically. Although Background Switcher is retired, a replacement, Wallpaper Changer, is available from Microsoft. Internet Explorer Find Bar added a toolbar to Internet Explorer that allowed users to search for keywords in a web page. This feature is natively supported by Internet Explorer 8. ISO Image Burner burned ISO images to an optical disc recorder. This feature is integrated into Windows 7. In addition, Windows Server 2003 Resource Kit includes two similar tools (CDBurn.exe and DVDBurn.exe). Although Microsoft has retired this Power Toy, it is available as the unauthorized ISO Recorder Power Toy. Shell Audio Player was a Windows Media Player-based compact player which allowed playing music from the taskbar. Super-Fast User Switcher allowed Fast User Switching or logging on to a different account using the Windows key+Q combination without requiring the user to switch to the logon screen. Virtual CD-ROM Control Panel could mount an ISO image as a virtual drive. It was designed for Windows XP, but it also worked with Windows Server 2003. It was a free alternative to software such as Alcohol 120%. PowerToys for Windows 10 and Windows 11 Windows 10 received PowerToys four years after its release. On May 8, 2019, Microsoft relaunched PowerToys and made them open-source on GitHub. The first preview release was available in September 2019, which included FancyZones and the Windows key shortcut guide. Included components PowerToys for Windows 10 comes with the following utilities: Always On Top adds the ability to quickly pin windows on top of all other windows with a quick keyboard shortcut. PowerToys Awake adds the ability to keep a computer awake without managing its power & sleep settings. Color Picker adds a tool for color identification (in HEX, RGB, CMYK, HSL and HSV, among others). FancyZones adds a window manager that makes it easier for users to create and use complex window layouts. File Explorer (Preview Panes) adds SVG, Markdown and PDF previews to File Explorer. File Locksmith adds the ability to check which files are in use and by which processes. Host File Editor adds the ability to edit the 'Hosts' file in a convenient way. Image Resizer adds a context menu to File Explorer for resizing images. Keyboard Manager adds options for remapping keys and shortcuts. Mouse utilities adds tools that enhance mouse and cursor functionality on Windows. Currently, the collection consists of Find My Mouse, which focuses on the cursor's position; Mouse Highlighter, which indicates mouse clicks on the screen; and Mouse pointer Crosshairs, which displays crosshairs centered on the mouse pointer. Mouse Without Borders adds a tool which allows a user to move their cursor across multiple devices. Paste as Plain Text adds a customizable keyboard shortcut to paste text stripped of text formatting. PowerRename adds an option for users to rename files using search and replace or regular expression in File Explorer. PowerToys Run adds a Spotlight-like tool that allows users to search for folders, files, applications, and other items. 
Quick Accent adds the ability to type accented characters in an alternative way. Registry Preview adds a tool to preview, compare and edit registry file contents before writing to the Windows Registry. Screen Ruler adds the ability to measure pixel distances on-screen with image edge detection. Shortcut Guide adds a full screen overlay that allows the user to view the Windows key shortcuts available in the current window. Text Extractor adds the ability to copy text from anywhere on the screen. Video Conference Mute adds tools to disable/enable the camera and microphone. Compatibility with Windows Vista, 7, and 8 PowerToys did not receive any releases supporting Windows Vista. Making equivalent calls to various Windows APIs was still possible, though, enabling third-party applications to be implemented with the same functionality, or a subset of it. Windows 7, Windows 8 and Windows 8.1 likewise received no official support. Not counting the period of Windows Vista's development, PowerToys went without updates for over 12 years before being re-released as open-source software for Windows 10. PowerToys for other Microsoft products Microsoft also released PowerToys for Windows XP Tablet PC Edition and Windows XP Media Center Edition. A set of PowerToys for Windows Media Player was released as part of the Windows Media Player Bonus Pack (for Windows XP), consisting of five tools to "provide a variety of enhancements to Windows Media Player." Finally, Microsoft has also released PowerToys for Windows Mobile, Visual Studio and OneNote. See also Resource Kit References External links Microsoft PowerToys for Windows 95 Windows XP PowerToys and Add-ins OneNote Testing (Official OneNote blog) OneNote PowerToys Windows PowerToys current versions of PowerToys for Windows Official source code repository for the Windows 10 version on GitHub Free and open-source software PowerToys Software using the MIT license Utilities for Windows Microsoft Microsoft Windows Windows-only free software
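The Reverse Polish Notation support mentioned for Power Calculator above can be illustrated with a short, self-contained sketch. This is only a minimal stack-based evaluator written for illustration; it is not Power Calculator's actual code, and the function name eval_rpn is invented for this example.

```python
# Minimal sketch of stack-based Reverse Polish Notation (RPN) evaluation,
# the entry style Power Calculator supported. Illustrative only.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_rpn(expression):
    """Evaluate a whitespace-separated RPN expression, e.g. '5 4 4 4 * * +'."""
    stack = []
    for token in expression.split():
        if token in OPS:
            right = stack.pop()
            left = stack.pop()
            stack.append(OPS[token](left, right))
        else:
            stack.append(float(token))
    if len(stack) != 1:
        raise ValueError("malformed RPN expression")
    return stack[0]

# The infix example 5 + cube(4) from the article, written in RPN:
print(eval_rpn("5 4 4 4 * * +"))  # 69.0
```

As in the description above, nothing is evaluated until the whole expression has been entered; operands simply accumulate on a stack until an operator consumes them.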
Microsoft PowerToys
[ "Technology" ]
2,428
[ "Computing platforms", "Microsoft Windows" ]
373,278
https://en.wikipedia.org/wiki/Lie%20group%20decomposition
In mathematics, Lie group decompositions are used to analyse the structure of Lie groups and associated objects, by showing how they are built up out of subgroups. They are essential technical tools in the representation theory of Lie groups and Lie algebras; they can also be used to study the algebraic topology of such groups and associated homogeneous spaces. Since the use of Lie group methods became one of the standard techniques in twentieth-century mathematics, many phenomena can now be referred back to decompositions. The same ideas are often applied to Lie groups, Lie algebras, algebraic groups and p-adic number analogues, making it harder to summarise the facts into a unified theory. List of decompositions The Jordan–Chevalley decomposition of an element of an algebraic group as a product of semisimple and unipotent elements The Bruhat decomposition of a semisimple algebraic group into double cosets of a Borel subgroup can be regarded as a generalization of the principle of Gauss–Jordan elimination, which generically writes a matrix as the product of an upper triangular matrix with a lower triangular matrix, but with exceptional cases. It is related to the Schubert cell decomposition of Grassmannians: see Weyl group for more details. The Cartan decomposition writes a semisimple real Lie algebra as the sum of eigenspaces of a Cartan involution. The Iwasawa decomposition of a semisimple group as the product of compact, abelian, and nilpotent subgroups generalises the way a square real matrix can be written as a product of an orthogonal matrix and an upper triangular matrix (a consequence of Gram–Schmidt orthogonalization). The Langlands decomposition writes a parabolic subgroup of a Lie group as the product of semisimple, abelian, and nilpotent subgroups. The Levi decomposition writes a finite-dimensional Lie algebra as a semidirect product of a solvable ideal and a semisimple subalgebra. The LU decomposition of a dense subset in the general linear group. It can be considered as a special case of the Bruhat decomposition. The Birkhoff decomposition, a special case of the Bruhat decomposition for affine groups. A numerical sketch of the matrix analogues mentioned above (QR for the Iwasawa decomposition, LU for the generic Bruhat cell) is given below. References Lie groups factorization
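The matrix analogues named in the list above can be made concrete with standard numerical factorizations: for an invertible real matrix, QR (orthogonal times upper triangular, via Gram–Schmidt) plays the role of an Iwasawa-style decomposition of GL(n, R), and pivoted LU corresponds to the generic Bruhat cell. The following sketch assumes NumPy and SciPy are available; it is purely illustrative and is not a general decomposition algorithm for arbitrary Lie groups.

```python
# Sketch: matrix factorizations mirroring two of the decompositions listed above.
# QR (orthogonal x upper triangular) is the GL(n, R) analogue of the Iwasawa decomposition;
# pivoted LU (A = P L U) is related to the generic (big) Bruhat cell.
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Iwasawa-like: A = Q R with Q orthogonal and R upper triangular.
# Fixing signs so R has a positive diagonal, and then splitting R into its diagonal
# and unit upper-triangular parts, recovers the full K A N shape of the decomposition.
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))   # Q lies in the compact group O(3)

# Bruhat/LU-like: A = P L U with P a permutation, L unit lower and U upper triangular.
P, L, U = lu(A)
assert np.allclose(P @ L @ U, A)

print("R =\n", R)
print("U =\n", U)
```

Generically a matrix admits a plain A = L U factorization and lies in the big Bruhat cell; the pivoted routine used here may still permute rows for numerical stability, and the remaining Bruhat cells account for the "exceptional cases" of Gauss–Jordan elimination mentioned above.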
Lie group decomposition
[ "Mathematics" ]
459
[ "Lie groups", "Mathematical structures", "Arithmetic", "Algebraic structures", "Factorization" ]
373,352
https://en.wikipedia.org/wiki/Galactic%20Center
The Galactic Center is the barycenter of the Milky Way and a corresponding point on the rotational axis of the galaxy. Its central massive object is a supermassive black hole of about 4 million solar masses, which is called Sagittarius A*, a compact radio source which is almost exactly at the galactic rotational center. The Galactic Center is approximately 26,000 light-years (8 kiloparsecs) away from Earth in the direction of the constellations Sagittarius, Ophiuchus, and Scorpius, where the Milky Way appears brightest, visually close to the Butterfly Cluster (M6) or the star Shaula, south of the Pipe Nebula. There are around 10 million stars within one parsec of the Galactic Center, dominated by red giants, with a significant population of massive supergiants and Wolf–Rayet stars from star formation in the region around 1 million years ago. The core stars are a small part within the much wider galactic bulge. Discovery Because of interstellar dust along the line of sight, the Galactic Center cannot be studied at visible, ultraviolet, or soft (low-energy) X-ray wavelengths. The available information about the Galactic Center comes from observations at gamma ray, hard (high-energy) X-ray, infrared, submillimetre, and radio wavelengths. Immanuel Kant stated in Universal Natural History and Theory of the Heavens (1755) that a large star was at the center of the Milky Way Galaxy, and that Sirius might be the star. Harlow Shapley stated in 1918 that the halo of globular clusters surrounding the Milky Way seemed to be centered on the star swarms in the constellation of Sagittarius, but the dark molecular clouds in the area blocked the view for optical astronomy. In the early 1940s Walter Baade at Mount Wilson Observatory took advantage of wartime blackout conditions in nearby Los Angeles to conduct a search for the center with the Hooker Telescope. He found that near the star Alnasl (Gamma Sagittarii), there is a one-degree-wide void in the interstellar dust lanes, which provides a relatively clear view of the swarms of stars around the nucleus of the Milky Way Galaxy. This gap has been known as Baade's Window ever since. At Dover Heights in Sydney, Australia, a team of radio astronomers from the Division of Radiophysics at the CSIRO, led by Joseph Lade Pawsey, used "sea interferometry" to discover some of the first interstellar and intergalactic radio sources, including Taurus A, Virgo A and Centaurus A. By 1954 they had built a fixed dish antenna and used it to make a detailed study of an extended, extremely powerful belt of radio emission that was detected in Sagittarius. They named an intense point-source near the center of this belt Sagittarius A, and realised that it was located at the very center of the Galaxy, despite being some 32 degrees south-west of the conjectured galactic center of the time. In 1958 the International Astronomical Union (IAU) decided to adopt the position of Sagittarius A as the true zero coordinate point for the system of galactic latitude and longitude. In the equatorial coordinate system the location is: RA , Dec (J2000 epoch). In July 2022, astronomers reported the discovery of massive amounts of prebiotic molecules, including some associated with RNA, in the Galactic Center of the Milky Way Galaxy. Distance to the Galactic Center The exact distance between the Solar System and the Galactic Center is not certain, although estimates since 2000 have remained within a narrow range around 8 kiloparsecs (roughly 26,000 light-years).
The latest estimates from geometric-based methods and standard candles cluster around 8 kpc, roughly 26,000 light-years; one representative geometric determination gives a distance of about 7.94 kpc. An accurate determination of the distance to the Galactic Center as established from variable stars (e.g. RR Lyrae variables) or standard candles (e.g. red-clump stars) is hindered by numerous effects, which include: an ambiguous reddening law; a bias for smaller values of the distance to the Galactic Center because of a preferential sampling of stars toward the near side of the Galactic bulge owing to interstellar extinction; and an uncertainty in characterizing how a mean distance to a group of variable stars found in the direction of the Galactic bulge relates to the distance to the Galactic Center. The nature of the Milky Way's bar, which extends across the Galactic Center, is also actively debated, with estimates for its half-length and orientation spanning between 1–5 kpc (a short or a long bar) and 10–50°. Certain authors advocate that the Milky Way features two distinct bars, one nestled within the other. The bar is delineated by red-clump stars (see also red giant); however, RR Lyrae variables do not trace a prominent Galactic bar. The bar may be surrounded by a ring called the 5-kpc ring that contains a large fraction of the molecular hydrogen present in the Milky Way, and most of the Milky Way's star formation activity. Viewed from the Andromeda Galaxy, it would be the brightest feature of the Milky Way. Supermassive black hole The complex astronomical radio source Sagittarius A appears to be located almost exactly at the Galactic Center and contains an intense compact radio source, Sagittarius A*, which coincides with a supermassive black hole at the center of the Milky Way. Accretion of gas onto the black hole, probably involving an accretion disk around it, would release energy to power the radio source, itself much larger than the black hole. A study in 2008 which linked radio telescopes in Hawaii, Arizona and California (Very-long-baseline interferometry) measured the diameter of Sagittarius A* to be 44 million kilometers (0.3 AU). For comparison, the radius of Earth's orbit around the Sun is about 150 million kilometers (1.0 AU), whereas the distance of Mercury from the Sun at closest approach (perihelion) is 46 million kilometers (0.3 AU). Thus, the diameter of the radio source is slightly less than the distance from Mercury to the Sun. (A back-of-the-envelope comparison of this size with the black hole's Schwarzschild radius is sketched below.) Scientists at the Max Planck Institute for Extraterrestrial Physics in Germany using Chilean telescopes have confirmed the existence of a supermassive black hole at the Galactic Center, on the order of 4.3 million solar masses. Later studies have estimated a mass of 3.7 million or 4.1 million solar masses. On 5 January 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers. Gamma- and X-ray emitting Fermi bubbles In November 2010, it was announced that two large elliptical lobe structures of energetic plasma, termed bubbles, which emit gamma- and X-rays, were detected astride the Milky Way galaxy's core. Termed Fermi or eRosita bubbles, they extend up to about 25,000 light years above and below the Galactic Center.
The galaxy's diffuse gamma-ray fog hampered prior observations, but the discovery team led by D. Finkbeiner, building on research by G. Dobler, worked around this problem. The 2014 Bruno Rossi Prize went to Tracy Slatyer, Douglas Finkbeiner, and Meng Su "for their discovery, in gamma rays, of the large unanticipated Galactic structure called the Fermi bubbles". The origin of the bubbles is being researched. The bubbles are connected and seemingly coupled, via energy transport, to the galactic core by columnar structures of energetic plasma termed chimneys. In 2020, for the first time, the lobes were seen in visible light and optical measurements were made. By 2022, detailed computer simulations further confirmed that the bubbles were caused by the Sagittarius A* black hole. Stellar population The central cubic parsec around Sagittarius A* contains around 10 million stars. Although most of them are old red giant stars, the Galactic Center is also rich in massive stars. More than 100 OB and Wolf–Rayet stars have been identified there so far. They seem to have all been formed in a single star formation event a few million years ago. The existence of these relatively young stars was a surprise to experts, who expected the tidal forces from the central black hole to prevent their formation. This paradox of youth is even stronger for stars that are on very tight orbits around Sagittarius A*, such as S2 and S0-102. The scenarios invoked to explain this formation involve either star formation in a massive star cluster offset from the Galactic Center that would have migrated to its current location once formed, or star formation within a massive, compact gas accretion disk around the central black-hole. Current evidence favors the latter theory, as formation through a large accretion disk is more likely to lead to the observed discrete edge of the young stellar cluster at roughly 0.5 parsec. Most of these 100 young, massive stars seem to be concentrated within one or two disks, rather than randomly distributed within the central parsec. This observation however does not allow definite conclusions to be drawn at this point. Star formation does not seem to be occurring currently at the Galactic Center, although the Circumnuclear Disk of molecular gas that orbits the Galactic Center at two parsecs seems a fairly favorable site for star formation. Work presented in 2002 by Antony Stark and Chris Martin mapping the gas density in a 400-light-year region around the Galactic Center has revealed an accumulating ring with a mass several million times that of the Sun and near the critical density for star formation. They predict that in approximately 200 million years, there will be an episode of starburst in the Galactic Center, with many stars forming rapidly and undergoing supernovae at a hundred times the current rate. This starburst may also be accompanied by the formation of galactic relativistic jets, as matter falls into the central black hole. It is thought that the Milky Way undergoes a starburst of this sort every 500 million years. In addition to the paradox of youth, there is a "conundrum of old age" associated with the distribution of the old stars at the Galactic Center. Theoretical models had predicted that the old stars—which far outnumber young stars—should have a steeply-rising density near the black hole, a so-called Bahcall–Wolf cusp. 
Instead, it was discovered in 2009 that the density of the old stars peaks at a distance of roughly 0.5 parsec from Sgr A*, then falls inward: instead of a dense cluster, there is a "hole", or core, around the black hole. Several suggestions have been put forward to explain this puzzling observation, but none is completely satisfactory. For instance, although the black hole would eat stars near it, creating a region of low density, this region would be much smaller than a parsec. Because the observed stars are a fraction of the total number, it is theoretically possible that the overall stellar distribution is different from what is observed, although no plausible models of this sort have been proposed yet. Gallery In May 2021, NASA published new images of the Galactic Center, based on surveys from Chandra X-ray Observatory and other telescopes. Images are about 2.2 degrees (1,000 light years) across and 4.2 degrees (2,000 light years) long. See also Notes and references Further reading Press External links UCLA Galactic Center Group Max Planck Institute for Extraterrestrial Physics Galactic Center Group The Galactic Supermassive Black Hole The Black Hole at the Center of the Milky Way The dark heart of the Milky Way Animation showing orbits of stars near the center of the Milky Way galaxy Zooming in on the center of the Milky Way Dramatic Increase in Supernova Explosions Looms APOD: Journey to the Center of the Galaxy A Galactic Cloud of Antimatter Fast Stars Near the Galactic Center At the Center of the Milky Way Galactic Center Starscape Annotated Galactic Center A simulation of the stars orbiting the Milky Way's central massive black hole Galactic Center on arxiv.org Galactic Center Geometric centers Articles containing video clips
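To put the figures above in perspective, the roughly 44-million-kilometre diameter measured for the Sagittarius A* radio source can be compared with the Schwarzschild radius implied by a black hole of about 4 million solar masses. The following sketch only rearranges numbers already quoted in the article using standard physical constants; it is a rough check, not a new measurement, and the mass is the approximate value cited above.

```python
# Rough check: Schwarzschild radius of Sagittarius A* versus the ~44 million km
# diameter measured for the compact radio source.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

M = 4.0e6 * M_sun                 # ~4 million solar masses (approximate value from the text)
r_s = 2 * G * M / c**2            # Schwarzschild radius in metres

print(f"Schwarzschild radius   ~ {r_s / 1e9:.1f} million km")       # ~11.8 million km
print(f"Schwarzschild diameter ~ {2 * r_s / 1e9:.1f} million km")   # ~23.6 million km
```

The measured radio-source diameter of about 44 million km is therefore only roughly twice the event-horizon diameter, consistent with the emission arising very close to the black hole.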
Galactic Center
[ "Physics", "Mathematics" ]
2,553
[ "Point (geometry)", "Geometric centers", "Symmetry" ]
374,215
https://en.wikipedia.org/wiki/Programmed%20cell%20death
Programmed cell death (PCD; sometimes referred to as cellular suicide) is the death of a cell as a result of events inside of a cell, such as apoptosis or autophagy. PCD is carried out in a biological process, which usually confers advantage during an organism's lifecycle. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose; the result is that the digits are separate. PCD serves fundamental functions during both plant and animal tissue development. Apoptosis and autophagy are both forms of programmed cell death. Necrosis is the death of a cell caused by external factors such as trauma or infection and occurs in several different forms. Necrosis was long seen as a non-physiological process that occurs as a result of infection or injury, but in the 2000s, a form of programmed necrosis, called necroptosis, was recognized as an alternative form of programmed cell death. It is hypothesized that necroptosis can serve as a cell-death backup to apoptosis when the apoptosis signaling is blocked by endogenous or exogenous factors such as viruses or mutations. Most recently, other types of regulated necrosis have been discovered as well, which share several signaling events with necroptosis and apoptosis. History The concept of "programmed cell-death" was used by Lockshin & Williams in 1964 in relation to insect tissue development, around eight years before "apoptosis" was coined. The term PCD has, however, been a source of confusion and Durand and Ramsey have developed the concept by providing mechanistic and evolutionary definitions. PCD has become the general terms that refers to all the different types of cell death that have a genetic component. The first insight into the mechanism came from studying BCL2, the product of a putative oncogene activated by chromosome translocations often found in follicular lymphoma. Unlike other cancer genes, which promote cancer by stimulating cell proliferation, BCL2 promoted cancer by stopping lymphoma cells from being able to kill themselves. PCD has been the subject of increasing attention and research efforts. This trend has been highlighted with the award of the 2002 Nobel Prize in Physiology or Medicine to Sydney Brenner (United Kingdom), H. Robert Horvitz (US) and John E. Sulston (UK). Types Apoptosis or Type I cell-death. Autophagic or Type II cell-death. (Cytoplasmic: characterized by the formation of large vacuoles that eat away organelles in a specific sequence prior to the destruction of the nucleus.) Apoptosis Apoptosis is the process of programmed cell death (PCD) that may occur in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation. It is now thought that- in a developmental context- cells are induced to positively commit suicide whilst in a homeostatic context; the absence of certain survival factors may provide the impetus for suicide. There appears to be some variation in the morphology and indeed the biochemistry of these suicide pathways; some treading the path of "apoptosis", others following a more generalized pathway to deletion, but both usually being genetically and synthetically motivated. 
There is some evidence that certain symptoms of "apoptosis" such as endonuclease activation can be spuriously induced without engaging a genetic cascade; however, presumably true apoptosis and programmed cell death must be genetically mediated. It is also becoming clear that mitosis and apoptosis are toggled or linked in some way and that the balance achieved depends on signals received from appropriate growth or survival factors. Extrinsic Vs. Intrinsic Pathways There are two different potential pathways that may be followed when apoptosis is needed: the extrinsic pathway and the intrinsic pathway. Both pathways involve the use of caspases, which are crucial to cell death. Extrinsic Pathway The extrinsic pathway involves a specific receptor-ligand interaction. Either the FAS ligand binds to the FAS receptor or the TNF-alpha ligand binds to the TNF receptor. In both situations an initiator caspase is activated. The extrinsic pathway can therefore be triggered in two ways: through Fas ligand or TNF-alpha binding, or through a cytotoxic T cell. The cytotoxic T cell can attach itself to the target cell and release perforin and granzyme B. Perforin creates pores in the target cell membrane, allowing granzyme B to enter the cell, where it activates the caspase cascade. This initiator caspase activity may cause the cleaving of inactive caspase 3, causing it to become cleaved caspase 3. This is the final molecule needed to trigger cell death. Intrinsic Pathway The intrinsic pathway is caused by cell damage such as DNA damage or UV exposure. This pathway takes place in the mitochondria and is mediated by sensor proteins of the Bcl-2 family and two proteins called BAX and BAK. These proteins are found in the majority of higher mammals, and they are able to pierce the mitochondrial outer membrane, making them an integral part of mediating cell death by apoptosis. They do this by orchestrating the formation of pores within the membrane, which is essential to the release of cytochrome c. However, cytochrome c is only released if the mitochondrial membrane is compromised. Once cytochrome c is detected, the apoptosome complex is formed. This complex activates the executioner caspase, which causes cell death. This killing of cells may be essential, as it prevents cellular overgrowth, which can result in diseases such as cancer. Two other proteins worth mentioning inhibit the release of cytochrome c from the mitochondria: Bcl-2 and Bcl-xl are anti-apoptotic and therefore prevent cell death. A mutation can occur that causes overactivity of Bcl-2: a translocation between chromosomes 14 and 18. This overactivity can result in the development of follicular lymphoma. Autophagy Macroautophagy, often referred to as autophagy, is a catabolic process that results in the autophagosomic-lysosomal degradation of bulk cytoplasmic contents, abnormal protein aggregates, and excess or damaged organelles. Autophagy is generally activated by conditions of nutrient deprivation but has also been associated with physiological as well as pathological processes such as development, differentiation, neurodegenerative diseases, stress, infection and cancer. Mechanism A critical regulator of autophagy induction is the kinase mTOR, which, when activated, suppresses autophagy and, when not activated, promotes it.
Three related serine/threonine kinases, UNC-51-like kinase -1, -2, and -3 (ULK1, ULK2, UKL3), which play a similar role as the yeast Atg1, act downstream of the mTOR complex. ULK1 and ULK2 form a large complex with the mammalian homolog of an autophagy-related (Atg) gene product (mAtg13) and the scaffold protein FIP200. Class III PI3K complex, containing hVps34, Beclin-1, p150 and Atg14-like protein or ultraviolet irradiation resistance-associated gene (UVRAG), is required for the induction of autophagy. The ATG genes control the autophagosome formation through ATG12-ATG5 and LC3-II (ATG8-II) complexes. ATG12 is conjugated to ATG5 in a ubiquitin-like reaction that requires ATG7 and ATG10. The Atg12–Atg5 conjugate then interacts non-covalently with ATG16 to form a large complex. LC3/ATG8 is cleaved at its C terminus by ATG4 protease to generate the cytosolic LC3-I. LC3-I is conjugated to phosphatidylethanolamine (PE) also in a ubiquitin-like reaction that requires Atg7 and Atg3. The lipidated form of LC3, known as LC3-II, is attached to the autophagosome membrane. Autophagy and apoptosis are connected both positively and negatively, and extensive crosstalk exists between the two. During nutrient deficiency, autophagy functions as a pro-survival mechanism, however, excessive autophagy may lead to cell death, a process morphologically distinct from apoptosis. Several pro-apoptotic signals, such as TNF, TRAIL, and FADD, also induce autophagy. Additionally, Bcl-2 inhibits Beclin-1-dependent autophagy, thereby functioning both as a pro-survival and as an anti-autophagic regulator. Other types Besides the above two types of PCD, other pathways have been discovered. Called "non-apoptotic programmed cell-death" (or "caspase-independent programmed cell-death" or "necroptosis"), these alternative routes to death are as efficient as apoptosis and can function as either backup mechanisms or the main type of PCD. Other forms of programmed cell death include anoikis, almost identical to apoptosis except in its induction; cornification, a form of cell death exclusive to the epidermis; excitotoxicity; ferroptosis, an iron-dependent form of cell death and Wallerian degeneration. Necroptosis is a programmed form of necrosis, or inflammatory cell death. Conventionally, necrosis is associated with unprogrammed cell death resulting from cellular damage or infiltration by pathogens, in contrast to orderly, programmed cell death via apoptosis. Nemosis is another programmed form of necrosis that takes place in fibroblasts. Eryptosis is a form of suicidal erythrocyte death. Aponecrosis is a hybrid of apoptosis and necrosis and refers to an incomplete apoptotic process that is completed by necrosis. NETosis is the process of cell-death generated by neutrophils, resulting in NETs. Paraptosis is another type of nonapoptotic cell death that is mediated by MAPK through the activation of IGF-1. It's characterized by the intracellular formation of vacuoles and swelling of mitochondria. Pyroptosis, an inflammatory type of cell death, is uniquely mediated by caspase 1, an enzyme not involved in apoptosis, in response to infection by certain microorganisms. Plant cells undergo particular processes of PCD similar to autophagic cell death. However, some common features of PCD are highly conserved in both plants and metazoa. Atrophic factors An atrophic factor is a force that causes a cell to die. 
Only natural forces on the cell are considered to be atrophic factors, whereas, for example, agents of mechanical or chemical abuse or lysis of the cell are considered not to be atrophic factors. Common types of atrophic factors are: Decreased workload Loss of innervation Diminished blood supply Inadequate nutrition Loss of endocrine stimulation Senility Compression Role in the development of the nervous system The initial expansion of the developing nervous system is counterbalanced by the removal of neurons and their processes. During the development of the nervous system almost 50% of developing neurons are naturally removed by programmed cell death (PCD). PCD in the nervous system was first recognized in 1896 by John Beard. Since then several theories were proposed to understand its biological significance during neural development. Role in neural development PCD in the developing nervous system has been observed in proliferating as well as post-mitotic cells. One theory suggests that PCD is an adaptive mechanism to regulate the number of progenitor cells. In humans, PCD in progenitor cells starts at gestational week 7 and remains until the first trimester. This process of cell death has been identified in the germinal areas of the cerebral cortex, cerebellum, thalamus, brainstem, and spinal cord among other regions. At gestational weeks 19–23, PCD is observed in post-mitotic cells. The prevailing theory explaining this observation is the neurotrophic theory which states that PCD is required to optimize the connection between neurons and their afferent inputs and efferent targets. Another theory proposes that developmental PCD in the nervous system occurs in order to correct for errors in neurons that have migrated ectopically, innervated incorrect targets, or have axons that have gone awry during path finding. It is possible that PCD during the development of the nervous system serves different functions determined by the developmental stage, cell type, and even species. The neurotrophic theory The neurotrophic theory is the leading hypothesis used to explain the role of programmed cell death in the developing nervous system. It postulates that in order to ensure optimal innervation of targets, a surplus of neurons is first produced which then compete for limited quantities of protective neurotrophic factors and only a fraction survive while others die by programmed cell death. Furthermore, the theory states that predetermined factors regulate the amount of neurons that survive and the size of the innervating neuronal population directly correlates to the influence of their target field. The underlying idea that target cells secrete attractive or inducing factors and that their growth cones have a chemotactic sensitivity was first put forth by Santiago Ramon y Cajal in 1892. Cajal presented the idea as an explanation for the "intelligent force" axons appear to take when finding their target but admitted that he had no empirical data. The theory gained more attraction when experimental manipulation of axon targets yielded death of all innervating neurons. This developed the concept of target derived regulation which became the main tenet in the neurotrophic theory. Experiments that further supported this theory led to the identification of the first neurotrophic factor, nerve growth factor (NGF). Peripheral versus central nervous system Different mechanisms regulate PCD in the peripheral nervous system (PNS) versus the central nervous system (CNS). 
In the PNS, innervation of the target is proportional to the amount of the target-released neurotrophic factors NGF and NT3. Expression of neurotrophin receptors, TrkA and TrkC, is sufficient to induce apoptosis in the absence of their ligands. Therefore, it is speculated that PCD in the PNS is dependent on the release of neurotrophic factors and thus follows the concept of the neurotrophic theory. Programmed cell death in the CNS is not dependent on external growth factors but instead relies on intrinsically derived cues. In the neocortex, a 4:1 ratio of excitatory to inhibitory interneurons is maintained by apoptotic machinery that appears to be independent of the environment. Supporting evidence came from an experiment where interneuron progenitors were either transplanted into the mouse neocortex or cultured in vitro. Transplanted cells died at the age of two weeks, the same age at which endogenous interneurons undergo apoptosis. Regardless of the size of the transplant, the fraction of cells undergoing apoptosis remained constant. Furthermore, disruption of TrkB, a receptor for brain derived neurotrophic factor (Bdnf), did not affect cell death. It has also been shown that in mice null for the proapoptotic factor Bax (Bcl-2-associated X protein) a larger percentage of interneurons survived compared to wild type mice. Together these findings indicate that programmed cell death in the CNS partly exploits Bax-mediated signaling and is independent of BDNF and the environment. Apoptotic mechanisms in the CNS are still not well understood, yet it is thought that apoptosis of interneurons is a self-autonomous process. Nervous system development in its absence Programmed cell death can be reduced or eliminated in the developing nervous system by the targeted deletion of pro-apoptotic genes or by the overexpression of anti-apoptotic genes. The absence or reduction of PCD can cause serious anatomical malformations but can also result in minimal consequences depending on the gene targeted, neuronal population, and stage of development. Excess progenitor cell proliferation that leads to gross brain abnormalities is often lethal, as seen in caspase-3 or caspase-9 knockout mice which develop exencephaly in the forebrain. The brainstem, spinal cord, and peripheral ganglia of these mice develop normally, however, suggesting that the involvement of caspases in PCD during development depends on the brain region and cell type. Knockout or inhibition of apoptotic protease activating factor 1 (APAF1), also results in malformations and increased embryonic lethality. Manipulation of apoptosis regulator proteins Bcl-2 and Bax (overexpression of Bcl-2 or deletion of Bax) produces an increase in the number of neurons in certain regions of the nervous system such as the retina, trigeminal nucleus, cerebellum, and spinal cord. However, PCD of neurons due to Bax deletion or Bcl-2 overexpression does not result in prominent morphological or behavioral abnormalities in mice. For example, mice overexpressing Bcl-2 have generally normal motor skills and vision and only show impairment in complex behaviors such as learning and anxiety. The normal behavioral phenotypes of these mice suggest that an adaptive mechanism may be involved to compensate for the excess neurons. Invertebrates and vertebrates Learning about PCD in various species is essential in understanding the evolutionary basis and reason for apoptosis in development of the nervous system. 
During the development of the invertebrate nervous system, PCD plays different roles in different species. The similarity of the asymmetric cell death mechanism in the nematode and the leech indicates that PCD may have an evolutionary significance in the development of the nervous system. In the nematode, PCD occurs in the first hour of development leading to the elimination of 12% of non-gonadal cells including neuronal lineages. Cell death in arthropods occurs first in the nervous system when ectoderm cells differentiate and one daughter cell becomes a neuroblast and the other undergoes apoptosis. Furthermore, sex targeted cell death leads to different neuronal innervation of specific organs in males and females. In Drosophila, PCD is essential in segmentation and specification during development. In contrast to invertebrates, the mechanism of programmed cell death is found to be more conserved in vertebrates. Extensive studies performed on various vertebrates show that PCD of neurons and glia occurs in most parts of the nervous system during development. It has been observed before and during synaptogenesis in the central nervous system as well as the peripheral nervous system. However, there are a few differences between vertebrate species. For example, mammals exhibit extensive arborization followed by PCD in the retina while birds do not. Although synaptic refinement in vertebrate systems is largely dependent on PCD, other evolutionary mechanisms also play a role. In plant tissue Programmed cell death in plants has a number of molecular similarities to animal apoptosis, but it also has differences, the most obvious being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. In "APL regulates vascular tissue identity in Arabidopsis", Martin Bonke and his colleagues had stated that one of the two long-distance transport systems in vascular plants, xylem, consists of several cell-types "the differentiation of which involves deposition of elaborate cell-wall thickenings and programmed cell-death." The authors emphasize that the products of plant PCD play an important structural role. Basic morphological and biochemical features of PCD have been conserved in both plant and animal kingdoms. Specific types of plant cells carry out unique cell-death programs. These have common features with animal apoptosis—for instance, nuclear DNA degradation—but they also have their own peculiarities, such as nuclear degradation triggered by the collapse of the vacuole in tracheary elements of the xylem. Janneke Balk and Christopher J. Leaver, of the Department of Plant Sciences, University of Oxford, carried out research on mutations in the mitochondrial genome of sun-flower cells. Results of this research suggest that mitochondria play the same key role in vascular plant PCD as in other eukaryotic cells. PCD in pollen prevents inbreeding During pollination, plants enforce self-incompatibility (SI) as an important means to prevent self-fertilization. Research on the corn poppy (Papaver rhoeas) has revealed that proteins in the pistil on which the pollen lands, interact with pollen and trigger PCD in incompatible (i.e., self) pollen. The researchers, Steven G. Thomas and Vernonica E. Franklin-Tong, also found that the response involves rapid inhibition of pollen-tube growth, followed by PCD. 
In slime molds The social slime mold Dictyostelium discoideum has the peculiarity of either adopting a predatory amoeba-like behavior in its unicellular form or coalescing into a mobile slug-like form when dispersing the spores that will give birth to the next generation. The stalk is composed of dead cells that have undergone a type of PCD that shares many features of an autophagic cell-death: massive vacuoles forming inside cells, a degree of chromatin condensation, but no DNA fragmentation. The structural role of the residues left by the dead cells is reminiscent of the products of PCD in plant tissue. D. discoideum is a slime mold, part of a branch that might have emerged from eukaryotic ancestors about a billion years before the present. It seems that they emerged after the ancestors of green plants and the ancestors of fungi and animals had differentiated. But, in addition to their place in the evolutionary tree, the fact that PCD has been observed in the humble, simple, six-chromosome D. discoideum has additional significance: It permits the study of a developmental PCD path that does not depend on the caspases characteristic of apoptosis. Evolutionary origin of mitochondrial apoptosis The occurrence of programmed cell death in protists is possible, but it remains controversial. Some categorize death in those organisms as unregulated apoptosis-like cell death. Biologists had long suspected that mitochondria originated from bacteria that had been incorporated as endosymbionts ("living together inside") of larger eukaryotic cells. It was Lynn Margulis who from 1967 on championed this theory, which has since become widely accepted. The most convincing evidence for this theory is the fact that mitochondria possess their own DNA and are equipped with genes and replication apparatus. This evolutionary step would have been risky for the primitive eukaryotic cells, which began to engulf the energy-producing bacteria, as well as a perilous step for the ancestors of mitochondria, which began to invade their proto-eukaryotic hosts. This process is still evident today, between human white blood cells and bacteria. Most of the time, invading bacteria are destroyed by the white blood cells; however, it is not uncommon for the chemical warfare waged by prokaryotes to succeed, the resulting damage being what is known as infection. One of these rare evolutionary events, about two billion years before the present, made it possible for certain eukaryotes and energy-producing prokaryotes to coexist and mutually benefit from their symbiosis. Mitochondriate eukaryotic cells live poised between life and death, because mitochondria still retain their repertoire of molecules that can trigger cell suicide. It is not clear why apoptotic machinery is maintained in extant unicellular organisms. This process has now evolved to happen only when programmed: in response to certain signals to cells (such as feedback from neighbors, stress or DNA damage), mitochondria release caspase activators that trigger the cell-death-inducing biochemical cascade. As such, the cell suicide mechanism is now crucial to all of our lives. DNA damage and apoptosis Repair of DNA damages and apoptosis are two enzymatic processes essential for maintaining genome integrity in humans. Cells that are deficient in DNA repair tend to accumulate DNA damages, and when such cells are also defective in apoptosis they tend to survive even with excess DNA damage. Replication of DNA in such cells leads to mutations, and these mutations may cause cancer.
Several enzymatic pathways have evolved for repairing different kinds of DNA damage, and it has been found that in five well-studied DNA repair pathways particular enzymes have a dual role, where one role is to participate in repair of a specific class of damage and the second role is to induce apoptosis if the level of such DNA damage is beyond the cell's repair capability. These dual-role proteins tend to protect against development of cancer. Proteins that function in such a dual role for each repair process are: (1) DNA mismatch repair, MSH2, MSH6, MLH1 and PMS2; (2) base excision repair, APEX1 (REF1/APE), poly(ADP-ribose) polymerase (PARP); (3) nucleotide excision repair, XPB, XPD (ERCC2), p53, p33(ING1b); (4) non-homologous end joining, the catalytic subunit of DNA-PK; (5) homologous recombinational repair, BRCA1, ATM, ATR, WRN, BLM, Tip60, p53. Programmed death of entire organisms Clinical significance ABL The BCR-ABL oncogene has been found to be involved in the development of cancer in humans. c-Myc c-Myc is involved in the regulation of apoptosis via its role in downregulating the Bcl-2 gene. It also plays a role in the disordered growth of tissue. Metastasis A molecular characteristic of metastatic cells is their altered expression of several apoptotic genes. See also Anoikis Apoptosis-inducing factor Apoptosis versus Pseudoapoptosis Apoptosome Apoptotic DNA fragmentation Autolysis (biology) Autophagy Autoschizis Bcl-2 BH3 interacting domain death agonist (BID) Calpains Caspases Cell damage Cornification Cytochrome c Cytotoxicity Diablo homolog Entosis Excitotoxicity Ferroptosis Inflammasome Mitochondrial permeability transition pore Mitotic catastrophe Necrobiology Necroptosis Necrosis p53 upregulated modulator of apoptosis (PUMA) Paraptosis Parthanatos Pyroptosis RIP kinases Wallerian degeneration Notes and references Srivastava, R. E., in Molecular Mechanisms (Humana Press, 2007). Kierszenbaum, A. L. & Tres, L. L. (ed. Madelene Hyde) (Elsevier Saunders, Philadelphia, 2012). External links Apoptosis and Cell Death Labs International Cell Death Society The Bcl-2 Family Database Mitochondria Cellular senescence Apoptosis
Programmed cell death
[ "Chemistry", "Biology" ]
5,892
[ "Mitochondria", "Signal transduction", "Senescence", "Cellular senescence", "Cellular processes", "Apoptosis", "Programmed cell death", "Metabolism" ]
374,220
https://en.wikipedia.org/wiki/Inverse%20trigonometric%20functions
In mathematics, the inverse trigonometric functions (occasionally also called antitrigonometric, cyclometric, or arcus functions) are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry. Notation Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: , , , etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships: when measuring in radians, an angle of radians will correspond to an arc whose length is , where is the radius of the circle. Thus in the unit circle, the cosine of x function is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine is " is the same as "the angle whose cosine is ", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms , , . The notations , , , etc., as introduced by John Herschel in 1813, are often used as well in English-language sources, much more than the also established , , – conventions consistent with the notation of an inverse function, that is useful (for example) to define the multivalued version of each inverse trigonometric function: However, this might appear to conflict logically with the common semantics for expressions such as (although only , without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function. The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, . Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a “” superscript: , , , etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by , , etc., or, better, by , , etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case. Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions. Basic concepts Principal values Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. Therefore, the result ranges of the inverse functions are proper (i.e. strict) subsets of the domains of the original functions. For example, using in the sense of multivalued functions, just as the square root function could be defined from the function is defined so that For a given real number with there are multiple (in fact, countably infinitely many) numbers such that ; for example, but also etc. 
When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each in the domain, the expression will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table. Note: Some authors define the range of arcsecant to be or because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, whereas with the range or we would have to write since tangent is nonnegative on but nonpositive on For a similar reason, the same authors define the range of arccosecant to be or Domains If is allowed to be a complex number, then the range of applies only to its real part. Solutions to elementary trigonometric equations Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of Sine and cosecant begin their period at (where is an integer), finish it at and then reverse themselves over to Cosine and secant begin their period at finish it at and then reverse themselves over to Tangent begins its period at finishes it at and then repeats it (forward) over to Cotangent begins its period at finishes it at and then repeats it (forward) over to This periodicity is reflected in the general inverses, where is some integer. The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions. It is assumed that the given values and all lie within appropriate ranges so that the relevant expressions below are well-defined. Note that "for some " is just another way of saying "for some integer " The symbol is logical equality and indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side (see this footnote for more details and an example illustrating this concept). where the first four solutions can be written in expanded form as: For example, if then for some While if then for some where will be even if and it will be odd if The equations and have the same solutions as and respectively. In all equations above for those just solved (i.e. except for / and /), the integer in the solution's formula is uniquely determined by (for fixed and ). With the help of integer parity it is possible to write a solution to that doesn't involve the "plus or minus" symbol: if and only if for some And similarly for the secant function, if and only if for some where equals when the integer is even, and equals when it's odd. Detailed example and explanation of the "plus or minus" symbol The solutions to and involve the "plus or minus" symbol whose meaning is now clarified. Only the solution to will be discussed since the discussion for is the same. We are given between and we know that there is an angle in some interval that satisfies We want to find this The table above indicates that the solution is which is a shorthand way of saying that (at least) one of the following statement is true: for some integer or for some integer As mentioned above, if (which by definition only happens when ) then both statements (1) and (2) hold, although with different values for the integer : if is the integer from statement (1), meaning that holds, then the integer for statement (2) is (because ). 
However, if then the integer is unique and completely determined by If (which by definition only happens when ) then (because and so in both cases is equal to ) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold). Having considered the cases and we now focus on the case where and So assume this from now on. The solution to is still which as before is shorthand for saying that one of statements (1) and (2) is true. However this time, because and statements (1) and (2) are different and furthermore, exactly one of the two equalities holds (not both). Additional information about is needed to determine which one holds. For example, suppose that and that that is known about is that (and nothing more is known). Then and moreover, in this particular case (for both the case and the case) and so consequently, This means that could be either or Without additional information it is not possible to determine which of these values has. An example of some additional information that could determine the value of would be knowing that the angle is above the -axis (in which case ) or alternatively, knowing that it is below the -axis (in which case ). Equal identical trigonometric functions Set of all solutions to elementary trigonometric equations Thus given a single solution to an elementary trigonometric equation ( is such an equation, for instance, and because always holds, is always a solution), the set of all solutions to it are: Transforming equations The equations above can be transformed by using the reflection and shift identities: These formulas imply, in particular, that the following hold: where swapping swapping and swapping gives the analogous equations for respectively. So for example, by using the equality the equation can be transformed into which allows for the solution to the equation (where ) to be used; that solution being: which becomes: where using the fact that and substituting proves that another solution to is: The substitution may be used express the right hand side of the above formula in terms of instead of Relationships between trigonometric functions and inverse trigonometric functions Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of length then applying the Pythagorean theorem and definitions of the trigonometric ratios. It is worth noting that for arcsecant and arccosecant, the diagram assumes that is positive, and thus the result has to be corrected through the use of absolute values and the signum (sgn) operation. Relationships among the inverse trigonometric functions Complementary angles: Negative arguments: Reciprocal arguments: The identities above can be used with (and derived from) the fact that and are reciprocals (i.e. ), as are and and and Useful identities if one only has a fragment of a sine table: Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real). A useful form that follows directly from the table above is . It is obtained by recognizing that . 
From the half-angle formula, , we get: Arctangent addition formula This is derived from the tangent addition formula by letting In calculus Derivatives of inverse trigonometric functions The derivatives for complex values of z are as follows: Only for real values of x: These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if , then so Expression as definite integrals Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral: When x equals 1, the integrals with limited domains are improper integrals, but still well-defined. Infinite series Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, , as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative in a geometric series, and applying the integral definition above (see Leibniz series). Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example, , , and so on. Another series is given by: Leonhard Euler found a series for the arctangent that converges more quickly than its Taylor series: (The term in the sum for n = 0 is the empty product, so is 1.) Alternatively, this can be expressed as Another series for the arctangent function is given by where is the imaginary unit. Continued fractions for arctangent Two alternatives to the power series for arctangent are these generalized continued fractions: The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series. Indefinite integrals of inverse trigonometric functions For real and complex values of z: For real x ≥ 1: For all real x not between -1 and 1: The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions: The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above. All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above. Example Using (i.e. integration by parts), set Then which by the simple substitution yields the final result: Extension to the complex plane Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. This results in functions with multiple sheets and branch points. 
One possible way of defining the extension is: where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is the branch cut between the principal sheet and other sheets. The path of the integral must not cross a branch cut. For z not on a branch cut, a straight line path from 0 to z is such a path. For z on a branch cut, the path must approach from for the upper branch cut and from for the lower branch cut. The arcsine function may then be defined as: where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets; which has the same cut as arcsin; which has the same cut as arctan; where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets; which has the same cut as arcsec. Logarithmic forms These functions may also be expressed using complex logarithms. This extends their domains to the complex plane in a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts. Generalization Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by using Euler's formula to form a right triangle in the complex plane. Algebraically, this gives us: or where is the adjacent side, is the opposite side, and is the hypotenuse. From here, we can solve for . or Simply taking the imaginary part works for any real-valued and , but if or is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part of also removes from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our input , we obtain a formula for one of the inverse trig functions, for a total of six equations. Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using the Pythagorean Theorem relation The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions for that result from plugging the values into the equations above and simplifying. The particular form of the simplified expression can cause the output to differ from the usual principal branch of each of the inverse trig functions. The formulations given will output the usual principal branch when using the and principal branch for every function except arccotangent in the column. Arccotangent in the column will output on its usual principal branch by using the and convention. In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definition work for any complex-valued , the definitions allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions. It's possible to algebraically prove these relations by starting with the exponential forms of the trigonometric functions and solving for the inverse function. 
Example proof Using the exponential definition of sine, and letting (the positive branch is chosen) Applications Finding the angle of a right triangle Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that θ = arcsin(opposite/hypotenuse) = arccos(adjacent/hypotenuse). Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the Pythagorean Theorem: h = √(a² + b²), where h is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed. For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where θ may be computed as follows: θ = arctan(8/20) ≈ 21.8°. In computer science and engineering Two-argument variant of arctangent The two-argument atan2 function computes the arctangent of y / x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering. In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows: atan2(y, x) = arctan(y/x) if x > 0; arctan(y/x) + π if x < 0 and y ≥ 0; arctan(y/x) − π if x < 0 and y < 0; π/2 if x = 0 and y > 0; −π/2 if x = 0 and y < 0; and it is undefined if x = 0 and y = 0. It also equals the principal value of the argument of the complex number x + iy. This limited version of the function above may also be defined using the tangent half-angle formulae as follows: atan2(y, x) = 2 arctan(y / (√(x² + y²) + x)), provided that either x > 0 or y ≠ 0. However this fails if given x ≤ 0 and y = 0, so the expression is unsuitable for computational use. The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted. These variations are detailed at atan2. Arctangent function with location parameter In many applications the solution of the equation is to come as close as possible to a given value . The adequate solution is produced by the parameter-modified arctangent function The function rounds to the nearest integer. Numerical accuracy For angles near 0 and π, arccosine is ill-conditioned, and similarly with arcsine for angles near −π/2 and π/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods. See also Arcsine distribution Inverse exsecant Inverse versine Inverse hyperbolic functions List of integrals of inverse trigonometric functions List of trigonometric identities Trigonometric function Trigonometric functions of matrices Notes References External links Trigonometry Elementary special functions Mathematical relations Ratios Dimensionless numbers
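As a hedged, concrete illustration of the series and atan2 material above, the following minimal Python sketch builds the arctangent from its Taylor series and then assembles the two-argument atan2 from it piecewise; the function names are illustrative assumptions, and the standard-library math.atan2 is used only as a cross-check.

```python
import math

def arctan_series(x, terms=60):
    # Taylor (Leibniz) series: arctan(x) = sum (-1)^n x^(2n+1) / (2n+1), for |x| <= 1.
    # For |x| > 1, fold the argument using arctan(x) = sign(x)*pi/2 - arctan(1/x).
    # Convergence is slow when |x| is close to 1.
    if abs(x) > 1:
        return math.copysign(math.pi / 2, x) - arctan_series(1 / x, terms)
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

def atan2_from_arctan(y, x):
    # Piecewise construction of atan2 (range (-pi, pi]) from the single-argument
    # arctangent, following the definition given in the text above.
    if x > 0:
        return arctan_series(y / x)
    if x < 0:
        return arctan_series(y / x) + (math.pi if y >= 0 else -math.pi)
    if y > 0:
        return math.pi / 2
    if y < 0:
        return -math.pi / 2
    raise ValueError("atan2(0, 0) is undefined")

# Cross-check against the standard library on a few points.
for y, x in [(1, 2), (3, -4), (-2, -5), (0.5, 0), (-3, 0), (5, 1)]:
    assert abs(atan2_from_arctan(y, x) - math.atan2(y, x)) < 1e-6
```

The argument-folding step matters because the series converges slowly near |x| = 1; production code would simply call the platform's math library, which also handles the ill-conditioning issues noted in the numerical accuracy paragraph.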
Inverse trigonometric functions
[ "Mathematics" ]
4,244
[ "Dimensionless numbers", "Mathematical analysis", "Predicate logic", "Mathematical objects", "Basic concepts in set theory", "Arithmetic", "Mathematical relations", "Numbers", "Ratios" ]
374,331
https://en.wikipedia.org/wiki/Neurotransmitter%20receptor
A neurotransmitter receptor (also known as a neuroreceptor) is a membrane receptor protein that is activated by a neurotransmitter. Chemicals on the outside of the cell, such as a neurotransmitter, can bump into the cell's membrane, in which there are receptors. If a neurotransmitter bumps into its corresponding receptor, they will bind and can trigger other events to occur inside the cell. Therefore, a membrane receptor is part of the molecular machinery that allows cells to communicate with one another. A neurotransmitter receptor is a class of receptors that specifically binds with neurotransmitters as opposed to other molecules. In postsynaptic cells, neurotransmitter receptors receive signals that trigger an electrical signal, by regulating the activity of ion channels. The influx of ions through ion channels opened due to the binding of neurotransmitters to specific receptors can change the membrane potential of a neuron. This can result in a signal that runs along the axon (see action potential) and is passed along at a synapse to another neuron and possibly on to a neural network. On presynaptic cells, there are receptors known as autoreceptors that are specific to the neurotransmitters released by that cell, which provide feedback and mediate excessive neurotransmitter release from it. There are two major types of neurotransmitter receptors: ionotropic and metabotropic. Ionotropic means that ions can pass through the receptor, whereas metabotropic means that a second messenger inside the cell relays the message (i.e. metabotropic receptors do not have channels). There are several kinds of metabotropic receptors, including G protein-coupled receptors. Ionotropic receptors are also called ligand-gated ion channels and they can be activated by neurotransmitters (ligands) like glutamate and GABA, which then allow specific ions through the membrane. Sodium ions (that are, for example, allowed passage by the glutamate receptor) excite the post-synaptic cell, while chloride ions (that are, for example, allowed passage by the GABA receptor) inhibit the post-synaptic cell. Inhibition reduces the chance that an action potential will occur, while excitation increases the chance. Conversely, G-protein-coupled receptors are neither excitatory nor inhibitory. Rather, they can have a broad number of functions such as modulating the actions of excitatory and inhibitory ion channels or triggering a signalling cascade that releases calcium from stores inside the cell. Most neurotransmitters receptors are G-protein coupled. Localization Neurotransmitter (NT) receptors are located on the surface of neuronal and glial cells. At a synapse, one neuron sends messages to the other neuron via neurotransmitters. Therefore, the postsynaptic neuron, the one receiving the message, clusters NT receptors at this specific place in its membrane. NT receptors can be inserted into any region of the neuron's membrane such as dendrites, axons, and the cell body. 
Receptors can be located in different parts of the body to act as either an inhibitory or an excitatory receptor for a specific neurotransmitter. An example of this is the receptors for the neurotransmitter acetylcholine (ACh): one receptor type is located at the neuromuscular junction in skeletal muscle, where it facilitates muscle contraction (excitation), while another is located in the heart, where it slows the heart rate (inhibition). Ionotropic receptors: neurotransmitter-gated ion channels Ligand-gated ion channels (LGICs) are one type of ionotropic receptor or channel-linked receptor. They are a group of transmembrane ion channels that are opened or closed in response to the binding of a chemical messenger (i.e., a ligand), such as a neurotransmitter. The binding site of endogenous ligands on LGIC protein complexes is normally located on a different portion of the protein (an allosteric binding site) from the ion conduction pore. The direct link between ligand binding and opening or closing of the ion channel, which is characteristic of ligand-gated ion channels, is contrasted with the indirect function of metabotropic receptors, which use second messengers. LGICs are also different from voltage-gated ion channels (which open and close depending on membrane potential), and stretch-activated ion channels (which open and close depending on mechanical deformation of the cell membrane). Metabotropic receptors: G-protein coupled receptors G protein-coupled receptors (GPCRs), also known as seven-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), comprise a large protein family of transmembrane receptors that sense molecules outside the cell and activate internal signal transduction pathways and, ultimately, cellular responses. G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases, and are also the target of approximately 30% of all modern medicinal drugs. There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G-protein by exchanging its bound GDP for a GTP. The G-protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly, depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13). Desensitization and neurotransmitter concentration Neurotransmitter receptors are subject to ligand-induced desensitization: that is, they can become unresponsive upon prolonged exposure to their neurotransmitter. Neurotransmitter receptors are present on both postsynaptic neurons and presynaptic neurons, with the former receiving neurotransmitters and the latter preventing further release of a given neurotransmitter. In addition to being found in neurons, neurotransmitter receptors are also found in various immune and muscle tissues.
Many neurotransmitter receptors are categorized as a serpentine receptor or G protein-coupled receptor because they span the cell membrane not once, but seven times. Neurotransmitter receptors are known to become unresponsive to the type of neurotransmitter they receive when exposed for extended periods of time. This phenomenon is known as ligand-induced desensitization or downregulation. Example neurotransmitter receptors The following are some major classes of neurotransmitter receptors: Adrenergic: α1A, α1b, α1c, α1d, α2a, α2b, α2c, α2d, β1, β2, β3 Cholinergic: Muscarinic: M1, M2, M3, M4, M5 Nicotinic: muscle, neuronal (α-bungarotoxin-insensitive), neuronal (α-bungarotoxin-sensitive) Dopaminergic: D1, D2, D3, D4, D5 GABAergic: GABAA, GABAB1a, GABAB1δ, GABAB2, GABAC Glutamatergic: NMDA, AMPA, Kainate, mGluR1, mGluR2, mGluR3, mGluR4, mGluR5, mGluR6, mGluR7 Glycinergic: Glycine Histaminergic: H1, H2, H3 Opioidergic: μ, δ1, δ2, κ Serotonergic: 5-HT1A, 5-HT1B, 5-HT1D, 5-HT1E, 5-HT1F, 5-HT2A, 5-HT2B, 5-HT2C, 5-HT3, 5-HT4, 5-HT5, 5-HT6, 5-HT7 See also Autoreceptor Catecholamines Cholinergic agonists and antagonists Heteroreceptor Imidazoline receptor Neuromuscular transmission Synaptic transmission Notes and references External links Brain Explorer Neurotransmitters Postsynaptic Receptors Snyder (2009) Neurotransmitters, Receptors, and Second Messengers Galore in 40 Years. Journal of Neuroscience. 29(41): 12717-12721. Snyder and Bennett (1976) Neurotransmitter Receptors in the Brain: Biochemical Identification. Annual Review of Physiology. Vol. 38: 153-175 Neuroscience for Kids: Neurotransmitters Library of Congress Authorities and Vocabularies: Neurotransmitter Receptors Neurotransmitter Receptors, Transporters, & Ion Channels Receptors Neurochemistry
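As a rough, hedged illustration of the excitatory/inhibitory picture described above, where depolarizing currents through excitatory receptors raise the chance of an action potential and hyperpolarizing currents through inhibitory receptors lower it, here is a toy Python sketch; all names and numbers are made-up, order-of-magnitude values, not data from this article.

```python
REST_MV = -70.0       # resting membrane potential (typical order of magnitude)
THRESHOLD_MV = -55.0  # rough firing threshold

def postsynaptic_potential(n_epsp, n_ipsp, epsp_mv=2.0, ipsp_mv=1.5):
    # Sum small depolarizations (e.g. Na+ influx through ionotropic glutamate
    # receptors) and hyperpolarizations (e.g. Cl- influx through GABA-A receptors)
    # on top of the resting potential.
    return REST_MV + n_epsp * epsp_mv - n_ipsp * ipsp_mv

def fires_action_potential(n_epsp, n_ipsp):
    return postsynaptic_potential(n_epsp, n_ipsp) >= THRESHOLD_MV

print(fires_action_potential(10, 0))  # True: summed excitation reaches threshold
print(fires_action_potential(10, 8))  # False: inhibition keeps the cell below threshold
```

Real synaptic integration involves conductances, reversal potentials, and timing rather than fixed millivolt steps; the sketch only captures the qualitative point that excitation and inhibition push the membrane potential in opposite directions relative to threshold.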
Neurotransmitter receptor
[ "Chemistry", "Biology" ]
2,101
[ "Biochemistry", "Receptors", "Neurochemistry", "Signal transduction" ]
374,337
https://en.wikipedia.org/wiki/Metabotropic%20receptor
A metabotropic receptor, also referred to by the broader term G-protein-coupled receptor, is a type of membrane receptor that initiates a number of metabolic steps to modulate cell activity. The nervous system utilizes two types of receptors: metabotropic and ionotropic receptors. While ionotropic receptors form an ion channel pore, metabotropic receptors are indirectly linked with ion channels through signal transduction mechanisms, such as G proteins. Both receptor types are activated by specific chemical ligands. When an ionotropic receptor is activated, it opens a channel that allows ions such as Na+, K+, or Cl− to flow. In contrast, when a metabotropic receptor is activated, a series of intracellular events are triggered that can also result in ion channels opening or other intracellular events, but involve a range of second messenger chemicals. Mechanism Chemical messengers bind to metabotropic receptors to initiate a diversity of effects caused by biochemical signaling cascades. G protein-coupled receptors are all metabotropic receptors. When a ligand binds to a G protein-coupled receptor, a guanine nucleotide-binding protein, or G protein, activates a second messenger cascade which can alter gene transcription, regulate other proteins in the cell, release intracellular Ca2+, or directly affect ion channels on the membrane. These receptors can remain open from seconds to minutes and are associated with long-lasting effects, such as modifying synaptic strength and modulating short- and long-term synaptic plasticity. Metabotropic receptors have a diversity of ligands, including but not limited to: small molecule transmitters, monoamines, peptides, hormones, and even gases. In comparison to fast-acting neurotransmitters, these ligands are not taken up again or degraded quickly. They can also enter the circulatory system to globalize a signal. Most metabotropic ligands have unique receptors. Some examples include: metabotropic glutamate receptors, muscarinic acetylcholine receptors, GABAB receptors. Structure The G protein-coupled receptors have seven hydrophobic transmembrane domains. Most of them are monomeric proteins, although GABAB receptors require heterodimerization to function properly. The protein's N terminus is located on the extracellular side of the membrane and its C terminus is on the intracellular side. The 7 transmembrane spanning domains, with an external amino terminus, are often claimed as being alpha helix shaped, and the polypeptide chain is said to be composed of around 450–550 amino acids. References Further reading Zimmerberg, B. 2002. Dopamine receptors: A representative family of metabotropic receptors. Multimedia Neuroscience Education Project G protein-coupled receptors Neurochemistry Signal transduction Transmembrane receptors
Metabotropic receptor
[ "Chemistry", "Biology" ]
599
[ "Transmembrane receptors", "Signal transduction", "G protein-coupled receptors", "Biochemistry", "Neurochemistry" ]
374,338
https://en.wikipedia.org/wiki/NMDA%20receptor
The N-methyl-D-aspartate receptor (also known as the NMDA receptor or NMDAR), is a glutamate receptor and predominantly Ca2+ ion channel found in neurons. The NMDA receptor is one of three types of ionotropic glutamate receptors, the other two being AMPA and kainate receptors. Depending on its subunit composition, its ligands are glutamate and glycine (or D-serine). However, the binding of the ligands is typically not sufficient to open the channel as it may be blocked by Mg2+ ions which are only removed when the neuron is sufficiently depolarized. Thus, the channel acts as a "coincidence detector" and only once both of these conditions are met, the channel opens and it allows positively charged ions (cations) to flow through the cell membrane. The NMDA receptor is thought to be very important for controlling synaptic plasticity and mediating learning and memory functions. The NMDA receptor is ionotropic, meaning it is a protein which allows the passage of ions through the cell membrane. The NMDA receptor is so named because the agonist molecule N-methyl-D-aspartate (NMDA) binds selectively to it, and not to other glutamate receptors. Activation of NMDA receptors results in the opening of the ion channel that is nonselective to cations, with a combined reversal potential near 0 mV. While the opening and closing of the ion channel is primarily gated by ligand binding, the current flow through the ion channel is voltage-dependent. Specifically located on the receptor, extracellular magnesium (Mg2+) and zinc (Zn2+) ions can bind and prevent other cations from flowing through the open ion channel. A voltage-dependent flow of predominantly calcium (Ca2+), sodium (Na+), and potassium (K+) ions into and out of the cell is made possible by the depolarization of the cell, which displaces and repels the Mg2+ and Zn2+ ions from the pore. Ca2+ flux through NMDA receptors in particular is thought to be critical in synaptic plasticity, a cellular mechanism for learning and memory, due to proteins which bind to and are activated by Ca2+ ions. Activity of the NMDA receptor is blocked by many psychoactive drugs such as phencyclidine (PCP), alcohol (ethanol) and dextromethorphan (DXM). The anaesthetic and analgesic effects of the drugs ketamine and nitrous oxide are also partially due to their effects at blocking NMDA receptor activity. In contrast, overactivation of NMDAR by NMDA agonists increases the cytosolic concentrations of calcium and zinc, which significantly contributes to neural death, an effect known to be prevented by cannabinoids, mediated by activation of the CB1 receptor, which leads HINT1 protein to counteract the toxic effects of NMDAR-mediated NO production and zinc release. As well as preventing methamphetamine-induced neurotoxicity via inhibition of nitric oxide synthase (nNOS) expression and astrocyte activation, it is seen to reduce methamphetamine induced brain damage through CB1-dependent and independent mechanisms, respectively, and inhibition of methamphetamine induced astrogliosis is likely to occur through a CB2 receptor dependent mechanism for THC. Since 1989, memantine has been recognized to be an uncompetitive antagonist of the NMDA receptor, entering the channel of the receptor after it has been activated and thereby blocking the flow of ions. Overactivation of the receptor, causing excessive influx of Ca2+ can lead to excitotoxicity which is implied to be involved in some neurodegenerative disorders. 
Blocking of NMDA receptors could therefore, in theory, be useful in treating such diseases. However, hypofunction of NMDA receptors (due to glutathione deficiency or other causes) may be involved in impairment of synaptic plasticity and could have other negative repercussions. The main problem with the utilization of NMDA receptor antagonists for neuroprotection is that the physiological actions of the NMDA receptor are essential for normal neuronal function. To be clinically useful NMDA antagonists need to block excessive activation without interfering with normal functions. Memantine has this property. History The discovery of NMDA receptors was followed by the synthesis and study of N-methyl-D-aspartic acid (NMDA) in the 1960s by Jeff Watkins and colleagues. In the early 1980s, NMDA receptors were shown to be involved in several central synaptic pathways. Receptor subunit selectivity was discovered in the early 1990s, which led to recognition of a new class of compounds that selectively inhibit the NR2B subunit. These findings led to vigorous campaign in the pharmaceutical industry. From this it was considered that NMDA receptors were associated with a variety of neurological disorders such as epilepsy, Parkinson's, Alzheimer's, Huntington's and other CNS disorders. In 2002, it was discovered by Hilmar Bading and co-workers that the cellular consequences of NMDA receptor stimulation depend on the receptor's location on the neuronal cell surface. Synaptic NMDA receptors promote gene expression, plasticity-related events, and acquired neuroprotection. Extrasynaptic NMDA receptors promote death signaling; they cause transcriptional shut-off, mitochondrial dysfunction, and structural disintegration. This pathological triad of extrasynaptic NMDA receptor signaling represents a common conversion point in the etiology of several acute and chronic neurodegenerative conditions. The molecular basis for toxic extrasynaptic NMDA receptor signaling was uncovered by Hilmar Bading and co-workers in 2020. Extrasynaptic NMDA receptors form a death signaling complex with TRPM4. NMDAR/TRPM4 interaction interface inhibitors (also known as interface inhibitors) disrupt the NMDAR/TRPM4 complex and detoxify extrasynaptic NMDA receptors. A fortuitous finding was made in 1968 when a woman was taking amantadine as flu medicine and experienced remarkable remission of her Parkinson's symptoms. This finding, reported by Scawab et al., was the beginning of medicinal chemistry of adamantane derivatives in the context of diseases affecting the CNS. Before this finding, memantine, another adamantane derivative, had been synthesized by Eli Lilly and Company in 1963. The purpose was to develop a hypoglycemic drug, but it showed no such efficacy. It was not until 1972 that a possible therapeutic importance of memantine for treating neurodegenerative disorders was discovered. From 1989 memantine has been recognized to be an uncompetitive antagonist of the NMDA receptor. Structure Functional NMDA receptors are heterotetramers comprising different combinations of the GluN1, GluN2 (A-D), and GluN3 (A-B) subunits derived from distinct gene families (Grin1-Grin3). All NMDARs contain two of the obligatory GluN1 subunits, which when assembled with GluN2 subunits of the same type, give rise to canonical diheteromeric (d-) NMDARs (e.g., GluN1-2A-1-2A). 
Triheteromeric NMDARs, by contrast, contain three different types of subunits (e.g., GluN1-2A-1-2B), and include receptors that are composed of one or more subunits from each of the three gene families, designated t-NMDARs (e.g., GluN1-2A-3A-2A). There is one GluN1, four GluN2, and two GluN3 subunit encoding genes, and each gene may produce more than one splice variant. GluN1 – GRIN1 GluN2 GluN2A – GRIN2A GluN2B – GRIN2B GluN2C – GRIN2C GluN2D – GRIN2D GluN3 GluN3A – GRIN3A GluN3B – GRIN3B Gating The NMDA receptor is a glutamate and ion channel protein receptor that is activated when glycine and glutamate bind to it. The receptor is a highly complex and dynamic heteromeric protein that interacts with a multitude of intracellular proteins via three distinct subunits, namely GluN1, GluN2, and GluN3. The GluN1 subunit, which is encoded by the GRIN1 gene, exhibits eight distinct isoforms owing to alternative splicing. On the other hand, the GluN2 subunit, of which there are four different types (A-D), as well as the GluN3 subunit, of which there are two types (A and B), are each encoded by six separate genes. This intricate molecular structure and genetic diversity enable the receptor to carry out a wide range of physiological functions within the nervous system. All the subunits share a common membrane topology that is dominated by a large extracellular N-terminus, a membrane region comprising three transmembrane segments, a re-entrant pore loop, an extracellular loop between the transmembrane segments that are structurally not well known, and an intracellular C-terminus, which are different in size depending on the subunit and provide multiple sites of interaction with many intracellular proteins. Figure 1 shows a basic structure of GluN1/GluN2 subunits that forms the binding site for memantine, Mg2+ and ketamine. Mg2+ blocks the NMDA receptor channel in a voltage-dependent manner. The channels are also highly permeable to Ca2+. Activation of the receptor depends on glutamate binding, D-serine or glycine binding at its GluN1-linked binding site and AMPA receptor-mediated depolarization of the postsynaptic membrane, which relieves the voltage-dependent channel block by Mg2+. Activation and opening of the receptor channel thus allows the flow of K+, Na+ and Ca2+ ions, and the influx of Ca2+ triggers intracellular signaling pathways. Allosteric receptor binding sites for zinc, proteins and the polyamines spermidine and spermine are also modulators for the NMDA receptor channels. The GluN2B subunit has been involved in modulating activity such as learning, memory, processing and feeding behaviors, as well as being implicated in number of human derangements. The basic structure and functions associated with the NMDA receptor can be attributed to the GluN2B subunit. For example, the glutamate binding site and the control of the Mg2+ block are formed by the GluN2B subunit. The high affinity sites for glycine antagonist are also exclusively displayed by the GluN1/GluN2B receptor. GluN1/GluN2B transmembrane segments are considered to be the part of the receptor that forms the binding pockets for uncompetitive NMDA receptor antagonists, but the transmembrane segments structures are not fully known as stated above. It is claimed that three binding sites within the receptor, A644 on the GluNB subunit and A645 and N616 on the GluN1 subunit, are important for binding of memantine and related compounds as seen in figure 2. 
The NMDA receptor forms a heterotetramer between two GluN1 and two GluN2 subunits (the subunits were previously denoted as GluN1 and GluN2), two obligatory GluN1 subunits and two regionally localized GluN2 subunits. A related gene family of GluN3 A and B subunits have an inhibitory effect on receptor activity. Multiple receptor isoforms with distinct brain distributions and functional properties arise by selective splicing of the GluN1 transcripts and differential expression of the GluN2 subunits. Each receptor subunit has modular design and each structural module, also represents a functional unit: The extracellular domain contains two globular structures: a modulatory domain and a ligand-binding domain. GluN1 subunits bind the co-agonist glycine and GluN2 subunits bind the neurotransmitter glutamate. The agonist-binding module links to a membrane domain, which consists of three transmembrane segments and a re-entrant loop reminiscent of the selectivity filter of potassium channels. The membrane domain contributes residues to the channel pore and is responsible for the receptor's high-unitary conductance, high-calcium permeability, and voltage-dependent magnesium block. Each subunit has an extensive cytoplasmic domain, which contain residues that can be directly modified by a series of protein kinases and protein phosphatases, as well as residues that interact with a large number of structural, adaptor, and scaffolding proteins. The glycine-binding modules of the GluN1 and GluN3 subunits and the glutamate-binding module of the GluN2A subunit have been expressed as soluble proteins, and their three-dimensional structure has been solved at atomic resolution by x-ray crystallography. This has revealed a common fold with amino acid-binding bacterial proteins and with the glutamate-binding module of AMPA-receptors and kainate-receptors. Mechanism of action NMDA receptors are a crucial part of the development of the central nervous system. The processes of learning, memory, and neuroplasticity rely on the mechanism of NMDA receptors. NMDA receptors are glutamate-gated cation channels that allow for an increase of calcium permeability. Channel activation of NMDA receptors is a result of the binding of two co agonists, glycine and glutamate. Overactivation of NMDA receptors, causing excessive influx of Ca2+ can lead to excitotoxicity. Excitotoxicity is implied to be involved in some neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease and Huntington's disease. Blocking of NMDA receptors could therefore, in theory, be useful in treating such diseases. It is, however, important to preserve physiological NMDA receptor activity while trying to block its excessive, excitotoxic activity. This can possibly be achieved by uncompetitive antagonists, blocking the receptors ion channel when excessively open. Uncompetitive NMDA receptor antagonists, or channel blockers, enter the channel of the NMDA receptor after it has been activated and thereby block the flow of ions. MK-801, ketamine, amantadine and memantine are examples of such antagonists, see figure 1. The off-rate of an antagonist from the receptors channel is an important factor as too slow off-rate can interfere with normal function of the receptor and too fast off-rate may give ineffective blockade of an excessively open receptor. Memantine is an example of an uncompetitive channel blocker of the NMDA receptor, with a relatively rapid off-rate and low affinity. 
At physiological pH its amine group is positively charged and its receptor antagonism is voltage-dependent. It thereby mimics the physiological function of Mg2+ as channel blocker. Memantine only blocks NMDA receptor associated channels during prolonged activation of the receptor, as it occurs under excitotoxic conditions, by replacing magnesium at the binding site. During normal receptor activity the channels only stay open for several milliseconds and under those circumstances memantine is unable to bind within the channels and therefore does not interfere with normal synaptic activity. Variants GluN1 There are eight variants of the GluN1 subunit produced by alternative splicing of GRIN1: GluN1-1a, GluN1-1b; GluN1-1a is the most abundantly expressed form. GluN1-2a, GluN1-2b; GluN1-3a, GluN1-3b; GluN1-4a, GluN1-4b; GluN2 While a single GluN2 subunit is found in invertebrate organisms, four distinct isoforms of the GluN2 subunit are expressed in vertebrates and are referred to with the nomenclature GluN2A through GluN2D (encoded by GRIN2A, GRIN2B, GRIN2C, GRIN2D). Strong evidence shows that the genes encoding the GluN2 subunits in vertebrates have undergone at least two rounds of gene duplication. They contain the binding-site for glutamate. More importantly, each GluN2 subunit has a different intracellular C-terminal domain that can interact with different sets of signaling molecules. Unlike GluN1 subunits, GluN2 subunits are expressed differentially across various cell types and developmental timepoints and control the electrophysiological properties of the NMDA receptor. In classic circuits, GluN2B is mainly present in immature neurons and in extrasynaptic locations such as growth cones, and contains the binding-site for the selective inhibitor ifenprodil. However, in pyramidal cell synapses in the newly evolved primate dorsolateral prefrontal cortex, GluN2B are exclusively within the postsynaptic density, and mediate higher cognitive operations such as working memory. This is consistent with the expansion in GluN2B actions and expression across the cortical hierarchy in monkeys and humans and across primate cortex evolution. GluN2B to GluN2A switch While GluN2B is predominant in the early postnatal brain, the number of GluN2A subunits increases during early development; eventually, GluN2A subunits become more numerous than GluN2B. This is called the GluN2B-GluN2A developmental switch, and is notable because of the different kinetics each GluN2 subunit contributes to receptor function. For instance, greater ratios of the GluN2B subunit leads to NMDA receptors which remain open longer compared to those with more GluN2A. This may in part account for greater memory abilities in the immediate postnatal period compared to late in life, which is the principle behind genetically altered 'doogie mice'. The detailed time course of this switch in the human cerebellum has been estimated using expression microarray and RNA seq and is shown in the figure on the right. There are three hypothetical models to describe this switch mechanism: Increase in synaptic GluN2A along with decrease in GluN2B Extrasynaptic displacement of GluN2B away from the synapse with increase in GluN2A Increase of GluN2A diluting the number of GluN2B without the decrease of the latter. The GluN2B and GluN2A subunits also have differential roles in mediating excitotoxic neuronal death. The developmental switch in subunit composition is thought to explain the developmental changes in NMDA neurotoxicity. 
Homozygous disruption of the gene for GluN2B in mice causes perinatal lethality, whereas disruption of the GluN2A gene produces viable mice, although with impaired hippocampal plasticity. One study suggests that reelin may play a role in the NMDA receptor maturation by increasing the GluN2B subunit mobility. GluN2B to GluN2C switch Granule cell precursors (GCPs) of the cerebellum, after undergoing symmetric cell division in the external granule-cell layer (EGL), migrate into the internal granule-cell layer (IGL) where they down-regulate GluN2B and activate GluN2C, a process that is independent of neuregulin beta signaling through ErbB2 and ErbB4 receptors. Role in excitotoxicity NMDA receptors have been implicated by a number of studies to be strongly involved with excitotoxicity. Because NMDA receptors play an important role in the health and function of neurons, there has been much discussion on how these receptors can affect both cell survival and cell death. Recent evidence supports the hypothesis that overstimulation of extrasynaptic NMDA receptors has more to do with excitotoxicity than stimulation of their synaptic counterparts. In addition, while stimulation of extrasynaptic NMDA receptors appear to contribute to cell death, there is evidence to suggest that stimulation of synaptic NMDA receptors contributes to the health and longevity of the cell. There is ample evidence to support the dual nature of NMDA receptors based on location, and the hypothesis explaining the two differing mechanisms is known as the "localization hypothesis". Differing cascade pathways In order to support the localization hypothesis, it would be necessary to show differing cellular signaling pathways are activated by NMDA receptors based on its location within the cell membrane. Experiments have been designed to stimulate either synaptic or non-synaptic NMDA receptors exclusively. These types of experiments have shown that different pathways are being activated or regulated depending on the location of the signal origin. Many of these pathways use the same protein signals, but are regulated oppositely by NMDARs depending on its location. For example, synaptic NMDA excitation caused a decrease in the intracellular concentration of p38 mitogen-activated protein kinase (p38MAPK). Extrasynaptic stimulation NMDARs regulated p38MAPK in the opposite fashion, causing an increase in intracellular concentration. Experiments of this type have since been repeated with the results indicating these differences stretch across many pathways linked to cell survival and excitotoxicity. Two specific proteins have been identified as a major pathway responsible for these different cellular responses ERK1/2, and Jacob. ERK1/2 is responsible for phosphorylation of Jacob when excited by synaptic NMDARs. This information is then transported to the nucleus. Phosphorylation of Jacob does not take place with extrasynaptic NMDA stimulation. This allows the transcription factors in the nucleus to respond differently based in the phosphorylation state of Jacob. Neural plasticity NMDA receptors (NMDARs) critically influence the induction of synaptic plasticity. NMDARs trigger both long-term potentiation (LTP) and long-term depression (LTD) via fast synaptic transmission. Experimental data suggest that extrasynaptic NMDA receptors inhibit LTP while producing LTD. Inhibition of LTP can be prevented with the introduction of a NMDA antagonist. 
A theta burst stimulation that usually induces LTP with synaptic NMDARs, when applied selectively to extrasynaptic NMDARs produces a LTD. Experimentation also indicates that extrasynaptic activity is not required for the formation of LTP. In addition, both synaptic and extrasynaptic activity are involved in expressing a full LTD. Role of differing subunits Another factor that seems to affect NMDAR induced toxicity is the observed variation in subunit makeup. NMDA receptors are heterotetramers with two GluN1 subunits and two variable subunits. Two of these variable subunits, GluN2A and GluN2B, have been shown to preferentially lead to cell survival and cell death cascades respectively. Although both subunits are found in synaptic and extrasynaptic NMDARs there is some evidence to suggest that the GluN2B subunit occurs more frequently in extrasynaptic receptors. This observation could help explain the dualistic role that NMDA receptors play in excitotoxicity. t-NMDA receptors have been implicated in excitotoxicity-mediated death of neurons in temporal lobe epilepsy. Despite the compelling evidence and the relative simplicity of these two theories working in tandem, there is still disagreement about the significance of these claims. Some problems in proving these theories arise with the difficulty of using pharmacological means to determine the subtypes of specific NMDARs. In addition, the theory of subunit variation does not explain how this effect might predominate, as it is widely held that the most common tetramer, made from two GluN1 subunits and one of each subunit GluN2A and GluN2B, makes up a high percentage of the NMDARs. The subunit composition of t-NMDA receptors has recently been visualized in brain tissue. Excitotoxicity in a clinical setting Excitotoxicity has been thought to play a role in the degenerative properties of neurodegenerative conditions since the late 1950s. NMDA receptors seem to play an important role in many of these degenerative diseases affecting the brain. Most notably, excitotoxic events involving NMDA receptors have been linked to Alzheimer's disease and Huntington's disease, as well as with other medical conditions such as strokes and epilepsy. Treating these conditions with one of the many known NMDA receptor antagonists, however, leads to a variety of unwanted side effects, some of which can be severe. These side effects are, in part, observed because the NMDA receptors do not just signal for cell death but also play an important role in its vitality. Treatment for these conditions might be found in blocking NMDA receptors not found at the synapse. One class of excitotoxicity in disease includes gain-of-function mutations in GRIN2B and GRIN1 associated with cortical malformations, such as polymicrogyria. D-serine, an antagonist/inverse co-agonist of t-NMDA receptors, which is made in the brain, has been shown to mitigate neuron loss in an animal model of temporal lobe epilepsy. Ligands Agonists Activation of NMDA receptors requires binding of glutamate or aspartate (aspartate does not stimulate the receptors as strongly). In addition, NMDARs also require the binding of the co-agonist glycine for the efficient opening of the ion channel, which is a part of this receptor. D-Serine has also been found to co-agonize the NMDA receptor with even greater potency than glycine. It is produced by serine racemase, and is enriched in the same areas as NMDA receptors. Removal of D-serine can block NMDA-mediated excitatory neurotransmission in many areas. 
Recently, it has been shown that D-serine can be released both by neurons and astrocytes to regulate NMDA receptors. Note that D-serine has also been shown to work as an antagonist / inverse co-agonist for t-NMDA receptors. NMDA receptor (NMDAR)-mediated currents are directly related to membrane depolarization. NMDA agonists therefore exhibit fast Mg2+ unbinding kinetics, increasing channel open probability with depolarization. This property is fundamental to the role of the NMDA receptor in memory and learning, and it has been suggested that this channel is a biochemical substrate of Hebbian learning, where it can act as a coincidence detector for membrane depolarization and synaptic transmission. Examples Some known NMDA receptor agonists include: Amino acids and amino acid derivatives Aspartic acid (aspartate) (D-aspartic acid, L-aspartic acid) – endogenous glutamate site agonist. The word N-methyl-D-aspartate (NMDA) is partially derived from D-aspartate. Glutamic acid (glutamate) – endogenous glutamate site agonist Tetrazolylglycine – synthetic glutamate site agonist Homocysteic acid – endogenous glutamate site agonist Ibotenic acid – naturally occurring glutamate site agonist found in Amanita muscaria Quinolinic acid (quinolinate) – endogenous glutamate site agonist Glycine – endogenous glycine site agonist Alanine (D-alanine, L-alanine) – endogenous glycine site agonist Milacemide – synthetic glycine site agonist; prodrug of glycine Sarcosine (monomethylglycine) – endogenous glycine site agonist Serine (D-serine, L-serine) – endogenous glycine site agonist Positive allosteric modulators Cerebrosterol – endogenous weak positive allosteric modulator Cholesterol – endogenous weak positive allosteric modulator Dehydroepiandrosterone (DHEA) – endogenous weak positive allosteric modulator Dehydroepiandrosterone sulfate (DHEA-S) – endogenous weak positive allosteric modulator Nebostinel (neboglamine) – synthetic positive allosteric modulator of the glycine site Pregnenolone sulfate – endogenous weak positive allosteric modulator Polyamines Spermidine – endogenous polyamine site agonist Spermine – endogenous polyamine site agonist Neramexane An example of memantine derivative is neramexane which was discovered by studying number of aminoalkyl cyclohexanes, with memantine as the template, as NMDA receptor antagonists. Neramexane binds to the same site as memantine within the NMDA receptor associated channel and with comparable affinity. It does also show very similar bioavailability and blocking kinetics in vivo as memantine. Neramexane went to clinical trials for four indications, including Alzheimer's disease. Partial agonists N-Methyl-D-aspartic acid (NMDA), which the NMDA receptor was named after, is a partial agonist of the active or glutamate recognition site. 3,5-Dibromo-L-phenylalanine, a naturally occurring halogenated derivative of L-phenylalanine, is a weak partial NMDA receptor agonist acting on the glycine site. 3,5-Dibromo-L-phenylalanine has been proposed a novel therapeutic drug candidate for treatment of neuropsychiatric disorders and diseases such as schizophrenia, and neurological disorders such as ischemic stroke and epileptic seizures. Other partial agonists of the NMDA receptor acting on novel sites such as rapastinel (GLYX-13) and apimostinel (NRX-1074) are now viewed for the development of new drugs with antidepressant and analgesic effects without obvious psychotomimetic activities. 
Examples Aminocyclopropanecarboxylic acid (ACC) – synthetic glycine site partial agonist Cycloserine (D-cycloserine) – naturally occurring glycine site partial agonist found in Streptomyces orchidaceus HA-966 and L-687,414 – synthetic glycine site weak partial agonists Homoquinolinic acid – synthetic glutamate site partial agonist N-Methyl-D-aspartic acid (NMDA) – synthetic glutamate site partial agonist Positive allosteric modulators include: Zelquistinel (GATE-251) – synthetic novel site partial agonist Apimostinel (GATE-202) – synthetic novel site partial agonist Rapastinel (GLYX-13) – synthetic novel site partial agonist Antagonists Antagonists of the NMDA receptor are used as anesthetics for animals and sometimes humans, and are often used as recreational drugs due to their hallucinogenic properties, in addition to their unique effects at elevated dosages such as dissociation. When certain NMDA receptor antagonists are given to rodents in large doses, they can cause a form of brain damage called Olney's lesions. NMDA receptor antagonists that have been shown to induce Olney's lesions include ketamine, phencyclidine, and dextrorphan (a metabolite of dextromethorphan), as well as some NMDA receptor antagonists used only in research environments. So far, the published research on Olney's lesions is inconclusive in its occurrence upon human or monkey brain tissues with respect to an increase in the presence of NMDA receptor antagonists. Most NMDAR antagonists are uncompetitive or noncompetitive blockers of the channel pore or are antagonists of the glycine co-regulatory site rather than antagonists of the active/glutamate site. Examples Common agents in which NMDA receptor antagonism is the primary or a major mechanism of action: 4-Chlorokynurenine (AV-101) – glycine site antagonist; prodrug of 7-chlorokynurenic acid 7-Chlorokynurenic acid – glycine site antagonist Agmatine – endogenous polyamine site antagonist Argiotoxin-636 – naturally occurring dizocilpine or related site antagonist found in Argiope venom AP5 – glutamate site antagonist AP7 – glutamate site antagonist CGP-37849 – glutamate site antagonist D-serine - t-NMDA receptor antagonist / inverse co-agonist Delucemine (NPS-1506) – dizocilpine or related site antagonist; derived from argiotoxin-636 Dextromethorphan (DXM) – dizocilpine site antagonist; prodrug of dextrorphan Dextrorphan (DXO) – dizocilpine site antagonist Dexanabinol – dizocilpine-related site antagonist Diethyl ether – unknown site antagonist Diphenidine – dizocilpine site antagonist Dizocilpine (MK-801) – dizocilpine site antagonist Eliprodil – ifenprodil site antagonist Esketamine – dizocilpine site antagonist Hodgkinsine – undefined site antagonist Ifenprodil – ifenprodil site antagonist Kaitocephalin – naturally occurring glutamate site antagonist found in Eupenicillium shearii Ketamine – dizocilpine site antagonist Kynurenic acid – endogenous glycine site antagonist Lanicemine – low-trapping dizocilpine site antagonist LY-235959 – glutamate site antagonist Memantine – low-trapping dizocilpine site antagonist Methoxetamine – dizocilpine site antagonist Midafotel – glutamate site antagonist Nitrous oxide (N2O) – undefined site antagonist PEAQX – glutamate site antagonist Perzinfotel – glutamate site antagonist Phencyclidine (PCP) – dizocilpine site antagonist Phenylalanine - a naturally occurring amino acid, glycine site antagonist Psychotridine – undefined site antagonist Selfotel – glutamate site antagonist Tiletamine – dizocilpine site antagonist 
Traxoprodil – ifenprodil site antagonist Xenon – unknown site antagonist Some common agents in which weak NMDA receptor antagonism is a secondary or additional action include: Amantadine – an antiviral and antiparkinsonian drug; low-trapping dizocilpine site antagonist Atomoxetine – a norepinephrine reuptake inhibitor used to treat ADHD Dextropropoxyphene – an opioid analgesic Ethanol (alcohol) – a euphoriant, sedative, and anxiolytic used recreationally; unknown site antagonist Guaifenesin – an expectorant Huperzine A – a naturally occurring acetylcholinesterase inhibitor and potential antidementia agent Ibogaine – a naturally occurring hallucinogen and antiaddictive agent Ketobemidone – an opioid analgesic Methadone – an opioid analgesic Minocycline – an antibiotic Tramadol – an atypical opioid analgesic and serotonin releasing agent Nitromemantine The NMDA receptor is regulated via nitrosylation, and aminoadamantane can be used as a target-directed shuttle to bring nitric oxide (NO) close to the site within the NMDA receptor where it can nitrosylate and regulate the ion channel conductivity. An NO donor that can be used to decrease NMDA receptor activity is the alkyl nitrate nitroglycerin. Unlike many other NO donors, alkyl nitrates do not have potential NO-associated neurotoxic effects. Alkyl nitrates donate NO in the form of a nitro group, -NO2-, as seen in figure 7, which is a safe donor that avoids neurotoxicity. The nitro group must be targeted to the NMDA receptor, otherwise other effects of NO such as dilatation of blood vessels and consequent hypotension could result. Nitromemantine is a second-generation derivative of memantine; it reduces excitotoxicity mediated by overactivation of the glutamatergic system by blocking the NMDA receptor without sacrificing safety. Provisional studies in animal models show that nitromemantines are more effective than memantine as neuroprotectants, both in vitro and in vivo. Memantine and newer derivatives could become very important weapons in the fight against neuronal damage. Negative allosteric modulators include: 25-Hydroxycholesterol – endogenous weak negative allosteric modulator Conantokins – naturally occurring negative allosteric modulators of the polyamine site found in Conus geographus Modulators Examples The NMDA receptor is modulated by a number of endogenous and exogenous compounds: Aminoglycosides have been shown to have a similar effect to polyamines, and this may explain their neurotoxic effect. CDK5 regulates the amount of NR2B-containing NMDA receptors on the synaptic membrane, thus affecting synaptic plasticity. Polyamines do not directly activate NMDA receptors, but instead act to potentiate or inhibit glutamate-mediated responses. Reelin modulates NMDA function through Src family kinases and DAB1, significantly enhancing LTP in the hippocampus. Src kinase enhances NMDA receptor currents. Na+, K+ and Ca2+ not only pass through the NMDA receptor channel but also modulate the activity of NMDA receptors. Zn2+ and Cu2+ generally block NMDA current activity in a noncompetitive and voltage-independent manner. However, zinc may potentiate or inhibit the current depending on the neural activity. Pb2+ is a potent NMDAR antagonist. Presynaptic deficits resulting from Pb2+ exposure during synaptogenesis are mediated by disruption of NMDAR-dependent BDNF signaling.
Proteins of the major histocompatibility complex class I are endogenous negative regulators of NMDAR-mediated currents in the adult hippocampus, and are required for appropriate NMDAR-induced changes in AMPAR trafficking and NMDAR-dependent synaptic plasticity and learning and memory. The activity of NMDA receptors is also strikingly sensitive to the changes in pH, and partially inhibited by the ambient concentration of H+ under physiological conditions. The level of inhibition by H+ is greatly reduced in receptors containing the NR1a subtype, which contains the positively charged insert Exon 5. The effect of this insert may be mimicked by positively charged polyamines and aminoglycosides, explaining their mode of action. NMDA receptor function is also strongly regulated by chemical reduction and oxidation, via the so-called "redox modulatory site." Through this site, reductants dramatically enhance NMDA channel activity, whereas oxidants either reverse the effects of reductants or depress native responses. It is generally believed that NMDA receptors are modulated by endogenous redox agents such as glutathione, lipoic acid, and the essential nutrient pyrroloquinoline quinone. Development of NMDA receptor antagonists The main problem with the development of NMDA antagonists for neuroprotection is that physiological NMDA receptor activity is essential for normal neuronal function. Complete blockade of all NMDA receptor activity results in side effects such as hallucinations, agitation and anesthesia. To be clinically relevant, an NMDA receptor antagonist must limit its action to blockade of excessive activation, without limiting normal function of the receptor. Competitive NMDA receptor antagonists Competitive NMDA receptor antagonists, which were developed first, are not a good option because they compete and bind to the same site (NR2 subunit) on the receptor as the agonist, glutamate, and therefore block normal function also. They will block healthy areas of the brain prior to having an impact on pathological areas, because healthy areas contain lower levels of agonist than pathological areas. These antagonists can be displaced from the receptor by high concentration of glutamate which can exist under excitotoxic circumstances. Noncompetitive NMDA receptor antagonists Uncompetitive NMDA receptor antagonists block within the ion channel at the Mg2+ site (pore region) and prevent excessive influx of Ca2+. Noncompetitive antagonism refers to a type of block that an increased concentration of glutamate cannot overcome, and is dependent upon prior activation of the receptor by the agonist, i.e. it only enters the channel when it is opened by agonist. Memantine and related compounds Because of these adverse side effects of high affinity blockers, the search for clinically successful NMDA receptor antagonists for neurodegenerative diseases continued and focused on developing low affinity blockers. However the affinity could not be too low and dwell time not too short (as seen with Mg2+) where membrane depolarization relieves the block. The discovery was thereby development of uncompetitive antagonist with longer dwell time than Mg2+ in the channel but shorter than MK-801. That way the drug obtained would only block excessively open NMDA receptor associated channels but not normal neurotransmission. Memantine is that drug. It is a derivative of amantadine which was first an anti-influenza agent but was later discovered by coincidence to have efficacy in Parkinson's disease. 
Chemical structures of memantine and amantadine can be seen in figure 5. The compound was first thought to be dopaminergic or anticholinergic but was later found to be an NMDA receptor antagonist. Memantine is the first drug approved for treatment of severe and more advanced Alzheimer's disease, which for example anticholinergic drugs do not do much good for. It helps recovery of synaptic function and in that way improves impaired memory and learning. In 2015 memantine is also in trials for therapeutic importance in additional neurological disorders. Many second-generation memantine derivatives have been in development that may show even better neuroprotective effects, where the main thought is to use other safe but effective modulatory sites on the NMDA receptor in addition to its associated ion channel. Structure activity relationship (SAR) Memantine (1-amino-3,5-dimethyladamantane) is an aminoalkyl cyclohexane derivative and an atypical drug compound with non-planar, three dimensional tricyclic structure. Figure 8 shows SAR for aminoalkyl cyclohexane derivative. Memantine has several important features in its structure for its effectiveness: Three-ring structure with a bridgehead amine, -NH2 The -NH2 group is protonated under physiological pH of the body to carry a positive charge, -NH3+ Two methyl (CH3) side groups which serve to prolong the dwell time and increase stability as well as affinity for the NMDA receptor channel compared with amantadine (1-adamantanamine). Despite the small structural difference between memantine and amantadine, two adamantane derivatives, the affinity for the binding site of NR1/NR2B subunit is much greater for memantine. In patch-clamp measurements memantine has an IC50 of (2.3+0.3) μM while amantadine has an IC50 of (71.0+11.1) μM. The binding site with the highest affinity is called the dominant binding site. It involves a connection between the amine group of memantine and the NR1-N161 binding pocket of the NR1/NR2B subunit. The methyl side groups play an important role in increasing the affinity to the open NMDA receptor channels and making it a much better neuroprotective drug than amantadine. The binding pockets for the methyl groups are considered to be at the NR1-A645 and NR2B-A644 of the NR1/NR2B. The binding pockets are shown in figure 2. Memantine binds at or near to the Mg2+ site inside the NMDA receptor associated channel. The -NH2 group on memantine, which is protonated under physiological pH of the body, represents the region that binds at or near to the Mg2+ site. Adding two methyl groups to the -N on the memantine structure has shown to decrease affinity, giving an IC50 value of (28.4+1.4) μM. Second generation derivative of memantine; nitromemantine Several derivatives of Nitromemantine, a second-generation derivative of memantine, have been synthesized in order to perform a detailed structure activity relationship (SAR) of these novel drugs. One class, containing a nitro (NO2) group opposite to the bridgehead amine (NH2), showed a promising outcome. Nitromemantine utilizes memantine binding site on the NMDA receptor to target the NOx (X= 1 or 2) group for interaction with the S- nitrosylation/redox site external to the memantine binding site. 
In these nitromemantine derivatives, lengthening the side chains of memantine compensates for the reduced drug affinity in the channel associated with the addition of the –ONO2 group. Therapeutic application Excitotoxicity is implicated in some neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease, Huntington's disease and amyotrophic lateral sclerosis. Blocking of NMDA receptors could therefore, in theory, be useful in treating such diseases. It is, however, important to preserve physiological NMDA receptor activity while trying to block its excessive, excitotoxic activity. This can possibly be achieved by uncompetitive antagonists, which block the receptor's ion channel when it is excessively open. Memantine is an example of an uncompetitive NMDA receptor antagonist with an approved indication for the neurodegenerative disease Alzheimer's disease. As of 2015, memantine was still in clinical trials for additional neurological diseases. Receptor modulation The NMDA receptor is a non-specific cation channel that can allow the passage of Ca2+ and Na+ into the cell and K+ out of the cell. The excitatory postsynaptic potential (EPSP) produced by activation of an NMDA receptor increases the concentration of Ca2+ in the cell. The Ca2+ can in turn function as a second messenger in various signaling pathways. However, the NMDA receptor cation channel is blocked by Mg2+ at resting membrane potential. Magnesium unblock is not instantaneous; to unblock all available channels, the postsynaptic cell must be depolarized for a sufficiently long period of time (on the scale of milliseconds). Therefore, the NMDA receptor functions as a "molecular coincidence detector". Its ion channel opens only when the following two conditions are met: glutamate is bound to the receptor, and the postsynaptic cell is depolarized (which removes the Mg2+ blocking the channel). This property of the NMDA receptor explains many aspects of long-term potentiation (LTP) and synaptic plasticity. At resting membrane potential, external magnesium ions enter the open NMDA receptor pore and bind within it, preventing further ion permeation. Extracellular magnesium is present at millimolar concentrations, whereas intracellular magnesium is in the micromolar range, so the block at negative membrane potentials is dominated by external Mg2+. NMDA receptors are modulated by a number of endogenous and exogenous compounds and play a key role in a wide range of physiological (e.g., memory) and pathological processes (e.g., excitotoxicity). Magnesium blocks the NMDA channel at negative membrane potentials but potentiates NMDA-induced responses at positive membrane potentials. Calcium, potassium, and sodium ions passing through the channel also modulate NMDAR activity. Changes in H+ concentration can partially inhibit the activity of NMDA receptors under different physiological conditions. Clinical significance NMDAR antagonists like ketamine, esketamine, tiletamine, phencyclidine, nitrous oxide, and xenon are used as general anesthetics. These and similar drugs like dextromethorphan and methoxetamine also produce dissociative, hallucinogenic, and euphoriant effects and are used as recreational drugs. NMDAR-targeted compounds, including ketamine, esketamine (JNJ-54135419), rapastinel (GLYX-13), apimostinel (NRX-1074), zelquistinel (AGN-241751), 4-chlorokynurenine (AV-101), and rislenemdaz (CERC-301, MK-0657), are under development for the treatment of mood disorders, including major depressive disorder and treatment-resistant depression.
In addition, ketamine is already employed for this purpose as an off-label therapy in some clinics. Research suggests that tianeptine produces antidepressant effects through indirect alteration and inhibition of glutamate receptor activity and release of BDNF, in turn affecting neural plasticity. Tianeptine also acts on the NMDA and AMPA receptors. In animal models, tianeptine inhibits the pathological stress-induced changes in glutamatergic neurotransmission in the amygdala and hippocampus. Memantine, a low-trapping NMDAR antagonist, is approved in the United States and Europe for the treatment of moderate-to-severe Alzheimer's disease, and has now received a limited recommendation by the UK's National Institute for Health and Care Excellence for patients who fail other treatment options. Cochlear NMDARs are the target of intense research to find pharmacological solutions to treat tinnitus. NMDARs are associated with a rare autoimmune disease, anti-NMDA receptor encephalitis (also known as NMDAR encephalitis), that usually occurs due to cross-reactivity of antibodies produced by the immune system against ectopic brain tissues, such as those found in teratoma. These are known as anti-glutamate receptor antibodies. Compared to dopaminergic stimulants like methamphetamine, the NMDAR antagonist phencyclidine can produce a wider range of symptoms that resemble schizophrenia in healthy volunteers, which has led to the glutamate hypothesis of schizophrenia. Experiments in which rodents are treated with NMDA receptor antagonists are today the most common model for testing novel schizophrenia therapies or exploring the exact mechanism of drugs already approved for the treatment of schizophrenia. NMDAR antagonists, for instance eliprodil, gavestinel, licostinel, and selfotel, have been extensively investigated for the treatment of excitotoxicity-mediated neurotoxicity in situations like ischemic stroke and traumatic brain injury, but were unsuccessful in clinical trials, in which they were used in small doses to avoid sedation. NMDAR antagonists can nevertheless block spreading depolarizations in animals and in patients with brain injury; this use has not yet been tested in clinical trials. See also Calcium/calmodulin-dependent protein kinases References External links NMDA receptor pharmacology Motor Discoordination Results from Combined Gene Disruption of the NMDA Receptor NR2A and NR2C Subunits, But Not from Single Disruption of the NR2A or NR2C Subunit Drosophila NMDA receptor 1 - The Interactive Fly Cell signaling Glutamate (neurotransmitter) Ion channels Ionotropic glutamate receptors Molecular neuroscience NMDA receptor antagonists
NMDA receptor
[ "Chemistry" ]
11,113
[ "Molecular neuroscience", "Neurochemistry", "Ion channels", "Molecular biology" ]
374,372
https://en.wikipedia.org/wiki/Noria
A noria (, nā‘ūra, plural nawāʿīr, from , nā‘orā, lit. "growler") is a hydropowered scoop wheel used to lift water into a small aqueduct, either for the purpose of irrigation or to supply water to cities and villages. Name and meaning Etymology The English word noria is derived via Spanish noria from Arabic nā‘ūra (ناعورة), which comes from the Arabic verb meaning to "groan" or "grunt", in reference to the sound it made when turning. Noria versus saqiyah The term noria is commonly used for devices which use the power of moving water to turn the wheel. For devices powered by animals, the usual term is saqiyah or saqiya. Other types of similar devices are grouped under the name of chain pumps. However, the names of traditional water-raising devices used in the Middle East, India, Spain and other areas are often used loosely and overlappingly, or vary depending on region. Al-Jazari's book on mechanical devices, for example, groups the water-driven wheel and several other types of water-lifting devices under the general term saqiya. In Spain, by contrast, the term noria is used for both types of wheels, whether powered by animals or water current. Function The noria performs the function of moving water from a lower elevation to a higher elevation, using the energy derived from the flow of a river. It consists of a large, narrow undershot water wheel whose rim is made up of a series of containers or compartments which lift water from the river to an aqueduct at the top of the wheel. Its concept is similar to the modern hydraulic ram, which also uses the power of flowing water to pump some of the water out of the river. Traditional norias may have pots, buckets or tubes attached directly to the periphery of the wheel, in effect sakias powered by flowing water rather than by animals or motors. For some the buckets themselves form the driving surfaces, for most the buckets are separate to the water wheels and attached on one side. More modern types can be built up compartments. All types are configured to discharge the lifted water sideways to a channel. For a modern noria in Steffisburg, Switzerland, the designers have uniquely connected the two functional wheels not directly but via a pair of cog wheels. This allows individual variation of speeds, diameters, and water levels. Unlike the water wheels found in watermills, a noria does not provide mechanical power to any other process. A few historical norias were hybrids, consisting of waterwheels assisted secondarily by animal power. There is at least one known instance where a noria feeds seawater into a saltern. History Paddle-driven water-lifting wheels had appeared in ancient Egypt by the 4th century BC. According to John Peter Oleson, both the compartmented wheel and the hydraulic noria appeared in Egypt by the 4th century BC, with the saqiyah being invented there a century later. This is supported by archeological finds in the Faiyum, where the oldest archeological evidence of a water wheel has been found, in the form of a saqiyah dating back to the 3rd century BC. A papyrus dating to the 2nd century BC also found in the Faiyum mentions a water wheel used for irrigation, a 2nd-century BC fresco found at Alexandria depicts a compartmented saqiyah, and the writings of Callixenus of Rhodes mention the use of a saqiyah in the Ptolemaic Kingdom during the reign of Pharaoh Ptolemy IV Philopator in the late 3rd century BC. 
The undershot water wheel and overshot water wheel, both animal- and water-driven, and with either a compartmented body (Latin tympanum) or a compartmented rim, were used by Hellenistic engineers between the 3rd and 2nd century BC. In the 1st century BC, the Roman architect Vitruvius described the function of the noria. Around 300, the Romans replaced the wooden compartments with separate, attached ceramic pots that were tied to the outside of an open-framed wheel, thereby creating the noria. During the Islamic Golden Age, norias were adopted from classical antiquity by Muslim engineers, who made improvements to the noria. For example, the flywheel mechanism, used to smooth out the delivery of power from a driving device to a driven machine, was invented by ibn Bassal (fl. 1038–1075) of al-Andalus, who pioneered the use of the flywheel in the noria and saqiyah. In 1206, Ismail al-Jazari introduced the use of the crank in the noria and saqiya, and the concept of minimizing intermittency was implied for the purpose of maximising their efficiency. Muslim engineers used norias to discharge water into aqueducts which carried the water to towns and fields. The norias of Hama, for example, were up to about 20 metres in diameter and are still used in modern times (although currently only serving aesthetic purposes). The largest wheel has 120 water collection compartments and could raise more than 95 litres of water per minute. In the 10th century, Muhammad ibn Zakariya al-Razi's Al-Hawi describes a noria in Iraq that could lift as much as 153,000 litres per hour, or 2,550 litres per minute. This is comparable to the output of modern norias in East Asia, which can lift up to 288,000 litres per hour, or 4,800 litres per minute. In the late 13th century the Marinid sultan Abu Yaqub Yusuf built an enormous noria, sometimes referred to as the "Grand Noria", in order to provide water for the vast Mosara Garden he created in Fez, Morocco. Its construction began in 1286 and was finished the next year. The noria, designed by an Andalusian engineer named Ibn al-Hajj, measured 26 metres in diameter and 2 metres wide. The wheel was made of wood but covered in copper, fitted into a stone structure adjoined to a nearby city gate. After the decline of the Marinids both the gardens and the noria fell into neglect; the wheel of the noria reportedly disappeared in 1888, leaving only remains of the stone base. Numerous norias were also built in Al-Andalus, during the Islamic period of the Iberian Peninsula (8th–15th centuries), and continued to be built by Christian Spanish engineers afterwards. The most famous are the Albolafia in Cordoba (of uncertain date, partly reconstructed today), along the Guadalquivir River, and a former noria in Toledo, along the Tagus River. According to al-Idrisi, the Toledo noria was especially large and could raise water from the river to an aqueduct over 40 meters above it, which then supplied water to the city. Norias and similar devices were also used on a vast scale in some parts of Spain for agricultural purposes. The rice plantations of Valencia were said to have 8,000 norias, while Mallorca had over 4,000 animal-driven saqiyas which were in use up until the beginning of the 20th century. The Alcantarilla Noria near Murcia, a noria built in the 15th century under Spanish Christian rule, is one of the better-known examples to have survived to the present day.
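To put the lifting rates quoted above in perspective, the useful hydraulic power of a noria is just the weight of water raised per unit time multiplied by the lift height. The sketch below assumes a 20 m lift purely for illustration; it is not a historical measurement.

```python
# Useful hydraulic power of a water-lifting wheel: P = rho * g * Q * h.
# The flow rates are the figures quoted above; the 20 m lift height is an
# assumed illustrative value, not a recorded dimension.
RHO = 1000.0   # kg/m^3, density of water
G = 9.81       # m/s^2, gravitational acceleration

def lift_power_watts(litres_per_minute, lift_height_m):
    flow_m3_per_s = litres_per_minute / 1000.0 / 60.0
    return RHO * G * flow_m3_per_s * lift_height_m

for name, lpm in [("Hama, largest wheel", 95), ("al-Razi's Iraqi noria", 2550)]:
    print(f"{name}: ~{lift_power_watts(lpm, 20):.0f} W at an assumed 20 m lift")
```

Even the most productive traditional wheels therefore delivered on the order of hundreds of watts to a few kilowatts of useful lifting power, all of it extracted from the current of the river.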
References Sources External links Spanish norias in the Region of Murcia Photos of the norias of Hama in Syria Aqueducts Pumps Irrigation Articles containing video clips Ancient Egyptian technology Egyptian inventions Watermills Water wheels
Noria
[ "Physics", "Chemistry" ]
1,560
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
374,388
https://en.wikipedia.org/wiki/Uranium%20hexafluoride
Uranium hexafluoride, sometimes called hex, is an inorganic compound with the formula UF6. Uranium hexafluoride is a volatile, toxic white solid that is used in the process of enriching uranium, which produces fuel for nuclear reactors and nuclear weapons. Preparation Uranium dioxide is converted with hydrofluoric acid (HF) to uranium tetrafluoride: UO2 + 4 HF → UF4 + 2 H2O. In samples contaminated with uranium trioxide, the oxyfluoride is produced in the HF step: UO3 + 2 HF → UO2F2 + H2O. The resulting UF4 is subsequently oxidized with fluorine to give the hexafluoride: UF4 + F2 → UF6. Properties Physical properties At atmospheric pressure, UF6 sublimes at 56.5 °C. The solid-state structure was determined by neutron diffraction at 77 K and 293 K. Chemical properties UF6 reacts with water, releasing hydrofluoric acid. The compound reacts with aluminium, forming a surface layer of AlF3 that resists any further reaction from the compound. Uranium hexafluoride is a mild oxidant. It is a Lewis acid, as evidenced by its binding of fluoride to form heptafluorouranate(VI), UF7−. Polymeric uranium(VI) fluorides containing organic cations have been isolated and characterized by X-ray diffraction. Application in the fuel cycle As one of the most volatile compounds of uranium, uranium hexafluoride is relatively convenient to process and is used in both of the main uranium enrichment methods, namely gaseous diffusion and the gas centrifuge method. Since the triple point of UF6, at 64 °C (147 °F; 337 K) and 152 kPa (22 psi; 1.5 atm), is close to ambient conditions, phase transitions can be achieved with little thermodynamic work. Fluorine has only a single naturally occurring stable isotope, so isotopologues of UF6 differ in their molecular weight based solely on the uranium isotope present. This difference is the basis for the physical separation of isotopes in enrichment. All the other uranium fluorides are nonvolatile solids that are coordination polymers. The conversion factor from the mass of UF6 ("hex") to "U mass" is 0.676. Gaseous diffusion requires about 60 times as much energy as the gas centrifuge process: gaseous diffusion-produced nuclear fuel produces 25 times more energy than is used in the diffusion process, while centrifuge-produced fuel produces 1,500 times more energy than is used in the centrifuge process. In addition to its use in enrichment, uranium hexafluoride has been used in an advanced reprocessing method (fluoride volatility), which was developed in the Czech Republic. In this process, spent nuclear fuel is treated with fluorine gas to transform the oxides or elemental metals into a mixture of fluorides. This mixture is then distilled to separate the different classes of material. Some fission products form nonvolatile fluorides which remain as solids and can then either be prepared for storage as nuclear waste or further processed either by solvation-based methods or electrochemically. Uranium enrichment produces large quantities of depleted uranium hexafluoride (DUF6) as a waste product. The long-term storage of DUF6 presents environmental, health, and safety risks because of its chemical instability. When DUF6 is exposed to moist air, it reacts with the water in the air to produce UO2F2 (uranyl fluoride) and HF (hydrogen fluoride), both of which are highly corrosive and toxic. In 2005, 686,500 tonnes of DUF6 was housed in 57,122 storage cylinders located near Portsmouth, Ohio; Oak Ridge, Tennessee; and Paducah, Kentucky. Storage cylinders must be regularly inspected for signs of corrosion and leaks. The estimated lifetime of the steel cylinders is measured in decades.
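A quick calculation makes two of the points above concrete: the small mass difference between 235UF6 and 238UF6 that enrichment exploits, and the quoted hex-to-uranium conversion factor of 0.676. The molar masses below are standard values rounded for illustration.

```python
# Molar masses in g/mol, rounded standard values used only for illustration.
M_U235, M_U238, M_F = 235.04, 238.05, 19.00

m_light = M_U235 + 6 * M_F   # molar mass of 235-UF6
m_heavy = M_U238 + 6 * M_F   # molar mass of 238-UF6

print(f"235UF6: {m_light:.2f} g/mol   238UF6: {m_heavy:.2f} g/mol")
print(f"relative mass difference exploited by enrichment: {(m_heavy - m_light) / m_heavy:.3%}")
print(f"uranium mass fraction of 238UF6 (the 'hex to U mass' factor): {M_U238 / m_heavy:.3f}")
```

The mass difference is under one percent, which is all the diffusion and centrifuge processes have to work with, and the uranium mass fraction reproduces the 0.676 conversion factor quoted above.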
Accidents and disposal There have been several accidents involving uranium hexafluoride in the US, including a cylinder-filling accident and material release at the Sequoyah Fuels Corporation in 1986, where an estimated 29,500 pounds of gaseous UF6 escaped. The U.S. government has been converting DUF6 to solid uranium oxides for disposal. Such disposal of the entire DUF6 stockpile could cost anywhere from $15 million to $450 million. References Further reading Gmelins Handbuch der anorganischen Chemie, System Nr. 55, Uran, Teil A, p. 121–123. Gmelins Handbuch der anorganischen Chemie, System Nr. 55, Uran, Teil C 8, p. 71–163. R. DeWitt: Uranium hexafluoride: A survey of the physico-chemical properties, Technical report, GAT-280; Goodyear Atomic Corp., Portsmouth, Ohio; 12 August 1960. Ingmar Grenthe, Janusz Drożdżynński, Takeo Fujino, Edgar C. Buck, Thomas E. Albrecht-Schmitt, Stephen F. Wolf: Uranium, in: Lester R. Morss, Norman M. Edelstein, Jean Fuger (eds.): The Chemistry of the Actinide and Transactinide Elements, Springer, Dordrecht 2006, p. 253–698 (p. 530–531, 557–564). US Patent 2535572: Preparation of UF6; 26 December 1950. US Patent 5723837: Uranium Hexafluoride Purification; 3 March 1998. External links Simon Cotton (Uppingham School, Rutland, UK): Uranium Hexafluoride. Uranium Hexafluoride (UF6) – Physical and chemical properties of UF6, and its use in uranium processing – Uranium Hexafluoride and Its Properties Uranium Hexafluoride at WebElements Import of Western depleted uranium hexafluoride (uranium tails) to Russia Actinide halides Hexafluorides Nuclear materials Octahedral compounds Uranium(VI) compounds
Uranium hexafluoride
[ "Physics" ]
1,258
[ "Materials", "Nuclear materials", "Matter" ]
374,581
https://en.wikipedia.org/wiki/Oligopeptide
An oligopeptide (oligo-, "a few") is a peptide consisting of two to twenty amino acids, including dipeptides, tripeptides, tetrapeptides, and other polypeptides. Some of the major classes of naturally occurring oligopeptides include aeruginosins, cyanopeptolins, microcystins, microviridins, microginins, anabaenopeptins, and cyclamides. Microcystins are best studied because of their potential toxicity impact in drinking water. A review of some oligopeptides found that the largest class are the cyanopeptolins (40.1%), followed by microcystins (13.4%). Production Oligopeptide classes are produced by nonribosomal peptide synthetases (NRPS), except that cyclamides and microviridins are synthesized through ribosomal pathways. Examples Examples of oligopeptides include: Amanitins - Cyclic peptides taken from carpophores of several different mushroom species. They are potent inhibitors of RNA polymerases in most eukaryotic species; they prevent the production of mRNA and protein synthesis. These peptides are important in the study of transcription. Alpha-amanitin is the main toxin from the species Amanita phalloides, poisonous if ingested by humans or animals. Antipain - An oligopeptide produced by various bacteria which acts as a protease inhibitor. Ceruletide - A specific decapeptide found in the skin of Hyla caerulea, the Australian green tree frog. Ceruletide is very similar in action and composition to cholecystokinin. It stimulates gastric, biliary, and pancreatic secretion; and certain smooth muscle. It is used to induce pancreatitis in experimental animal models. Glutathione - A tripeptide with many roles in cells. It conjugates to drugs to make them more soluble for excretion, is a cofactor for some enzymes, is involved in protein disulfide bond rearrangement and reduces peroxides. Leupeptins - A group of acylated oligopeptides produced by Actinomycetes that function as protease inhibitors. They have been known to inhibit to varying degrees trypsin, plasmin, kallikreins, papain and the cathepsins. Netropsin - A basic oligopeptide isolated from Streptomyces netropsis. It is cytotoxic and its strong, specific binding to A-T areas of DNA is useful to genetics research. Pepstatins - N-acylated oligopeptides isolated from culture filtrates of Actinomycetes, which act specifically to inhibit acid proteases such as pepsin and renin. Peptide T - N-(N-(N(2)-(N-(N-(N-(N-D-Alanyl L-seryl)-L-threonyl)-L-threonyl) L-threonyl)-L-asparaginyl)-L-tyrosyl) L-threonine. Octapeptide sharing sequence homology with HIV envelope protein gp120. It may be useful as an antiviral agent in AIDS therapy. The core pentapeptide sequence, TTNYT, consisting of amino acids 4-8 in peptide T, is the HIV envelope sequence required for attachment to the CD4 receptor. Phalloidin - A very toxic polypeptide isolated mainly from Amanita phalloides (Agaricaceae) or death cap; causes fatal liver, kidney and CNS damage in mushroom poisoning; used in the study of liver damage. Teprotide - A man-made nonapeptide (Pyr-Trp-Pro-Arg-Pro-Gln-Ile-Pro-Pro) which is identical to a peptide from the venom of the snake Bothrops jararaca. It inhibits kininase II and angiotensin I and has been proposed as an antihypertensive agent. Tuftsin - N(2)-((1-(N(2)-L-Threonyl)-L-lysyl)-L-prolyl)-L-arginine. A tetrapeptide manufactured in the spleen by enzymatic cleavage of a leukophilic gamma-globulin. It stimulates the phagocytic activity of blood polymorphonuclear leukocytes and neutrophils in particular.
The peptide is located in the Fd fragment of the gamma-globulin molecule. See also Micropeptide Oligoester Oligomer Oligopeptidase Peptide synthesis Protease References External links Structural Biochemistry/Proteins/Amino Acids (Wikibooks) Peptides
Oligopeptide
[ "Chemistry" ]
1,078
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
375,033
https://en.wikipedia.org/wiki/Trigonometric%20substitution
In mathematics, a trigonometric substitution replaces a trigonometric function for another expression. In calculus, trigonometric substitutions are a technique for evaluating integrals. In this case, an expression involving a radical function is replaced with a trigonometric one. Trigonometric identities may help simplify the answer. Like other methods of integration by substitution, when evaluating a definite integral, it may be simpler to completely deduce the antiderivative before applying the boundaries of integration. Case I: Integrands containing a2 − x2 Let and use the identity Examples of Case I Example 1 In the integral we may use Then, The above step requires that and We can choose to be the principal root of and impose the restriction by using the inverse sine function. For a definite integral, one must figure out how the bounds of integration change. For example, as goes from to then goes from to so goes from to Then, Some care is needed when picking the bounds. Because integration above requires that , can only go from to Neglecting this restriction, one might have picked to go from to which would have resulted in the negative of the actual value. Alternatively, fully evaluate the indefinite integrals before applying the boundary conditions. In that case, the antiderivative gives as before. Example 2 The integral may be evaluated by letting where so that and by the range of arcsine, so that and Then, For a definite integral, the bounds change once the substitution is performed and are determined using the equation with values in the range Alternatively, apply the boundary terms directly to the formula for the antiderivative. For example, the definite integral may be evaluated by substituting with the bounds determined using Because and On the other hand, direct application of the boundary terms to the previously obtained formula for the antiderivative yields as before. Case II: Integrands containing a2 + x2 Let and use the identity Examples of Case II Example 1 In the integral we may write so that the integral becomes provided For a definite integral, the bounds change once the substitution is performed and are determined using the equation with values in the range Alternatively, apply the boundary terms directly to the formula for the antiderivative. For example, the definite integral may be evaluated by substituting with the bounds determined using Since and Meanwhile, direct application of the boundary terms to the formula for the antiderivative yields same as before. Example 2 The integral may be evaluated by letting where so that and by the range of arctangent, so that and Then, The integral of secant cubed may be evaluated using integration by parts. As a result, Case III: Integrands containing x2 − a2 Let and use the identity Examples of Case III Integrals such as can also be evaluated by partial fractions rather than trigonometric substitutions. However, the integral cannot. In this case, an appropriate substitution is: where so that and by assuming so that and Then, One may evaluate the integral of the secant function by multiplying the numerator and denominator by and the integral of secant cubed by parts. As a result, When which happens when given the range of arcsecant, meaning instead in that case. Substitutions that eliminate trigonometric functions Substitution can be used to remove trigonometric functions. For instance, The last substitution is known as the Weierstrass substitution, which makes use of tangent half-angle formulas. 
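The substitutions conventionally used in the three cases above, together with the identities they rely on, can be summarized as follows; this is a standard restatement of the technique rather than a quotation of the worked examples.

```latex
% Standard substitutions for the three cases (conventional summary).
\begin{aligned}
\text{Case I } (a^2 - x^2):\quad & x = a\sin\theta, & dx &= a\cos\theta\,d\theta, & 1-\sin^2\theta &= \cos^2\theta,\\
\text{Case II } (a^2 + x^2):\quad & x = a\tan\theta, & dx &= a\sec^2\theta\,d\theta, & 1+\tan^2\theta &= \sec^2\theta,\\
\text{Case III } (x^2 - a^2):\quad & x = a\sec\theta, & dx &= a\sec\theta\tan\theta\,d\theta, & \sec^2\theta - 1 &= \tan^2\theta.
\end{aligned}
```

With the Case I substitution, for instance, \(\int \frac{dx}{\sqrt{a^2 - x^2}} = \int \frac{a\cos\theta\,d\theta}{a\cos\theta} = \theta + C = \arcsin\frac{x}{a} + C\), provided \(a > 0\) and \(\theta\) is restricted to \([-\pi/2, \pi/2]\).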
For example, Hyperbolic substitution Substitutions of hyperbolic functions can also be used to simplify integrals. For example, to integrate , introduce the substitution (and hence ), then use the identity to find: If desired, this result may be further transformed using other identities, such as using the relation : See also Integration by substitution Weierstrass substitution Euler substitution References Integral calculus Trigonometry
Trigonometric substitution
[ "Mathematics" ]
787
[ "Integral calculus", "Calculus" ]
375,140
https://en.wikipedia.org/wiki/Room-temperature%20superconductor
A room-temperature superconductor is a hypothetical material capable of displaying superconductivity above 0 °C (273 K), operating temperatures which are commonly encountered in everyday settings. As of 2023, the material with the highest accepted superconducting temperature was highly pressurized lanthanum decahydride, whose transition temperature is approximately 250 K (−23 °C) at 200 GPa. At standard atmospheric pressure, cuprates currently hold the temperature record, manifesting superconductivity at temperatures as high as about 138 K. Over time, researchers have consistently encountered superconductivity at temperatures previously considered unexpected or impossible, challenging the notion that achieving superconductivity at room temperature was infeasible. The concept of "near-room temperature" transient effects has been a subject of discussion since the early 1950s. Reports Since the discovery of high-temperature superconductors ("high" being temperatures above 77 K (−196 °C), the boiling point of liquid nitrogen), several materials have been claimed, although not confirmed, to be room-temperature superconductors. Corroborated studies In 2014, an article published in Nature suggested that some materials, notably YBCO (yttrium barium copper oxide), could be made to briefly superconduct at room temperature using infrared laser pulses. In 2015, an article published in Nature by researchers of the Otto Hahn Institute suggested that, under certain conditions such as extreme pressure, hydrogen sulfide transitioned to a superconductive form at 150 GPa (around 1.5 million times atmospheric pressure) in a diamond anvil cell. The critical temperature is 203 K (−70 °C), which would be the highest Tc ever recorded, and their research suggests that other hydrogen compounds could superconduct at even higher temperatures. Also in 2018, researchers noted a possible superconducting phase at 260 K in lanthanum decahydride (LaH10) at elevated (200 GPa) pressure. In 2019, the material with the highest accepted superconducting temperature was highly pressurized lanthanum decahydride, whose transition temperature is approximately 250 K (−23 °C). Uncorroborated studies In 1993 and 1997, Michel Laguës and his team published evidence of room temperature superconductivity observed on MBE deposited ultrathin nanostructures of BiSrCaCuO. These compounds exhibit extremely low resistivities orders of magnitude below that of copper, strongly non-linear I(V) characteristics and hysteretic I(V) behavior. In 2000, while extracting electrons from diamond during ion implantation work, Johan Prins claimed to have observed a phenomenon that he explained as room-temperature superconductivity within a phase formed on the surface of oxygen-doped type IIa diamonds in a vacuum. In 2003, a group of researchers published results on high-temperature superconductivity in palladium hydride (PdHx) and an explanation in 2004. In 2007, the same group published results suggesting a superconducting transition temperature of 260 K, with transition temperature increasing as the density of hydrogen inside the palladium lattice increases. This has not been corroborated by other groups. In March 2021, an announcement reported superconductivity in a layered yttrium-palladium-hydron material at 262 K and a pressure of 187 GPa. Palladium may act as a hydrogen migration catalyst in the material.
On 31 December 2023, "Global Room-Temperature Superconductivity in Graphite" was published in the journal Advanced Quantum Technologies, claiming to demonstrate superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects. Retracted or unreliable studies In 2012, an Advanced Materials article claimed superconducting behavior of graphite powder after treatment with pure water at temperatures as high as 300 K and above. So far, the authors have not been able to demonstrate the occurrence of a clear Meissner phase and the vanishing of the material's resistance. In 2018, Dev Kumar Thapa and Anshu Pandey from the Solid State and Structural Chemistry Unit of the Indian Institute of Science, Bangalore claimed the observation of superconductivity at ambient pressure and room temperature in films and pellets of a nanostructured material that is composed of silver particles embedded in a gold matrix. Due to similar noise patterns of supposedly independent plots and the publication's lack of peer review, the results have been called into question. Although the researchers repeated their findings in a later paper in 2019, this claim is yet to be verified and confirmed. Since 2016, a team led by Ranga P. Dias has produced a number of retracted or challenged papers in this field. In 2016 they claimed observation of solid metallic hydrogen in 2016. In October 2020, they reported room-temperature superconductivity at 288 K (at 15 °C) in a carbonaceous sulfur hydride at 267 GPa, triggered into crystallisation via green laser. This was retracted in 2022 after flaws in their statistical methods were identified and led to questioning of other data. In 2023 he reported superconductivity at 294 K and 1 GPa in nitrogen-doped lutetium hydride, in a paper widely met with skepticism about its methods and data. Later in 2023 he was found to have plagiarized parts of his dissertation from someone else's thesis, and to have fabricated data in a paper on manganese disulfide, which was retracted. The lutetium hydride paper was also retracted. The first attempts to replicate those results failed. On July 23, 2023, a Korean team claimed that Cu-doped lead apatite, which they named LK-99, was superconducting up to 370 K, though they had not observed this fully. They posted two preprints to arXiv, published a paper in a journal, and submitted a patent application. The reported observations were received with skepticism by experts due to the lack of clear signatures of superconductivity. The story was widely discussed on social media, leading to a large number of attempted replications, none of which had more than qualified success. By mid-August, a series of papers from major labs provided significant evidence that LK-99 was not a superconductor, finding resistivity much higher than copper, and explaining observed effects such as magnetic response and resistance drops in terms of impurities and ferromagnetism in the material. Theories Metallic hydrogen and phonon-mediated pairing Theoretical work by British physicist Neil Ashcroft predicted that solid metallic hydrogen at extremely high pressure (~500 GPa) should become superconducting at approximately room temperature, due to its extremely high speed of sound and expected strong coupling between the conduction electrons and the lattice-vibration phonons. A team at Harvard University has claimed to make metallic hydrogen and reports a pressure of 495 GPa. 
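Ashcroft's argument can be made semi-quantitative with the McMillan approximation for the transition temperature of a phonon-mediated superconductor. The parameter values in the sketch below are illustrative assumptions, not measured properties of metallic hydrogen, and the McMillan form itself is only approximate at strong coupling.

```python
import math

def mcmillan_tc(theta_debye_k, lam, mu_star=0.10):
    """McMillan estimate of Tc for a phonon-mediated superconductor.

    theta_debye_k : Debye temperature in K (sets the phonon energy scale)
    lam           : electron-phonon coupling constant
    mu_star       : Coulomb pseudopotential (0.10-0.13 is a typical assumption)
    """
    return (theta_debye_k / 1.45) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

# Illustrative values only: an ordinary metal versus a hydrogen-rich lattice
# whose light atoms give a much higher phonon (Debye) temperature.
print(f"theta_D =  400 K, lambda = 0.8 -> Tc ~ {mcmillan_tc(400, 0.8):6.1f} K")
print(f"theta_D = 1500 K, lambda = 2.0 -> Tc ~ {mcmillan_tc(1500, 2.0):6.1f} K")
```

The expression underestimates Tc at very strong coupling (Allen–Dynes corrections are then preferred), but it already shows why light atoms, high phonon frequencies, and strong electron–phonon coupling push phonon-mediated superconductivity toward room temperature.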
The exact critical temperature of this metallic hydrogen sample has not yet been determined, but weak signs of a possible Meissner effect and changes in magnetic susceptibility at 250 K may have appeared in early magnetometer tests on an original, now-lost sample. A French team is working with doughnut shapes rather than planar samples at the diamond culet tips. Organic polymers and exciton-mediated pairing In 1964, William A. Little proposed the possibility of high-temperature superconductivity in organic polymers. Other hydrides In 2004, Ashcroft returned to his idea and suggested that hydrogen-rich compounds can become metallic and superconducting at lower pressures than hydrogen. More specifically, he proposed a novel way to pre-compress hydrogen chemically by examining IVa hydrides. In 2014–2015, conventional superconductivity was observed in a sulfur hydride system (H2S or H3S) at 190 K to 203 K at pressures of up to 200 GPa. In 2016, research suggested a link between palladium hydride containing small impurities of sulfur nanoparticles as a plausible explanation for the anomalous transient resistance drops seen during some experiments, and hydrogen absorption by cuprates was suggested, in light of the 2015 sulfur hydride results, as a plausible explanation for transient resistance drops or "USO" noticed in the 1990s by Chu et al. during research after the discovery of YBCO. It has been predicted that ScH12 (scandium dodecahydride) would exhibit superconductivity at room temperature under a pressure expected not to exceed 100 GPa. Some research efforts are currently moving towards ternary superhydrides, where it has been predicted that Li2MgH16 (dilithium magnesium hexadecahydride) would have a Tc above room temperature at 250 GPa. Spin coupling It is also possible that if the bipolaron explanation is correct, a normally semiconducting material can transition under some conditions into a superconductor if a critical level of alternating spin coupling in a single plane within the lattice is exceeded; this may have been documented in very early experiments from 1986. The best analogy here would be anisotropic magnetoresistance, but in this case the outcome is a drop to zero rather than a decrease within a very narrow temperature range for the compounds tested, similar to "re-entrant superconductivity". In 2018, support was found for electrons having anomalous 3/2 spin states in YPtBi. Though YPtBi is a relatively low temperature superconductor, this does suggest another approach to creating superconductors. "Quantum bipolarons" could describe how a material might superconduct at up to nearly room temperature. References Superconductors Hypothetical technology High pressure science
Room-temperature superconductor
[ "Physics", "Chemistry", "Materials_science" ]
1,969
[ "High pressure science", "Superconductivity", "Applied and interdisciplinary physics", "Superconductors" ]
375,272
https://en.wikipedia.org/wiki/Colorimetry
Colorimetry is "the science and technology used to quantify and describe physically the human color perception". It is similar to spectrophotometry, but is distinguished by its interest in reducing spectra to the physical correlates of color perception, most often the CIE 1931 XYZ color space tristimulus values and related quantities. History The Duboscq colorimeter was invented by Jules Duboscq in 1870. Instruments Colorimetric equipment is similar to that used in spectrophotometry. Some related equipment is also mentioned for completeness. A tristimulus colorimeter measures the tristimulus values of a color. A spectroradiometer measures the absolute spectral radiance (intensity) or irradiance of a light source. A spectrophotometer measures the spectral reflectance, transmittance, or relative irradiance of a color sample. A spectrocolorimeter is a spectrophotometer that can calculate tristimulus values. A densitometer measures the degree of light passing through or reflected by a subject. A color temperature meter measures the color temperature of an incident illuminant. Tristimulus colorimeter In digital imaging, colorimeters are tristimulus devices used for color calibration. Accurate color profiles ensure consistency throughout the imaging workflow, from acquisition to output. Spectroradiometer, spectrophotometer, spectrocolorimeter The absolute spectral power distribution of a light source can be measured with a spectroradiometer, which works by optically collecting the light, then passing it through a monochromator before reading it in narrow bands of wavelength. Reflected color can be measured using a spectrophotometer (also called spectroreflectometer or reflectometer), which takes measurements in the visible region (and a little beyond) of a given color sample. If the custom of taking readings at 10 nanometer increments is followed, the visible light range of 400–700 nm will yield 31 readings. These readings are typically used to draw the sample's spectral reflectance curve (how much it reflects, as a function of wavelength)—the most accurate data that can be provided regarding its characteristics. The readings by themselves are typically not as useful as their tristimulus values, which can be converted into chromaticity co-ordinates and manipulated through color space transformations. For this purpose, a spectrocolorimeter may be used. A spectrocolorimeter is simply a spectrophotometer that can estimate tristimulus values by numerical integration (of the color matching functions' inner product with the illuminant's spectral power distribution). One benefit of spectrocolorimeters over tristimulus colorimeters is that they do not have optical filters, which are subject to manufacturing variance, and have a fixed spectral transmittance curve—until they age. On the other hand, tristimulus colorimeters are purpose-built, cheaper, and easier to use. The CIE (International Commission on Illumination) recommends using measurement intervals under 5 nm, even for smooth spectra. Sparser measurements fail to accurately characterize spiky emission spectra, such as that of the red phosphor of a CRT display, depicted aside. Color temperature meter Photographers and cinematographers use information provided by these meters to decide what color balancing should be done to make different light sources appear to have the same color temperature. 
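The numerical integration that a spectrocolorimeter performs is, in essence, a set of weighted sums of narrow-band spectral readings. The sketch below only illustrates the structure of that calculation; the reflectance samples and bell-shaped weighting curves are made-up placeholders, not the real CIE color matching functions.

```python
import numpy as np

# Placeholder data at 10 nm spacing (31 readings from 400 to 700 nm, as in
# the text): sample reflectance, illuminant power, and stand-in weighting
# curves where real CIE 1931 color matching functions would be used.
wavelengths = np.arange(400, 710, 10)
reflectance = np.random.default_rng(0).uniform(0.2, 0.9, wavelengths.size)
illuminant = np.ones(wavelengths.size)            # flat, equal-energy illuminant
xbar = np.exp(-0.5 * ((wavelengths - 600) / 40.0) ** 2)
ybar = np.exp(-0.5 * ((wavelengths - 555) / 45.0) ** 2)
zbar = np.exp(-0.5 * ((wavelengths - 450) / 30.0) ** 2)

def tristimulus(cmf):
    # Inner product of a color matching function with the spectral power
    # reaching the eye (illuminant times reflectance), summed over wavelength.
    return float(np.sum(cmf * illuminant * reflectance) * 10)  # 10 nm step

X, Y, Z = tristimulus(xbar), tristimulus(ybar), tristimulus(zbar)
x, y = X / (X + Y + Z), Y / (X + Y + Z)           # chromaticity coordinates
print(f"X={X:.1f}  Y={Y:.1f}  Z={Z:.1f}  ->  chromaticity x={x:.3f}, y={y:.3f}")
```

A real implementation would also normalize by the integral of the ȳ function over the illuminant; the point here is only that tristimulus values are weighted sums of the 31 spectral readings mentioned above.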
If the user enters the reference color temperature, the meter can calculate the mired difference between the measurement and the reference, enabling the user to choose a corrective color gel or photographic filter with the closest mired factor. Internally the meter is typically a silicon photodiode tristimulus colorimeter. The correlated color temperature can be calculated from the tristimulus values by first calculating the chromaticity co-ordinates in the CIE 1960 color space, then finding the closest point on the Planckian locus. See also Color science Photometry Radiometry References Further reading Optronik – Photometers An informative brochure with background information and specifications of their equipment. Konica Minolta Sensing – Precise Color Communication – from perception to instrumentation HunterLab – FAQ | How to Measure Color of a Sample & Use An Index A guide to measuring color and appearance of objects. The section provides information on numerical scales and indices that are used throughout the world to remove subjective measurements and assumptions. NIST Publications related to colorimetry. External links Colorlab MATLAB toolbox for color science computation and accurate color reproduction (by Jesus Malo and Maria Jose Luque, Universitat de Valencia). It includes CIE standard tristimulus colorimetry and transformations to a number of non-linear color appearance models (CIE Lab, CIE CAM, etc.). Color Physical quantities Measurement Radiometry
Colorimetry
[ "Physics", "Mathematics", "Engineering" ]
999
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Quantity", "Measurement", "Size", "Physical properties", "Radiometry" ]
375,900
https://en.wikipedia.org/wiki/Isentropic%20process
An isentropic process is an idealized thermodynamic process that is both adiabatic and reversible. The work transfers of the system are frictionless, and there is no net transfer of heat or matter. Such an idealized process is useful in engineering as a model of and basis of comparison for real processes. This process is idealized because reversible processes do not occur in reality; thinking of a process as both adiabatic and reversible would show that the initial and final entropies are the same, thus, the reason it is called isentropic (entropy does not change). Thermodynamic processes are named based on the effect they would have on the system (ex. isovolumetric: constant volume, isenthalpic: constant enthalpy). Even though in reality it is not necessarily possible to carry out an isentropic process, some may be approximated as such. The word "isentropic" derives from the process being one in which the entropy of the system remains unchanged. In addition to a process which is both adiabatic and reversible. Background The second law of thermodynamics states that where is the amount of energy the system gains by heating, is the temperature of the surroundings, and is the change in entropy. The equal sign refers to a reversible process, which is an imagined idealized theoretical limit, never actually occurring in physical reality, with essentially equal temperatures of system and surroundings. For an isentropic process, if also reversible, there is no transfer of energy as heat because the process is adiabatic; δQ = 0. In contrast, if the process is irreversible, entropy is produced within the system; consequently, in order to maintain constant entropy within the system, energy must be simultaneously removed from the system as heat. For reversible processes, an isentropic transformation is carried out by thermally "insulating" the system from its surroundings. Temperature is the thermodynamic conjugate variable to entropy, thus the conjugate process would be an isothermal process, in which the system is thermally "connected" to a constant-temperature heat bath. Isentropic processes in thermodynamic systems The entropy of a given mass does not change during a process that is internally reversible and adiabatic. A process during which the entropy remains constant is called an isentropic process, written or . Some examples of theoretically isentropic thermodynamic devices are pumps, gas compressors, turbines, nozzles, and diffusers. Isentropic efficiencies of steady-flow devices in thermodynamic systems Most steady-flow devices operate under adiabatic conditions, and the ideal process for these devices is the isentropic process. The parameter that describes how efficiently a device approximates a corresponding isentropic device is called isentropic or adiabatic efficiency. Isentropic efficiency of turbines: Isentropic efficiency of compressors: Isentropic efficiency of nozzles: For all the above equations: is the specific enthalpy at the entrance state, is the specific enthalpy at the exit state for the actual process, is the specific enthalpy at the exit state for the isentropic process. Isentropic devices in thermodynamic cycles Note: The isentropic assumptions are only applicable with ideal cycles. Real cycles have inherent losses due to compressor and turbine inefficiencies and the second law of thermodynamics. Real systems are not truly isentropic, but isentropic behavior is an adequate approximation for many calculation purposes. 
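Written out with the enthalpies defined above (h1 at the inlet, h2a at the actual exit state, and h2s at the exit state of the corresponding isentropic process), the standard textbook forms of these efficiencies are:

```latex
% Standard isentropic (adiabatic) efficiency definitions, consistent with
% the enthalpy notation used in the text.
\eta_{\text{turbine}} = \frac{h_1 - h_{2a}}{h_1 - h_{2s}}, \qquad
\eta_{\text{compressor}} = \frac{h_{2s} - h_1}{h_{2a} - h_1}, \qquad
\eta_{\text{nozzle}} = \frac{h_1 - h_{2a}}{h_1 - h_{2s}}.
```

For example, a steam turbine with h1 = 3230 kJ/kg, h2a = 2560 kJ/kg and h2s = 2440 kJ/kg (illustrative numbers) would have an isentropic efficiency of (3230 − 2560)/(3230 − 2440) ≈ 0.85.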
Isentropic flow In fluid dynamics, an isentropic flow is a fluid flow that is both adiabatic and reversible. That is, no heat is added to the flow, and no energy transformations occur due to friction or dissipative effects. For an isentropic flow of a perfect gas, several relations can be derived to define the pressure, density and temperature along a streamline. Note that energy can be exchanged with the flow in an isentropic transformation, as long as it doesn't happen as heat exchange. An example of such an exchange would be an isentropic expansion or compression that entails work done on or by the flow. For an isentropic flow, entropy density can vary between different streamlines. If the entropy density is the same everywhere, then the flow is said to be homentropic. Derivation of the isentropic relations For a closed system, the total change in energy of a system is the sum of the work done and the heat added: The reversible work done on a system by changing the volume is where is the pressure, and is the volume. The change in enthalpy () is given by Then for a process that is both reversible and adiabatic (i.e. no heat transfer occurs), , and so All reversible adiabatic processes are isentropic. This leads to two important observations: Next, a great deal can be computed for isentropic processes of an ideal gas. For any transformation of an ideal gas, it is always true that , and Using the general results derived above for and , then So for an ideal gas, the heat capacity ratio can be written as For a calorically perfect gas is constant. Hence on integrating the above equation, assuming a calorically perfect gas, we get that is, Using the equation of state for an ideal gas, , (Proof: But nR = constant itself, so .) also, for constant (per mole), and Thus for isentropic processes with an ideal gas, or Table of isentropic relations for an ideal gas {| style="bgcolor:white" cellpadding=5 |- | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | |- | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | |- | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | |- | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | | align="center" | |- |} Derived from where: = pressure, = volume, = ratio of specific heats = , = temperature, = mass, = gas constant for the specific gas = , = universal gas constant, = molecular weight of the specific gas, = density, = molar specific heat at constant pressure, = molar specific heat at constant volume. See also Gas laws Adiabatic process Isenthalpic process Isentropic analysis Polytropic process Notes References Van Wylen, G. J. and Sonntag, R. E. (1965), Fundamentals of Classical Thermodynamics, John Wiley & Sons, Inc., New York. Library of Congress Catalog Card Number: 65-19470 Thermodynamic processes Thermodynamic entropy
Isentropic process
[ "Physics", "Chemistry" ]
1,605
[ "Physical quantities", "Thermodynamic processes", "Thermodynamic entropy", "Entropy", "Thermodynamics", "Statistical mechanics" ]
13,774,593
https://en.wikipedia.org/wiki/Differential%20capacitance
Differential capacitance in physics, electronics, and electrochemistry is a measure of the voltage-dependent capacitance of a nonlinear capacitor, such as an electrical double layer or a semiconductor diode. It is defined as the derivative of charge with respect to potential. Description In electrochemistry differential capacitance is a parameter introduced for characterizing electrical double layers: where is surface charge and is electric surface potential. Capacitance is usually defined as the stored charge between two conducting surfaces separated by a dielectric divided by the voltage between the surfaces. Another definition is the rate of change of the stored charge or surface charge () divided by the rate of change of the voltage between the surfaces or the electric surface potential (). The latter is called the "differential capacitance", but usually the stored charge is directly proportional to the voltage, making the capacitances given by the two definitions equal. This type of differential capacitance may be called "parallel plate capacitance", after the usual form of the capacitor. However, the term is meaningful when applied to any two conducting bodies such as spheres, and not necessarily ones of the same size, for example, the elevated terminals of a Tesla wireless system and the earth. These are widely spaced insulated conducting bodies positioned over a spherically conducting ground plane. "The differential capacitance between the spheres is obtained by assuming opposite charges on them ..." Another form of differential capacitance refers to single isolated conducting bodies. It is usually discussed in books under the topic of "electrostatics". This capacitance is best defined as the rate of change of charge stored in the body divided by the rate of change of the potential of the body. The definition of the absolute potential of the body depends on what is selected as a reference. This is sometimes referred to as the "self-capacitance" of a body. If the body is a conducting sphere, the self-capacitance is proportional to its radius, and is roughly 1 pF per centimetre of radius. See also Capacitive coupling Electric displacement field References External links McGraw-Hill Dictionary of Scientific and Technical Terms definition of "differential capacitance" Electrochemistry Colloidal chemistry Telecommunications engineering Power electronics
Differential capacitance
[ "Chemistry", "Engineering" ]
479
[ "Colloidal chemistry", "Physical chemistry stubs", "Telecommunications engineering", "Surface science", "Colloids", "Electrochemistry", "Electronic engineering", "Electrical engineering", "Electrochemistry stubs", "Power electronics" ]
13,774,692
https://en.wikipedia.org/wiki/Double%20layer%20%28biology%29
In biological systems, a double layer is the surface where two different phases of matter are in contact. Biological double layers are much like their interfacial counterparts, but with several notable distinctions. The surfaces of biological cells carry many different types of chemical groups, each with a different dissociation constant, causing them to have varying electric charges at a physiological pH. This indicates that biosurfaces are chemically heterogeneous. This biospecific feature is typical for all biosurfaces, including proteins, macromolecules and biological cells. In certain organisms, cells are covered with the glycocalyx layer, which can be modeled as a polyelectrolyte layer with a volume-spread electric charge. This means that the notion of a surface charge located on a flat surface does not apply; instead, the cell surface is a polyelectrolyte layer of finite thickness carrying a volume charge. At equilibrium, the relationship between these polyelectrolyte layers and a fluid bulk is called the Donnan equilibrium. The polyelectrolyte volume charge creates an equilibrated electric potential known as the Donnan potential. Part of the Donnan potential is located inside of the polyelectrolyte layer, while the other part is associated with the external double layer located in the dispersion medium. Another distinctive feature is that the cells are not in equilibrium with the bulk fluid. There is a constant ion exchange between living cells and a fluid. Consequently, there is a difference in electric potentials between the cell interior and the fluid bulk, known as the transmembrane potential. This non-equilibrium potential affects the structure of the double layer. Notes General references Ohshima, H. (2006). Theory of Colloid and Interfacial Electric Phenomena, Elsevier. Duval, J.F.L. et al. (2005). Langmuir, 21, 11268–11282. Physical chemistry Colloidal chemistry
Double layer (biology)
[ "Physics", "Chemistry" ]
407
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "nan", "Physical chemistry" ]
13,776,683
https://en.wikipedia.org/wiki/CALPHAD
CALPHAD stands for Computer Coupling of Phase Diagrams and Thermochemistry, a methodology introduced in 1970 by Larry Kaufman, originally known as CALculation of PHAse Diagrams. An equilibrium phase diagram is usually a diagram with axes for temperature and composition of a chemical system. It shows the regions where substances or solutions (i.e. phases) are stable and regions where two or more of them coexist. Phase diagrams are a very powerful tool for predicting the state of a system under different conditions and were initially a graphical method to rationalize experimental information on states of equilibrium. In complex systems, computational methods such as CALPHAD are employed to model thermodynamic properties for each phase and simulate multicomponent phase behavior. The CALPHAD approach is based on the fact that a phase diagram is a manifestation of the equilibrium thermodynamic properties of the system, which are the sum of the properties of the individual phases. It is thus possible to calculate a phase diagram by first assessing the thermodynamic properties of all the phases in a system. Methodology With the CALPHAD method one collects all experimental information on phase equilibria in a system and all thermodynamic information obtained from thermochemical and thermophysical studies. The thermodynamic properties of each phase are then described with a mathematical model containing adjustable parameters. The parameters are evaluated by optimizing the fit of the model to all the information, also involving coexisting phases. It is then possible to recalculate the phase diagram as well as the thermodynamic properties of all the phases. The philosophy of the CALPHAD method is to obtain a consistent description of the phase diagram and the thermodynamic properties so to reliably predict the set of stable phases and their thermodynamic properties in regions without experimental information and for metastable states during simulations of phase transformations. Thermodynamic modeling of a phase There are two crucial factors for the success of the CALPHAD method. The first factor is to find realistic as well as convenient mathematical models for the Gibbs energy for each phase. The Gibbs energy is used because most experimental data have been determined at known temperature and pressure and any other thermodynamic quantities can be calculated from it. It is not possible to obtain an exact description of the behavior of the Gibbs energy of a multi-component system with analytical expressions. It is thus necessary to identify the main features and base the mathematical models on them. The discrepancy between model and reality is finally represented by a power series expansion in temperature, pressure and constitution of the phase. The adjustable parameters of these model descriptions are refined to reproduce the experimental data. The strength of the CALPHAD method is that the descriptions of the constituent sub-systems can be combined to describe a multi-component system. Equilibrium calculations The second crucial factor is the availability of computer software for calculating equilibria and various kinds of diagrams and databases with the stored assessed information. As there are at present many different kinds of models used for different kinds of phases there are several thermodynamic databases available, either free or commercially, for different materials like steels, super-alloys, semiconductor materials, aqueous solutions, slags, etc. 
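As a toy illustration of the idea (not of how production CALPHAD software or databases actually work), the Gibbs energy of mixing of a binary regular solution can be written with a single adjustable interaction parameter and scanned numerically for the unstable, demixing region; every number below is invented for illustration.

```python
import numpy as np

# Toy regular-solution model for a binary A-B phase:
#   G_mix(x) = R*T*[x ln x + (1-x) ln(1-x)] + Omega * x * (1-x)
# Omega plays the role of the adjustable parameter a CALPHAD-style assessment
# would fit to experimental data; its value here is invented.
R = 8.314        # J/(mol K)
OMEGA = 18000.0  # J/mol, illustrative positive (demixing) interaction
T = 800.0        # K

x = np.linspace(1e-4, 1 - 1e-4, 2001)
g = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x)) + OMEGA * x * (1 - x)

# Inside the miscibility gap the curvature d2G/dx2 becomes negative; the sign
# changes mark the spinodal compositions.
d2g = np.gradient(np.gradient(g, x), x)
spinodal = x[np.where(np.diff(np.sign(d2g)) != 0)[0]]
print("Spinodal compositions at 800 K:", np.round(spinodal, 3))
```

A real CALPHAD assessment does the same thing in spirit: it parameterizes the Gibbs energy of every phase with far richer models, fits the parameters to all available phase-diagram and thermochemical data, and then lets an equilibrium solver minimize the total Gibbs energy under the chosen conditions.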
There are also several different kinds of software available using different kinds of algorithms for computing the equilibrium. It is an advantage if the software allows the equilibrium to be calculated using many different types of conditions for the system, not only the temperature, pressure and overall composition, because in many cases the equilibrium may be determined at constant volume, or at a given chemical potential of an element, or at a given composition of a particular phase. Applications CALPHAD had a slow start in the 1960s, but sophisticated thermodynamic data bank systems started to appear in the 1980s and today there are several commercial products on the market, e.g. FactSage, MTDATA, PANDAT, MatCalc, JMatPro, and Thermo-Calc, as well as open-source codes such as OpenCalphad, PyCalphad, and ESPEI. They are used in research and industrial development (e.g., PrecipiCalc software and Materials by Design Technology), where they save large amounts of time and resources by reducing the experimental work and by making thermodynamic predictions available for multi-component systems that would be practically unattainable without this approach. There is a journal with this name where recent scientific achievements are published, but scientific papers describing the use of the CALPHAD methods are also published in many other journals. See also Phase diagram Gibbs energy Enthalpy of mixing Miedema's model Computational thermodynamics Thermodynamic databases for pure substances References External links Official CALPHAD website Thermodynamic free energy Phase transitions
CALPHAD
[ "Physics", "Chemistry" ]
1,006
[ "Thermodynamic properties", "Phase transitions", "Physical phenomena", "Physical quantities", "Phases of matter", "Critical phenomena", "Thermodynamic free energy", "Energy (physics)", "Statistical mechanics", "Wikipedia categories named after physical quantities", "Matter" ]
5,772,237
https://en.wikipedia.org/wiki/Tetrasodium%20EDTA
Tetrasodium EDTA is the salt resulting from the neutralization of ethylenediaminetetraacetic acid with four equivalents of sodium hydroxide (or an equivalent sodium base). It is a white solid that is highly soluble in water. Commercial samples are often hydrated, e.g. Na4EDTA.4H2O. The properties of solutions produced from the anhydrous and hydrated forms are the same, provided they are at the same pH. It is used as a source of the chelating agent EDTA4-. A 1% aqueous solution has a pH of approximately 11.3. When dissolved in neutral water, it converts partially to H2EDTA2-. Ethylenediaminetetraacetic acid is produced commercially via the intermediacy of tetrasodium EDTA. Products The substance is also known as Dissolvine E-39. It is a salt of edetic acid. It has been known at least since 1954. It is sometimes used as a chelating agent. The assignee on 5% of patents at the USPTO containing the substance is the firm Procter and Gamble. It is used most notably in cosmetics and hair and skin care products. The substance has been used to aid in formulation of a removal product for rust, corrosion, and scale from ferrous metal, copper, brass, and other surfaces. At a concentration of 6%, it is the main active ingredient in some types of engine coolant system flushes. References Acetic acids Amines Antidotes Chelating agents Preservatives E-number additives
Tetrasodium EDTA
[ "Chemistry" ]
338
[ "Functional groups", "Chelating agents", "Amines", "Bases (chemistry)", "Process chemicals" ]
5,774,572
https://en.wikipedia.org/wiki/Ship%20resistance%20and%20propulsion
A ship must be designed to move efficiently through the water with a minimum of external force. For thousands of years ship designers and builders of sailing vessels used rules of thumb based on the midship-section area to size the sails for a given vessel. The hull form and sail plan for the clipper ships, for example, evolved from experience, not from theory. It was not until the advent of steam power and the construction of large iron ships in the mid-19th century that it became clear to ship owners and builders that a more rigorous approach was needed. Definition Ship resistance is defined as the force required to tow the ship in calm water at a constant velocity. Components of resistance A body in water which is stationary with respect to the water experiences only hydrostatic pressure. Hydrostatic pressure always acts to oppose the weight of the body. The total (upward) force due to this buoyancy is equal to the (downward) weight of the displaced water. If the body is in motion, then there are also hydrodynamic pressures that act on the body. For a displacement vessel, that is the usual type of ship, three main types of resistance are considered: that due to wave-making, that due to the pressure of the moving water on the form, often not calculated or measured separately, and that due to friction of moving water on the wetted surface of the hull. These can be split up into more components: Froude's experiments Froude's method for extrapolating the results of model tests to ships was adopted in the 1870s. Another method, created by Hughes, was introduced in the 1950s and later adopted by the International Towing Tank Conference (ITTC). Froude's method tends to overestimate the power for very large ships. Froude had observed that when a ship or model was at its so-called hull speed, the wave pattern of the transverse waves (the waves along the hull) has a wavelength equal to the length of the waterline. This means that the ship's bow was riding on one wave crest and so was its stern. This is often called the hull speed and is a function of the length of the ship, V = k √L, where the constant (k) should be taken as: 2.43 for velocity (V) in kn and length (L) in metres (m) or, 1.34 for velocity (V) in kn and length (L) in feet (ft). Observing this, Froude realized that the ship resistance problem had to be broken into two different parts: residuary resistance (mainly wave making resistance) and frictional resistance. To get the proper residuary resistance, it was necessary to recreate the wave train created by the ship in the model tests. He found for any ship and geometrically similar model towed at the corresponding speed (speeds in the ratio of the square roots of their lengths) that the wave patterns are similar, so the residuary resistances are in the ratio of their displacements. There is a frictional drag that is given by the shear due to the viscosity. This can result in 50% of the total resistance in fast ship designs and 80% of the total resistance in slower ship designs. To account for the frictional resistance Froude decided to tow a series of flat plates and measure the resistance of these plates, which were of the same wetted surface area and length as the model ship, and subtract this frictional resistance from the total resistance and get the remainder as the residuary resistance. Friction (Main article: Skin friction drag) In a viscous fluid, a boundary layer is formed. This causes a net drag due to friction. The boundary layer undergoes shear at different rates extending from the hull surface until it reaches the field flow of the water.
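As a quick worked example of the hull-speed relation quoted above (V = k √L, with k ≈ 2.43 for length in metres or ≈ 1.34 for length in feet, and V in knots), here is a small sketch; the ship length used is an arbitrary illustrative value.

```python
import math

def hull_speed_knots(waterline_length, units="m"):
    """Approximate hull speed in knots from waterline length.

    Uses V = k * sqrt(L) with k = 2.43 for L in metres or k = 1.34 for L in feet,
    as quoted in the text above.
    """
    k = {"m": 2.43, "ft": 1.34}[units]
    return k * math.sqrt(waterline_length)

# Example: a vessel with a 100 m (about 328 ft) waterline length.
print("Hull speed: %.1f kn" % hull_speed_knots(100.0, "m"))   # ~24.3 kn
print("Hull speed: %.1f kn" % hull_speed_knots(328.0, "ft"))  # same ship in feet, ~24.3 kn
```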
Wave-making resistance (Main article: Wave-making resistance) A ship moving over the surface of undisturbed water sets up waves emanating mainly from the bow and stern of the ship. The waves created by the ship consist of divergent and transverse waves. The divergent waves are observed as the wake of a ship with a series of diagonal or oblique crests moving outwardly from the point of disturbance. These waves were first studied by William Thomson, 1st Baron Kelvin, who found that regardless of the speed of the ship, they were always contained within the 39° wedge shape (19.5° on each side) following the ship. The divergent waves do not cause much resistance against the ship's forward motion. However, the transverse waves appear as troughs and crests along the length of a ship and constitute the major part of the wave-making resistance of a ship. The energy associated with the transverse wave system travels at one half the phase velocity or the group velocity of the waves. The prime mover of the vessel must put additional energy into the system in order to overcome this expense of energy. The relationship between the velocity of ships and that of the transverse waves can be found by equating the wave celerity and the ship's velocity. Propulsion (Main article: Marine propulsion) Ships can be propelled by numerous sources of power: human, animal, or wind power (sails, kites, rotors and turbines), water currents, chemical or atomic fuels and stored electricity, pressure, heat or solar power supplying engines and motors. Most of these can propel a ship directly (e.g. by towing or chain), via hydrodynamic drag devices (e.g. oars and paddle wheels) and via hydrodynamic lift devices (e.g. propellers or jets). A few exotic means also exist, such as "fish-tail propulsion", rockets or magnetohydrodynamic propulsion. See also William Froude References E. V. Lewis, ed., Principles of Naval Architecture, vol. 2 (1988) Naval architecture
Ship resistance and propulsion
[ "Engineering" ]
1,167
[ "Naval architecture", "Marine engineering" ]
9,041,863
https://en.wikipedia.org/wiki/ChaNGa
ChaNGa (Charm N-body GrAvity solver) is a computer program to perform collisionless N-body simulations. It can perform cosmological simulations with periodic boundary conditions in comoving coordinates or simulations of isolated stellar systems. It is based on the Barnes–Hut algorithm and uses Ewald summation for periodic forces. ChaNGa makes use of the Charm++ parallel programming system, including its dynamic load balancing schemes, in order to scale to large processor configurations. Simulation results have been reported on up to 20,000 IBM Bluegene/L processors . More information For more information on obtaining, building and running ChaNGa, please see the Wiki documentation at . See also PKDGRAV GADGET GRAPE External links University of Washington ChaNGa website Charm++ web page at the Parallel Programming Lab, UIUC ChaNGa Wiki documentation Physical cosmology Cosmological simulation
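For readers unfamiliar with what such gravity solvers compute, the sketch below shows the brute-force O(N^2) pairwise gravitational acceleration sum that tree codes like ChaNGa approximate (via Barnes–Hut multipole grouping) in order to reach much larger particle counts. It is purely illustrative and is not code from ChaNGa itself; the softening value and particle setup are arbitrary assumptions.

```python
import numpy as np

def direct_sum_accelerations(positions, masses, G=1.0, softening=1e-3):
    """Brute-force O(N^2) gravitational accelerations for N particles.

    Tree codes such as ChaNGa avoid this quadratic cost by grouping distant
    particles into multipole expansions (Barnes-Hut), but the quantity being
    approximated is exactly this sum.
    """
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        dx = positions - positions[i]                 # vectors to all other particles
        r2 = np.sum(dx**2, axis=1) + softening**2     # softened squared distances
        r2[i] = np.inf                                # exclude self-interaction
        acc[i] = G * np.sum((masses / r2**1.5)[:, None] * dx, axis=0)
    return acc

# Tiny example with 100 random particles of equal mass.
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(100, 3))
m = np.full(100, 1.0 / 100)
print(direct_sum_accelerations(pos, m)[0])
```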
ChaNGa
[ "Physics", "Astronomy" ]
181
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Computational physics", "Cosmological simulation", "Physical cosmology" ]
9,048,103
https://en.wikipedia.org/wiki/C7H8N4O2
{{DISPLAYTITLE:C7H8N4O2}} The molecular formula C7H8N4O2 (molar mass: 180.16 g/mol) may refer to: Paraxanthine Theobromine Theophylline Molecular formulas
C7H8N4O2
[ "Physics", "Chemistry" ]
59
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
9,049,798
https://en.wikipedia.org/wiki/Portable%20collision%20avoidance%20system
A portable collision avoidance system (PCAS) is a proprietary aircraft collision avoidance system similar in function to traffic collision avoidance system (TCAS). TCAS is the industry standard for commercial collision avoidance systems but PCAS is gaining recognition as an effective means of collision avoidance for general aviation and is in use the world over by independent pilots in personally owned or rented light aircraft as well as by flight schools and flying clubs. PCAS was manufactured by Zaon. Its main competitor is FLARM. PCAS allows pilots, particularly in single pilot VFR aircraft, an additional instrument to increase their situational awareness of other aircraft operating nearby. A basic system will notify pilots of the nearest transponder equipped aircraft, its relative height and distance, and importantly if the distance is decreasing or increasing. More advanced systems can integrate with EFIS, overlaying nearby aircraft on the GPS map with relative height information. This information reduces pilot work loads in busy airspace. It may also help pilots to hone their ability to spot nearby aircraft by alerting them when an aircraft is near. The original PCAS technology was developed in 1999 by Zane Hovey, a pilot and flight instructor, who also patented a portable ADS-b version as well. Through this technology, transponder-equipped aircraft are detected and ranged, and the altitude is decoded. PCAS G4 technology has advanced to the point that highly accurate range, relative altitude, and 45 degree direction can be accurately detected in a portable cockpit device. PCAS gained notoriety with the growing popular TV series The Aviators (TV series) as a sponsor, and specifically in episode 6 airing on both PBS and the Discovery Channel Network. Basic operation ATC ground stations and active TCAS systems transmit interrogation pulses on an uplink frequency of 1,030 MHz. Aircraft transponders reply on a downlink frequency of 1,090 MHz. PCAS devices detect these transponder responses, then analyze and display conflict information. Differences between PCAS and TCAS PCAS is passive and less expensive than active aircraft detection systems, such as TCAS. TCAS operates with more precision than PCAS but is also more expensive and usually requires 'permanent' in-aircraft installation (requiring, in the United States, an FAA-approved mechanic to install). Class 2 TCAS gives mandatory instructions (called Resolution Advisories) whereas PCAS only alerts the pilot and may give a suggestion as to how to act. A very well known general aviation organization completed an evaluation of the PCAS XRX system to demonstrate the capabilities. Detailed operation Step 1 An interrogation is sent out from ground-based RADAR stations and/or TCAS or other actively interrogating systems in your area. This signal is sent on 1,030 MHz. For TCAS, this interrogation range can have a radius of 40 miles from the interrogation source. The Ground RADAR range can be 200 miles or more. Step 2 The transponder on any aircraft within range of the interrogation replies on 1090 MHz with their squawk code (known as mode A) and altitude code (or mode C). Mode S transponders also reply on this frequency, and encoded within the mode S transmission is the mode A (squawk) and mode C (altitude) information. Military aircraft also respond on this frequency but use a different transmission protocol (see Step 3). A PCAS-containing aircraft's own transponder should also reply. 
However, the XRX unit watches for this signal and will not report it as a threat aircraft. The unit may use this information to establish base altitude for use in step 4. Step 3 The PCAS unit computes range (maximum 6 miles) based on the amplitude of the received signal, the altitude code is decoded, and the signal angle-of-arrival is determined to a resolution of "quadrants" (ahead, behind, left, or right) using a directional antenna array. XRX will recognize interrogations from TCAS, Skywatch, and any other "active" system, military protocols, and Mode S transmissions. Step 4 The altitude of the aircraft (in the example, 2,500 ft.) is compared to the altitude of the PCAS-equipped aircraft (e.g., 1,500 ft.) and the relative altitude is calculated (e.g., 1,000 ft. above you). With relative direction, altitude and range determined, XRX displays this information and stores it in memory. Step 5 If additional aircraft are within detection range, the above process is repeated for each aircraft. The top threat is displayed on the left of the traffic screen and the second and third threats are displayed on the right. The greatest threat is determined by looking at aircraft within the detection window and comparing primarily the vertical separation (± relative altitude), and secondarily the range to the aircraft currently being displayed. XRX uses algorithms to determine which of two or more aircraft is a greater threat (a simplified prioritization of this kind is sketched below). Models Zaon PCAS XRX (2016) References Aircraft collision avoidance systems Warning systems Avionics
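The following is a minimal sketch of the kind of threat prioritization described in Step 5: rank detected aircraft primarily by absolute relative altitude and secondarily by range. The data structure, window limits, and values are illustrative assumptions, not Zaon's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    ident: str
    relative_altitude_ft: float  # positive = above own aircraft
    range_nm: float              # estimated from received signal amplitude

def rank_threats(contacts, max_range_nm=6.0, altitude_window_ft=2500.0):
    """Return contacts inside the detection window, most threatening first.

    Primary key: smallest absolute relative altitude; secondary key: shortest
    range. The window limits here are illustrative, not the manufacturer's values.
    """
    in_window = [c for c in contacts
                 if c.range_nm <= max_range_nm
                 and abs(c.relative_altitude_ft) <= altitude_window_ft]
    return sorted(in_window, key=lambda c: (abs(c.relative_altitude_ft), c.range_nm))

traffic = [
    Contact("A", relative_altitude_ft=+1000, range_nm=2.0),
    Contact("B", relative_altitude_ft=-300, range_nm=4.5),
    Contact("C", relative_altitude_ft=+3000, range_nm=1.0),  # outside altitude window
]
for c in rank_threats(traffic):
    print(c.ident, c.relative_altitude_ft, c.range_nm)   # B first, then A
```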
Portable collision avoidance system
[ "Technology", "Engineering" ]
1,036
[ "Safety engineering", "Avionics", "Measuring instruments", "Aircraft collision avoidance systems", "Aircraft instruments", "Warning systems" ]
9,053,177
https://en.wikipedia.org/wiki/LIM%20domain
LIM domains are protein structural domains, composed of two contiguous zinc fingers, separated by a two-amino acid residue hydrophobic linker. The domain name is an acronym of the three genes in which it was first identified (LIN-11, Isl-1 and MEC-3). LIM is a protein interaction domain that is involved in binding to many structurally and functionally diverse partners. The LIM domain appeared in eukaryotes sometime prior to the most recent common ancestor of plants, fungi, amoeba and animals. In animal cells, LIM domain-containing proteins often shuttle between the cell nucleus where they can regulate gene expression, and the cytoplasm where they are usually associated with actin cytoskeletal structures involved in connecting cells together and to the surrounding matrix, such as stress fibers, focal adhesions and adherens junctions. Discovery LIM domains are named after their initial discovery in the three homeobox proteins that have the following functions: Lin-11 – asymmetric division of vulvar blast cells Isl-1 – motor neuron development of neuroepithelial cells Mec-3 – differentiation of touch receptor neurons Sequence and Structure Humans contain 73 described genes encoding different LIM domain-containing proteins. These LIM domains have divergent amino acid sequences apart from certain key residues involved in zinc binding, which facilitate the formation of a stable protein core and tertiary fold. The sequence variation between different LIM domains may be due to the evolution of novel binding sites for diverse partners on top of the conserved stable core. Additionally, LIM domain proteins are functionally diverse; especially during the early evolution of animals, the LIM domain recombined with a variety of other domain types to create these diverse proteins with new functionality. The sequence signature of LIM domains is as follows: [C]-[X]2–4-[C]-[X]13–19-[W]-[H]-[X]2–4-[C]-[F]-[LVI]-[C]-[X]2–4-[C]-[X]13–20-C-[X]2–4-[C] LIM domains frequently occur in multiples, as seen in proteins such as TES, LMO4, and can also be attached to other domains in order to confer a binding or targeting function upon them, such as LIM-kinase. Roles LIM-domain containing proteins have been shown to play roles in cytoskeletal organization, organ development, regulation of plant cell development, cell lineage specification, and regulation of gene transcription. LIM proteins are also implicated in a variety of heart and muscle conditions, oncogenesis, neurological disorders and other diseases. LIM-domains mediate a variety of protein–protein interactions in many different cellular processes. However a large subset of LIM proteins are recruited to actin cytoskeletal structures that are under a mechanical load. Direct force-activated F-actin binding by LIM recruits LIM domain proteins to stressed cytoskeletal networks and is an example of a mechanosensing mechanism by which cytoskeletal tension governs mechanical homeostasis, nuclear localization, gene expression, and other cellular physiology. Classification The LIM superclass of genes have been classified into 14 classes: ABLIM, CRP, ENIGMA, EPLIN, LASP, LHX, LMO, LIMK, LMO7, MICAL, PXN, PINCH, TES, and ZYX. Six of these classes (i.e., ABLIM, MICAL, ENIGMA, ZYX, LHX, LM07) originated in the stem lineage of animals, and this expansion is thought to have made a major contribution to the origin of animal multicellularity. 
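The sequence signature quoted above can be translated almost mechanically into a search pattern. The sketch below does exactly that as a regular expression; it is a literal rendering of the signature as written here (with [X] as any residue), not a validated PROSITE or Pfam model, and the test sequence is an arbitrary illustration rather than a real protein.

```python
import re

# Literal translation of the LIM signature given above:
# [C]-[X]2-4-[C]-[X]13-19-[W]-[H]-[X]2-4-[C]-[F]-[LVI]-[C]-[X]2-4-[C]-[X]13-20-C-[X]2-4-[C]
LIM_SIGNATURE = re.compile(
    r"C.{2,4}C.{13,19}WH.{2,4}CF[LVI]C.{2,4}C.{13,20}C.{2,4}C"
)

def find_lim_like_motifs(protein_sequence):
    """Return (start, end) spans of regions matching the signature above."""
    return [m.span() for m in LIM_SIGNATURE.finditer(protein_sequence)]

# Purely illustrative sequence constructed to satisfy the pattern.
seq = ("MA" "C" "AR" "C" "GKIVSDRFLLRALD" "WH" "NE" "C" "F" "L" "C"
       "QE" "C" "NMRLTSGGFFEKDGK" "C" "YCR" "C")
print(find_lim_like_motifs(seq))
```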
Aside from the animal lineage, there is also an entire set of plant LIM genes, classified into the classes WLIM1, WLIM2, PLIM1, PLIM2, and FLIM (XLIM). These are sorted into four subfamilies: αLIM1, βLIM1, γLIM2, and δLIM2. The αLIM1 subfamily includes the PLIM1, WLIM1, and FLIM (XLIM) subclades. βLIM1 is a newer subfamily with no currently distinguishable subclades. The γLIM2 subfamily contains the WLIM2 and PLIM2 subclades, and the final subfamily, δLIM2, also contains WLIM2 and PLIM2. LIM domains are also found in various bacterial lineages where they are typically fused to a metallopeptidase domain. Some versions show fusions to an inactive P-loop NTPase at their N-terminus and a single transmembrane helix. These domain fusions suggest that the prokaryotic LIM domains are likely to regulate protein processing at the cell membrane. The domain architectural syntax is remarkably parallel to that of the prokaryotic versions of the B-box zinc finger and the AN1 zinc finger domains. LIM domain-containing proteins serve many specific functions in cells, such as adherens junction formation, cytoarchitecture, specification of cell polarity, nuclear–cytoplasmic shuttling, and protein trafficking. These domains are found across eukaryotes, including plants, animals, fungi, and mycetozoa. LIM domains have been classified into classes A, B, C, and D, and these classes are further sorted into three groups. Group 1 This group contains LIM domain classes A and B. They are typically fused to other functional domains such as kinases. The subclasses for these domains are LIM-homeodomain transcription factors, LMO proteins, and LIM kinases. LIM-homeodomain transcription factors These are multifunctional proteins primarily involved in development of the nervous system, activation of transcription, and cell fate specification during development. The nervous system relies on this LIM domain type for differentiation of neurotransmitter biosynthetic pathways. LMO proteins These proteins focus on overall development of multiple cell types as well as oncogenesis and transcriptional regulation. Oncogenesis was found to occur due to the expression of LMO1 and LMO2 in T-cell leukemia patients. LIM kinases The purpose of these proteins is the establishment and regulation of the cytoskeleton. The regulation of the cytoskeleton by these kinases is through phosphorylation of cofilin, which allows for the accumulation of actin filaments. Notably, they have been found to be responsible for regulation of cell motility and morphogenesis. Group 2 This group contains LIM domain class C, which is typically localized in the cytoplasm. These domains are internally duplicated, with two copies per protein, and they are more similar to each other than classes A and B are. Cysteine-rich proteins There are three different cysteine-rich proteins (CRPs). The main role of these proteins is in myogenesis and muscle structure, although a structural role has also been found in other cell types. Each of the CRP proteins is activated throughout myogenesis. CRP3 plays a role in development of myoblasts, while CRP1 is active in fibroblast cells. CRP1 also has roles involving actin filaments and the Z lines of myofibers. Group 3 This group contains only class D, which is typically localized in the cytoplasm. These LIM proteins contain one to five domains and may carry additional functional domains or motifs. This group is limited to three different adaptor proteins: zyxin, paxillin, and PINCH, which contain three, four, and five LIM domains, respectively.
These are considered adaptor proteins related to adhesion plaques that regulate cell shape and spreading through distinct LIM-mediated protein-protein interactions. Protein-protein interactions LIM-HD & LMO These complexes are formed through interaction with LIM domain-binding (Ldb) family proteins, which are bound by LIM1. LIM-Ldb interaction forms different heterodimers of LIM-HD. This will typically form a LIM-LID region that interacts with LIM proteins. LIM-HD is known to determine distinct identities for motor neurons during development. It has been found to bind LMO1, LMO2, Lhx1, Isl1, and Mec3. LMO2 is localized in the nucleus and is involved in erythroid development, especially in the fetal liver. Zyxin This protein shuttles between the cytoplasm and the nucleus, moving between cell adhesion sites and the nucleus. The zinc fingers of the LIM domain act independently. Zyxin has a variety of binding partners such as CRP, α-actinin, the proto-oncogene Vav, p130, and members of the Ena/VASP family of proteins. The best-characterized interactions of zyxin are with Ena/VASP and CRP1. LIM1 is responsible for recognition of CRP1, but cooperates with LIM2 for binding to zyxin. Ena/VASP binds profilin, which is known to act as an actin-polymerizing protein. The zyxin-VASP complex initiates actin polymerization for the cytoskeletal structure. Paxillin This protein is localized in the cytoplasm at focal adhesion sites. It functions as a central protein of focal adhesions, where it acts as a scaffold for many binding partners. The LIM domains at the C-terminus bind protein tyrosine phosphatase-PEST (PTP-PEST). PTP-PEST binds at the C-terminal LIM domains 3 and 4 to disassemble focal adhesions, which leads to modulation of the focal adhesion targeting regions. The extent of binding depends on LIM 2 and 4, and occurs upon dephosphorylation of p130 and paxillin. ENIGMA This protein is localized in the cytoplasm, where it serves in signaling and protein trafficking. The structure of this protein contains three LIM domains at the C-terminus. It binds the insulin receptor internalization motif (InsRF) at LIM domain 3, while LIM domain 2 binds the Ret receptor tyrosine kinase. PINCH This protein is localized in the cytoplasm and nucleus. It is responsible for effecting specific muscle adherens junctions and mechanosensory functions of touch receptor neurons. The LIM domains are linked by very short interdomain peptides, with a C-terminal extension carrying a high amount of positive charge. The protein has multiple functions, even presenting in senescent erythrocyte antigens. It can bind to the ankyrin repeat domains of integrin-linked kinase, and LIM domain 4 of PINCH can bind the Nck2 protein to act as an adaptor. References Protein domains Transcription factors
LIM domain
[ "Chemistry", "Biology" ]
2,265
[ "Gene expression", "Protein classification", "Signal transduction", "Protein domains", "Induced stem cells", "Transcription factors" ]
9,053,231
https://en.wikipedia.org/wiki/Functional%20specification
A functional specification (also, functional spec, specs, functional specifications document (FSD), functional requirements specification) in systems engineering and software development is a document that specifies the functions that a system or component must perform (often part of a requirements specification) (ISO/IEC/IEEE 24765-2010). The documentation typically describes what is needed by the system user as well as requested properties of inputs and outputs (e.g. of the software system). A functional specification is the more technical response to a matching requirements document, e.g. the Product Requirements Document "PRD". Thus it picks up the results of the requirements analysis stage. On more complex systems multiple levels of functional specifications will typically nest to each other, e.g. on the system level, on the module level and on the level of technical details. Overview A functional specification does not define the inner workings of the proposed system; it does not include the specification of how the system function will be implemented. A functional requirement in a functional specification might state as follows: When the user clicks the OK button, the dialog is closed and the focus is returned to the main window in the state it was in before this dialog was displayed. Such a requirement describes an interaction between an external agent (the user) and the software system. When the user provides input to the system by clicking the OK button, the program responds (or should respond) by closing the dialog window containing the OK button. Functional specification topics Purpose There are many purposes for functional specifications. One of the primary purposes on team projects is to achieve some form of team consensus on what the program is to achieve before making the more time-consuming effort of writing source code and test cases, followed by a period of debugging. Typically, such consensus is reached after one or more reviews by the stakeholders on the project at hand after having negotiated a cost-effective way to achieve the requirements the software needs to fulfill. To let the developers know what to build. To let the testers know what tests to run. To let stakeholders know what they are getting. Process In the ordered industrial software engineering life-cycle (waterfall model), functional specification describes what has to be implemented. The next, Systems architecture document describes how the functions will be realized using a chosen software environment. In non industrial, prototypical systems development, functional specifications are typically written after or as part of requirements analysis. When the team agrees that functional specification consensus is reached, the functional spec is typically declared "complete" or "signed off". After this, typically the software development and testing team write source code and test cases using the functional specification as the reference. While testing is performed, the behavior of the program is compared against the expected behavior as defined in the functional specification. Methods One popular method of writing a functional specification document involves drawing or rendering either simple wire frames or accurate, graphically designed UI screenshots. After this has been completed, and the screen examples are approved by all stakeholders, graphical elements can be numbered and written instructions can be added for each number on the screen example. 
For example, a login screen can have the username field labeled '1' and password field labeled '2,' and then each number can be declared in writing, for use by software engineers and later for beta testing purposes to ensure that functionality is as intended. The benefit of this method is that countless additional details can be attached to the screen examples. Examples of functional specifications Advanced Microcontroller Bus Architecture Extensible Firmware Interface Multiboot Specification Real-time specification for Java Single UNIX Specification Types of software development specifications Bit specification (disambiguation) Design specification Diagnostic design specification Product design specification Software Requirements Specification See also Benchmarking Software development process Specification (technical standard) Software verification and validation References External links Painless Functional Specifications, 4-part series by Joel Spolsky Systems engineering Software documentation Software design bs:Specifikacija programa de:Pflichtenheft hr:Specifikacija programa pt:Especificação de programa
Functional specification
[ "Engineering" ]
833
[ "Systems engineering", "Design", "Software design" ]
9,054,293
https://en.wikipedia.org/wiki/MS%20Sigyn
M/S Sigyn was a ship that transported spent nuclear fuel and nuclear waste from Swedish nuclear power plants to Clab, the storage facility at Oskarshamn, and the waste facilities at Studsvik and Forsmark. She was named after the goddess Sigyn, the mythological wife of Loki. Her name alluded to the role Sigyn played in holding a cup over the fettered Loki, catching the venom dripping from a viper perched above him. Sigyn was built in 1982 by the French Société Nouvelle des Ateliers et Chantiers du Havre at Le Havre, and could transport up to ten containers for spent fuel or radioactive waste. The ship could also take one or two vehicles or standard shipping containers for low-level waste. A stern ramp enabled roll-on/roll-off loading and unloading from the long cargo deck. Sigyn was owned by Svensk Kärnbränslehantering AB (SKB) (Swedish Nuclear Fuel and Waste Management Company), but was operated and staffed by Furetank Rederi AB (Furetank Shipping Company). In December 2010 it was announced that SKB had ordered a new ship from the Dutch Damen Group to replace Sigyn. The new ship, named Sigrid and built as a Damen Nuclear Cargo Vessel 1600, was laid down in Galați, Romania, in December 2011, launched in October 2012, and delivered in May 2013. References 1982 ships Radioactive waste Nuclear technology in Sweden
MS Sigyn
[ "Chemistry", "Technology" ]
299
[ "Radioactive waste", "Environmental impact of nuclear power", "Radioactivity", "Hazardous waste" ]
9,054,351
https://en.wikipedia.org/wiki/Clab
The Clab, also known as Centralt mellanlager för använt kärnbränsle (Swedish for 'central interim storage facility for spent nuclear fuel'), is an interim radioactive waste repository located at Oskarshamn Nuclear Power Plant about 25 km north of Oskarshamn. Clab used to be owned by Oskarshamnsverkets Kraftgrupp AB (OKG) but is now owned by Svensk Kärnbränslehantering Aktiebolag (SKB). It was opened in 1985 for the storage of spent nuclear fuel from all Swedish nuclear power plants. The fuel is stored for 30 to 40 years, in preparation for final storage. The facility currently contains approximately 7,300 tons of high-level waste, submerged in 8 meters of water, in pools 30 meters below the surface. Contaminated reactor components, such as control rods, are also stored at the facility. Waste produced from Sweden's nuclear power plants will continue to be stored at the facility until the Swedish Nuclear Fuel and Waste Management Company can complete construction of a more permanent storage site at Forsmark. The facility contains an in-pool station where passive gamma-measurements can be done on spent nuclear fuel. References External links Clab Svensk Kärnbränslehantering AB about Clab (in Swedish) Nuclear materials Nuclear reprocessing Nuclear technology in Sweden Radioactive waste Radioactive waste repositories
Clab
[ "Physics", "Chemistry", "Technology" ]
286
[ "Radioactive waste", "Materials", "Nuclear chemistry stubs", "Nuclear materials", "Nuclear and atomic physics stubs", "Radioactivity", "Nuclear physics", "Environmental impact of nuclear power", "Hazardous waste", "Matter" ]
9,054,395
https://en.wikipedia.org/wiki/Wombling
In statistics, wombling is any of a number of techniques used for identifying zones of rapid change, typically in some quantity as it varies across some geographical or Euclidean space. It is named for statistician William H. Womble. The technique may be applied to gene frequency in a population of organisms, and to evolution of language. References William H. Womble (1951). "Differential Systematics". Science, vol. 114, no. 2961, pp. 315–322. Fitzpatrick, M.C., Preisser, E.L., Porter, A., Elkinton, J., Waller, L.A., Carlin, B.P. and Ellison, A.E. (2010). "Ecological boundary detection using Bayesian areal wombling". Ecology, 91, 3448–3455. Liang, S., Banerjee, S. and Carlin, B.P. (2009). "Bayesian Wombling for Spatial Point Processes". Biometrics, 65 (11), 1243–1253. Ma, H. and Carlin, B.P. (2007). "Bayesian Multivariate Areal Wombling for Multiple Disease Boundary Analysis". Bayesian Analysis, 2 (2), 281–302. Banerjee, S. and Gelfand, A.E. (2006). "Bayesian Wombling: Curvilinear Gradient Assessment Under Spatial Process Models". Journal of the American Statistical Association, 101 (476), 1487–1501. Quick, H., Banerjee, S. and Carlin, B.P. (2015). "Bayesian Modeling and Analysis for Gradients in Spatiotemporal Processes". Biometrics, 71, 575–584. Quick, H., Banerjee, S. and Carlin, B.P. (2013). "Modeling temporal gradients in regionally aggregated California asthma hospitalization data". Annals of Applied Statistics, 7 (1), 154–176. Halder, A., Banerjee, S. and Dey, D.K. (2023). "Bayesian modeling with spatial curvature processes". Journal of the American Statistical Association, 1–13. Gao, L., Banerjee, S. and Ritz, B. (2023). "Spatial Difference Boundary Detection for Multiple Outcomes Using Bayesian Disease Mapping". Biostatistics, 922–944. Available Software: Git Change detection Spatial analysis
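The sketch below illustrates the basic idea behind the simplest (lattice) form of wombling: flag the locations where the local gradient of the studied quantity is unusually steep. It is not an implementation of any of the Bayesian methods cited above, and the surface, noise level, and quantile threshold are arbitrary illustrative choices.

```python
import numpy as np

def womble_boundaries(surface, quantile=0.9):
    """Flag grid cells whose local gradient magnitude is unusually large.

    This is the core of lattice wombling: zones of rapid change are identified
    where the gradient of the mapped quantity is steep. The quantile threshold
    is an arbitrary illustrative choice.
    """
    gy, gx = np.gradient(surface)        # finite-difference gradients along each axis
    magnitude = np.hypot(gx, gy)         # gradient magnitude per cell
    threshold = np.quantile(magnitude, quantile)
    return magnitude >= threshold        # boolean "boundary" mask

# Illustrative surface: two plateaus separated by a sharp step, plus mild noise.
x = np.linspace(0.0, 1.0, 50)
surface = np.where(x[None, :] < 0.5, 0.0, 1.0) + 0.01 * np.random.default_rng(1).normal(size=(50, 50))
mask = womble_boundaries(surface)
print("Cells flagged as boundary:", int(mask.sum()))
```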
Wombling
[ "Physics" ]
519
[ "Spacetime", "Space", "Spatial analysis" ]
17,738,454
https://en.wikipedia.org/wiki/Tissue-to-air%20ratio
Tissue-to-air ratio (TAR) is a term used in radiotherapy treatment planning to help calculate absorbed dose to water in conditions other than those directly measured. Definition The TAR at a point in a water phantom irradiated by a photon beam is taken to be the ratio of the total absorbed dose at that point to the absorbed dose at the same point in a minimal-scatter phantom with just-sufficient build-up. Tissue-air ratio is defined as the ratio of the dose to water at a given depth to the dose in air measured with a buildup cap: TAR(f, z) = D(f, z) / D(f, 0), where D(f, z) is the dose at a given depth z and focus-to-detector distance f, and D(f, 0) is the dose in air (z = 0). TAR increases with increasing beam energy because higher-energy radiation is more penetrating; TAR decreases with depth because of attenuation; and TAR increases with field size due to increased scatter contribution. Measurements for each are taken using an ion chamber for identical source-to-detector distances and field sizes. See also Dosimetry Percentage depth dose curve References Radiation therapy Medical physics
Tissue-to-air ratio
[ "Physics" ]
225
[ "Applied and interdisciplinary physics", "Medical physics" ]
17,742,678
https://en.wikipedia.org/wiki/Energy%20release%20rate%20%28fracture%20mechanics%29
In fracture mechanics, the energy release rate, G, is the rate at which energy is transformed as a material undergoes fracture. Mathematically, the energy release rate is expressed as the decrease in total potential energy per increase in fracture surface area, and is thus expressed in terms of energy per unit area. Various energy balances can be constructed relating the energy released during fracture to the energy of the resulting new surface, as well as other dissipative processes such as plasticity and heat generation. The energy release rate is central to the field of fracture mechanics when solving problems and estimating material properties related to fracture and fatigue. Definition The energy release rate is defined as the instantaneous loss of total potential energy Π per unit crack growth area A, G = −∂Π/∂A, where the total potential energy is written in terms of the total strain energy Ω, surface traction t, displacement u, and body force b by Π = Ω − ∫_St t·u dS − ∫_V b·u dV. The first integral is over the surface St of the material, and the second is over its volume V. The figure on the right shows the plot of an external force P vs. the load-point displacement q, in which the area under the curve is the strain energy. The white area between the curve and the P-axis is referred to as the complementary energy. In the case of a linearly-elastic material, the P–q curve is a straight line and the strain energy is equal to the complementary energy. Prescribed displacement In the case of prescribed displacement, the strain energy can be expressed in terms of the specified displacement q and the crack surface A, Ω = Ω(q, A), and the change in this strain energy is only affected by the change in fracture surface area: dΩ = (∂Ω/∂A) dA. Correspondingly, the energy release rate in this case is expressed as G = −∂Ω(q, A)/∂A at fixed q. Here is where one can accurately refer to G as the strain energy release rate. Prescribed loads When the load is prescribed instead of the displacement, the strain energy needs to be modified to the complementary energy, Ω*(P, A) = Pq − Ω. The energy release rate is then computed as G = ∂Ω*(P, A)/∂A at fixed P. If the material is linearly-elastic, then Ω = Ω* = Pq/2 and one may instead write G = (P/2) ∂q(P, A)/∂A at fixed P. G in two-dimensional cases In the cases of two-dimensional problems, the change in crack growth area is simply the change in crack length times the thickness of the specimen. Namely, dA = B da, where B is the specimen thickness and a the crack length. Therefore, the equation for computing G can be modified for the 2D case: Prescribed Displacement: G = −(1/B) ∂Ω(q, a)/∂a at fixed q. Prescribed Load: G = (1/B) ∂[Pq − Ω]/∂a at fixed P. Prescribed Load, Linear Elastic: G = (P/2B) ∂q(P, a)/∂a at fixed P. One can refer to the example calculations embedded in the next section for further information. Sometimes, the strain energy is written using ω = Ω/B, an energy-per-unit thickness. This gives Prescribed Displacement: G = −∂ω(q, a)/∂a at fixed q. Prescribed Load: G = ∂[Pq/B − ω]/∂a at fixed P. Prescribed Load, Linear Elastic: G = (P/2B) ∂q(P, a)/∂a at fixed P. Relation to stress intensity factors The energy release rate is directly related to the stress intensity factor associated with a given two-dimensional loading mode (Mode-I, Mode-II, or Mode-III) when the crack grows straight ahead. This is applicable to cracks under plane stress, plane strain, and antiplane shear.
For Mode-I, the energy release rate is related to the Mode-I stress intensity factor K_I for a linearly-elastic material by G_I = K_I^2 / E', where E' is related to Young's modulus E and Poisson's ratio ν depending on whether the material is under plane stress or plane strain: E' = E for plane stress, and E' = E / (1 − ν^2) for plane strain. For Mode-II, the energy release rate is similarly written as G_II = K_II^2 / E'. For Mode-III (antiplane shear), the energy release rate now is a function of the shear modulus μ, G_III = K_III^2 / (2μ). For an arbitrary combination of all loading modes, these linear elastic solutions may be superposed as G = (K_I^2 + K_II^2) / E' + K_III^2 / (2μ). Relation to fracture toughness Crack growth is initiated when the energy release rate overcomes a critical value G_c, which is a material property: G ≥ G_c. Under Mode-I loading, the critical energy release rate is then related to the Mode-I fracture toughness K_IC, another material property, by G_c = K_IC^2 / E'. Calculating G There are a variety of methods available for calculating the energy release rate given material properties, specimen geometry, and loading conditions. Some are dependent on certain criteria being satisfied, such as the material being entirely elastic or even linearly-elastic, and/or that the crack must grow straight ahead. The only method presented that works arbitrarily is that using the total potential energy. If two methods are both applicable, they should yield identical energy release rates. Total potential energy The only method to calculate G for arbitrary conditions is to calculate the total potential energy and differentiate it with respect to the crack surface area. This is typically done by: calculating the stress field resulting from the loading, calculating the strain energy in the material resulting from the stress field, calculating the work done by the external loads, all in terms of the crack surface area. Compliance method If the material is linearly elastic, the computation of its energy release rate can be much simplified. In this case, the Load vs. Load-point Displacement curve is linear with a positive slope, and the displacement per unit force applied is defined as the compliance, C = q / P. The corresponding strain energy (area under the curve) is equal to Ω = Pq/2 = C P^2 / 2 = q^2 / (2C). Using the compliance method, one can show that the energy release rate for both cases of prescribed load and displacement comes out to be G = (P^2 / 2B) dC/da. Multiple specimen methods for nonlinear materials In the case of prescribed displacement, holding the crack length fixed, the energy release rate can be computed by G = −(1/B) ∫_0^q (∂P(q′, a)/∂a) dq′, while in the case of prescribed load, G = (1/B) ∫_0^P (∂q(P′, a)/∂a) dP′. As one can see, in both cases, the energy release rate times the change in surface returns the area between the load–displacement curves for the two crack lengths, which indicates the energy dissipated for the new surface area, as illustrated in the right figure. Crack closure integral Since the energy release rate is defined as the negative derivative of the total potential energy with respect to crack surface growth, the energy release rate may be written as the difference between the potential energy before and after the crack grows. After some careful derivation, this leads one to the crack closure integral G = lim(ΔA → 0) (1/ΔA) ∫_ΔA (1/2) t_i Δu_i dA, where ΔA is the new fracture surface area, t_i are the components of the traction released on the top fracture surface as the crack grows, Δu_i are the components of the crack opening displacement (the difference in displacement increments between the top and bottom crack surfaces), and the integral is over the new surface ΔA of the material. The crack closure integral is valid only for elastic materials but is still valid for cracks that grow in any direction.
Nevertheless, for a two-dimensional crack that does indeed grow straight ahead, the crack closure integral simplifies to where is the new crack length, and the displacement components are written as a function of the polar coordinates and . J-integral In certain situations, the energy release rate can be calculated using the J-integral, i.e. , using where is the elastic strain energy density, is the component of the unit vector normal to , the curve used for the line integral, are the components of the traction vector , where is the stress tensor, and are the components of the displacement vector. This integral is zero over a simple closed path and is path independent, allowing any simple path starting and ending on the crack faces to be used to calculate . In order to equate the energy release rate to the J-integral, , the following conditions must be met: the crack must be growing straight ahead, and the deformation near the crack (enclosed by ) must be elastic (not plastic). The J-integral may be calculated with these conditions violated, but then . When they are not violated, one can then relate the energy release rate and the J-integral to the elastic moduli and the stress intensity factors using Computational methods in fracture mechanics A handful of methods exist for calculating with finite elements. Although a direct calculation of the J-integral is possible (using the strains and stresses outputted by FEA), approximate approaches for some type of crack growth exist and provide reasonable accuracy with straightforward calculations. This section will elaborate on some relatively simple methods for fracture analysis utilizing numerical simulations. Nodal release method If the crack is growing straight, the energy release rate can be decomposed as a sum of 3 terms associated with the energy in each 3 modes. As a result, the Nodal Release method (NR) can be used to determine from FEA results. The energy release rate is calculated at the nodes of the finite element mesh for the crack at an initial length and extended by a small distance . First, we calculate the displacement variation at the node of interest (before and after the crack tip node is released). Secondly, we keep track of the nodal force outputted by FEA. Finally, we can find each components of using the following formulas: Where is the width of the element bounding the crack tip. The accuracy of the method highly depends on the mesh refinement, both because the displacement and forces depend on it, and because . Note that the equations above are derived using the crack closure integral. If the energy release rate exceeds a critical value, the crack will grow. In this case, a new FEA simulation is performed (for the next time step) where the node at the crack tip is released. For a bounded substrate, we may simply stop enforcing fixed Dirichlet boundary conditions at the crack tip node of the previous time step (i.e. displacements are no longer restrained). For a symmetric crack, we would need to update the geometry of the domain with a longer crack opening (and therefore generate a new mesh). Modified crack closure integral Similar to the Nodal Release Method, the Modified Crack Closure Integral (MCCI) is a method for calculating the energy release rate utilizing FEA nodal displacements and forces . Where represents the direction corresponding to the Cartesian basis vectors with origin at the crack tip, and represents the nodal index. 
MCCI is more computationally efficient than the nodal release method because it only requires one analysis for each increment of crack growth. A necessary condition for the MCCI method is uniform element length along the crack face in the -direction. Additionally, this method requires sufficient discretization such that over the length of one element stress fields are self-similar. This implies that as the crack propagates. Below are examples of the MCCI method with two types of common finite elements. 4-node elements The 4-node square linear elements seen in Figure 2 have a distance between nodes and equal to Consider a crack with its tip located at node Similar to the nodal release method, if the crack were to propagate one element length along the line of symmetry (parallel to the -axis) the crack opening displacement would be the displacement at the previous crack tip, i.e. and the force at the new crack tip would be Since the crack growth is assumed to be self-similar the displacement at node after the crack propagates is equal to the displacement at node before the crack propagates. This same concept can be applied to the forces at node and Utilizing the same method shown in the nodal release section we recover the following equations for energy release rate:Where (displacement above and below the crack face respectively). Because we have a line of symmetry parallel to the crack, we can assume Thus, 8-node elements The 8-node rectangular elements seen in Figure 3 have quadratic basis functions. The process for calculating G is the same as the 4-node elements with the exception that (the crack growth over one element) is now the distance from node to Once again, making the assumption of self-similar straight crack growth the energy release rate can be calculated utilizing the following equations:Like with the nodal release method the accuracy of MCCI is highly dependent on the level of discretization along the crack tip, i.e. Accuracy also depends on element choice. A mesh of 8-node quadratic elements can produce more accurate results than a mesh of 4-node linear elements with the same number of degrees of freedom in the mesh. Domain integral approach for J The J-integral may be calculated directly using the finite element mesh and shape functions. We consider a domain contour as shown in figure 4 and choose an arbitrary smooth function such that on and on . For linear elastic cracks growing straight ahead, . The energy release rate can then be calculated over the area bounded by the contour using an updated formulation: The formula above may be applied to any annular area surrounding the crack tip (in particular, a set of neighboring elements can be used). This method is very accurate, even with a coarse mesh around the crack tip (one may choose an integration domain located far away, with stresses and displacement less sensitive to mesh refinement) {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Derivation of the J-integral for domain integral approach |- | The J-intregral may be expressed over the full contour as follows: With . , on and the work and stresses cancel out on and , hence by application of the divergence theorem this leads to: Finally, by noting that and using the equilibrium equation: |} 2-D crack tip singular elements The above-mentioned methods for calculating energy release rate asymptotically approach the actual solution with increased discretization but fail to fully capture the crack tip singularity. 
More accurate simulations can be performed by utilizing quarter-point elements around the crack tip. These elements have a built-in singularity which more accurately produces stress fields around the crack tip. The advantage of the quarter-point method is that it allows for coarser finite element meshes and greatly reduces computational cost. Furthermore, these elements are derived from small modifications to common finite elements without requiring special computational programs for analysis. For the purposes of this section elastic materials will be examined, although this method can be extended to elastic-plastic fracture mechanics. Assuming perfect elasticity the stress fields will experience a crack tip singularity. 8-node isoparametric element The 8-node quadratic element is described by Figure 5 in both parent space with local coordinates and and by the mapped element in physical/global space by and The parent element is mapped from the local space to the physical space by the shape functions and the degree of freedom coordinates The crack tip is located at or In a similar way, displacements (defined as ) can also be mapped.A property of shape functions in the finite element method is compact support, specifically the Kronecker delta property (i.e. at node and zero at all other nodes). This results in the following shape functions for the 8-node quadratic elements:When considering a line in front of the crack that is co-linear with the - axis (i.e. ) all basis functions are zero except for Calculating the normal strain involves using the chain rule to take the derivative of displacement with respect to If the nodes are spaced evenly on the rectangular element then the strain will not contain the singularity. By moving nodes 5 and 8 position to a quarter of the length of the element closer to the crack tip as seen in figure 5, the mapping from becomes:Solving for and taking the derivative results in:Plugging this result into the equation for strain the final result is obtained:By moving the mid-nodes to a quarter position results in the correct crack tip singularity. Other element types The rectangular element method does not allow for singular elements to be easily meshed around the crack tip. This impedes the ability to capture the angular dependence of the stress fields which is critical in determining the crack path. Also, except along the element edges the singularity exists in a very small region near the crack tip. Figure 6 shows another quarter-point method for modeling this singularity. The 8-node rectangular element can be mapped into a triangle. This is done by collapsing the nodes on the line to the mid-node location and shifting the mid-nodes on to the quarter-point location. The collapsed rectangle can more easily surround the crack tip but requires that the element edges be straight or the accuracy of calculating the stress intensity factor will be reduced. A better candidate for the quarter-point method is the natural triangle as seen in Figure 7. The element's geometry allows for the crack tip to be easily surrounded and meshing is simplified. Following the same procedure described above, the displacement and strain field for the triangular elements are:This method reproduces the first two terms of the Williams solutions with a constant and singular term. An advantage of the quarter-point method is that it can be easily generalized to 3-dimensional models. 
This can greatly reduce computation when compared to other 3-dimensional methods but can lead to errors if the crack tip propagates with a large degree of curvature. See also Fracture mechanics Stress intensity factor Fracture toughness J-integral References External links Nonlinear Fracture Mechanics Notes by Prof. John Hutchinson (from Harvard University) Griffith's Strain Energy Release Rate on www.fracturemechanics.org Fracture mechanics Solid mechanics Mechanics
Energy release rate (fracture mechanics)
[ "Physics", "Materials_science", "Engineering" ]
3,352
[ "Structural engineering", "Solid mechanics", "Fracture mechanics", "Materials science", "Mechanics", "Mechanical engineering", "Materials degradation" ]
2,312,855
https://en.wikipedia.org/wiki/Isotope%20hydrology
Isotope hydrology is a field of geochemistry and hydrology that uses naturally occurring stable and radioactive isotopic techniques to evaluate the age and origins of surface and groundwater and the processes within the atmospheric hydrologic cycle. Isotope hydrology applications are highly diverse, and used for informing water-use policy, mapping aquifers, conserving water supplies, assessing sources of water pollution, investigating surface-groundwater interaction, refining groundwater flow models, and increasingly are used in eco-hydrology to study human impacts on all dimensions of the hydrological cycle and ecosystem services. Details Water molecules carry unique isotopic "fingerprints", based in part on differing ratios of the oxygen and hydrogen isotopes that constitute the water molecule. Isotopes are atoms of the same element that have a different number of neutrons in their nuclei. Air, freshwater and seawater contain mostly oxygen-16 (16O). Oxygen-18 (18O) occurs in approximately one oxygen atom in every five hundred and has a slightly higher mass than oxygen-16, as it has two extra neutrons. From a simple energy and bond breakage standpoint, this results in a preference for evaporating the lighter 16O-containing water and leaving more of the 18O water behind in the liquid state (called isotope fractionation). Thus seawater tends to contain more 18O than rain and snow. Dissolved ions in surface water and groundwater also contain useful isotopes for hydrological investigations. Dissolved species like sulfate and nitrate contain differing ratios of 34-S to 32-S or 15-N to 14-N, and are often diagnostic of pollutant sources. Natural radioisotopes like tritium (3-H) and radiocarbon (14-C) are also used as natural clocks to determine the residence times of water in aquifers, rivers, and the oceans. Applications The most commonly used isotope application in hydrology uses hydrogen and oxygen isotopes to evaluate sources or age of water, ice or snow. Isotopes in ice cores help to reveal conditions of past climate. Higher average global temperature would provide more energy and thus increase the atmospheric 18O content of rain or snow, so lower-than-modern amounts of 18O in groundwater or an ice layer imply that the water or ice represents a cooler climatic era or even an ice age. Another application involves the separation of groundwater flow and baseflow from streamflow in the field of catchment hydrology (i.e. a method of hydrograph separation). Since precipitation in each rain or snowfall event has a specific isotopic signature, and subsurface water can be identified by well sampling, the composite signature in the stream indicates what proportion of the streamflow comes from overland flow and what portion comes from subsurface flow (a simple two-component mixing calculation of this kind is sketched below). Stable isotopes in the water molecule are also useful in tracing the sources (or proportion of sources) of water that plants use. Current use The isotope hydrology program at the International Atomic Energy Agency works to aid developing states in creating a detailed portrait of Earth's water resources. In Ethiopia, Libya, Chad, Egypt and Sudan, the International Atomic Energy Agency used radioisotope techniques to help local water policymakers identify and conserve fossil water. The International Atomic Energy Agency maintains a publicly accessible global network and isotopic database for Earth's rainfall and rivers.
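As a concrete illustration of the hydrograph-separation idea described above, the sketch below applies the standard two-component isotope mixing model: if event water (rainfall) and pre-event groundwater have distinct isotopic signatures, the fraction of pre-event water in the stream follows from a simple tracer mass balance. The δ18O values used are arbitrary illustrative numbers, not measurements.

```python
def pre_event_fraction(delta_stream, delta_event, delta_pre_event):
    """Two-component isotope hydrograph separation.

    A tracer mass balance (e.g. delta-18O in per mil) gives the fraction of
    streamflow supplied by pre-event (subsurface) water:
        f_pre = (delta_stream - delta_event) / (delta_pre_event - delta_event)
    """
    return (delta_stream - delta_event) / (delta_pre_event - delta_event)

# Illustrative values (per mil, relative to VSMOW): isotopically light storm
# rain, heavier pre-event groundwater, stream sampled during the storm.
f = pre_event_fraction(delta_stream=-9.0, delta_event=-12.0, delta_pre_event=-7.0)
print("Pre-event (subsurface) fraction: %.0f%%" % (100 * f))       # 60%
print("Event (overland flow) fraction:  %.0f%%" % (100 * (1 - f)))  # 40%
```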
See also Baseflow Hydrograph Water chemistry analysis References External links Isotope Hydrology at the IAEA Environmental isotopes in the hydrological cycle: Principles and applications Water Hydrology Hydraulic engineering
Isotope hydrology
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
710
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Water", "Hydraulic engineering" ]
2,313,106
https://en.wikipedia.org/wiki/Event%20reconstruction
In a particle detector experiment, event reconstruction is the process of interpreting the electronic signals produced by the detector to determine the original particles that passed through, their momenta and directions, and the primary vertex of the event. Thus the initial physical process (for instance, one that occurred at the interaction point of a particle accelerator), whose study is the ultimate goal of the experiment, can be determined. Total event reconstruction is not always possible or necessary; in some cases, only a part of the data described above is obtained and processed. Experimental particle physics
Event reconstruction
[ "Physics" ]
111
[ "Particle physics stubs", "Experimental physics", "Particle physics", "Experimental particle physics" ]
2,313,597
https://en.wikipedia.org/wiki/In%20situ%20hybridization
In situ hybridization (ISH) is a type of hybridization that uses a labeled complementary DNA, RNA or modified nucleic acid strand (i.e., a probe) to localize a specific DNA or RNA sequence in a portion or section of tissue (in situ) or, if the tissue is small enough (e.g., plant seeds, Drosophila embryos), in the entire tissue (whole mount ISH), in cells, and in circulating tumor cells (CTCs). This is distinct from immunohistochemistry, which usually localizes proteins in tissue sections. In situ hybridization is used to reveal the location of specific nucleic acid sequences on chromosomes or in tissues, a crucial step for understanding the organization, regulation, and function of genes. The key techniques currently in use include in situ hybridization to mRNA with oligonucleotide and RNA probes (both radio-labeled and hapten-labeled), analysis with light and electron microscopes, whole mount in situ hybridization, double detection of RNAs and RNA plus protein, and fluorescent in situ hybridization to detect chromosomal sequences. DNA ISH can be used to determine the structure of chromosomes. Fluorescent DNA ISH (FISH) can, for example, be used in medical diagnostics to assess chromosomal integrity. RNA ISH (RNA in situ hybridization) is used to measure and localize RNAs (mRNAs, lncRNAs, and miRNAs) within tissue sections, cells, whole mounts, and circulating tumor cells (CTCs). In situ hybridization was invented by American biologists Mary-Lou Pardue and Joseph G. Gall. Challenges of in-situ hybridization In situ hybridization is a powerful technique for identifying specific mRNA species within individual cells in tissue sections, providing insights into physiological processes and disease pathogenesis. However, in situ hybridization requires that many steps be taken with precise optimization for each tissue examined and for each probe used. In order to preserve the target mRNA within tissues, it is often required that crosslinking fixatives (such as formaldehyde) be used. In addition, in-situ hybridization on tissue sections requires that tissue slices be very thin, usually 3 μm to 7 μm in thickness. Common methods of preparing tissue sections for in-situ hybridization processing include cutting specimens with a cryostat or a Compresstome tissue slicer. A cryostat takes fresh or fixed tissue and immerses it into liquid nitrogen for flash freezing. Then the tissue is embedded in a freezing medium called OCT and thin sections are cut. Obstacles include freeze artifacts on the tissue that may interfere with proper mRNA staining. The Compresstome cuts tissue into thin slices without a freeze process; free-floating sections are cut after being embedded in agarose for stability. This method avoids freezing tissue and thus the associated freeze artifacts. The process is permanent and irreversible once it is complete. Process For hybridization histochemistry, sample cells and tissues are usually treated to fix the target transcripts in place and to increase access of the probe. As noted above, the probe is either a labeled complementary DNA or, now most commonly, a complementary RNA (riboprobe). The probe hybridizes to the target sequence at elevated temperature, and then the excess probe is washed away (after prior hydrolysis using RNase in the case of unhybridized, excess RNA probe). Solution parameters such as temperature, salt, and/or detergent concentration can be manipulated to remove any non-identical interactions (i.e., only exact sequence matches will remain bound).
Then, the probe that was labeled with either radio-, fluorescent- or antigen-labeled bases (e.g., digoxigenin) is localized and quantified in the tissue using either autoradiography, fluorescence microscopy, or immunohistochemistry, respectively. ISH can also use two or more probes, labeled with radioactivity or the other non-radioactive labels, to simultaneously detect two or more transcripts. An alternative technology, branched DNA assay, can be used for RNA (mRNA, lncRNA, and miRNA ) in situ hybridization assays with single molecule sensitivity without the use of radioactivity. This approach (e.g., ViewRNA assays) can be used to visualize up to four targets in one assay, and it uses patented probe design and bDNA signal amplification to generate sensitive and specific signals. Samples (cells, tissues, and CTCs) are fixed, then treated to allow RNA target accessibility (RNA un-masking). Target-specific probes hybridize to each target RNA. Subsequent signal amplification is predicated on specific hybridization of adjacent probes (individual oligonucleotides [oligos] that bind side by side on RNA targets). A typical target-specific probe will contain 40 oligonucleotides, resulting in 20 oligo pairs that bind side-by-side on the target for detection of mRNA and lncRNA, and 2 oligos or a single pair for miRNA detection. Signal amplification is achieved via a series of sequential hybridization steps. A pre-amplifier molecule hybridizes to each oligo pair on the target-specific RNA, then multiple amplifier molecules hybridize to each pre-amplifier. Next, multiple label probe oligonucleotides (conjugated to alkaline phosphatase or directly to fluorophores) hybridize to each amplifier molecule. A fully assembled signal amplification structure “Tree” has 400 binding sites for the label probes. When all target-specific probes bind to the target mRNA transcript, an 8,000 fold signal amplification occurs for that one transcript. Separate but compatible signal amplification systems enable the multiplex assays. The signal can be visualized using a fluorescence or brightfield microscope. Basic steps for digoxigenin-labeled probes permeabilization of cells with proteinase K to open cell membranes (around 25 minutes, not needed for tissue sections or some early-stage embryos) binding of mRNAs to marked RNA probe (usually overnight) antibody-phosphatase binding to RNA-probe (some hours) staining of antibody (e.g., with alkaline phosphatase) The protocol takes around 2–3 days and takes some time to set up. Some companies sell robots to automate the process (e.g., CEM InsituPro). As a result, large-scale screenings have been conducted in laboratories on thousands of genes. The results can usually be accessed via websites (see external links). See also Chromogenic in situ hybridization (CISH) Fluorescence in situ hybridization MRNA-based disease diagnosis References Comprehensive and annotated in situ hybridization histochemistry RNA sequencing of pancreatic circulating tumour cells implicates WNT signalling in metastasis The Local Transcriptome in the Synaptic Neuropil Revealed by Deep Sequencing and High-Resolution Imaging External links In Situ Hybridization of RNA and miRNA Probes to cells, CTCs, and tissues Whole-Mount In Situ Hybridization of RNA Probes to Plant Tissues Preparation of Complex DNA Probe Sets for 3D FISH with up to Six Different Fluorochromes Transcript In Situ Hybridization of Whole-Mount Embryos for Phenotype Analysis of RNAi-Treated Drosophila in-situ databases: Ghost, C. 
intestinalis transcription factors Zebrafish gene expression Mouse MGI GXD gene expression eurexpress Biochemistry detection methods Genetics techniques Laboratory techniques Biological techniques and tools
In situ hybridization
[ "Chemistry", "Engineering", "Biology" ]
1,588
[ "Biochemistry methods", "Genetics techniques", "Genetic engineering", "Chemical tests", "nan", "Biochemistry detection methods" ]
2,315,118
https://en.wikipedia.org/wiki/Digital%20sum%20in%20base%20b
The digital sum in base b of a set of natural numbers is calculated as follows: express each of the numbers in base b, then take the sum of corresponding digits and discard all carry overs. That is, the digital sum is the same as the normal sum except that no carrying is used. For example, in decimal (base 10) arithmetic, the digital sum of 123 and 789 is 802: 3 + 9 = 12, discard the 10 leaving 2. 2 + 8 = 10, discard the 10 leaving 0. 1 + 7 = 8, there is no carry to discard. 123 789 --- 802 More usually the digital sum is calculated in binary (base 2) where the result only depends upon whether there are an even or odd number of 1s in each column. This is the same function as parity or multiple exclusive ors. For example: 011 (3) 100 (4) 101 (5) --- 010 (2) is the binary digital sum of 3, 4 and 5. The binary digital sum is crucial for the theory of the game of Nim. The digital sum in base b is an associative and commutative operation on the natural numbers; it has 0 as neutral element and every natural number has an inverse element under this operation. The natural numbers together with the base-b digital sum thus form an abelian group; this group is isomorphic to the direct sum of a countable number of copies of Z/bZ. References https://math.stackexchange.com/questions/4246954/sum-of-digits-of-a-number-in-base-b Integers
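The rule above is easy to state in code. The following is a small illustrative sketch (the function name and examples are mine, not part of the article) that adds a set of natural numbers digit by digit in base b while discarding every carry.

def digital_sum(numbers, base):
    # Add corresponding base-`base` digits and discard all carries.
    result, place = 0, 1
    while any(n > 0 for n in numbers):
        digit = sum(n % base for n in numbers) % base   # column sum without carry
        result += digit * place
        place *= base
        numbers = [n // base for n in numbers]
    return result

print(digital_sum([123, 789], 10))  # 802, as in the decimal example above
print(digital_sum([3, 4, 5], 2))    # 2, the binary digital (Nim) sum

In binary this is just the bitwise exclusive or of the inputs, which is why the same quantity appears as the Nim-sum in the theory of the game of Nim.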
Digital sum in base b
[ "Mathematics" ]
350
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
11,226,479
https://en.wikipedia.org/wiki/Wells%20turbine
The Wells turbine is a low-pressure air turbine that rotates continuously in one direction independent of the direction of the air flow. Its blades feature a symmetrical airfoil with its plane of symmetry in the plane of rotation and perpendicular to the air stream. It was developed for use in Oscillating Water Column wave power plants, in which a rising and falling water surface moving in an air compression chamber produces an oscillating air current. The use of this bidirectional turbine avoids the need to rectify the air stream by delicate and expensive check valve systems. Its efficiency is lower than that of a turbine with constant air stream direction and asymmetric airfoil. One reason for the lower efficiency is that symmetric airfoils have a higher drag coefficient than asymmetric ones, even under optimal conditions. Also, in the Wells turbine, the symmetric airfoil runs partly under high angle of attack (i.e., low blade speed / air speed ratio), which occurs during the air velocity maxima of the oscillating flow. A high angle of attack causes a condition known as "stall" in which the airfoil loses lift. The efficiency of the Wells turbine in oscillating flow reaches values between 0.4 and 0.7. The Wells turbine was developed by Prof. Alan Arthur Wells of Queen's University Belfast in the late 1970s. Annotation Another solution to the problem of a turbine that is independent of the flow direction is the Darrieus wind turbine (Darrieus rotor). See also Siadar Wave Energy Project Yoshio Masuda Hanna Wave Energy Turbine free 3D design to print your own External links Animation showing OWC wave power plant SolaRoad Queen's University Belfast Mechanical engineering Turbines Power station technology Renewable energy technology Water power Electrical generators
Wells turbine
[ "Physics", "Chemistry", "Technology", "Engineering" ]
363
[ "Electrical generators", "Machines", "Applied and interdisciplinary physics", "Turbomachinery", "Turbines", "Physical systems", "Mechanical engineering" ]
11,230,975
https://en.wikipedia.org/wiki/Resonance%20fluorescence
Resonance fluorescence is the process in which a two-level atom system interacts with the quantum electromagnetic field when the field is driven at a frequency near to the natural frequency of the atom. General theory Typically the photon-containing electromagnetic field is applied to the two-level atom through the use of a monochromatic laser. A two-level atom is a specific type of two-state system in which the atom can be found in one of two possible states: with its electron in the ground state or in the excited state. In many experiments an atom of lithium is used because it can be closely modeled as a two-level atom, since the excited states of its single electron are separated by large enough energy gaps to significantly reduce the possibility of the electron jumping to a higher excited state. This allows for easier frequency tuning of the applied laser, as frequencies further from resonance can be used while still driving the electron to jump to only the first excited state. Once the atom is excited, it will release a photon with the same energy as the energy difference between the excited and ground state. The mechanism for this release is the spontaneous decay of the atom. The emitted photon is released in an arbitrary direction. While the transition between two specific energy levels is the dominant mechanism in resonance fluorescence, experimentally other transitions will play a very small role and thus must be taken into account when analyzing results. The other transitions will lead to emission of a photon of a different atomic transition with much lower energy, which will lead to "dark" periods of resonance fluorescence. The dynamics of the electromagnetic field of the monochromatic laser can be derived by first treating the two-level atom as a spin-1/2 system with two energy eigenstates which have energy separation of ħω. The dynamics of the atom can then be described by the three rotation operators acting upon the Bloch sphere. Thus the energy of the system is described entirely through an electric dipole interaction between the atom and the field, with the resulting Hamiltonian given by the corresponding dipole coupling term. After quantizing the electromagnetic field, the Heisenberg equation as well as Maxwell's equations can then be used to find the resulting equations of motion for the atomic rotation operators as well as for the annihilation operator of the field, where certain frequency parameters are introduced to simplify the equations. Now that the dynamics of the field with respect to the states of the atom have been described, the mechanism through which photons are released from the atom as the electron falls from the excited state to the ground state, spontaneous emission, can be examined. Spontaneous emission is when an excited electron arbitrarily decays to the ground state, emitting a photon. As the electromagnetic field is coupled to the state of the atom, and the atom can only absorb a single photon before having to decay, the most basic case then is if the field only contains a single photon. Thus spontaneous decay occurs when the excited state of the atom emits a photon back into the vacuum Fock state of the field. During this process the decay of the expectation values of the above operators follows the corresponding relations. So the atom decays exponentially and the atomic dipole moment oscillates. The dipole moment oscillates due to the Lamb shift, which is a shift in the energy levels of the atom due to fluctuations of the field.
It is imperative, however, to look at fluorescence in the presence of a field with many photons, as this is a much more general case. This is the case in which the atom goes through many excitation cycles. In this case the exciting field emitted from the laser is in the form of coherent states. This allows for the operators which comprise the field to act on the coherent state and thus be replaced with eigenvalues. Thus we can simplify the equations by allowing operators to be turned into constants. The field can then be described much more classically than a quantized field normally could be. As a result, we are able to find the expectation value of the electric field for the retarded time, which depends on the angle between the observation direction and the atomic dipole. There are two general types of excitations produced by fields. The first is one that dies out over time, while the other eventually reaches a constant amplitude. Here the steady excitation is characterized by a real normalization constant, a real phase factor, and a unit vector which indicates the direction of the excitation. Thus, in the long-time limit, the excitation approaches this constant-amplitude form. Since the oscillation frequency here is the Rabi frequency, we can see that this is analogous to the rotation of a spin state around the Bloch sphere, as in an interferometer. Thus the dynamics of a two-level atom can be accurately modeled by a photon in an interferometer. It is also possible to model the system as an atom and a field, and it will, in fact, retain more properties of the system such as the Lamb shift, but the basic dynamics of resonance fluorescence can be modeled as a spin-1/2 particle. Resonance fluorescence in the Weak Field There are several limits that can be analyzed to make the study of resonance fluorescence easier. The first of these is the set of approximations associated with the Weak Field Limit, where the square modulus of the Rabi frequency of the field that is coupled to the two-level atom is much smaller than the rate of spontaneous emission of the atom. This means that the difference in the population between the excited state of the atom and the ground state of the atom is approximately independent of time. If we also take the limit in which the time period is much larger than the time for spontaneous decay, the coherences of the light can be modeled in terms of the Rabi frequency of the driving field and the spontaneous decay rate of the atom. Thus it is clear that when an electric field is applied to the atom, the dipole of the atom oscillates according to the driving frequency and not the natural frequency of the atom. If we also look at the positive frequency component of the electric field, we can see that the emitted field is the same as the absorbed field other than the difference in direction, resulting in the spectrum of the emitted field being the same as that of the absorbed field. The result is that the two-level atom behaves exactly as a driven oscillator and continues scattering photons so long as the driving field remains coupled to the atom. The weak field approximation is also used in approaching two-time correlation functions. In the weak-field limit, the correlation function can be calculated much more easily as only the first three terms must be kept. Thus the correlation function takes a simplified form. From the above equation we can see that at long times the correlation function will no longer depend on the absolute time, but rather will depend on the delay between the two times. The system will eventually reach a quasi-stationary state at long times. It is also clear that there are terms in the equation that go to zero in this limit.
These are the result of the Markovian processes of the quantum fluctuations of the system. We see that in the weak field approximation, and at long times, the coupled system will reach a quasi-steady state where the quantum fluctuations become negligible. Resonance fluorescence in the Strong Field The Strong Field Limit is the exact opposite limit to the weak field, where the square modulus of the Rabi frequency of the electromagnetic field is much larger than the rate of spontaneous emission of the two-level atom. When a strong field is applied to the atom, a single peak is no longer observed in the radiation spectrum of the fluorescent light. Instead, other peaks begin appearing on either side of the original peak. These are known as sidebands. The sidebands are a result of the Rabi oscillations of the field causing a modulation in the dipole moment of the atom. This causes a splitting in the degeneracy of certain eigenstates of the Hamiltonian, which are split into doublets. This is known as dynamic Stark splitting and is the cause of the Mollow triplet, which is a characteristic energy spectrum found in resonance fluorescence. An interesting phenomenon arises in the Mollow triplet, where both of the sideband peaks have a width different from that of the central peak. If the Rabi frequency is allowed to become much larger than the rate of spontaneous decay of the atom, we can take the strong field limit of the corresponding expression. From this equation it is clear where the differences in the widths of the peaks in the Mollow triplet arise: the central peak and the sideband peaks acquire different widths, both set by the rate of spontaneous emission of the atom. Unfortunately this cannot be used to calculate a steady state solution, as the corresponding quantities vanish in a steady state solution. Thus the spectrum would vanish in a steady state solution, which is not the actual case. The solution that does allow for a steady state must take the form of a two-time correlation function, as opposed to the above one-time correlation function. Since this correlation function includes the steady state limits of the density matrix, and the spectrum is nonzero, it is clear that the Mollow triplet remains the spectrum of the fluoresced light even in a steady state solution. General two-time correlation functions and spectral density The study of correlation functions is critical to the study of quantum optics, as the Fourier transform of the correlation function is the energy spectral density. Thus the two-time correlation function is a useful tool in the calculation of the energy spectrum for a given system. We take the delay parameter to be the difference between the two times at which the function is calculated. While correlation functions can more easily be described using limits of the strength of the field and limits placed on the time of the system, they can be found more generally as well. For resonance fluorescence, the most important correlation functions are two-time expectation values built from the atomic raising and lowering operators and the emitted field operators. Two-time correlation functions are generally shown to be independent of the absolute time, and instead rely on the delay between the two times. These functions can be used to find the spectral density by computing the Fourier transform over the delay, up to a constant K. The spectral density can be viewed as the rate of emission of photons of a given frequency at a given time, which is useful in determining the power output of a system at a given time.
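The strong-field spectrum just described can be sketched numerically. The following Python sketch approximates the Mollow triplet as a sum of three Lorentzians; the half-widths used here (gamma/2 for the central peak, 3*gamma/4 for the sidebands) and the 1/2, 1/4, 1/4 weights are standard strong-field, on-resonance results quoted from the quantum optics literature, not formulas recovered from the expressions missing in this article.

import numpy as np

def mollow_triplet(delta, rabi, gamma):
    # Textbook strong-field approximation: a central Lorentzian of half-width gamma/2
    # plus two sidebands at +/- rabi of half-width 3*gamma/4 (weights 1/2, 1/4, 1/4).
    def lorentzian(x, hwhm):
        return (hwhm / np.pi) / (x**2 + hwhm**2)
    return (0.5 * lorentzian(delta, gamma / 2)
            + 0.25 * lorentzian(delta - rabi, 3 * gamma / 4)
            + 0.25 * lorentzian(delta + rabi, 3 * gamma / 4))

delta = np.linspace(-30, 30, 1201)                 # detuning from the laser, in units of gamma
spectrum = mollow_triplet(delta, rabi=15.0, gamma=1.0)
print(delta[np.argmax(spectrum)])                  # central peak sits at the driving frequency (0.0)

With these weights and widths the peak heights come out in the familiar 1:3:1 ratio, which is the qualitative signature described in the text above.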
The correlation function associated with the spectral density of resonance fluorescence is reliant on the electric field. Thus once the constant K has been determined, the result can be written in terms of the field correlation function, which is in turn related to the intensity. In the weak field limit the power spectrum can be determined explicitly. In the strong field limit, the power spectrum is slightly more complicated. From these two functions it is easy to see that in the weak field limit a single peak appears in the spectral density at the driving frequency, due to the delta function, while in the strong field limit a Mollow triplet forms, with sideband peaks displaced by the Rabi frequency and with different widths for the central peak and for the sideband peaks. Photon Anti-bunching Photon anti-bunching is the process in resonance fluorescence through which the rate at which photons are emitted by a two-level atom is limited. A two-level atom is only capable of absorbing a photon from the driving electromagnetic field after a certain period of time has passed. This time period is modeled as a probability distribution. As the atom cannot absorb a photon, it is unable to emit one, and thus there is a restriction on the spectral density. This is illustrated by the second order correlation function. From the above equation the relation that describes photon antibunching follows; this shows that the power cannot be anything other than zero at zero delay. In the weak field approximation the correlation function can only increase monotonically as the delay increases, whereas in the strong field approximation it oscillates as it increases. These oscillations die off at long delays. The physical idea behind photon anti-bunching is that while the atom itself is ready to be excited as soon as it releases its previous photon, the electromagnetic field created by the laser takes time to excite the atom. Double Resonance Double resonance is the phenomenon that occurs when an additional magnetic field is applied to a two-level atom in addition to the typical electromagnetic field used to drive resonance fluorescence. This lifts the spin degeneracy of the Zeeman energy levels, splitting them according to the energies associated with the respective available spin levels, allowing not only resonance to be achieved around the typical excited state but also, if a second driving electromagnetic field associated with the Larmor frequency is applied, a second resonance around the magnetic sub-levels. Thus resonance is achievable not only about the possible energy levels of a two-level atom, but also about the energy sub-levels created by lifting the degeneracy of a level. If the applied magnetic field is tuned properly, the polarization of resonance fluorescence can be used to describe the composition of the excited state. Thus double resonance can be used to find the Landé factor, which is used to describe the magnetic moment of the electron within the two-level atom. Resonance fluorescence of a single artificial atom Any two-state system can be modeled as a two-level atom. This leads to many systems being described as an "Artificial Atom". For instance, a superconducting loop which can create a magnetic flux passing through it can act as an artificial atom, as the current can induce a magnetic flux in either direction through the loop depending on whether the current is clockwise or counterclockwise. The Hamiltonian for this system can be written in the standard two-level form. This models the dipole interaction of the atom with a 1-D electromagnetic wave.
It is easy to see that this is truly analogous to a real two-level atom due to the fact that the fluorescence appears in the spectrum as the Mollow triplet, precisely like a true two-level atom. These artificial atoms are often used to explore the phenomena of quantum coherence. This allows for the study of squeezed light which is known for creating more precise measurements. It is difficult to explore the resonance fluorescence of squeezed light in a typical two-level atom as all modes of the electromagnetic field must be squeezed which cannot easily be accomplished. In an artificial atom, the number of possible modes of the field is significantly limited allowing for easier study of squeezed light. In 2016 D.M. Toyli et al., performed an experiment in which two superconducting parametric amplifiers were used to generate squeezed light and then detect resonance fluorescence in artificial atoms from the squeezed light. Their results agreed strongly with the theory describing the phenomena. The implication of this study is it allows for resonance fluorescence to assist in qubit readout for squeezed light. The qubit used in the study was an aluminum transmon circuit that was then coupled to a 3-D aluminum cavity. Extra silicon chips were introduced to the cavity to assist in the tuning of resonance to that of the cavity. The majority of the detuning that did occur was a result of the degeneration of the qubit over time. Resonance fluorescence from a Semiconductor Quantum Dot A quantum dot is a semiconductor nano-particle that is often used in quantum optical systems. This includes their ability to be placed in optical microcavities where they can act as two-level systems. In this process, quantum dots are placed in cavities which allow for the discretization of the possible energy states of the quantum dot coupled with the vacuum field. The vacuum field is then replaced by an excitation field and resonance fluorescence is observed. Current technology only allows for population of the dot in an excited state (not necessarily always the same), and relaxation of the quantum dot back to its ground state. Direct excitation followed by ground state collection was not achieved until recently. This is mainly due to the fact that as a result of the size of quantum dots, defects and contaminants create fluorescence of their own apart from the quantum dot. This desired manipulation has been achieved by quantum dots by themselves through a number of techniques including four-wave mixing and differential reflectivity, however no techniques had shown it to occur in cavities until 2007. Resonance fluorescence has been seen in a single self-assembled quantum dot as presented by Muller among others in 2007. In the experiment they used quantum dots that were grown between two mirrors in the cavity. Thus the quantum dot was not placed in the cavity, but instead created in it. They then coupled a strong in-plane polarized tunable continuous-wave laser to the quantum dot and were able to observe resonance fluorescence from the quantum dot. In addition to the excitation of the quantum dot that was achieved, they were also able to collect the photon that was emitted with a micro-PL setup. This allows for resonant coherent control of the ground state of the quantum dot while also collecting the photons emitted from the fluorescence. Coupling photons to a molecule In 2007, G. Wrigge, I. Gerhardt, J. Hwang, G. Zumofen, and V. 
Sandoghdar developed an efficient method to observe resonance fluorescence for an entire molecule as opposed to its typical observation in a single atom. Instead of coupling the electric field to a single atom, they were able to replicate two-level systems in dye molecules embedded in solids. They used a tunable dye laser to excite the dye molecules in their sample. Because they could only have one source at a time, the proportion of shot noise to actual data was much higher than normal. The sample which they excited was a Shpol'skii matrix which they had doped with the dye they wished to use, dibenzanthanthrene. To improve the accuracy of the results, single-molecule fluorescence-excitation spectroscopy was used. The actual process for measuring the resonance was measuring the interference between the laser beam and the photons that were scattered from the molecule. Thus the laser was passed over the sample, resulting in several photons being scattered back and allowing for the measurement of the interference in the electromagnetic field that resulted. The improvement to this technique was that they used solid-immersion lens technology. This is a lens that has a much higher numerical aperture than normal lenses, as it is filled with a material that has a large refractive index. The technique used to measure the resonance fluorescence in this system was originally designed to locate individual molecules within substances. Implications of Resonance fluorescence The largest implications that arise from resonance fluorescence are for future technologies. Resonance fluorescence is used primarily in the coherent control of atoms. By coupling a two-level atom, such as a quantum dot, to an electric field in the form of a laser, one is able to effectively create a qubit. The qubit states correspond to the excited and the ground state of the two-level atom. Manipulation of the electromagnetic field allows for effective control of the dynamics of the atom. Such qubits can then be used to create quantum computers. The largest barriers that still stand in the way of this being achievable are failures in truly controlling the atom. For instance, true control of spontaneous decay and decoherence of the field poses large problems that must be overcome before two-level atoms can truly be used as qubits. References Radiochemistry Fluorescence Resonance
Resonance fluorescence
[ "Physics", "Chemistry" ]
3,940
[ "Resonance", "Physical phenomena", "Luminescence", "Fluorescence", "Waves", "Scattering", "Radiochemistry", "Radioactivity" ]
12,191,272
https://en.wikipedia.org/wiki/Newton%27s%20theorem%20of%20revolving%20orbits
In classical mechanics, Newton's theorem of revolving orbits identifies the type of central force needed to multiply the angular speed of a particle by a factor k without affecting its radial motion (Figures 1 and 2). Newton applied his theorem to understanding the overall rotation of orbits (apsidal precession, Figure 3) that is observed for the Moon and planets. The term "radial motion" signifies the motion towards or away from the center of force, whereas the angular motion is perpendicular to the radial motion. Isaac Newton derived this theorem in Propositions 43–45 of Book I of his Philosophiæ Naturalis Principia Mathematica, first published in 1687. In Proposition 43, he showed that the added force must be a central force, one whose magnitude depends only upon the distance r between the particle and a point fixed in space (the center). In Proposition 44, he derived a formula for the force, showing that it was an inverse-cube force, one that varies as the inverse cube of r. In Proposition 45 Newton extended his theorem to arbitrary central forces by assuming that the particle moved in nearly circular orbit. This theorem remained largely unknown and undeveloped for over three centuries, as noted by astrophysicist Subrahmanyan Chandrasekhar in his 1995 commentary on Newton's Principia. Since 1997, the theorem has been studied by Donald Lynden-Bell and collaborators. Its first exact extension came in 2000 with the work of Mahomed and Vawda. Historical context The motion of astronomical bodies has been studied systematically for thousands of years. The stars were observed to rotate uniformly, always maintaining the same relative positions to one another. However, other bodies were observed to wander against the background of the fixed stars; most such bodies were called planets after the Greek word "πλανήτοι" (planētoi) for "wanderers". Although they generally move in the same direction along a path across the sky (the ecliptic), individual planets sometimes reverse their direction briefly, exhibiting retrograde motion. To describe this forward-and-backward motion, Apollonius of Perga () developed the concept of deferents and epicycles, according to which the planets are carried on rotating circles that are themselves carried on other rotating circles, and so on. Any orbit can be described with a sufficient number of judiciously chosen epicycles, since this approach corresponds to a modern Fourier transform. Roughly 350 years later, Claudius Ptolemaeus published his Almagest, in which he developed this system to match the best astronomical observations of his era. To explain the epicycles, Ptolemy adopted the geocentric cosmology of Aristotle, according to which planets were confined to concentric rotating spheres. This model of the universe was authoritative for nearly 1500 years. The modern understanding of planetary motion arose from the combined efforts of astronomer Tycho Brahe and physicist Johannes Kepler in the 16th century. Tycho is credited with extremely accurate measurements of planetary motions, from which Kepler was able to derive his laws of planetary motion. According to these laws, planets move on ellipses (not epicycles) about the Sun (not the Earth). Kepler's second and third laws make specific quantitative predictions: planets sweep out equal areas in equal time, and the square of their orbital periods equals a fixed constant times the cube of their semi-major axis. 
Subsequent observations of the planetary orbits showed that the long axis of the ellipse (the so-called line of apsides) rotates gradually with time; this rotation is known as apsidal precession. The apses of an orbit are the points at which the orbiting body is closest or furthest away from the attracting center; for planets orbiting the Sun, the apses correspond to the perihelion (closest) and aphelion (furthest). With the publication of his Principia roughly eighty years later (1687), Isaac Newton provided a physical theory that accounted for all three of Kepler's laws, a theory based on Newton's laws of motion and his law of universal gravitation. In particular, Newton proposed that the gravitational force between any two bodies was a central force F(r) that varied as the inverse square of the distance r between them. Arguing from his laws of motion, Newton showed that the orbit of any particle acted upon by one such force is always a conic section, specifically an ellipse if it does not go to infinity. However, this conclusion holds only when two bodies are present (the two-body problem); the motion of three bodies or more acting under their mutual gravitation (the n-body problem) remained unsolved for centuries after Newton, although solutions to a few special cases were discovered. Newton proposed that the orbits of planets about the Sun are largely elliptical because the Sun's gravitation is dominant; to first approximation, the presence of the other planets can be ignored. By analogy, the elliptical orbit of the Moon about the Earth was dominated by the Earth's gravity; to first approximation, the Sun's gravity and those of other bodies of the Solar System can be neglected. However, Newton stated that the gradual apsidal precession of the planetary and lunar orbits was due to the effects of these neglected interactions; in particular, he stated that the precession of the Moon's orbit was due to the perturbing effects of gravitational interactions with the Sun. Newton's theorem of revolving orbits was his first attempt to understand apsidal precession quantitatively. According to this theorem, the addition of a particular type of central force—the inverse-cube force—can produce a rotating orbit; the angular speed is multiplied by a factor k, whereas the radial motion is left unchanged. However, this theorem is restricted to a specific type of force that may not be relevant; several perturbing inverse-square interactions (such as those of other planets) seem unlikely to sum exactly to an inverse-cube force. To make his theorem applicable to other types of forces, Newton found the best approximation of an arbitrary central force F(r) to an inverse-cube potential in the limit of nearly circular orbits, that is, elliptical orbits of low eccentricity, as is indeed true for most orbits in the Solar System. To find this approximation, Newton developed an infinite series that can be viewed as the forerunner of the Taylor expansion. This approximation allowed Newton to estimate the rate of precession for arbitrary central forces. Newton applied this approximation to test models of the force causing the apsidal precession of the Moon's orbit. However, the problem of the Moon's motion is dauntingly complex, and Newton never published an accurate gravitational model of the Moon's apsidal precession. After a more accurate model by Clairaut in 1747, analytical models of the Moon's motion were developed in the late 19th century by Hill, Brown, and Delaunay. 
However, Newton's theorem is more general than merely explaining apsidal precession. It describes the effects of adding an inverse-cube force to any central force F(r), not only to inverse-square forces such as Newton's law of universal gravitation and Coulomb's law. Newton's theorem simplifies orbital problems in classical mechanics by eliminating inverse-cube forces from consideration. The radial and angular motions, r(t) and θ1(t), can be calculated without the inverse-cube force; afterwards, its effect can be calculated by multiplying the angular speed of the particle Mathematical statement Consider a particle moving under an arbitrary central force F1(r) whose magnitude depends only on the distance r between the particle and a fixed center. Since the motion of a particle under a central force always lies in a plane, the position of the particle can be described by polar coordinates (r, θ1), the radius and angle of the particle relative to the center of force (Figure 1). Both of these coordinates, r(t) and θ1(t), change with time t as the particle moves. Imagine a second particle with the same mass m and with the same radial motion r(t), but one whose angular speed is k times faster than that of the first particle. In other words, the azimuthal angles of the two particles are related by the equation θ2(t) = k θ1(t). Newton showed that the motion of the second particle can be produced by adding an inverse-cube central force to whatever force F1(r) acts on the first particle where L1 is the magnitude of the first particle's angular momentum, which is a constant of motion (conserved) for central forces. If k2 is greater than one, F2 − F1 is a negative number; thus, the added inverse-cube force is attractive, as observed in the green planet of Figures 1–4 and 9. By contrast, if k2 is less than one, F2−F1 is a positive number; the added inverse-cube force is repulsive, as observed in the green planet of Figures 5 and 10, and in the red planet of Figures 4 and 5. Alteration of the particle path The addition of such an inverse-cube force also changes the path followed by the particle. The path of the particle ignores the time dependencies of the radial and angular motions, such as r(t) and θ1(t); rather, it relates the radius and angle variables to one another. For this purpose, the angle variable is unrestricted and can increase indefinitely as the particle revolves around the central point multiple times. For example, if the particle revolves twice about the central point and returns to its starting position, its final angle is not the same as its initial angle; rather, it has increased by . Formally, the angle variable is defined as the integral of the angular speed A similar definition holds for θ2, the angle of the second particle. If the path of the first particle is described in the form , the path of the second particle is given by the function , since . For example, let the path of the first particle be an ellipse where A and B are constants; then, the path of the second particle is given by Orbital precession If k is close, but not equal, to one, the second orbit resembles the first, but revolves gradually about the center of force; this is known as orbital precession (Figure 3). If k is greater than one, the orbit precesses in the same direction as the orbit (Figure 3); if k is less than one, the orbit precesses in the opposite direction. Although the orbit in Figure 3 may seem to rotate uniformly, i.e., at a constant angular speed, this is true only for circular orbits. 
If the orbit rotates at an angular speed Ω, the angular speed of the second particle is faster or slower than that of the first particle by Ω; in other words, the angular speeds would satisfy the equation . However, Newton's theorem of revolving orbits states that the angular speeds are related by multiplication: , where k is a constant. Combining these two equations shows that the angular speed of the precession equals . Hence, Ω is constant only if ω1 is constant. According to the conservation of angular momentum, ω1 changes with the radius r where m and L1 are the first particle's mass and angular momentum, respectively, both of which are constant. Hence, ω1 is constant only if the radius r is constant, i.e., when the orbit is a circle. However, in that case, the orbit does not change as it precesses. Illustrative example: Cotes's spirals The simplest illustration of Newton's theorem occurs when there is no initial force, i.e., F1(r) = 0. In this case, the first particle is stationary or travels in a straight line. If it travels in a straight line that does not pass through the origin (yellow line in Figure 6) the equation for such a line may be written in the polar coordinates (r, θ1) as where θ0 is the angle at which the distance is minimized (Figure 6). The distance r begins at infinity (when θ1 – ), and decreases gradually until θ1 – , when the distance reaches a minimum, then gradually increases again to infinity at θ1 – . The minimum distance b is the impact parameter, which is defined as the length of the perpendicular from the fixed center to the line of motion. The same radial motion is possible when an inverse-cube central force is added. An inverse-cube central force F2(r) has the form where the numerator μ may be positive (repulsive) or negative (attractive). If such an inverse-cube force is introduced, Newton's theorem says that the corresponding solutions have a shape called Cotes's spirals. These are curves defined by the equation where the constant k equals When the right-hand side of the equation is a positive real number, the solution corresponds to an epispiral. When the argument θ1 – θ0 equals ±90°×k, the cosine goes to zero and the radius goes to infinity. Thus, when k is less than one, the range of allowed angles becomes small and the force is repulsive (red curve on right in Figure 7). On the other hand, when k is greater than one, the range of allowed angles increases, corresponding to an attractive force (green, cyan and blue curves on left in Figure 7); the orbit of the particle can even wrap around the center several times. The possible values of the parameter k may range from zero to infinity, which corresponds to values of μ ranging from negative infinity up to the positive upper limit, L12/m. Thus, for all attractive inverse-cube forces (negative μ) there is a corresponding epispiral orbit, as for some repulsive ones (μ < L12/m), as illustrated in Figure 7. Stronger repulsive forces correspond to a faster linear motion. One of the other solution types is given in terms of the hyperbolic cosine: where the constant λ satisfies This form of Cotes's spirals corresponds to one of the two Poinsot's spirals (Figure 8). The possible values of λ range from zero to infinity, which corresponds to values of μ greater than the positive number L12/m. Thus, Poinsot spiral motion only occurs for repulsive inverse-cube central forces, and applies in the case that L is not too large for the given μ. 
Taking the limit of k or λ going to zero yields the third form of a Cotes's spiral, the so-called reciprocal spiral or hyperbolic spiral, as a solution where A and ε are arbitrary constants. Such curves result when the strength μ of the repulsive force exactly balances the angular momentum-mass term Closed orbits and inverse-cube central forces Two types of central forces—those that increase linearly with distance, F = Cr, such as Hooke's law, and inverse-square forces, , such as Newton's law of universal gravitation and Coulomb's law—have a very unusual property. A particle moving under either type of force always returns to its starting place with its initial velocity, provided that it lacks sufficient energy to move out to infinity. In other words, the path of a bound particle is always closed and its motion repeats indefinitely, no matter what its initial position or velocity. As shown by Bertrand's theorem, this property is not true for other types of forces; in general, a particle will not return to its starting point with the same velocity. However, Newton's theorem shows that an inverse-cubic force may be applied to a particle moving under a linear or inverse-square force such that its orbit remains closed, provided that k equals a rational number. (A number is called "rational" if it can be written as a fraction m/n, where m and n are integers.) In such cases, the addition of the inverse-cubic force causes the particle to complete m rotations about the center of force in the same time that the original particle completes n rotations. This method for producing closed orbits does not violate Bertrand's theorem, because the added inverse-cubic force depends on the initial velocity of the particle. Harmonic and subharmonic orbits are special types of such closed orbits. A closed trajectory is called a harmonic orbit if k is an integer, i.e., if in the formula . For example, if (green planet in Figures 1 and 4, green orbit in Figure 9), the resulting orbit is the third harmonic of the original orbit. Conversely, the closed trajectory is called a subharmonic orbit if k is the inverse of an integer, i.e., if in the formula . For example, if (green planet in Figure 5, green orbit in Figure 10), the resulting orbit is called the third subharmonic of the original orbit. Although such orbits are unlikely to occur in nature, they are helpful for illustrating Newton's theorem. Limit of nearly circular orbits In Proposition 45 of his Principia, Newton applies his theorem of revolving orbits to develop a method for finding the force laws that govern the motions of planets. Johannes Kepler had noted that the orbits of most planets and the Moon seemed to be ellipses, and the long axis of those ellipses can be determined accurately from astronomical measurements. The long axis is defined as the line connecting the positions of minimum and maximum distances to the central point, i.e., the line connecting the two apses. For illustration, the long axis of the planet Mercury is defined as the line through its successive positions of perihelion and aphelion. Over time, the long axis of most orbiting bodies rotates gradually, generally no more than a few degrees per complete revolution, because of gravitational perturbations from other bodies, oblateness in the attracting body, general relativistic effects, and other effects. Newton's method uses this apsidal precession as a sensitive probe of the type of force being applied to the planets.
Newton's theorem describes only the effects of adding an inverse-cube central force. However, Newton extends his theorem to an arbitrary central force F(r) by restricting his attention to orbits that are nearly circular, such as ellipses with low orbital eccentricity (ε ≤ 0.1), which is true of seven of the eight planetary orbits in the solar system. Newton also applied his theorem to the planet Mercury, which has an eccentricity ε of roughly 0.21, and suggested that it may pertain to Halley's comet, whose orbit has an eccentricity of roughly 0.97. A qualitative justification for this extrapolation of his method has been suggested by Valluri, Wilson and Harper. According to their argument, Newton considered the apsidal precession angle α (the angle between the vectors of successive minimum and maximum distance from the center) to be a smooth, continuous function of the orbital eccentricity ε. For the inverse-square force, α equals 180°; the vectors to the positions of minimum and maximum distances lie on the same line. If α is initially not 180° at low ε (quasi-circular orbits) then, in general, α will equal 180° only for isolated values of ε; a randomly chosen value of ε would be very unlikely to give α = 180°. Therefore, the observed slow rotation of the apsides of planetary orbits suggest that the force of gravity is an inverse-square law. Quantitative formula To simplify the equations, Newton writes F(r) in terms of a new function C(r) where R is the average radius of the nearly circular orbit. Newton expands C(r) in a series—now known as a Taylor expansion—in powers of the distance r, one of the first appearances of such a series. By equating the resulting inverse-cube force term with the inverse-cube force for revolving orbits, Newton derives an equivalent angular scaling factor k for nearly circular orbits: In other words, the application of an arbitrary central force F(r) to a nearly circular elliptical orbit can accelerate the angular motion by the factor k without affecting the radial motion significantly. If an elliptical orbit is stationary, the particle rotates about the center of force by 180° as it moves from one end of the long axis to the other (the two apses). Thus, the corresponding apsidal angle α for a general central force equals k×180°, using the general law . Examples Newton illustrates his formula with three examples. In the first two, the central force is a power law, , so C(r) is proportional to rn. The formula above indicates that the angular motion is multiplied by a factor , so that the apsidal angle α equals 180°/. This angular scaling can be seen in the apsidal precession, i.e., in the gradual rotation of the long axis of the ellipse (Figure 3). As noted above, the orbit as a whole rotates with a mean angular speed Ω=(k−1)ω, where ω equals the mean angular speed of the particle about the stationary ellipse. If the particle requires a time T to move from one apse to the other, this implies that, in the same time, the long axis will rotate by an angle β = ΩT = (k − 1)ωT = (k − 1)×180°. For an inverse-square law such as Newton's law of universal gravitation, where n equals 1, there is no angular scaling (k = 1), the apsidal angle α is 180°, and the elliptical orbit is stationary (Ω = β = 0). As a final illustration, Newton considers a sum of two power laws which multiplies the angular speed by a factor Newton applies both of these formulae (the power law and sum of two power laws) to examine the apsidal precession of the Moon's orbit. 
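As a concrete check of the power-law examples just described, the small Python sketch below evaluates the standard near-circular result that a central force proportional to r raised to the power n − 3 gives an apsidal angle of 180°/√n; the formula is the usual textbook reading of Proposition 45, and the sample exponents are chosen purely for illustration.

import math

def apsidal_angle_deg(force_exponent):
    # For a nearly circular orbit under F(r) proportional to r**force_exponent,
    # write force_exponent = n - 3; the apsidal angle is then 180 degrees / sqrt(n).
    n = force_exponent + 3
    if n <= 0:
        raise ValueError("No stable nearly circular orbit for this exponent.")
    return 180.0 / math.sqrt(n)

print(apsidal_angle_deg(-2))            # inverse-square force: 180.0 (stationary ellipse)
print(apsidal_angle_deg(1))             # Hooke's law, F proportional to r: 90.0
print(apsidal_angle_deg(-(2 + 4/243)))  # Newton's modified lunar exponent: about 181.5

An apsidal angle of roughly 181.5° corresponds to about 3° of rotation of the long axis per full revolution, close to the lunar value discussed just below.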
Precession of the Moon's orbit The motion of the Moon can be measured accurately, and is noticeably more complex than that of the planets. The ancient Greek astronomers, Hipparchus and Ptolemy, had noted several periodic variations in the Moon's orbit, such as small oscillations in its orbital eccentricity and the inclination of its orbit to the plane of the ecliptic. These oscillations generally occur on a once-monthly or twice-monthly time-scale. The line of its apses precesses gradually with a period of roughly 8.85 years, while its line of nodes turns a full circle in roughly double that time, 18.6 years. This accounts for the roughly 18-year periodicity of eclipses, the so-called Saros cycle. However, both lines experience small fluctuations in their motion, again on the monthly time-scale. In 1673, Jeremiah Horrocks published a reasonably accurate model of the Moon's motion in which the Moon was assumed to follow a precessing elliptical orbit. A sufficiently accurate and simple method for predicting the Moon's motion would have solved the navigational problem of determining a ship's longitude; in Newton's time, the goal was to predict the Moon's position to 2' (two arc-minutes), which would correspond to a 1° error in terrestrial longitude. Horrocks' model predicted the lunar position with errors no more than 10 arc-minutes; for comparison, the diameter of the Moon is roughly 30 arc-minutes. Newton used his theorem of revolving orbits in two ways to account for the apsidal precession of the Moon. First, he showed that the Moon's observed apsidal precession could be accounted for by changing the force law of gravity from an inverse-square law to a power law in which the exponent was (roughly 2.0165) In 1894, Asaph Hall adopted this approach of modifying the exponent in the inverse-square law slightly to explain an anomalous orbital precession of the planet Mercury, which had been observed in 1859 by Urbain Le Verrier. Ironically, Hall's theory was ruled out by careful astronomical observations of the Moon. The currently accepted explanation for this precession involves the theory of general relativity, which (to first approximation) adds an inverse-quartic force, i.e., one that varies as the inverse fourth power of distance. As a second approach to explaining the Moon's precession, Newton suggested that the perturbing influence of the Sun on the Moon's motion might be approximately equivalent to an additional linear force The first term corresponds to the gravitational attraction between the Moon and the Earth, where r is the Moon's distance from the Earth. The second term, so Newton reasoned, might represent the average perturbing force of the Sun's gravity of the Earth-Moon system. Such a force law could also result if the Earth were surrounded by a spherical dust cloud of uniform density. Using the formula for k for nearly circular orbits, and estimates of A and B, Newton showed that this force law could not account for the Moon's precession, since the predicted apsidal angle α was (≈ 180.76°) rather than the observed α (≈ 181.525°). For every revolution, the long axis would rotate 1.5°, roughly half of the observed 3.0° Generalization Isaac Newton first published his theorem in 1687, as Propositions 43–45 of Book I of his Philosophiæ Naturalis Principia Mathematica. However, as astrophysicist Subrahmanyan Chandrasekhar noted in his 1995 commentary on Newton's Principia, the theorem remained largely unknown and undeveloped for over three centuries. 
The first generalization of Newton's theorem was discovered by Mahomed and Vawda in 2000. As Newton did, they assumed that the angular motion of the second particle was k times faster than that of the first particle, θ2(t) = k θ1(t). In contrast to Newton, however, Mahomed and Vawda did not require that the radial motion of the two particles be the same, r2(t) = r1(t). Rather, they required that the inverse radii be related by a linear equation, 1/r2(t) = a/r1(t) + b, for constants a and b. This transformation of the variables changes the path of the particle. If the path of the first particle is written r1 = g(θ1), the second particle's path can be written as 1/r2 = a/g(θ2/k) + b. If the motion of the first particle is produced by a central force F1(r), Mahomed and Vawda showed that the motion of the second particle can be produced by a corresponding force; according to their result, the second force F2(r) is obtained by scaling the first force and changing its argument, as well as by adding inverse-square and inverse-cube central forces. For comparison, Newton's theorem of revolving orbits corresponds to the case a = 1 and b = 0, so that r1 = r2. In this case, the original force is not scaled, and its argument is unchanged; the inverse-cube force is added, but the inverse-square term is not. Also, the path of the second particle is r2 = g(θ2/k), consistent with the formula given above. Derivations Newton's derivation Newton's derivation is found in Section IX of his Principia, specifically Propositions 43–45. His derivations of these Propositions are based largely on geometry. Proposition 43; Problem 30 It is required to make a body move in a curve that revolves about the center of force in the same manner as another body in the same curve at rest. Newton's derivation of Proposition 43 depends on his Proposition 2, derived earlier in the Principia. Proposition 2 provides a geometrical test for whether the net force acting on a point mass (a particle) is a central force. Newton showed that a force is central if and only if the particle sweeps out equal areas in equal times as measured from the center. Newton's derivation begins with a particle moving under an arbitrary central force F1(r); the motion of this particle under this force is described by its radius r(t) from the center as a function of time, and also its angle θ1(t). In an infinitesimal time dt, the particle sweeps out an approximate right triangle whose area is dA1 = (1/2) r² dθ1. Since the force acting on the particle is assumed to be a central force, the particle sweeps out equal areas in equal times, by Newton's Proposition 2. Expressed another way, the rate of sweeping out area is constant, dA1/dt = (1/2) r² (dθ1/dt). This constant areal velocity can be calculated as follows. At the apoapsis and periapsis, the positions of farthest and closest distance from the attracting center, the velocity and radius vectors are perpendicular; therefore, the angular momentum L1 per mass m of the particle (written as h1) can be related to the rate of sweeping out areas, h1 = L1/m = r² (dθ1/dt) = 2 dA1/dt. Now consider a second particle whose orbit is identical in its radius, but whose angular variation is multiplied by a constant factor k, θ2(t) = k θ1(t). The areal velocity of the second particle equals that of the first particle multiplied by the same factor k, h2 = k h1. Since k is a constant, the second particle also sweeps out equal areas in equal times. Therefore, by Proposition 2, the second particle is also acted upon by a central force F2(r). This is the conclusion of Proposition 43. 
Proposition 44 The difference of the forces, by which two bodies may be made to move equally, one in a fixed, the other in the same orbit revolving, varies inversely as the cube of their common altitudes. To find the magnitude of F2(r) from the original central force F1(r), Newton calculated their difference using geometry and the definition of centripetal acceleration. In Proposition 44 of his Principia, he showed that the difference is proportional to the inverse cube of the radius, specifically by the formula given above, which Newton writes in terms of the two constant areal velocities, h1 and h2: F2(r) − F1(r) = m (h1² − h2²)/r³. Proposition 45; Problem 31 To find the motion of the apsides in orbits approaching very near to circles. In this Proposition, Newton derives the consequences of his theorem of revolving orbits in the limit of nearly circular orbits. This approximation is generally valid for planetary orbits and the orbit of the Moon about the Earth. This approximation also allows Newton to consider a great variety of central force laws, not merely inverse-square and inverse-cube force laws. Modern derivation Modern derivations of Newton's theorem have been published by Whittaker (1937) and Chandrasekhar (1995). By assumption, the second angular speed is k times faster than the first, dθ2/dt = k (dθ1/dt). Since the two radii have the same behavior with time, r(t), the conserved angular momenta are related by the same factor k, L2 = k L1. The equation of motion for a radius r of a particle of mass m moving in a central potential V(r) is given by Lagrange's equations, m (d²r/dt²) − L²/(m r³) = F(r) = −dV/dr. Applying the general formula to the two orbits yields the equation m (d²r/dt²) = F1(r) + L1²/(m r³) = F2(r) + L2²/(m r³), which can be re-arranged to the form F2(r) = F1(r) + (L1²/(m r³))(1 − k²). This equation relating the two radial forces can be understood qualitatively as follows. The difference in angular speeds (or equivalently, in angular momenta) causes a difference in the centripetal force requirement; to offset this, the radial force must be altered with an inverse-cube force. Newton's theorem can be expressed equivalently in terms of potential energy, which is defined for central forces by F(r) = −dV/dr. The radial force equation can be written in terms of the two potential energies as −dV2/dr = −dV1/dr + (L1²/(m r³))(1 − k²). Integrating with respect to the distance r, Newton's theorem states that a k-fold change in angular speed results from adding an inverse-square potential energy to any given potential energy V1(r): V2(r) = V1(r) + (L1²/(2 m r²))(1 − k²). See also Kepler problem Laplace–Runge–Lenz vector Two-body problem in general relativity Newton's theorem about ovals References Bibliography Further reading (séance du lundi 20 Octobre 1873) Alternative translation of earlier (2nd) edition of Newton's Principia. External links Three-body problem discussed by Alain Chenciner at Scholarpedia 1680s introductions 1687 beginnings 1687 in science Isaac Newton Classical mechanics Concepts in physics Articles containing video clips
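The inverse-cube correction obtained in the modern derivation above can also be verified symbolically. The sketch below is an illustration, not part of any published derivation; it assumes only the radial equation of motion quoted above, and the symbol names are arbitrary.

```python
import sympy as sp

# Symbolic check of the modern derivation above (an illustrative sketch only).
m, r, L1, k = sp.symbols('m r L_1 k', positive=True)
F1, F2 = sp.symbols('F_1 F_2')   # radial forces at the same radius r

# Radial equation of motion in a central field: m*r'' = F(r) + L**2/(m*r**3).
# Both orbits share the same r(t); the second has angular momentum L2 = k*L1.
eq = sp.Eq(F1 + L1**2 / (m * r**3), F2 + (k * L1)**2 / (m * r**3))
difference = sp.simplify(sp.solve(eq, F2)[0] - F1)
print(difference)   # prints an inverse-cube term proportional to (1 - k**2)
```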
Newton's theorem of revolving orbits
[ "Physics" ]
6,661
[ "Mechanics", "Classical mechanics", "nan" ]
7,522,652
https://en.wikipedia.org/wiki/Elmore%20delay
Elmore delay is a simple approximation to the delay through an RC network in an electronic system. It is often used in applications such as logic synthesis, delay calculation, static timing analysis, placement and routing, since it is simple to compute (especially in tree structured networks, which are the vast majority of signal nets within ICs) and is reasonably accurate. Even where it is not accurate, it is usually faithful, in the sense that reducing the Elmore delay will almost always reduce the true delay, so it is still useful in optimization. Elmore delay can be thought of in several ways, all mathematically identical. For tree structured networks, find the delay through each segment as the R (electrical resistance) times the downstream C (electrical capacitance). Sum the delays from the root to the sink. Assume the output is a simple exponential, and find the exponential that has the same integral as the true response. This is also equivalent to moment matching with one moment, since the first moment is a pure exponential. Find a one pole approximation to the true frequency response. This is a first-order Padé approximation. There are many extensions to Elmore delay. It can be extended to upper and lower bounds, to include inductance as well as R and C, to be more accurate (higher order approximations) and so on. See delay calculation for more details and references. See also Delay calculation Static timing analysis William Cronk Elmore References Electronic engineering Electronic design Electronic design automation Integrated circuits
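The first formulation above (segment resistance times total downstream capacitance, summed along the root-to-sink path) translates directly into code. The sketch below is illustrative; the tree topology, node names and component values are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    r: float                      # resistance of the segment driving this node (ohms)
    c: float                      # capacitance at this node (farads)
    children: list = field(default_factory=list)

def downstream_cap(node):
    """Total capacitance at this node and everything below it."""
    return node.c + sum(downstream_cap(ch) for ch in node.children)

def elmore_delay(path):
    """Elmore delay along a root-to-sink path (list of Nodes, root first):
    sum of each segment's resistance times its downstream capacitance."""
    return sum(n.r * downstream_cap(n) for n in path)

# A small invented net: driver -> n1 -> n2 (sink), with a side branch n3 at n1.
branch = Node("n3", r=50.0, c=2e-15)
n2 = Node("n2", r=100.0, c=5e-15)
n1 = Node("n1", r=100.0, c=3e-15, children=[n2, branch])
print(f"Elmore delay to n2: {elmore_delay([n1, n2]) * 1e12:.3f} ps")   # 1.500 ps
```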
Elmore delay
[ "Technology", "Engineering" ]
306
[ "Computer engineering", "Electronic design", "Electronic engineering", "Electrical engineering", "Design", "Integrated circuits" ]
7,522,685
https://en.wikipedia.org/wiki/Continuous%20group%20action
In topology, a continuous group action on a topological space X is a group action of a topological group G that is continuous: i.e., the map G × X → X, (g, x) ↦ g·x, is a continuous map. Together with the group action, X is called a G-space. If f : H → G is a continuous group homomorphism of topological groups and if X is a G-space, then H can act on X by restriction: h·x = f(h)·x, making X an H-space. Often f is either an inclusion or a quotient map. In particular, any topological space may be thought of as a G-space via the trivial homomorphism G → {1} (and G would act trivially.) Two basic operations are that of taking the space of points fixed by a subgroup H and that of forming a quotient by H. We write X^H for the set of all x in X such that h·x = x for all h in H. For example, if we write Map(X, Y) for the set of continuous maps from a G-space X to another G-space Y, then, with the action (g·f)(x) = g·f(g⁻¹·x), Map(X, Y)^G consists of f such that f(g·x) = g·f(x); i.e., f is an equivariant map. We write X/H for the quotient of X by the H-action. Note, for example, for a G-space X and a closed subgroup H, Map(G/H, X)^G = X^H. References See also Lie group action Group actions (mathematics) Topological groups
Continuous group action
[ "Physics", "Mathematics" ]
251
[ "Group actions", "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology", "Topological groups", "Symmetry" ]
7,523,083
https://en.wikipedia.org/wiki/Shock%20mount
A shock mount or isolation mount is a mechanical fastener that connects two parts elastically to provide shock and vibration isolation. Isolation mounts allow equipment to be securely mounted to a foundation and/or frame and, at the same time, allow it to float independently from it. Uses Shock mounts are found in a wide variety of applications. They can be used to isolate the foundation or substrate from the dynamics of the mounted equipment. This is vital on submarines where silence is critical to mission success. Yachts also use shock mounts to dampen mechanical noise (mainly transmitted throughout the structure) and increase comfort. This is usually done through elastic supports and transmission couplings. Other common examples are the motor and transmission mounts used in virtually every automobile manufactured today. Without isolation mounts, interior noise and comfort levels would be significantly different. Such shock and vibration-isolation mounts are often chosen according to the nature of the dynamics produced by the equipment and the weight of the equipment. Shock mounts can isolate sensitive equipment from undesirable dynamics of the foundation or substrate. Sensitive laboratory equipment must be isolated from shock from handling and ambient vibration. Military equipment and ships must be able to withstand nearby explosions. Shock mounts are found in some disc drives and compact disc players, where the disc and read mechanism are held by soft bushings that isolate them from outside vibration and other outside forces, such as torsion. In this case, isolation mounts are often chosen according to the sensitivity of the equipment to shock (fragility) and vibration (natural frequency) and the weight of the equipment. For shock mounting to be effective, the mount must be matched to the input shock and vibration. A shock pulse is characterised by its peak acceleration, duration, and shape (half sine, triangular, trapezoidal, etc.). The shock response spectrum is a method for further evaluating mechanical shock. Shock mounts used to isolate entire buildings from earthquakes are called base isolators. A similar idea, also known as a shock mount, is found in furniture design, introduced by Charles and Ray Eames. It provides some shock absorption and operates as a living hinge, allowing the seat back to pivot. Shock mounts are also sometimes used in bicycle saddles, handlebars and chassis. Design Maxwell and Kelvin–Voigt models of viscoelasticity use springs and dashpots in series and parallel arrangements respectively. Hydraulic and pneumatic components can be included, depending on the use. Laminated pads One common type of isolation mounts is laminated pads. Generally, these pads consist of a cork or polymeric foam core which has been laminated between two pieces of ribbed neoprene sheet. Molded rubber isolation mounts Molded rubber isolation mounts are typically manufactured for specific applications. The best example of this is automotive engine and transmission mounts. Rubber bushings compress synthetic rubber rings on bolts to provide some isolation – operating temperature is sometimes a factor. Other shock mounts have mechanical springs or an elastomer (in tension or compression) engineered to isolate an item from specified mechanical shock and vibration. Some form of dashpot is usually used with a spring to provide viscous damping. Viscoelastic materials are common. Temperature is a factor in the dynamic response of rubber. 
Generally, a molded rubber mount is best suited for heavy loads producing higher frequency vibrations. Cable isolation mounts Cable mounts are based around a coil of wire rope fixed to an upper and lower mounting bar. When properly matched to the load, these mounts provide isolation over a broad frequency range. They are typically applied to high performance applications, such as mounting sensitive instrumentation in off-road vehicles and aboard ships. Coil spring isolation mounts Coil spring isolation mounts generally provide the greatest degree of movement and the best low frequency performance. They are particularly popular for mounting equipment in buildings such as air handlers, filtration units, air conditioning and refrigeration systems and large pipes. Their degree of movement makes them ideal for applications where high flexure and/or expansion and contraction are a consideration. Microphone mounts Shock mounts for microphones can provide basic protection from damage, but their prime use is to isolate microphones from mechanically transmitted noise. This can originate as floor vibrations transmitted through a floor stand, or as "finger" and other handling noise on boom poles. All microphones behave to some extent as accelerometers, with the most sensitive axis being perpendicular to the diaphragm. Additionally, some microphones contain internal elements such as vacuum tubes and transformers which can be inherently microphonic. These are often cushioned by resilient internal methods, in addition to the employment of external isolation mounts. Early microphones used a 'ring and spring' mount, where a single rigid ring was mounted and carried the microphone between a number of coil springs, usually four or eight. When early microphones were heavy and omnidirectional, this was adequate. However, the single plane of suspension allowed the microphone to twist very easily; once microphones started to become directional, this twisting caused fading of the signal. A more three-dimensional and less planar suspension would be required. Large side-address studio microphones are generally strung in "cat's cradle" mounts, using fabric-wound rubber elastic elements to provide isolation. While the elastic elements can deteriorate and sag over time, the low price of the mount and ease of replacing the elastic elements mean they remain a mainstay despite the introduction of elastomer-based designs less susceptible to degradation over time. The same occurs for end-fire microphones, most often employed for location work; however, positioning-consistency issues in mobile contexts mean elastomer-based alternatives have made more inroads: they offer more displacement (positional flexibility) along the prime axis, but better restrict movement along the other axes, and have less tendency to keep oscillating after movements, which provides better control of the microphone's precise position. See also Bushing Vibration isolation Shock absorber MIL-S-901 Microphonics Cushioning References DeSilva, C. W., "Vibration and Shock Handbook", CRC, 2005, Harris, C. M., and Piersol, A. G. "Shock and Vibration Handbook", 2001, McGraw Hill, External links Shock and vibration testing of shock mounts Microphones Mechanical vibrations Fasteners
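The spring-and-dashpot (Kelvin–Voigt) idealization mentioned under Design above leads to the standard single-degree-of-freedom transmissibility formula, which is often used to size such mounts. The sketch below evaluates that textbook formula with invented parameters; it does not model any particular product.

```python
import math

def transmissibility(freq_hz, natural_hz, zeta):
    """Vibration transmissibility of an ideal single-degree-of-freedom
    spring-damper (Kelvin-Voigt) isolator at a given excitation frequency."""
    r = freq_hz / natural_hz                     # frequency ratio
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# Invented example: a mount tuned to 10 Hz with 5 % damping, excited at 50 Hz.
print(f"T = {transmissibility(50.0, 10.0, 0.05):.3f}")   # well below 1: isolation
# Isolation only begins above sqrt(2) times the natural frequency; below that,
# the mount amplifies the input, which is why mounts are tuned well below the
# dominant excitation frequency.
```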
Shock mount
[ "Physics", "Engineering" ]
1,279
[ "Structural engineering", "Fasteners", "Construction", "Mechanics", "Mechanical vibrations" ]
7,528,000
https://en.wikipedia.org/wiki/SN%201998bw
SN 1998bw was a rare broad-lined Type Ic gamma-ray burst supernova detected on 26 April 1998 in the spiral galaxy ESO 184-G82; some astronomers believe it may be an example of a collapsar (hypernova). The hypernova has been linked to GRB 980425, which was detected on 25 April 1998, the first time a gamma-ray burst had been linked to a supernova. The hypernova is approximately 140 million light years away, very close for a gamma-ray burst source. The region of the galaxy where the supernova occurred hosts stars 5-8 million years old and is relatively free from dust. A nearby region hosts multiple Wolf-Rayet stars less than 3 million years old, but it is unlikely that the supernova progenitor could be a runaway from that region. The implication is that the progenitor was originally a massive star if it exploded as a single star at the end of its life. References External links Light curves and spectra on the Open Supernova Catalog Hypernovae Supernovae 19980426 Telescopium
SN 1998bw
[ "Physics", "Chemistry", "Astronomy" ]
228
[ "Supernovae", "Physical phenomena", "Astronomical events", "Hypernovae", "Constellations", "Telescopium", "Gamma-ray bursts", "Explosions", "Stellar phenomena" ]
7,528,635
https://en.wikipedia.org/wiki/Magnetic%20dip
Magnetic dip, dip angle, or magnetic inclination is the angle made with the horizontal by Earth's magnetic field lines. This angle varies at different points on Earth's surface. Positive values of inclination indicate that the magnetic field of Earth is pointing downward, into Earth, at the point of measurement, and negative values indicate that it is pointing upward. The dip angle is in principle the angle made by the needle of a vertically held compass, though in practice ordinary compass needles may be weighted against dip or may be unable to move freely in the correct plane. The value can be measured more reliably with a special instrument typically known as a dip circle. Dip angle was discovered by the German engineer Georg Hartmann in 1544. A method of measuring it with a dip circle was described by Robert Norman in England in 1581. Explanation Magnetic dip results from the tendency of a magnet to align itself with lines of magnetic field. As Earth's magnetic field lines are not parallel to the surface, the north end of a compass needle will point upward in the Southern Hemisphere (negative dip) or downward in the Northern Hemisphere (positive dip). The range of dip is from -90 degrees (at the South Magnetic Pole) to +90 degrees (at the North Magnetic Pole). Contour lines along which the dip measured at Earth's surface is equal are referred to as isoclinic lines. The locus of the points having zero dip is called the magnetic equator or aclinic line. Calculation for a given latitude The inclination is defined locally for the magnetic field due to Earth's core, and has a positive value if the field points below the horizontal (i.e. into Earth). Here we show how to determine the value of the inclination I at a given latitude, following the treatment given by Fowler. Outside Earth's core we consider Maxwell's equations in a vacuum, ∇ × Bc = 0 and ∇ · Bc = 0, where the subscript c denotes the core as the origin of these fields. The first means we can introduce the scalar potential ψ such that Bc = −∇ψ, while the second means the potential satisfies the Laplace equation ∇²ψ = 0. Solving to leading order gives the magnetic dipole potential ψ = (μ₀/4π) m·r/r³, and hence the field Bc = −∇ψ, for magnetic moment m and position vector r on Earth's surface. From here it can be shown that the inclination I as defined above satisfies tan I = 2 tan λ (from the ratio of the vertical and horizontal components of the field at the surface), where λ is the latitude of the point on Earth's surface. Practical importance The phenomenon is especially important in aviation. Magnetic compasses on airplanes are made so that the center of gravity is significantly lower than the pivot point. As a result, the vertical component of the magnetic force is too weak to tilt the compass card significantly out of the horizontal plane, thus minimizing the dip angle shown in the compass. However, this also causes the airplane's compass to give erroneous readings during banked turns (turning error) and airspeed changes (acceleration error). Turning error Magnetic dip shifts the center of gravity of the compass card, causing temporary inaccurate readings when turning north or south. As the aircraft turns, the force that results from the magnetic dip causes the float assembly to swing in the same direction that the float turns. This compass error is amplified with the proximity to either magnetic pole. To compensate for turning errors, pilots in the Northern Hemisphere will have to "undershoot" the turn when turning north, stopping the turn prior to the compass rotating to the correct heading; and "overshoot" the turn when turning south by stopping later than the compass. 
The effect is the opposite in the Southern Hemisphere. Acceleration error The acceleration errors occur because the compass card tilts on its mount when under acceleration. In the Northern Hemisphere, when accelerating on either an easterly or westerly heading, the error appears as a turn indication toward the north. When decelerating on either of these headings, the compass indicates a turn toward the south. The effect is the opposite in the Southern Hemisphere. Balancing Compass needles are often weighted during manufacture to compensate for magnetic dip, so that they will balance roughly horizontally. This balancing is latitude-dependent; see Compass balancing (magnetic dip). See also Aircraft compass turns Magnetic declination South Atlantic Anomaly References External links Compass errors Look up magnetic dip values Geomagnetism Orientation (geometry)
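For the idealized geocentric axial dipole treated in the calculation above, the inclination satisfies tan I = 2 tan λ. The sketch below simply evaluates that relation at a few latitudes; the real field departs from the dipole idealization, so measured dips differ.

```python
import math

def dip_angle_deg(latitude_deg):
    """Magnetic inclination for an ideal geocentric axial dipole,
    using tan(I) = 2 * tan(latitude). Real field values differ."""
    lat = math.radians(latitude_deg)
    return math.degrees(math.atan(2.0 * math.tan(lat)))

for lat in (0, 30, 45, 60, 90):
    print(f"latitude {lat:3d} deg -> dip {dip_angle_deg(lat):6.2f} deg")
# 0 -> 0 (aclinic line / magnetic equator), 45 -> about 63.4, 90 -> 90 at the pole
```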
Magnetic dip
[ "Physics", "Mathematics" ]
857
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
7,528,864
https://en.wikipedia.org/wiki/Cell%20notation
In electrochemistry, cell notation or cell representation is a shorthand method of expressing a reaction in an electrochemical cell. In cell notation, the two half-cells are described by writing the formula of each individual chemical species involved in the redox reaction across the cell, with all other common ions and inert substances being ignored. Each species is separated by a vertical bar, with the species in each half-cell grouped together, and the two half-cells separated by two bars or slashes representing a salt bridge (which generally contains an electrolyte solution such as potassium nitrate or sodium chloride that is left unwritten). It is common practice to represent the anode to the left of the double bar and the cathode to the right, and to put aqueous species closest to the double bar. Cell notation may be used to represent other information that is not essential to the reaction but still useful to include. For example, the electrode's species may be marked by a degree symbol. The standard abbreviations for the phases of each species are often included as subscripts, in a manner similar to the notation in chemical equations. Sometimes, the initial concentrations of dissolved species may be written to the right in parentheses (see example below). Some examples of this notation are: Zn | Zn²⁺, Cl⁻ | AgCl | Ag This means that the left electrode (anode) is made of zinc, while the other one (right, cathode) is composed of a silver wire covered by a silver chloride layer which is not soluble. Both of the electrodes are immersed in an aqueous medium where zinc and chloride ions are present. A second, very famous example is the Daniell cell: Zn | Zn²⁺ || Cu²⁺ | Cu. If the electrodes are connected, a spontaneous reaction takes place: zinc is oxidized, and copper ions are reduced. Sometimes the state of each species in the cell is written. For example, in the zinc cell (shown above), we can write that zinc, silver and silver chloride are solids, while the zinc cation and chloride anion are in aqueous medium. So, the new notation will be: Zn(s) | Zn²⁺(aq), Cl⁻(aq) | AgCl(s) | Ag(s). It is possible to express the ion concentration too. For example, in the Galvanic cell: Zn(s) | ZnSO₄(aq) (1 mol/L) || CuSO₄(aq) (1 mol/L) | Cu(s). In this case, all ions (sulfate, zinc and copper) are in a concentration equal to 1 mol/L. References Electrochemistry
Cell notation
[ "Chemistry" ]
471
[ "Electrochemistry", "Physical chemistry stubs", "Electrochemistry stubs" ]
7,528,959
https://en.wikipedia.org/wiki/Peptide%20computing
Peptide computing is a form of computing which uses peptides, instead of traditional electronic components. The basis of this computational model is the affinity of antibodies towards peptide sequences. Similar to DNA computing, the parallel interactions of peptide sequences and antibodies have been used by this model to solve a few NP-complete problems. Specifically, the hamiltonian path problem (HPP) and some versions of the set cover problem are a few NP-complete problems which have been solved using this computational model so far. This model of computation has also been shown to be computationally universal (or Turing complete). This model of computation has some critical advantages over DNA computing. For instance, while DNA is made of four building blocks, peptides are made of twenty building blocks. The peptide-antibody interactions are also more flexible with respect to recognition and affinity than an interaction between a DNA strand and its reverse complement. However, unlike DNA computing, this model is yet to be practically realized. The main limitation is the availability of specific monoclonal antibodies required by the model. See also Biocomputers Computational gene Computational complexity theory DNA computing Molecular electronics Parallel computing Unconventional computing Molecular logic gate References Classes of computers Models of computation Molecular biology
Peptide computing
[ "Chemistry", "Technology", "Biology" ]
242
[ "Computer systems", "Computer science stubs", "Computer science", "Molecular biology", "Biochemistry", "Computing stubs", "Computers", "Classes of computers" ]
7,529,592
https://en.wikipedia.org/wiki/Beltweigher
A beltweigher or belt weigher, more commonly known as a belt scale, is a piece of industrial control equipment used to measure the mass and flow rate of bulk material traveling over a conveyor belt. Invented by Herbert Merrick in the early 1900s, belt weighers are commonly used in plants and heavy industries, such as mining. Belt weighers replace a short section of the support mechanism of the belt, which might be one or more sets of idler rollers, or a short section of channel or plate. This weighed support is mounted on load cells, either pivoted, counterbalanced or not, or fully suspended. The mass measured by the load cells is integrated with the belt speed to compute the mass of material moving on the belt, after allowing for the mass of the belt itself. Belt weighers generally include the necessary electronics to perform this calculation, often in the form of a small industrialized microprocessor system. A belt weigher is normally mounted in a well supported straight section of belt, with no vertical or sideways curvature, and as close to level as is practicable. The weighed support must be aligned vertically and horizontally with the adjacent supports to avoid tensile forces in the belt skewing the measurement. Because belt tension varies, frequent check calibration must be carried out. Outputs from belt weighers are typically: pulses at predefined increments of mass an analogue signal proportional to the flow rate Some belt weigher controllers offer features such as driving an output to stop the belt when a predefined mass of material has been measured, or a range of alarms to indicate nil flow, belt slippage, and belt stoppage. References Industrial equipment Weighing instruments
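The totalization described above (net load per unit length times belt speed, accumulated over time) can be sketched as follows. The sample readings, tare value and function names are invented purely for illustration.

```python
def totalize(samples, dt, tare_kg_per_m):
    """samples: list of (load_kg_per_m, belt_speed_m_per_s) pairs taken every dt seconds.
    Returns the total conveyed mass in kilograms."""
    total_kg = 0.0
    for load, speed in samples:
        net_load = max(load - tare_kg_per_m, 0.0)   # subtract the belt's own mass per metre
        flow_kg_per_s = net_load * speed            # instantaneous mass flow rate
        total_kg += flow_kg_per_s * dt              # accumulate over the sample interval
    return total_kg

# Made-up readings: (belt load in kg/m, belt speed in m/s) sampled twice a second.
readings = [(12.0, 2.0), (12.5, 2.0), (11.8, 2.1), (12.2, 2.1)]
print(f"conveyed mass: {totalize(readings, dt=0.5, tare_kg_per_m=2.0):.1f} kg")
```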
Beltweigher
[ "Physics", "Technology", "Engineering" ]
352
[ "Weighing instruments", "Mass", "Measuring instruments", "nan", "Matter" ]
14,881,646
https://en.wikipedia.org/wiki/CACNA2D1
Voltage-dependent calcium channel subunit alpha-2/delta-1 is a protein that in humans is encoded by the CACNA2D1 gene. This gene encodes a member of the alpha-2/delta subunit family, a protein in the voltage-dependent calcium channel complex. Calcium channels mediate the influx of calcium ions into the cell upon membrane depolarization and consist of a complex of alpha-1, alpha-2/delta, beta, and gamma subunits in a 1:1:1:1 ratio. Research on a highly similar protein in rabbit suggests the protein described in this record is cleaved into alpha-2 and delta subunits. Alternate transcriptional splice variants of this gene have been observed, but have not been thoroughly characterized. In mammals, alpha-2/delta proteins exist in four subtypes coded by four separate but closely related genes, CACNA2D1, CACNA2D2, CACNA2D3 and CACNA2D4. Recently, alpha-2/delta1 proteins, in addition to calcium channels, have been found to interact directly with N-methyl-D-aspartate type glutamate receptors (NMDAR), AMPA type glutamate receptors (AMPAR) and the extracellular adhesion protein, thrombospondin. Gabapentinoids Alpha-2/delta proteins are believed to be the molecular target of the gabapentinoids gabapentin and pregabalin, which are used to treat epilepsy and neuropathic pain. Only alpha-2/delta subtypes 1 and 2 (but not 3 and 4) are substrates for gabapentinoid drug binding. See also Voltage-dependent calcium channel Gabapentinoid drugs References Further reading External links Ion channels
CACNA2D1
[ "Chemistry" ]
379
[ "Neurochemistry", "Ion channels" ]
14,884,442
https://en.wikipedia.org/wiki/Softening%20point
The softening point is the temperature at which a material softens beyond some arbitrary softness. It can be determined, for example, by the Vicat method (ASTM-D1525 or ISO 306), Heat Deflection Test (ASTM-D648) or a ring and ball method (ISO 4625 or ASTM E28-67/E28-99 or ASTM D36 or ASTM D6493 - 11 or JIS K 6863). A ring and ball apparatus can also be used for the determination of softening point of bituminous materials. See also Glass Transition Temperature References Temperature Polymer chemistry Glass physics
Softening point
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
136
[ "Thermodynamics stubs", "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Glass engineering and science", "Physical quantities", "SI base quantities", "Intensive quantities", "Materials science", "Glass physics", "Thermodynamics", "Condensed matter physics", "Polymer ...
14,887,052
https://en.wikipedia.org/wiki/41%20equal%20temperament
In music, 41 equal temperament, abbreviated 41-TET, 41-EDO, or 41-ET, is the tempered scale derived by dividing the octave into 41 equally sized steps (equal frequency ratios). Each step represents a frequency ratio of 2^(1/41), or 29.27 cents, an interval close in size to the septimal comma. 41-ET can be seen as a tuning of the schismatic, magic and miracle temperaments. It is the second smallest equal temperament, after 29-ET, whose perfect fifth is closer to just intonation than that of 12-ET. In other words, 2^(24/41) is a better approximation to the ratio 3:2 than either 2^(17/29) or 2^(7/12). History and use Although 41-ET has not seen as wide use as other temperaments such as 19-ET or 31-ET, pianist and engineer Paul von Janko built a piano using this tuning, which is on display at the Gemeentemuseum in The Hague. 41-ET can also be seen as an octave-based approximation of the Bohlen–Pierce scale. 41-ET guitars have been built, notably by Yossi Tamim. The frets on such guitars are very tightly spaced. To make a more playable 41-ET guitar, an approach called "The Kite Tuning" omits every other fret (in other words, 41 frets per two octaves or 20.5 frets per octave) while tuning adjacent strings to an odd number of steps of 41. Thus, any two adjacent strings together contain all the pitch classes of the full 41-ET system. The Kite Guitar's main tuning uses 13 steps of 41-ET (which approximates a 5/4 ratio) between strings. With that tuning, all simple ratios of odd limit 9 or less are available at spans at most only 4 frets. 41-ET is also a subset of 205-ET, for which the keyboard layout of the Tonal Plexus is designed. Interval size Here are the sizes of some common intervals (shaded rows mark relatively poor matches): As the table above shows, the 41-ET both distinguishes between and closely matches all intervals involving the ratios in the harmonic series up to and including the 10th overtone. This includes the distinction between the major tone and minor tone (thus 41-ET is not a meantone tuning). These close fits make 41-ET a good approximation for 5-, 7- and 9-limit music. 41-ET also closely matches a number of other intervals involving higher harmonics. It distinguishes between and closely matches all intervals involving up through the 12th overtone, with the exception of the greater undecimal neutral second (11:10). Although not as accurate, it can be considered a full 15-limit tuning as well. Tempering Intervals not tempered out by 41-ET include the lesser diesis (128:125), septimal diesis (49:48), septimal sixth-tone (50:49), septimal comma (64:63), and the syntonic comma (81:80). 41-ET tempers out 100:99, which is the difference between the greater undecimal neutral second and the minor tone, as well as the septimal kleisma (225:224), 1029:1024 (the difference between three intervals of 8:7 and the interval 3:2), and the small diesis (3125:3072). Notation Using extended Pythagorean notation results in double and even triple sharps and flats. Furthermore, the notes run out of order. The chromatic scale is C, B, A/E, D, C, B, E, D... These issues can be avoided by using ups and downs notation. The up and down arrows are written as a caret or a lower-case "v", usually in a sans-serif font. One arrow equals one step of 41-TET. In note names, the arrows come first, to facilitate chord naming. The many enharmonic equivalences allow great freedom of spelling. 
C, ^C, ^^C/vvC/vD, vC/D, C/^D, ^C/^^D/vvD, vD, D, ^D, ^^D/vvD/vE, vD/E, D/^E, ^D/^^E/vvE, vE, E, ^E/vvF, ^^E/vF, F, ^F, ^^F/vvF/vG, vF/G, F/^G, ^F/^^G/vvG, vG, G, ^G, ^^G/vvG/vA, vG/A, G/^A, ^G/^^A/vvA, vA, A, ^A, ^^A/vvA/vB, vA/B, A/^B, ^A/^^B/vvB, vB, B, ^B/vvC, ^^B/vC, C Chords of 41 equal temperament Because ups and downs notation names the intervals of 41-TET, it can provide precise chord names. The pythagorean minor chord with 32/27 on C is still named Cm and still spelled C–E–G. But the 5-limit upminor chord uses the upminor 3rd 6/5 and is spelled C–^E–G. This chord is named C^m. Compare with ^Cm (^C–^E–^G). References Equal temperaments Microtonality
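The step size and interval approximations discussed above are easy to reproduce. The sketch below computes the 41-EDO step in cents and the error of the nearest step for a few just ratios; the chosen ratios are illustrative.

```python
import math

EDO = 41
CENTS_PER_STEP = 1200.0 / EDO          # about 29.27 cents, as stated above

def best_approximation(ratio):
    """Nearest 41-EDO step count to a just ratio, and the error in cents."""
    cents = 1200.0 * math.log2(ratio)
    steps = round(cents / CENTS_PER_STEP)
    return steps, steps * CENTS_PER_STEP - cents

for name, ratio in [("3/2", 3/2), ("5/4", 5/4), ("7/4", 7/4), ("9/8", 9/8)]:
    steps, err = best_approximation(ratio)
    print(f"{name}: {steps} steps of 41-EDO, error {err:+.2f} cents")
# e.g. 3/2 -> 24 steps, error about +0.48 cents; 5/4 -> 13 steps, about -5.8 cents
```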
41 equal temperament
[ "Physics" ]
1,178
[ "Physical quantities", "Musical symmetry", "Logarithmic scales of measurement", "Equal temperaments", "Symmetry" ]
4,335,708
https://en.wikipedia.org/wiki/Liverpool%20University%20Neuroleptic%20Side-Effect%20Rating%20Scale
LUNSERS refers to the Liverpool University Neuroleptic Side Effect Rating Scale. Overview Within the field of psychiatry, many simple and complex tools exist for the rating of such things as severity of illness and problems associated with the use of medications for treating mental illness. The medications used to treat mental illness—particularly psychotic disorders—are referred to as anti-psychotics or neuroleptics. The two terms, although generally used interchangeably, are not actually the same. The LUNSERS is designed to monitor medication-induced side effects. This psychiatric assessment tool allows for the monitoring of side effects related to neuroleptic (or anti-psychotic) medications. The test is a self-reported tick-box format with a predefined scale from "not at all" to "very much". The test asks 51 questions in all, a number of which are red herrings to test for people over-rating themselves. It has been proposed that this is useful for spotting malingerers and hypochondriacs; however, its intention in the original research proposal for LUNSERS was to demonstrate the robustness and reliability of self-reporting. There are seven subcategories in the overall results: Extrapyramidal – parkinsonian-type side effects. Autonomic – related to uncontrollable side effects. Psychic – relating to the functioning of mind and emotion. Miscellaneous/various – known side effects without category. Anticholinergic – side effects affecting the cholinergic system. Allergic reaction. Prolactin – many neuroleptics affect hormones, particularly prolactin. As well as: Red herrings – designed to trap people who over-rate symptoms. References External links Paper on the use of LUNSERS LUNSERS and related materials scoring LUNSERS The use of the Liverpool University Neuroleptic Side-Effect Rating Scale (LUNSERS) in clinical practice. Adverse effects of psychoactive drugs Antipsychotics Mental disorders screening and assessment tools Neuroscience in the United Kingdom University of Liverpool
Liverpool University Neuroleptic Side-Effect Rating Scale
[ "Chemistry" ]
419
[ "Drug safety", "Adverse effects of psychoactive drugs" ]
4,336,836
https://en.wikipedia.org/wiki/Magneto-optical%20trap
In atomic, molecular, and optical physics, a magneto-optical trap (MOT) is an apparatus which uses laser cooling and a spatially varying magnetic field to create a trap which can produce samples of cold neutral atoms. Temperatures achieved in a MOT can be as low as several microkelvins, depending on the atomic species, which is a few times the photon-recoil limit. However, for atoms with an unresolved hyperfine structure, the temperature achieved in a MOT will be higher than the Doppler cooling limit. A MOT is formed from the intersection of a weak quadrupolar, spatially varying magnetic field and six circularly polarized red-detuned optical molasses beams. As atoms travel away from the zero field at the center of the trap (halfway between the coils), the spatially varying Zeeman shift brings an atomic transition into resonance with the laser beams, which gives rise to a scattering force that pushes the atoms back towards the center of the trap. This is why a MOT traps atoms, and because this force arises from photon scattering in which atoms receive momentum "kicks" in the direction opposite their motion, it also slows the atoms (i.e. cools them), on average, over repeated absorption and spontaneous emission cycles. In this way, a MOT is able to trap and cool atoms with initial velocities of hundreds of meters per second down to tens of centimeters per second (again, depending upon the atomic species). Although charged particles can be trapped using a Penning trap or a Paul trap using a combination of electric and magnetic fields, those traps are ineffective for neutral atoms. Theoretical description of a MOT Two coils in an anti-Helmholtz configuration are used to generate a weak quadrupolar magnetic field; here, we will consider the coils as being separated along the z-axis. In the proximity of the field zero, located halfway between the two coils along the z-direction, the field gradient is uniform and the field itself varies linearly with position. For this discussion, consider an atom with ground and excited states with J = 0 and J = 1, respectively, where J is the magnitude of the total angular momentum vector. Due to the Zeeman effect, the excited state will be split into sublevels with associated values of the magnetic quantum number mJ (note that the Zeeman shift for the ground state is zero and that it will not be split into sublevels by the field). This results in spatially-dependent energy shifts of the excited-state sublevels, as the Zeeman shift is proportional to the field strength and in this configuration the field strength is linear in position. As a note, the Maxwell equation ∇ · B = 0 implies that the field gradient is twice as strong along the z-direction as in the x- and y-directions, and thus the trapping force along the z-direction is twice as strong. In combination with the magnetic field, pairs of counter-propagating circularly-polarized laser beams are sent in along three orthogonal axes, for a total of six MOT beams (there are exceptions to this, but a minimum of five beams is required to make a 3D MOT). The beams are red-detuned from the transition by an amount δ such that δ = ω − ω0 < 0, or equivalently, ω < ω0, where ω is the frequency of the laser beams and ω0 is the frequency of the transition. The beams must be circularly polarized to ensure that photon absorption can only occur for certain transitions between the ground state and the sublevels of the excited state, for which the change in magnetic quantum number is ΔmJ = +1 or −1 according to the handedness of the polarization. In other words, the circularly-polarized beams enforce selection rules on the allowed electric dipole transitions between states. 
At the center of the trap, the magnetic field is zero and atoms are "dark" to incident red-detuned photons. That is, at the center of the trap, the Zeeman shift is zero for all states and so the transition frequency from remains unchanged. The detuning of the photons from this frequency means that there will not be an appreciable amount of absorption (and therefore emission) by atoms in the center of the trap, hence the term "dark". Thus, the coldest, slowest moving atoms accumulate in the center of the MOT where they scatter very few photons. Now consider an atom which is moving in the -direction. The Zeeman effect shifts the energy of the state lower in energy, decreasing the energy gap between it and the state; that is, the frequency associated with the transition decreases. Red-detuned photons, which only drive transitions, propagating in the -direction thus become closer to resonance as the atom travels further from the center of the trap, increasing the scattering rate and scattering force. When an atom absorbs a photon, it is excited to the state and gets a "kick" of one photon recoil momentum, , in the direction opposite to its motion, where . The atom, now in an excited state, will then spontaneously emit a photon in a random direction and after many absorption-spontaneous emission events, the atom will have, on average, been "pushed" back towards the field-zero of the trap. This trapping process will also occur for an atom moving in the -direction if photons are traveling in the -direction, the only difference being that the excitation will be from to since the magnetic field is negative for . Since the magnetic field gradient near the trap center is uniform, the same phenomenon of trapping and cooling occurs along the and -directions as well. Mathematically, the radiation pressure force that atoms experience in a MOT is given by: where is the damping coefficient, is the Landé g-factor and is the Bohr magneton. Doppler cooling Photons have a momentum given by (where is the reduced Planck constant and the photon wavenumber), which is conserved in all atom-photon interactions. Thus, when an atom absorbs a photon, it is given a momentum kick in the direction of the photon before absorption. By detuning a laser beam to a frequency less than the resonant frequency (also known as red detuning), laser light is only absorbed if the light is frequency up-shifted by the Doppler effect, which occurs whenever the atom is moving towards the laser source. This applies a friction force to the atom whenever it moves towards a laser source. For cooling to occur along all directions, the atom must see this friction force along all three Cartesian axes; this is most easily achieved by illuminating the atom with three orthogonal laser beams, which are then reflected back along the same direction. Magnetic trapping Magnetic trapping is created by adding a spatially varying magnetic quadrupole field to the red detuned optical field needed for laser cooling. This causes a Zeeman shift in the magnetic-sensitive mf levels, which increases with the radial distance from the center of the trap. Because of this, as an atom moves away from the center of the trap, the atomic resonance is shifted closer to the frequency of the laser light, and the atom becomes more likely to get a photon kick towards the center of the trap. The direction of the kick is given by the polarization of the light, which is either left or right handed circular, giving different interactions with the different mf levels. 
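The velocity- and position-dependent force described above is commonly written, in a one-dimensional two-level model, as the difference of the scattering rates of the two counter-propagating beams. The sketch below uses that standard textbook form; the rubidium-87 D2 numbers and the field-gradient value are representative assumptions, not values taken from this article.

```python
import math

# Hedged 1-D sketch of the two-beam scattering force in a MOT (standard
# two-level textbook form). The numbers below are representative assumptions.
hbar = 1.054571817e-34                 # J*s
k = 2 * math.pi / 780.241e-9           # wavenumber of ~780 nm cooling light (1/m)
gamma = 2 * math.pi * 6.07e6           # natural linewidth (rad/s)
s0 = 1.0                               # saturation parameter per beam
delta = -gamma / 2                     # red detuning (rad/s)
zeeman_grad = 2 * math.pi * 1.4e9      # assumed Zeeman shift gradient (rad/s per m)

def scatter_rate(det):
    """Photon scattering rate of a single beam at effective detuning det."""
    return 0.5 * gamma * s0 / (1.0 + s0 + (2.0 * det / gamma) ** 2)

def mot_force(v, z):
    """Net force (N) on an atom at position z moving with velocity v along z:
    contribution of the beam travelling in +z minus that of the beam in -z,
    each with its Doppler- and Zeeman-shifted effective detuning."""
    det_plus = delta - k * v - zeeman_grad * z
    det_minus = delta + k * v + zeeman_grad * z
    return hbar * k * (scatter_rate(det_plus) - scatter_rate(det_minus))

print(mot_force(v=1.0, z=0.0))    # negative: opposes the motion (cooling)
print(mot_force(v=0.0, z=1e-3))   # negative: pushes back towards the centre (trapping)
```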
The correct polarizations are used so that photons moving towards the center of the trap will be on resonance with the correct shifted atomic energy level, always driving the atom towards the center. Atomic structure necessary for magneto-optical trapping As a thermal atom at room temperature has many thousands of times the momentum of a single photon, the cooling of an atom must involve many absorption-spontaneous emission cycles, with the atom losing up to ħk of momenta each cycle . Because of this, if an atom is to be laser cooled, it must possess a specific energy level structure known as a closed optical loop, where following an excitation-spontaneous emission event, the atom is always returned to its original state. 85Rubidium, for example, has a closed optical loop between the state and the state. Once in the excited state, the atom is forbidden from decaying to any of the states, which would not conserve parity, and is also forbidden from decaying to the state, which would require an angular momentum change of −2, which cannot be supplied by a single photon. Many atoms that do not contain closed optical loops can still be laser cooled, however, by using repump lasers which re-excite the population back into the optical loop after it has decayed to a state outside of the cooling cycle. The magneto-optical trapping of rubidium 85, for example, involves cycling on the closed transition. On excitation, however, the detuning necessary for cooling gives a small, but non-zero overlap with the state. If an atom is excited to this state, which occurs roughly every thousand cycles, the atom is then free to decay either the , light coupled upper hyperfine state, or the "dark" lower hyperfine state. If it falls back to the dark state, the atom stops cycling between ground and excited state, and the cooling and trapping of this atom stops. A repump laser which is resonant with the transition is used to recycle the population back into the optical loop so that cooling can continue. Apparatus Laser All magneto-optical traps require at least one trapping laser plus any necessary repumper lasers (see above). These lasers need stability, rather than high power, requiring no more than the saturation intensity, but a linewidth much less than the Doppler width, usually several megahertz. Because of their low cost, compact size and ease of use, laser diodes are used for many of the standard MOT species while the linewidth and stability of these lasers is controlled using servo systems, which stabilises the lasers to an atomic frequency reference by using, for example, saturated absorption spectroscopy and the Pound-Drever-Hall technique to generate a locking signal. By employing a 2-dimensional diffraction grating it is possible to generate the configuration of laser beams required for a magneto-optical trap from a single laser beam and thus have a very compact magneto-optical trap. Vacuum chamber The MOT cloud is loaded from a background of thermal vapour, or from an atomic beam, usually slowed down to the capture velocity using a Zeeman slower. However, the trapping potential in a magneto-optical trap is small in comparison to thermal energies of atoms and most collisions between trapped atoms and the background gas supply enough energy to the trapped atom to kick it out of the trap. If the background pressure is too high, atoms are kicked out of the trap faster than they can be loaded, and the trap does not form. 
This means that the MOT cloud only forms in a vacuum chamber with a background pressure of less than 100 micropascals (10−9 bar)}. The limits to the magneto-optical trap The minimum temperature and maximum density of a cloud in a magneto-optical trap is limited by the spontaneously emitted photon in cooling each cycle. While the asymmetry in atom excitation gives cooling and trapping forces, the emission of the spontaneously emitted photon is in a random direction, and therefore contributes to a heating of the atom. Of the two ħk kicks the atom receives in each cooling cycle, the first cools, and the second heats: a simple description of laser cooling which enables us to calculate a point at which these two effects reach equilibrium, and therefore define a lower temperature limit, known as the Doppler cooling limit. The density is also limited by the spontaneously emitted photon. As the density of the cloud increases, the chance that the spontaneously emitted photon will leave the cloud without interacting with any further atoms tends to zero. The absorption, by a neighboring atom, of a spontaneously emitted photon gives a 2ħk momentum kick between the emitting and absorbing atom which can be seen as a repulsive force, similar to coulomb repulsion, which limits the maximum density of the cloud. As of 2022 the method has been demonstrated to work up to triatomic molecules. Application Because of the continuous cycle of absorption and spontaneous emission, which causes decoherence, any quantum manipulation experiments must be performed with the MOT beams turned off. As a result of low densities and speeds of atoms achieved by optical cooling, the mean free path in a ball of MOT cooled atoms is very long, and atoms may be treated as ballistic. This is useful for quantum information experiments where it is necessary to have long coherence times (the time an atom spends in a defined quantum state). In this case, it is common to stop the expansion of the cloud while the MOT is off by loading the cooled atoms into a dipole trap. A magneto-optical trap is usually the first step to achieving Bose–Einstein condensation. Atoms are cooled in a MOT down to a few times the recoil limit, and then evaporatively cooled which lowers the temperature and increases the density to the required phase space density. A MOT of 133Cs was used to make some of the best measurements of CP violation. MOTs are used in a number of quantum technologies (i.e. cold atom gravity gradiometers) and have been deployed on several platforms (i.e. UAVs) and in several environments (i.e. down boreholes ). See also Dipole trap Zeeman slower References Liwag, John Waruel F. Cooling and trapping of 87Rb atoms in a magneto-optical trap using low-power diode lasers, Thesis 621.39767 L767c (1999) Atomic, molecular, and optical physics Particle traps
Magneto-optical trap
[ "Physics", "Chemistry" ]
2,824
[ "Molecular physics", "Particle traps", " molecular", "Atomic", " and optical physics" ]
4,340,134
https://en.wikipedia.org/wiki/Featherstone%27s%20algorithm
Featherstone's algorithm is a technique used for computing the effects of forces applied to a structure of joints and links (an "open kinematic chain") such as a skeleton used in ragdoll physics. The Featherstone's algorithm uses a reduced coordinate representation. This is in contrast to the more popular Lagrange multiplier method, which uses maximal coordinates. Brian Mirtich's PhD Thesis has a very clear and detailed description of the algorithm. Baraff's paper "Linear-time dynamics using Lagrange multipliers" has a discussion and comparison of both algorithms. References External links Featherstone Multibody in Bullet Physics engine Featherstone's algorithm implementation in the Moby rigid body dynamics simulator Source code for implementation of Featherstone's algorithm Description and references Mirtich's Thesis Baraff's Lagrange multiplier method Roy Featherstone's home page Mechanics Computational physics Computer physics engines
Featherstone's algorithm
[ "Physics", "Engineering" ]
195
[ "Mechanics", "Mechanical engineering", "Computational physics stubs", "Computational physics" ]
4,340,183
https://en.wikipedia.org/wiki/AP%2042%20Compilation%20of%20Air%20Pollutant%20Emission%20Factors
The AP 42 Compilation of Air Pollutant Emission Factors is a compilation of the US Environmental Protection Agency (EPA)'s emission factor information on air pollution, first published in 1968. , the last edition is the 5th from 2010. History The AP 42 Compilation of Air Pollutant Emission Factors is a compilation of emission factors of air pollutants, in other words numbers which relate the quantity of a pollutant released into the ambient air with a certain activity. This compilation was first compiled and published by the US Public Health Service in 1968. In 1972, it was revised and issued as the second edition by the US Environmental Protection Agency EPA. In 1985, the subsequent fourth edition was split into two volumes: Volume I has since included stationary point and area source emission factors, and Volume II includes mobile source emission factors. Volume I is currently in its fifth edition and is available on the Internet. Volume II is no longer maintained as such, but roadway air dispersion models for estimating emissions from on-road vehicles and from non-road vehicles and mobile equipment are available on the Internet. In routine common usage, Volume I of the emission factor compilation is very often referred to as simply AP 42. Content Air pollution emission factors are usually expressed as the weight of the pollutant divided by a unit weight, volume, distance, or duration of the activity emitting the pollutant (e.g., kilograms of particulate matter emitted per megagram of coal burned). The factors help to estimate emissions from various sources of air pollution. In most cases, the factors are simply averages of all available data of acceptable quality, and are generally assumed to be representative of long-term averages. The equation for the estimation of emissions before emission reduction controls are applied is: E = A × EF and for emissions after reduction controls are applied: E = A × EF × (1-ER/100) Emission factors are used by atmospheric dispersion modelers and others to determine the amount of air pollutants being emitted from sources within industrial facilities. Chapters Chapter 5, Section 5.1 "Petroleum Refining" discusses the air pollutant emissions from the equipment in the various refinery processing units as well as from the auxiliary steam-generating boilers, furnaces and engines, and Table 5.1.1 includes the pertinent emission factors. Table 5.1.2 includes the emission factors for the fugitive air pollutant emissions from the large wet cooling towers in refineries and from the oil/water separators used in treating refinery wastewater. The fugitive air pollutant emission factors from relief valves, piping valves, open-ended piping lines or drains, piping flanges, sample connections, and seals on pump and compressor shafts are discussed and included in the report EPA-458/R-95-017, "Protocol for Equipment Leak Emission Estimates" which is included in the Chapter 5 section of AP 42. That report includes the emission factors developed by the EPA for petroleum refineries and for the synthetic organic chemical industry (SOCMI). In most cases, the emission factors in Chapter 5 are included for both uncontrolled conditions before emission reduction controls are implemented and controlled conditions after specified emission reduction methods are implemented. 
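The two equations above can be applied directly once the symbols are read in the usual AP-42 sense: E is the emission estimate, A the activity rate, EF the emission factor, and ER the overall emission reduction efficiency in percent. The sketch below uses invented figures purely for illustration.

```python
def emissions(activity_rate, emission_factor, reduction_efficiency_pct=0.0):
    """AP-42 style estimate: E = A * EF * (1 - ER/100).
    activity_rate (A): e.g. megagrams of coal burned per year
    emission_factor (EF): e.g. kilograms of pollutant per megagram burned
    reduction_efficiency_pct (ER): overall control efficiency in percent."""
    return activity_rate * emission_factor * (1.0 - reduction_efficiency_pct / 100.0)

# Invented example: 10,000 Mg/yr of fuel, EF of 2.5 kg/Mg, 95 % effective control.
uncontrolled = emissions(10_000, 2.5)
controlled = emissions(10_000, 2.5, reduction_efficiency_pct=95.0)
print(f"uncontrolled: {uncontrolled:,.0f} kg/yr, controlled: {controlled:,.0f} kg/yr")
```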
Chapter 7 "Liquid Storage Tanks" is devoted to the methodology for calculating the emissions losses from the six basic tank designs used for organic liquid storage: fixed roof (vertical and horizontal), external floating roof, domed external (or covered) floating roof, internal floating roof, variable vapor space, and pressure (low and high). The methodology in Chapter 7 was developed by the American Petroleum Institute in collaboration with the EPA. The EPA has developed a software program named "TANKS" which performs the Chapter 7 methodology for calculating emission losses from storage tanks. The program's installer file along with a user manual, and the source code are available on the Internet. Chapters 5 and 7 discussed above are illustrative of the type of information contained in the other chapters of AP 42. Many of the fugitive emission factors in Chapter 5 and the emissions calculation methodology in Chapter 7 and the TANKS program also apply to many other industrial categories besides the petroleum industry. See also Cement kiln emissions Emission factor References Smog Air pollution emissions Air pollution in the United States Atmospheric dispersion modeling United States Environmental Protection Agency 1968 in the environment
AP 42 Compilation of Air Pollutant Emission Factors
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
886
[ "Visibility", "Physical quantities", "Smog", "Atmospheric dispersion modeling", "Environmental engineering", "Environmental modelling" ]
4,340,403
https://en.wikipedia.org/wiki/List%20of%20adiabatic%20concepts
Adiabatic (from Gr. ἀ negative + διάβασις passage; transference) refers to any process that occurs without heat transfer. This concept is used in many areas of physics and engineering. Notable examples are listed below. Automobiles Engine braking, a feature of some diesel engines, uses adiabatic expansion to diminish the vehicle's forward momentum. Meteorology Adiabatic lapse rate, the change in air temperature with changing height, resulting from pressure change. Quantum chemistry Adiabatic invariant Born–Oppenheimer approximation Thermodynamics Adiabatic process Adiabatic ionization Adiabatic index Adiabatic accessibility Quantum mechanics Adiabatic theorem Adiabatic quantum motor Electronics Adiabatic circuit Adiabatic logic References Thermodynamic processes Science-related lists
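For a quantitative feel for two of the entries above, the short Python sketch below evaluates the dry adiabatic lapse rate (gravitational acceleration divided by the specific heat of air at constant pressure) and the temperature rise of an ideal diatomic gas under reversible adiabatic compression. The constants are standard textbook values and the snippet is purely illustrative; it is not drawn from any of the linked articles.

# Dry adiabatic lapse rate: Gamma = g / c_p for dry air
g = 9.81        # gravitational acceleration, m/s^2
c_p = 1005.0    # specific heat of dry air at constant pressure, J/(kg K)
print(g / c_p * 1000)    # roughly 9.8 K of cooling per km of ascent

# Reversible adiabatic compression of an ideal gas: T2 = T1 * (P2/P1)**((gamma - 1)/gamma)
gamma = 1.4              # adiabatic index of a diatomic gas such as air
T1 = 300.0               # initial temperature, K
pressure_ratio = 10.0    # P2 / P1
print(T1 * pressure_ratio ** ((gamma - 1) / gamma))   # roughly 579 K after 10:1 compression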
List of adiabatic concepts
[ "Physics", "Chemistry" ]
167
[ "Thermodynamic processes", "Thermodynamics" ]
4,340,750
https://en.wikipedia.org/wiki/Antarctic%20bottom%20water
The Antarctic bottom water (AABW) is a type of water mass in the Southern Ocean surrounding Antarctica with temperatures ranging from −0.8 to 2 °C (30.6 to 35.6 °F) and absolute salinities from 34.6 to 35.0 g/kg. As the densest water mass of the oceans, AABW is found to occupy the depth range below 4000 m of all ocean basins that have a connection to the Southern Ocean at that level. AABW forms the lower branch of the large-scale movement in the world's oceans through thermohaline circulation. AABW forms near the surface in coastal polynyas along the coastline of Antarctica, where high rates of sea ice formation during winter lead to the densification of the surface waters through brine rejection. Since the water mass forms near the surface, it is responsible for the exchange of large quantities of heat and gases with the atmosphere. AABW has a high oxygen content relative to the rest of the oceans' deep waters, but this depletes over time. This water sinks at four distinct regions around the margins of the continent and forms the AABW; this process leads to ventilation of the deep ocean, or abyssal ventilation. Formation and circulation Antarctic bottom water is formed in the Weddell and Ross Seas, off the Adélie Coast and by Cape Darnley, from surface water cooling in polynyas and below the ice shelf. An important factor enabling the formation of Antarctic bottom water is the cold surface wind blowing off the Antarctic continent. The surface winds advect sea ice away from the coast, creating polynyas which open up the water surface to a cold atmosphere during winter, which further helps form more sea ice. Antarctic coastal polynyas form as much as 10% of the overall Southern Ocean sea ice during a single season, amounting to about 2,000 km³ of sea ice. Surface water is enriched in salt from sea ice formation and cooled by exposure to a cold atmosphere during winter, which increases the density of this water mass. Due to its increased density, this water forms overflows down the Antarctic continental slope and continues north along the bottom. It is the densest water in the open ocean, and underlies other bottom and intermediate waters throughout most of the southern hemisphere. The Weddell Sea Bottom Water is the densest component of the Antarctic bottom water. A major source water for the formation of AABW is the warm offshore water mass known as the circumpolar deep water (CDW; salinity > 35 g/kg and potential temperature > 0 °C). These warm water masses are cooled by coastal polynyas to form the denser AABW. Coastal polynyas that form AABW help prevent the intruding warm CDW water masses from gaining access to the base of ice shelves, hence acting to protect ice shelves from enhanced basal melting due to oceanic warming. In areas like the Amundsen Sea, where coastal polynya activity has diminished to the point where dense water formation is hindered, the neighboring ice shelves have started to retreat and may be on the brink of collapse. Evidence indicates that Antarctic bottom water production through the Holocene (the last 10,000 years) is not in a steady-state condition; that is, bottom water production sites shift along the Antarctic margin over decade-to-century timescales as conditions for the existence of polynyas change. For example, the calving of the Mertz Glacier, which occurred on 12–13 February 2010, dramatically changed the environment for producing bottom water, reducing export by up to 23% in the region of Adélie Land.
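The density contrast that drives this sinking can be illustrated with a rough linearized equation of state. The Python sketch below is only a back-of-the-envelope illustration: the coefficients and reference values are round textbook numbers chosen for this example, not values from the studies discussed here, and real oceanographic work uses the full TEOS-10 equation of state.

# Rough linear equation of state: rho ≈ rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0, T0, S0 = 1027.0, 0.0, 34.7   # reference density (kg/m^3), temperature (°C), salinity (g/kg)
ALPHA, BETA = 1.0e-4, 7.6e-4       # order-of-magnitude expansion/contraction coefficients

def approx_density(temp_c, salinity_gkg):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_gkg - S0))

# Cold shelf water produced by winter cooling and brine rejection vs. warmer circumpolar deep water:
shelf_water = approx_density(-1.8, 34.7)
cdw = approx_density(1.0, 34.7)
print(shelf_water - cdw)   # roughly 0.3 kg/m^3 denser, enough to sink down the continental slope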
Evidence from sediment cores collected on the Mac. Robertson shelf and off Adélie Land, which contain layers of cross-bedded sediments indicating phases of stronger bottom currents, suggests that these regions have switched "on" and "off" again as important bottom water production sites over the last several thousand years. Atlantic Ocean The Vema Channel, a deep trough in the Rio Grande Rise of the South Atlantic, is an important conduit for Antarctic Bottom Water and Weddell Sea Bottom Water migrating north. Upon reaching the equator, about one-third of the northward flowing Antarctic bottom water enters the Guiana Basin, mainly through the southern half of the Equatorial Channel at 35°W. The other part recirculates and some of it flows through the Romanche fracture zone into the eastern Atlantic. In the Guiana Basin, west of 40°W, the sloping topography and the strong, eastward flowing deep western boundary current might prevent the Antarctic bottom water from flowing west: thus it has to turn north at the eastern slope of the Ceará Rise. At 44°W, north of the Ceará Rise, Antarctic bottom water flows west in the interior of the basin. A large fraction of the Antarctic bottom water enters the eastern Atlantic through the Vema fracture zone. Indian Ocean In the Indian Ocean, the Crozet–Kerguelen Gap allows Antarctic bottom water to move toward the equator. This northward movement amounts to 2.5 Sv. It takes the Antarctic bottom water 23 years to reach the Crozet–Kerguelen Gap. South of Africa, Antarctic bottom water flows northwards through the Agulhas Basin and then east through the Agulhas Passage and over the southern margins of the Agulhas Plateau, and then into the Mozambique Basin. Climate change Climate change and the resulting melting of the Antarctic ice sheet have slowed the formation of AABW, and this slowdown is likely to continue. A complete shutdown of AABW formation is possible as soon as 2050. This shutdown would have dramatic effects on ocean circulation and global weather patterns. Potential for AABW Disruption Increased intrusion of warm Circumpolar Deep Water coupled with enhanced ice shelf basal melting can impact the formation of dense shelf waters. For surface water to become deep water, it must be very cold and saline. Much of the deep-water formation comes from brine rejection, which leaves the remaining surface water extremely saline and cold, and therefore extremely dense. The increased ice melt that began in the early 2000s created a period of fresher bottom water between 2011 and 2015. This freshening has been most pronounced in Antarctic bottom waters near West Antarctica, primarily in the Weddell Sea area. While the freshening of the AABW has corrected itself over the past few years with a decrease in ice melt, the potential for more ice melt in the future still poses a threat. If ice melt increases to extreme enough levels, it could seriously impair the formation of deep water. While this would contribute to the slowdown referenced above, it may also create additional warming. Increased stratification coming from the fresher and warmer waters will reduce bottom and deep-water circulation and increase warm water flows around Antarctica. The sustained warmer surface waters would only increase ice melt and stratification, and further slow AABW circulation and formation.
Additionally, without the cold, saline shelf waters produced by brine rejection, bottom water may eventually no longer form around Antarctica at all. This would impact more than Antarctica, as AABW plays a major role in bottom water formation and deep-sea circulation, which supplies oxygen to the deep sea and acts as a major carbon sink. Without these connections, the deep sea would change drastically, with the potential for collapse of entire deep-sea communities. However, some studies indicate that WSBW formation in the Weddell Sea is dominantly driven by wind-driven sea ice changes, and that increased sea ice formation overcompensates for the melting of ice sheets, rendering the effects of melting Antarctic glaciers on WSBW minimal. References Glossary of Physical Oceanography Steele, John H., Steve A. Thorpe and Karl K. Turekian, editors, Ocean Currents: A derivative of the Encyclopedia of Ocean Sciences, Academic Press, 1st ed., 2010 Environment of Antarctica Water masses Physical oceanography
Antarctic bottom water
[ "Physics", "Chemistry" ]
1,639
[ "Chemical oceanography", "Water masses", "Applied and interdisciplinary physics", "Physical oceanography" ]
4,340,898
https://en.wikipedia.org/wiki/Shannon%E2%80%93Weaver%20model
The Shannon–Weaver model is one of the first models of communication. Initially published in the 1948 paper "A Mathematical Theory of Communication", it explains communication in terms of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source produces the original message. The transmitter translates the message into a signal, which is sent using a channel. The receiver translates the signal back into the original message and makes it available to the destination. For a landline phone call, the person calling is the source. They use the telephone as a transmitter, which produces an electric signal that is sent through the wire as a channel. The person receiving the call is the destination and their telephone is the receiver. Shannon and Weaver distinguish three types of problems of communication: technical, semantic, and effectiveness problems. They focus on the technical level, which concerns the problem of how to use a signal to accurately reproduce a message from one location to another location. The difficulty in this regard is that noise may distort the signal. They discuss redundancy as a solution to this problem: if the original message is redundant then the distortions can be detected, which makes it possible to reconstruct the source's original intention. The Shannon–Weaver model of communication has been influential in various fields, including communication theory and information theory. Many later theorists have built their own models on its insights. However, it is often criticized based on the claim that it oversimplifies communication. One common objection is that communication should not be understood as a one-way process but as a dynamic interaction of messages going back and forth between both participants. Another criticism rejects the idea that the message exists prior to the communication and argues instead that the encoding is itself a creative process that creates the content. Overview and basic components The Shannon–Weaver model is one of the earliest models of communication. It was initially published by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication". The model was further developed together with Warren Weaver in their co-authored 1949 book The Mathematical Theory of Communication. It aims to provide a formal representation of the basic elements and relations involved in the process of communication. The model consists of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source of information is usually a person and decides which message to send. The message can take various forms, such as a sequence of letters, sounds, or images. The transmitter is responsible for translating the message into a signal. To send the signal, a channel is required. Channels are ways of transmitting signals, like light, sound waves, radio waves, and electrical wires. The receiver performs the opposite function of the transmitter: it translates the signal back into a message and makes it available to the destination. The destination is the person for whom the message was intended. Shannon and Weaver focus on telephonic conversation as the paradigmatic case of how messages are produced and transmitted through a channel. But their model is intended as a general model that can be applied to any form of communication. 
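The component structure described above can be mirrored in a few lines of code. The sketch below is only a schematic mapping of the five roles onto simple Python functions for illustration; it is not an implementation given by Shannon and Weaver, and the encoding chosen here (text to bytes) is an arbitrary stand-in for a real signal.

# Schematic mapping of the five components: the source's message is turned into a
# signal by the transmitter, carried by a channel, and turned back into a message
# by the receiver for the destination.

def transmitter(message):
    return message.encode("utf-8")      # message -> signal (here simply bytes)

def channel(signal):
    return signal                       # an ideal, noise-free channel

def receiver(signal):
    return signal.decode("utf-8")       # signal -> reconstructed message

source_message = "hello"                                               # produced by the source
destination_message = receiver(channel(transmitter(source_message)))  # read by the destination
print(destination_message == source_message)                          # True on a noise-free channel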
For a regular face-to-face conversation, the person talking is the source, the mouth is the transmitter, the air is the channel transmitting the sound waves, the listener is the destination, and the ear is the receiver. In the case of a landline phone call, the source is the person calling, the transmitter is their telephone, the channel is the wire, the receiver is another telephone, and the destination is the person using the second telephone. To apply this model accurately to real-life cases, some of the components may have to be repeated. For the telephone call, for example, the speaker's mouth acts as a first transmitter before the telephone itself acts as a second one. Problems of communication Shannon and Weaver identify and address problems in the study of communication at three basic levels: technical, semantic, and effectiveness problems (referred to as levels A, B, and C). Shannon and Weaver hold that models of communication should provide good responses to all three problems, ideally by showing how to make communication more accurate and efficient. The prime focus of their model is the technical level, which concerns the issue of how to accurately reproduce a message from one location to another. For this problem, it is not relevant what meaning the message carries; it is only relevant that the message can be distinguished from the different possible messages that could have been sent instead of it. Semantic problems go beyond the symbols themselves and ask how they convey meaning. Shannon and Weaver assumed that the meaning is already contained in the message, but many subsequent communication theorists have further problematized this point by including the influence of cultural factors and the context in their models. The effectiveness problem is based on the idea that the person sending the message has some goal in mind concerning how the person receiving the message is going to react. In this regard, effectiveness means that the reaction matches the speaker's goal. The effectiveness problem concerns the question of how to achieve this. Many critics have rejected this aspect of Shannon and Weaver's theory since it seems to equate communication with manipulation or propaganda. Noise and redundancy To solve the technical problem at level A, it is necessary for the receiver to reconstruct the original message from the signal. However, various forms of noise can interfere with the signal and distort it. Noise is not intended by the source and makes it harder for the receiver to reconstruct the source's intention found in the original message. Crackling sounds during a telephone call or snow on a television screen are examples of noise. One way to solve this problem is to make the information in the message partially redundant. This way, distortions can often be identified and the original meaning can be reconstructed. A very basic form of redundancy is to repeat the same message several times. But redundancy can take various other forms as well. For example, the English language is redundant in the sense that many possible combinations of letters are meaningless. The string "comming" does not have a distinct meaning, so it can be identified as a misspelling of the word "coming", thus revealing the source's original intention. Redundancy makes it easier to detect distortions, but its drawback is that messages carry less information.
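As a concrete illustration of how redundancy lets a receiver recover a message despite noise, the following Python sketch simulates a three-fold repetition code over a binary channel that randomly flips bits. This is only an illustrative toy built for this article, not a scheme described by Shannon and Weaver themselves.

import random

# Redundancy via a repetition code: send each bit three times and decode by majority vote,
# so any single flipped copy of a bit can be corrected by the receiver.

def encode(message_bits):
    return [b for b in message_bits for _ in range(3)]       # repeat every bit three times

def noisy_channel(bits, flip_probability=0.1):
    return [b ^ 1 if random.random() < flip_probability else b for b in bits]

def decode(received_bits):
    triples = [received_bits[i:i + 3] for i in range(0, len(received_bits), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]         # majority vote per triple

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(encode(message))
print(decode(received) == message)   # usually True, even though some transmitted bits were flipped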
Influence and criticism The Shannon–Weaver model of communication has been influential, inspiring subsequent work in the field of communication studies. Erik Hollnagel and David D. Woods even characterize it as the "mother of all models." It has been widely adopted in various other fields, including information theory, organizational analysis, and psychology. Many later theorists expanded this model by including additional elements in order to take into account other aspects of communication. For example, Wilbur Schramm includes a feedback loop to understand communication as an interactive process and George Gerbner emphasizes the relation between communication and the reality to which the communication refers. Some of these models, like Gerbner's, are equally universal in that they apply to any form of communication. Others apply to more specific areas. For example, Lasswell's model and Westley and MacLean's model are specifically formulated for mass media. Shannon's concepts were also popularized in John Robinson Pierce's Symbols, Signals, and Noise, which introduces the topic to non-specialists. Many criticisms of the Shannon–Weaver model focus on its simplicity by pointing out that it leaves out vital aspects of communication. In this regard, it has been characterized as "inappropriate for analyzing social processes" and as a "misleading misrepresentation of the nature of human communication". A common objection is based on the fact that it is a linear transmission model: it conceptualizes communication as a one-way process going from a source to a destination. Against this approach, it is argued that communication is usually more interactive with messages and feedback going back and forth between the participants. This approach is implemented by non-linear transmission models, also termed interaction models. They include Wilbur Schramm's model, Frank Dance's helical-spiral model, a circular model developed by Lee Thayer, and the "sawtooth" model due to Paul Watzlawick, Janet Beavin, and Don Jackson. These approaches emphasize the dynamic nature of communication by showing how the process evolves as a multi-directional exchange of messages. Another criticism focuses on the fact that Shannon and Weaver understand the message as a form of preexisting information. I. A. Richards criticizes this approach for treating the message as a preestablished entity that is merely packaged by the transmitter and later unpackaged by the receiver. This outlook is characteristic of all transmission models. They contrast with constitutive models, which see meanings as "reflexively constructed, maintained, or negotiated in the act of communicating". Richards argues that the message does not exist before it is articulated. This means that the encoding is itself a creative process that creates the content. Before it, there is a need to articulate oneself but no precise pre-existing content. The communicative process may not just affect the meaning of the message but also the social identities of the communicators, which are established and modified in the ongoing communicative process. References Information theory Claude Shannon Communication Communication studies
Shannon–Weaver model
[ "Mathematics", "Technology", "Engineering" ]
1,888
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]